KinectInteraction is a set of features, first introduced in Developer Toolkit 1.7, which allows Kinect-enabled applications to incorporate gesture-based interactivity. Developers can use KinectInteraction to create Windows Presentation Foundation (WPF) applications in which the movement of the user’s hand controls an on-screen hand, much like the movement of a mouse controls an on-screen cursor.
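To make the idea concrete, here is a minimal sketch of what such a WPF layout can look like, using the `KinectRegion` and `KinectTileButton` controls that ship with the Developer Toolkit. This is an illustration, not code from the course: the `KinectDemo` class name and the `PlayButton_Click` handler are hypothetical placeholders.

```xml
<!-- Minimal KinectInteraction sketch (Developer Toolkit controls).
     KinectRegion turns the user's hand movement into an on-screen hand
     cursor for the controls it contains, analogous to a mouse cursor.
     PlayButton_Click is a hypothetical handler for illustration. -->
<Window x:Class="KinectDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:k="http://schemas.microsoft.com/kinect/2013"
        Title="KinectInteraction Demo">
  <k:KinectRegion x:Name="kinectRegion">
    <k:KinectTileButton Label="Play" Click="PlayButton_Click" />
  </k:KinectRegion>
</Window>
```

In the code-behind, the region must still be wired to an active Kinect sensor so the toolkit can route hand-pointer events to the controls inside it.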
The course teaches how to develop apps that use hand gestures to control an on-screen hand.
András Velvart, a Kinect for Windows MVP, has created an online video course that provides step-by-step instructions on how to create such WPF applications and teaches students how to customize the look and feel of the controls provided by Microsoft. It even demonstrates how to completely control the interaction model or use KinectInteraction outside of WPF. The course, aptly named “KinectInteraction with WPF and Beyond,” is available through Pluralsight, an online training service for developers and IT professionals.
Kinect for Windows Team
In a pair of related blog posts, Zubair Ahmed, a Microsoft Most Valuable Professional nominee and a participant in the Kinect for Windows v2 developer preview program, put his new v2 Kinect for Windows sensor through its paces. In the first post, Zubair demonstrates how to use the body source data captured by the sensor to draw the bones, hands, and joints and overlay them on top of the color frame that comes from the sensor. The post includes the relevant code* and useful tips and tricks.
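The core mechanic behind such an overlay — projecting each tracked 3-D joint into the 2-D coordinate space of the color image — can be sketched roughly as follows. This is not Zubair's code, only a hedged illustration against the v2 developer-preview API (which, per the footnote below, is subject to change); `DrawJoint` is a hypothetical drawing helper.

```csharp
// Sketch: overlay tracked joints on the color frame (Kinect v2 preview API).
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.GetDefault();
sensor.Open();

BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

reader.FrameArrived += (s, e) =>
{
    using (BodyFrame frame = e.FrameReference.AcquireFrame())
    {
        if (frame == null) return;
        frame.GetAndRefreshBodyData(bodies);

        foreach (Body body in bodies)
        {
            if (!body.IsTracked) continue;
            foreach (Joint joint in body.Joints.Values)
            {
                // Map the joint's 3-D camera-space position to a 2-D pixel
                // position in the color image, then draw at that point.
                ColorSpacePoint point =
                    sensor.CoordinateMapper.MapCameraPointToColorSpace(joint.Position);
                // DrawJoint(point.X, point.Y);  // hypothetical drawing helper
            }
        }
    }
};
```

The same mapping is what lets the drawn bones and hands line up with the person in the color frame rather than with the depth image.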
Zubair demonstrates the hand color frame received from the Kinect for Windows sensor.
Zubair’s second post continues his deep dive into the body tracking of the v2 Kinect for Windows sensor. He refines his methods to eliminate a hack he had employed in the original code. In addition, he explains how to merge two body-image color frames and use a single image control to render them. This post not only includes the relevant code* and helpful tips; it also provides a demonstration video.
*This is preliminary software and/or hardware and APIs are preliminary and subject to change.
That’s what you might be thinking as you scroll through the posts on this site. That’s because we’ve merged the Kinect for Windows developer and product blogs. This union creates a “one-stop shop” for news about Kinect for Windows: a single source for learning about cool product applications plus the latest developer information. So, yeah, we’re fatter now—just think of it as more to love!
Kinect for Windows Team
It’s not often that we get to tell the Kinect for Windows story to millions of people at the same time, so being featured in a commercial during Super Bowl XLVIII on Sunday, February 2, was a thrill. That Microsoft Super Bowl ad showed how technology is #empowering lives, including how GestSure, a Kinect for Windows solution, is helping surgeons and their patients. Harnessing the power of Kinect for Windows to understand and respond to users’ movements, GestSure allows surgeons to use hand motions to study a patient’s medical images (X-rays as well as MRI and CT scans) on monitors in the operating room. Because the surgeon never has to touch a mouse or keyboard, there is no need to leave the operating room to view images and then spend time scrubbing back in; the surgery continues unimpeded. The result is a smoother surgical workflow and better care for patients.
See more stories that celebrate what technology can do, including a short video that showcases GestSure.