• Kinect for Windows Product Blog

    Join Now, BUILD for Tomorrow


    Today at Microsoft BUILD 2013, we made two important announcements for our Kinect for Windows developer community.

    First, starting today, developers can apply for a place in our upcoming developer kit program. This program will give participants exclusive early access to everything they need to start building applications for the recently announced new generation Kinect for Windows sensor, including a pre-release version of the new sensor hardware and software development kit (SDK) in November, and a replacement unit of the final sensor hardware and firmware when it is publicly available next year. The cost for the program will be US$399 (or local equivalent). Applications must be received by July 31, and successful applicants will be notified and charged in August. Interested developers are strongly encouraged to apply early, as spots are very limited and demand for the new sensor is already high. Review complete program details and apply for the program.


    The upcoming Kinect for Windows SDK 1.8 will include more realistic color capture with Kinect Fusion.

    Additionally, in September we will again refresh the Kinect for Windows SDK with several exciting updates including:

    • The ability to extract the user from the background in real time
    • The ability to develop Kinect for Windows desktop applications by using HTML5/JavaScript
    • Enhancements to Kinect Fusion, including capture of color data and improvements to tracking robustness and accuracy

    The feature enhancements will enable even better Kinect for Windows-based applications for businesses and end users, and the convenience of HTML5 will make it easier for developers to build leading-edge touch-free experiences.

    This will be the fourth significant update to the Kinect for Windows SDK since we launched 17 months ago. We are committed to continuing to improve the existing Kinect for Windows platform as we prepare to release the new generation Kinect for Windows sensor and SDK.  If you aren’t already using Kinect for Windows to develop touch-free solutions, now is a great time to start. Join us as we continue to make technology easier to use and more intuitive for everyone.

    Bob Heddle
    Director, Kinect for Windows



    Using Kinect Interactions to Create a Slider Control


    In the 1.7 release, the Kinect for Windows Toolkit added the "Interactions Framework," which makes it easy to create Kinect-enabled applications in WPF that use buttons and grip scrolling.  What may not be obvious from the Toolkit samples is that creating new controls for this framework is easy and straightforward.  To demonstrate this, I’m going to introduce a slider control that can be used with Kinect for Windows to “scrub” video or for other things like turning the volume up to eleven.

    A solution containing the control code and a sample app is in the .zip file below.

    Look Before You Leap

    Before jumping right in and writing a brand-new WPF control, it's good to see whether other solutions will meet your needs.  Most WPF controls are designed to be lookless.  That is, everything about the visual appearance of the control is defined in XAML, as opposed to in C# code.  So if all you need to change is the layout of things in the control, the transitions, or the animations, changing the control template will likely suit your needs.  If you want the behavior of multiple controls combined into a reusable component, then a UserControl may do what you want.

    Kinect HandPointers

    HandPointers are the abstraction that the Interactions Framework provides to tell the UI where the user's hands are and what state they are in.  In the WPF layer, the API for HandPointers resembles the mouse API where possible.  Unlike the mouse, there is typically more than one hand pointer active at a time, since the Kinect sensor can see more than one hand at a time.  In the controls that ship in the toolkit (KinectCursorVisualizer, KinectTileButton, KinectScrollViewer, etc.), only the primary hand pointer of the primary user is used.  However, your control will still get events for all the other hand pointers.  As a result, the event handlers contain code to respond only to the primary user's primary hand, as in the sketch below.
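
    Here is a minimal sketch of that filtering.  It assumes the HandPointer members IsPrimaryUser, IsPrimaryHandOfUser, and GetPosition from Microsoft.Kinect.Toolkit.Controls; verify the names against the toolkit version you are using.

        using System.Windows;
        using System.Windows.Controls;
        using Microsoft.Kinect.Toolkit.Controls;

        public class HandAwareControl : Control
        {
            public HandAwareControl()
            {
                // Subscribe to hand pointer movement through the KinectRegion attached events.
                KinectRegion.AddHandPointerMoveHandler(this, this.OnHandPointerMove);
            }

            private void OnHandPointerMove(object sender, HandPointerEventArgs args)
            {
                // Events arrive for every tracked hand; ignore all but the
                // primary user's primary hand, as the built-in controls do.
                if (!args.HandPointer.IsPrimaryUser || !args.HandPointer.IsPrimaryHandOfUser)
                {
                    return;
                }

                Point position = args.HandPointer.GetPosition(this);
                // ... react to the hand position here ...
                args.Handled = true;
            }
        }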

    KinectRegion Events

    KinectRegion is the main component to look to when adding Kinect Interactions functionality to a WPF control.  All the WPF controls that are descendants of the KinectRegion will receive HandPointer* events as the HandPointers are used.  For example, when a hand pointer moves into the control's boundaries, the control will receive a KinectRegion.HandPointerEnter event.  If you've handled mouse events before, many of the KinectRegion events will feel familiar. 

    KinectRegion events - http://msdn.microsoft.com/en-us/library/microsoft.kinect.toolkit.controls.kinectregion_events.aspx
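
    For example, a control that tracks hover could attach handlers for the enter and leave events in its constructor.  This is only a sketch; the Add*Handler helper names follow the toolkit's attached-event pattern and should be checked against the KinectRegion documentation linked above.

        using System.Windows.Controls;
        using Microsoft.Kinect.Toolkit.Controls;

        public class HoverHighlightControl : Control
        {
            public HoverHighlightControl()
            {
                // KinectRegion exposes HandPointer* routed events, much like WPF mouse events.
                KinectRegion.AddHandPointerEnterHandler(this, this.OnHandPointerEnter);
                KinectRegion.AddHandPointerLeaveHandler(this, this.OnHandPointerLeave);
            }

            private void OnHandPointerEnter(object sender, HandPointerEventArgs args)
            {
                // A hand pointer moved inside the control's bounds; show a hover state here.
            }

            private void OnHandPointerLeave(object sender, HandPointerEventArgs args)
            {
                // The hand pointer left the control's bounds; clear the hover state here.
            }
        }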

    Handling KinectRegion Events in the Slider

    The slider control handles KinectRegion events to allow the user to grip and drag the thumb of the slider.  When a control "captures" a hand pointer, all the events of the captured hand pointer will be sent to that control until capture is released.  A general guideline for implementing control interactions is that a control should always capture hand pointer input while the user is interacting with it; otherwise it will miss many of the events it needs to function properly.

    The state diagram below gives the basic states of the control and what causes the state transitions.  The key thing to note is that the transitions into and out of dragging are caused by capture changing.  So that leads to the question: what causes capture to change?

    The control takes capture when it gets a grip event.  That puts the control into the dragging state until capture is released.  Capture can be released for a number of reasons.  Most commonly it is released when the control receives a GripRelease event, indicating the user opened their hand.  It can also be released if we lose track of the hand, which can happen when the hand moves too far outside the bounds of the KinectRegion.
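
    Put together, a stripped-down version of that grip-to-drag flow looks roughly like the following.  This is a sketch rather than the actual KinectSlider code from the attached solution, and the Capture and LostCapture member names should be verified against the toolkit's HandPointer and KinectRegion APIs.

        using System.Windows.Controls;
        using Microsoft.Kinect.Toolkit.Controls;

        public class GripDragControl : Control
        {
            private HandPointer capturedHandPointer;

            public GripDragControl()
            {
                KinectRegion.AddHandPointerGripHandler(this, this.OnGrip);
                KinectRegion.AddHandPointerGripReleaseHandler(this, this.OnGripRelease);
                KinectRegion.AddHandPointerLostCaptureHandler(this, this.OnLostCapture);
                KinectRegion.AddHandPointerMoveHandler(this, this.OnMove);
            }

            private void OnGrip(object sender, HandPointerEventArgs args)
            {
                if (!args.HandPointer.IsPrimaryUser || !args.HandPointer.IsPrimaryHandOfUser)
                {
                    return;
                }

                // Take capture: enter the dragging state and route all further
                // events from this hand pointer to this control.
                args.HandPointer.Capture(this);
                this.capturedHandPointer = args.HandPointer;
                args.Handled = true;
            }

            private void OnMove(object sender, HandPointerEventArgs args)
            {
                if (args.HandPointer == this.capturedHandPointer)
                {
                    // Dragging: move the thumb using args.HandPointer.GetPosition(this).
                }
            }

            private void OnGripRelease(object sender, HandPointerEventArgs args)
            {
                // The user opened their hand: release capture, which exits the dragging state.
                if (args.HandPointer == this.capturedHandPointer)
                {
                    args.HandPointer.Capture(null);
                    args.Handled = true;
                }
            }

            private void OnLostCapture(object sender, HandPointerEventArgs args)
            {
                // Capture can also be lost when the hand is no longer tracked,
                // for example when it moves too far outside the KinectRegion.
                if (args.HandPointer == this.capturedHandPointer)
                {
                    this.capturedHandPointer = null;
                }
            }
        }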

    Expanding the Hit Area of the Control 

    This control was originally designed to control video playback.  The design of the UI was such that we wanted to put the control at the bottom of the UI but allow the user to grab anywhere to move the playback position.  The way the slider does this is to allow the app to specify a different WPF UIElement to which the slider attaches its hover and grip handlers.  See the KinectSlider.GripEventTarget property.  This uses WPF's ability to register event handlers on elements other than the control itself, as sketched below.
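
    A simplified version of what such a property setter could do is shown below.  The class and handler names are illustrative stand-ins (the real implementation is in the attached .zip), and the Add/Remove handler helpers are assumed to follow the toolkit's attached-event pattern.

        using System.Windows;
        using Microsoft.Kinect.Toolkit.Controls;

        public class SliderWithGripTarget
        {
            private UIElement gripEventTarget;

            // Illustrative setter: move the grip handlers from the old target element to the new one.
            public UIElement GripEventTarget
            {
                get { return this.gripEventTarget; }
                set
                {
                    if (this.gripEventTarget != null)
                    {
                        KinectRegion.RemoveHandPointerGripHandler(this.gripEventTarget, this.OnGrip);
                        KinectRegion.RemoveHandPointerGripReleaseHandler(this.gripEventTarget, this.OnGripRelease);
                    }

                    this.gripEventTarget = value;

                    if (this.gripEventTarget != null)
                    {
                        // WPF lets us attach handlers to an element other than the control itself,
                        // so a grip anywhere over the target element reaches the slider.
                        KinectRegion.AddHandPointerGripHandler(this.gripEventTarget, this.OnGrip);
                        KinectRegion.AddHandPointerGripReleaseHandler(this.gripEventTarget, this.OnGripRelease);
                    }
                }
            }

            private void OnGrip(object sender, HandPointerEventArgs args)
            {
                // Begin dragging, as in the capture example above.
            }

            private void OnGripRelease(object sender, HandPointerEventArgs args)
            {
                // End dragging and release capture.
            }
        }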

    Things Missing

    While this control works and could actually be used in a real application, it is far from complete in a WPF sense.  It does not implement an automation peer, so accessibility is limited.  Touch and keyboard usage may work a little, but they are not fully supported.  Focus visuals, visuals for all the Slider permutations, and support for multiple themes are also missing.
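
    To give a sense of the accessibility gap, a basic automation peer for a slider-style control could look something like the sketch below.  The AccessibleKinectSlider class and its properties are hypothetical stand-ins for the real control; only the standard WPF automation types are assumed.

        using System.Windows.Automation.Peers;
        using System.Windows.Automation.Provider;
        using System.Windows.Controls;

        // Hypothetical stand-in for the slider, with the range properties a peer would expose.
        public class AccessibleKinectSlider : Control
        {
            public double Minimum { get; set; }
            public double Maximum { get; set; }
            public double Value { get; set; }

            protected override AutomationPeer OnCreateAutomationPeer()
            {
                return new AccessibleKinectSliderAutomationPeer(this);
            }
        }

        // Minimal peer exposing the RangeValue pattern so assistive technology can read and set the value.
        public class AccessibleKinectSliderAutomationPeer : FrameworkElementAutomationPeer, IRangeValueProvider
        {
            public AccessibleKinectSliderAutomationPeer(AccessibleKinectSlider owner)
                : base(owner)
            {
            }

            private AccessibleKinectSlider Slider
            {
                get { return (AccessibleKinectSlider)this.Owner; }
            }

            protected override AutomationControlType GetAutomationControlTypeCore()
            {
                return AutomationControlType.Slider;
            }

            public override object GetPattern(PatternInterface patternInterface)
            {
                return patternInterface == PatternInterface.RangeValue ? this : base.GetPattern(patternInterface);
            }

            public bool IsReadOnly { get { return false; } }
            public double Minimum { get { return this.Slider.Minimum; } }
            public double Maximum { get { return this.Slider.Maximum; } }
            public double Value { get { return this.Slider.Value; } }
            public double SmallChange { get { return (this.Slider.Maximum - this.Slider.Minimum) / 100.0; } }
            public double LargeChange { get { return (this.Slider.Maximum - this.Slider.Minimum) / 10.0; } }

            public void SetValue(double value)
            {
                this.Slider.Value = value;
            }
        }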

    Resources for Building WPF Controls

    Books and other resources we use to build controls include:

    WPF 4 Unleashed - http://www.informit.com/store/wpf-4-unleashed-9780672331190

    WPF Control Development Unleashed - http://www.informit.com/store/wpf-control-development-unleashed-building-advanced-9780672330339

    WPF source code - http://referencesource.microsoft.com/

    Retemplating WPF controls - http://msdn.microsoft.com/en-us/magazine/cc163497.aspx

