• Kinect for Windows Product Blog

    Kinect for Windows Fall Roadmap

    • 17 Comments

    The Kinect for Windows team has been hard at work this summer, and I have some very exciting developments to share with you regarding our roadmap between now and the end of the year.

    On October 8, Kinect for Windows is coming to China. China is a leader in business and technology innovation. We are very excited to make Kinect for Windows available in China so that developers and businesses there can innovate with Kinect for Windows and transform experiences through touch-free solutions.

    Kinect for Windows hardware will be available in seven additional markets later this fall: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico.

    Map: Kinect for Windows sensor availability

    In addition to making Kinect for Windows hardware available in eight new markets this fall, we will be releasing an update to the Kinect for Windows runtime and software development kit (SDK) on October 8. This release has numerous new features that deliver additional power to Kinect for Windows developers and business customers. We will share the full details when it’s released on October 8, but in the meantime here are a few highlights:

    • Enable businesses to do more with Kinect for Windows
      We are committed to opening up even more opportunities for the creation of new end user experiences. We’ll be adding features such as expanded sensor data access—including color camera settings and extended depth data—to continue to inspire innovative uses of the Kinect for Windows technology in new and different places.
    • Improve developer efficiency
      We continue to invest in making our platform easier and more powerful for developers. That’s why we’ll be releasing more tools and samples in October, such as a new sample that demonstrates a “best in class” UI based on the Kinect for Windows Human Interface Guidelines.
    • Extend our Windows tools and operating system support
      We want to make it easy for our customers to be able to build and deploy on a variety of Windows platforms. Our October update will include support for Windows 8 desktop applications, Microsoft .NET 4.5, and Microsoft Visual Studio 2012.

    It has been a little more than seven months since we first launched Kinect for Windows in 12 markets. By the end of the year, Kinect for Windows will be available in 38 markets and we will have shipped two significant updates to the SDK and runtime beyond the initial release—and this is just the beginning. Microsoft has had a multi-decade commitment to natural user interface (NUI), and my team and I look forward to continuing to be an important part of that commitment. In coming years, I believe that we will get to experience an exciting new era where computing becomes invisible and all of us will be able to interact intuitively and naturally with the computers around us.

    Craig Eisler
    General Manager, Kinect for Windows


  • Kinect for Windows Product Blog

    Near Mode: What it is (and isn’t)

    • 14 Comments

    There has been a lot of speculation on what near mode is since we announced it.  As I mentioned in the original post, the Kinect for Windows device has new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.

    The lenses on the Kinect for Windows sensor are the same as the Kinect for Xbox 360 sensor's, so near mode does not change the field of view as some people have been speculating.  As some have observed, the Kinect for Xbox 360 sensor was already technically capable of seeing down to 50 centimeters – but with the caveat "as long as the light is right".

    That caveat turned out to be a pretty big caveat.  The Kinect for Windows team spent many months developing a way to overcome this so the sensor would properly detect close up objects in more general lighting conditions.  This resulted not only in the need for new firmware, but changes to the way the devices are tested on the manufacturing line. In addition to allowing the sensor to see objects as close as 40 centimeters, these changes make the sensor less sensitive to more distant objects: when the sensor is in near mode, it has full accuracy and precision for objects 2 meters away, with graceful degradation out to 3 meters. Here is a handy chart one of our engineers made that shows the types of depth values returned by the runtime:

    Kinect for Windows default and near mode

    In Beta 2, the runtime would return a depth value for an object 800–4000 millimeters from the sensor, and returned 0 regardless of whether the detected depth was unknown, too near, or too far.  Our version 1.0 runtime will return depth values if an object is in the cyan zone above.  If the object is in the purple, brown, or white zones, the runtime will return a distinct value indicating the appropriate zone.
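The zone behavior can be sketched in code. The following Python function is purely illustrative (the names and return markers are ours, not the SDK's); it maps a raw depth reading onto the zones using the operational envelopes given in this post, 800–4000 mm for default mode and 400–3000 mm for near mode:

```python
# Illustrative sketch of the v1.0 depth-zone logic (not actual SDK code).

TOO_NEAR = "too_near"
TOO_FAR = "too_far"
UNKNOWN = "unknown"

RANGES_MM = {
    "default": (800, 4000),   # default mode envelope
    "near": (400, 3000),      # near mode envelope
}

def classify_depth(depth_mm, mode="default"):
    """Return the depth value, or a distinct marker for the zone."""
    near_limit, far_limit = RANGES_MM[mode]
    if depth_mm is None:      # sensor could not resolve a depth
        return UNKNOWN
    if depth_mm < near_limit:
        return TOO_NEAR
    if depth_mm > far_limit:
        return TOO_FAR
    return depth_mm           # in the cyan zone: a real depth value
```

The distinct markers are what changed between releases: in Beta 2 all three non-cyan zones collapsed to a single 0, so an application could not tell "too near" from "too far".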

    Additionally, in version 1.0 of the runtime, near mode will have some skeletal support, although not full 20-joint skeletal tracking (ST).  The table below outlines the differences between the default mode and near mode:

    Table: Kinect for Windows default and near mode

    We believe that near mode, with its operational envelope of 40 centimeters to 3 meters, will enable many new classes of applications. While full 20-joint ST will not be supported in near mode with version 1.0 of the runtime, we will be working hard to support ST in near mode in the future!

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    The latest Kinect for Windows SDK is here

    • 14 Comments

    Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately. 

    A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.

    Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here.  Now you can untether yourself and move around a conference room naturally.

    In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.

    Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.

    Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.

    Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.

    Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Kinect for Windows announces new version of SDK coming March 18

    • 12 Comments

    Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.

    On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:

    Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.

    Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.

    Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.

    Kinect Fusion enables developers to create accurate 3-D renderings in real time.

    The updated SDK also includes an enhanced developer toolkit and additional developer resources, including:

    • OpenCV and MATLAB Samples to allow developers to build advanced Kinect-enabled applications while using industry standards.
    • Kinect for Windows Code Samples on CodePlex, marking the first time that select Kinect for Windows code samples will be available through an open-source resource, enabling existing and potential partners to explore and develop new scenarios.

    Seeing is believing
    We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!

    As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.

    The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.

    Bob Heddle, Director
    Kinect for Windows


  • Kinect for Windows Product Blog

    Using Kinect Interactions to Create a Slider Control

    • 11 Comments

    In the 1.7 release, the Kinect for Windows Toolkit added the "Interactions Framework", which makes it easy to create Kinect-enabled applications in WPF that use buttons and grip scrolling.  What may not be obvious from the Toolkit samples is that creating new controls for this framework is easy and straightforward.  To demonstrate this, I’m going to introduce a slider control that can be used with Kinect for Windows to “scrub” video or for other things like turning the volume up to eleven.

    A solution containing the control code and a sample app is in the .zip file below.

    Look Before You Leap

    Before jumping right in and writing a brand new WPF control, it's good to see if other solutions will meet your needs.  Most WPF controls are designed to be look-less.  That is, everything about the visual appearance of the control is defined in XAML, as opposed to using C# code.  So if it's just the layout of things in the control, transitions, or animations you need to be different, changing the control template will likely suit your needs.  If you want the behavior of multiple controls combined into a reusable component then a UserControl may do what you want.

    Kinect HandPointers

    HandPointers are the abstraction that the Interactions Framework provides to tell the UI where the user's hands are and what state they are in.  In the WPF layer, the API for HandPointers resembles the API for the mouse where possible.  Unlike the mouse, there is typically more than one hand pointer active at a time, since more than one hand is visible to the Kinect sensor at a time.  The controls in the toolkit (KinectCursorVisualizer, KinectTileButton, KinectScrollViewer, etc.) use only the primary hand pointer of the primary user.  However, your control will still get events for all the other hand pointers.  As a result, there is code in the event handlers to respond only to the primary user's primary hand.
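That filtering amounts to an early return in each handler. Here is a minimal sketch of the idea in Python rather than the Toolkit's C# (the class and property names are invented for illustration):

```python
# Illustrative sketch only; the Toolkit's C# API differs in detail.
from dataclasses import dataclass

@dataclass
class HandPointer:
    is_primary_user: bool          # pointer belongs to the primary user
    is_primary_hand_of_user: bool  # pointer is that user's primary hand
    x: float = 0.0
    y: float = 0.0

class SliderThumb:
    def __init__(self):
        self.position = 0.0

    def on_hand_pointer_move(self, pointer: HandPointer):
        # Ignore every hand pointer except the primary user's primary hand.
        if not (pointer.is_primary_user and pointer.is_primary_hand_of_user):
            return
        self.position = pointer.x
```

Without the early return, a second visible hand would fight the primary hand for control of the thumb.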

    KinectRegion Events

    KinectRegion is the main component to look to when adding Kinect Interactions functionality to a WPF control.  All the WPF controls that are descendants of the KinectRegion will receive HandPointer* events as the HandPointers are used.  For example, when a hand pointer moves into the control's boundaries, the control will receive a KinectRegion.HandPointerEnter event.  If you've handled mouse events before, many of the KinectRegion events will feel familiar. 

    KinectRegion events - http://msdn.microsoft.com/en-us/library/microsoft.kinect.toolkit.controls.kinectregion_events.aspx

    Handling KinectRegion Events in the Slider

    The slider control handles KinectRegion events to allow the user to grip and drag the thumb of the slider.  When a control "captures" a hand pointer, all the events of the captured hand pointer are sent to that control until capture is released.  A general guideline for implementing control interactions is that a control should always capture hand pointer input events while the user is interacting with it; otherwise, it will miss many of the events it needs to function properly.

    The state diagram below gives the basic states of the control and what causes the state transitions.  The key thing to note is that the transitions in and out of dragging are caused by capture changing.  That leads to the question: what causes capture to change?

    The control takes capture when it gets a grip event.  That will put the control into the dragging state until capture is released.  Capture can be released for a number of reasons.  Most commonly it is released when the control receives a GripRelease event indicating the user opened their hand.  It can also be released if we lose track of the hand.  This can happen when the hand moves too far outside the bounds of the KinectRegion.
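The grip/capture life cycle boils down to a two-state machine. This Python sketch is our own simplification of the behavior described above, not Toolkit code:

```python
# Simplified model of the slider's capture-driven states (illustrative only).

class KinectSliderModel:
    def __init__(self):
        self.state = "idle"        # "idle" or "dragging"
        self.has_capture = False

    def on_grip(self):
        # A grip event takes capture and enters the dragging state.
        self.has_capture = True
        self.state = "dragging"

    def on_grip_release(self):
        # The user opened their hand.
        self._release_capture()

    def on_hand_lost(self):
        # We lost track of the hand, e.g. it moved too far outside
        # the bounds of the KinectRegion.
        self._release_capture()

    def _release_capture(self):
        # Releasing capture is the only way out of the dragging state.
        self.has_capture = False
        self.state = "idle"
```

Routing both the grip release and the lost hand through the same release path is what keeps the control from getting stuck in the dragging state.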

    Expanding the Hit Area of the Control 

    This control was originally designed to control video playback.  The design of the UI was such that we wanted to put the control at the bottom of the UI but allow the user to grab anywhere to move the playback position.  The slider does this by allowing the app to specify a different WPF UIElement to which it will attach hover and grip handlers.  See the KinectSlider.GripEventTarget property.  This uses WPF's ability to register event handlers on elements other than the control itself.

    Things Missing

    While this control works and could actually be used in a real application, it is far from complete in a WPF sense.  It does not implement an automation peer so accessibility is limited.  While touch and keyboard usage may work a little, it is not fully supported.  Focus visuals, visuals for all the Slider permutations, and support for multiple themes are missing.

    Resources for Building WPF Controls

    Books and other resources we use to build controls include:

    WPF 4 Unleashed http://www.informit.com/store/wpf-4-unleashed-9780672331190

    WPF Control Development Unleashed - http://www.informit.com/store/wpf-control-development-unleashed-building-advanced-9780672330339

    WPF source code - http://referencesource.microsoft.com/

    Retemplating WPF controls - http://msdn.microsoft.com/en-us/magazine/cc163497.aspx


  • Kinect for Windows Product Blog

    Kinect Fusion Coming to Kinect for Windows

    • 11 Comments

    Last week, I had the privilege of giving attendees at the Microsoft event, BUILD 2012, a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.

    Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K.  As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.

    In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.

    Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.

    Onlookers experience the capabilities of Kinect Fusion as a member of the Kinect for Windows team performs a live demo during BUILD 2012.

    Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments.  The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single view point.  Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements.  You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.

    We look forward to seeing how our developer community and business partners will use the tool.

    Chris White
    Senior Program Manager, Kinect for Windows


  • Kinect for Windows Product Blog

    It’s Official: Kinect for Windows is Coming Soon

    • 10 Comments


    To commemorate the one-year anniversary of Kinect and the Kinect Effect, I sent an email to my team earlier this week. I’d like to quote for you what I said to them, “It all started with a revolutionary sensor and amazing software that turned voice and movement into magic. With that magical combination, last year the Interactive Entertainment Business at Microsoft showed the world how to re-imagine gaming.  This year, we’re showing the world how to re-imagine entertainment.  Next year, with Kinect for Windows, we will help the world re-imagine everything else.”

    To mark the milestone, the Kinect for Windows team is celebrating with our own milestones: We’re starting up this blog, launching the official Kinect for Windows web site, and releasing beta 2 of the Kinect for Windows SDK. (And, yes, we will celebrate the anniversary more this evening – it’s been an amazing journey these past months!)

    I know many of you are eagerly awaiting the Kinect for Windows commercial program coming in early 2012. My team is working hard to deliver a great product and I’m confident that it will be worth the wait.

    We’ve already seen strong enthusiasm for Kinect among developers who have done amazing things with it in countless different ways, from education to healthcare, gaming to art installations, manufacturing to retail.

    Currently, we have more than 200 companies taking part in our pilot program. They are telling us how Kinect for Windows will help them transform their products, their processes, their brands, and their businesses. Putting the power of Kinect + Windows into the hands of business leaders and technical visionaries will give them the tools they need to develop novel solutions for everything from training employees to visualizing data, from configuring a car to managing an assembly line.

    The updated software development kit that we released today includes some great new features that help us get closer to realizing this vision, including faster skeletal tracking, improved accuracy in skeletal tracking and joint recognition, and the ability to plug and unplug your Kinect without losing work in progress.

    Every day, I come to work and learn about another amazing application that a partner or other developer is doing with Kinect for Windows. I look forward to next year, when the potential goes exponential and everyone’s ideas, including yours, are part of that equation.

    If you haven’t done so already, download the SDK and re-imagine the world with us.

     

    --Craig Eisler
    General Manager, Kinect for Windows


  • Kinect for Windows Product Blog

    Kinect for Windows releases SDK update and launches in China

    • 10 Comments

    I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.

    Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.

    Kinect for Windows availability: current and coming soon

     

    The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:

    Extended sensor data access

    • Data from the sensor's 3-axis accelerometer is now exposed in the API. This enables detection of the sensor's orientation.
    • Extended-range depth data now provides details beyond 4 meters. Extended-range depth data is data beyond the tested and certified ranges and is therefore lower accuracy. For those developers who want access to this data, it’s now available.
    • Color camera settings, such as brightness and exposure, can now be set explicitly by the application, allowing developers to tune a Kinect for Windows sensor’s environment.
    • The infrared stream is now exposed in the API. This means developers can use the infrared stream in many scenarios, such as calibrating other color cameras to the depth sensor or capturing grayscale images in low-light situations.
    • The updated SDK used with the Kinect for Windows sensors allows for faster toggling of IR to support multiple overlapping sensors.

    Access to all this data means new experiences are possible: Whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. Building solutions that work in low-light settings becomes a reality with IR stream exposure, such as in theaters and light-controlled museums. And developers can tailor applications to work in different environments with the numerous color camera settings, which enhance an application’s ability to work perfectly for end users.
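As one small example of what the new accelerometer data enables, the sketch below (plain Python with an invented function, not the SDK API) derives a tilt angle from a 3-axis gravity reading, assuming a level sensor reads roughly (0, -1, 0) in g units:

```python
import math

def tilt_from_accelerometer(x, y, z):
    """Estimate the sensor's tilt (pitch) in degrees from a 3-axis
    accelerometer reading, assuming the only acceleration acting on
    the sensor is gravity and a level sensor reads about (0, -1, 0)."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude == 0:
        raise ValueError("zero accelerometer reading")
    # Forward/backward tilt shows up on the z axis of the gravity vector.
    return math.degrees(math.asin(max(-1.0, min(1.0, z / magnitude))))
```

A reading of (0, -1, 0) yields a tilt of 0 degrees; tilting the sensor shifts part of the gravity vector onto the z axis.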

    One of the new samples released demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines: the Basic Interactions – WPF sample.


    Improved developer tools

    • Kinect Studio has been updated to support all new sensor data features.
    • The SDK ships with a German speech recognition language pack that has been optimized for the sensor's microphone array.
    • Skeletal tracking is now supported on multiple sensors within a single application.
    • New samples show how to use all the new SDK features. Additionally, a fantastic new sample has been released that demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines called the Basic Interactions – WPF sample.

    We are committed to continuing to make it easier and easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices that we described in the HIG and can be easily repurposed.

    Greater support for operating systems

    • Windows 8 compatibility. By using the updated Kinect for Windows SDK, you can develop a Kinect for Windows solution for Windows 8 desktop applications.
    • The latest SDK supports development with Visual Studio 2012 and the new Microsoft .NET Framework 4.5.
    • The Kinect for Windows sensor now works on Windows running in a virtual machine (VM) and has been tested with the following VM environments: Microsoft Hyper-V, VMWare, and Parallels. 

    Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.

    This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.

    As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.

    Craig Eisler
    General Manager, Kinect for Windows


  • Kinect for Windows Product Blog

    Kinect Fusion demonstrated at Microsoft Research TechFest, coming soon to SDK

    • 9 Comments

    Revealed in November as a future addition to the Kinect for Windows SDK, Kinect Fusion made a big impression at the annual TechFest event hosted by Microsoft Research this week in Redmond, Washington.

    Kinect Fusion pulls depth data that is generated by the Kinect for Windows sensor and, from the sequence of frames, constructs a highly detailed 3-D map of objects or environments. The tool averages readings over hundreds or even thousands of frames to create a rich level of detail.
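The averaging idea is easy to illustrate. The toy Python sketch below keeps a per-pixel running average of depth frames; the real Kinect Fusion pipeline integrates frames into a volumetric model, but the noise-reduction principle is the same:

```python
# Toy per-pixel running average of depth frames (illustrative only).

class DepthAverager:
    def __init__(self, width, height):
        self.sum = [[0.0] * width for _ in range(height)]
        self.count = [[0] * width for _ in range(height)]

    def integrate(self, frame):
        # frame: rows of depth readings in mm; None means "no reading".
        for y, row in enumerate(frame):
            for x, depth in enumerate(row):
                if depth is not None:
                    self.sum[y][x] += depth
                    self.count[y][x] += 1

    def average(self, x, y):
        # Averaging many noisy readings converges on the true depth.
        n = self.count[y][x]
        return self.sum[y][x] / n if n else None
```

Each new frame also fills in pixels that earlier frames could not see, which is how the tool incorporates data not viewable from any single viewpoint.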

    Kinect Fusion, shown during TechFest 2013, enables high-quality scanning and reconstruction of 3-D models using just a handheld Kinect for Windows sensor and the Kinect for Windows SDK.

    "The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly," said Shahram Izadi, senior researcher at Microsoft Research Cambridge. "Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them."

    When scanning smaller objects, you also have the option to simply move the object instead of the sensor.

    The Cambridge researchers and Kinect for Windows team collaborated closely on Kinect Fusion to construct a tool that can enable businesses and developers to devise new types of applications.

    "This has been a wonderful example of collaboration between Microsoft Research and our product group," said Kinect for Windows Senior Program Manager Chris White. "We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting edge research, even beyond what was shown in the original Kinect Fusion paper."

    "This kind of collaboration is one of the unique strengths of Microsoft, where we can bring together world-class researchers and world-class engineers to deliver real innovation," White added. "Kinect Fusion opens up a wide range of development possibilities—everything from gaming and augmented reality to industrial design. We're really excited to be able to include it in a future release of the Kinect for Windows SDK."

    Kinect for Windows team

    Key Links


    Updated SDK, with HTML5, Kinect Fusion improvements, and more

    • 9 Comments

    I am pleased to announce that we released the Kinect for Windows software development kit (SDK) 1.8 today. This is the fourth update to the SDK since we first released it commercially one and a half years ago. Since then, we’ve seen numerous companies using Kinect for Windows worldwide, and more than 700,000 downloads of our SDK.

    We build each version of the SDK with our customers in mind—listening to what the developer community and business leaders tell us they want and traveling around the globe to see what these dedicated teams do, how they do it, and what they most need out of our software development kit.

    The new background removal API is useful for advertising, augmented reality gaming, training and simulation, and more.

    Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:

    • New background removal. An API removes the background behind the active user so that it can be replaced with an artificial background. This green-screening effect was one of the top requests we’ve heard in recent months. It is especially useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment.
    • Realistic color capture with Kinect Fusion. A new Kinect Fusion API scans the color of the scene along with the depth information so that it can capture the color of the object along with its three-dimensional (3D) model. The API also produces a texture map for the mesh created from the scan. This feature provides a full fidelity 3D model of a scan, including color, which can be used for full color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
    • Improved tracking robustness with Kinect Fusion. An improved algorithm makes it easier to scan a scene. With this update, Kinect Fusion is better able to maintain its lock on the scene as the camera position moves, yielding more reliable and consistent scans.
    • HTML interaction sample. This sample demonstrates how to implement Kinect-enabled buttons, simple user engagement, and the use of a background removal stream in HTML5. Developers can now use HTML5 and JavaScript to implement Kinect-enabled user interfaces, which was not previously possible—making it easier for developers to work in whatever programming languages they prefer and to integrate Kinect for Windows into their existing solutions.
    • Multiple-sensor Kinect Fusion sample. This sample shows developers how to use two sensors simultaneously to scan a person or object from both sides—making it possible to construct a 3D model without having to move the sensor or the object! It demonstrates the calibration between two Kinect for Windows sensors, and how to use Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail experiences and other public kiosks that do not have an attendant available to perform the scan by hand.
    • Adaptive UI sample. This sample demonstrates how to build an application that adapts itself depending on the distance between the user and the screen—from gesturing at a distance to touching a touchscreen. The algorithm in this sample uses the physical dimensions and positions of the screen and sensor to determine the best ergonomic position on the screen for touch controls as well as ways the UI can adapt as the user approaches the screen or moves further away from it. As a result, the touch interface and visual display adapt to the user’s position and height, which enables users to interact with large touch screen displays comfortably. The display can also be adapted for more than one user.
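The background-removal effect described above can be sketched as simple depth thresholding. This is a simplification with invented names: the actual SDK API segments the tracked user's body pixels rather than applying a fixed depth cutoff.

```python
def remove_background(color_pixels, depth_mm, max_user_depth_mm):
    """Keep color pixels whose depth falls within the user range; green-screen the rest.

    color_pixels: list of (r, g, b) tuples.
    depth_mm: per-pixel depth in millimeters (0 means no reading).
    """
    out = []
    for rgb, depth in zip(color_pixels, depth_mm):
        if 0 < depth <= max_user_depth_mm:
            out.append(rgb)            # plausible user pixel: keep it
        else:
            out.append((0, 255, 0))    # background or invalid: replace
    return out

frame = [(200, 180, 170), (90, 90, 90), (40, 40, 40)]
depth = [1200, 3500, 0]  # mm; the last pixel has no valid depth reading
masked = remove_background(frame, depth, 2000)
print(masked)
```

With a real sensor the mask would come from the body-index stream per tracked user, but the substitution step—swapping non-user pixels for an artificial background—works the same way.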
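The adaptive-UI idea can likewise be sketched as a mapping from user distance to interaction mode. The thresholds and mode names below are invented for illustration; the actual sample derives its layout from the physical dimensions and positions of the screen and sensor.

```python
def ui_mode(user_distance_m):
    """Pick an interaction mode from the user's distance to the screen."""
    if user_distance_m < 0.5:
        return "touch"     # within arm's reach: use the touchscreen
    if user_distance_m < 2.0:
        return "gesture"   # mid-range: hand cursor and gesture controls
    return "attract"       # far away: large visuals that draw users in

modes = [ui_mode(d) for d in (0.3, 1.2, 4.0)]
print(modes)
```

A production version would smooth the distance signal and add hysteresis around each threshold so the UI does not flicker between modes as a user hovers near a boundary.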

    We also have updated our Human Interface Guidelines (HIG) with guidance to complement the new Adaptive UI sample, including the following:

    Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.

    Design UI where users can accomplish all tasks for each goal within a single range.

    My team and I believe that communicating naturally with computers means being able to gesture and speak, just like you do when communicating with people. We believe this is important to the evolution of computing, and are committed to helping this future come faster by giving our customers the tools they need to build truly innovative solutions. There are many exciting applications being created with Kinect for Windows, and we hope these new features will make those applications better and easier to build. Keep up the great work, and keep us posted!

    Bob Heddle, Director
    Kinect for Windows

    Key links
