The Kinect for Windows team has been hard at work this summer, and I have some very exciting developments to share with you regarding our roadmap between now and the end of the year.
On October 8, Kinect for Windows is coming to China. China is a leader in business and technology innovation. We are very excited to make Kinect for Windows available in China so that developers and businesses there can innovate with Kinect for Windows and transform experiences through touch-free solutions.
Kinect for Windows hardware will be available in seven additional markets later this fall: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico.
In addition to making Kinect for Windows hardware available in eight new markets this fall, we will be releasing an update to the Kinect for Windows runtime and software development kit (SDK) on October 8. This release has numerous new features that deliver additional power to Kinect for Windows developers and business customers. We will share the full details when it’s released on October 8, but in the meantime here are a few highlights:
It has been a little more than seven months since we first launched Kinect for Windows in 12 markets. By the end of the year, Kinect for Windows will be available in 38 markets and we will have shipped two significant updates to the SDK and runtime beyond the initial release—and this is just the beginning. Microsoft has had a multi-decade commitment to natural user interface (NUI), and my team and I look forward to continuing to be an important part of that commitment. In coming years, I believe that we will get to experience an exciting new era where computing becomes invisible and all of us will be able to interact intuitively and naturally with the computers around us.
Craig Eisler, General Manager, Kinect for Windows
There has been a lot of speculation on what near mode is since we announced it. As I mentioned in the original post, the Kinect for Windows device has new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.
The lenses on the Kinect for Windows sensor are the same as those on the Kinect for Xbox 360 sensor, so near mode does not change the field of view, as some people have been speculating. As some have observed, the Kinect for Xbox 360 sensor was already technically capable of seeing down to 50 centimeters – but with the caveat “as long as the light is right”.
That caveat turned out to be a pretty big caveat. The Kinect for Windows team spent many months developing a way to overcome this so the sensor would properly detect close-up objects in more general lighting conditions. This resulted not only in the need for new firmware, but also in changes to the way the devices are tested on the manufacturing line. In addition to allowing the sensor to see objects as close as 40 centimeters, these changes make the sensor less sensitive to more distant objects: when the sensor is in near mode, it has full accuracy and precision for objects up to 2 meters away, with graceful degradation out to 3 meters. Here is a handy chart one of our engineers made that shows the types of depth values returned by the runtime:
In Beta 2, for an object 800–4000 millimeters from the sensor, the runtime would return the depth value; for anything else it returned 0, regardless of whether the detected depth was unknown, too near, or too far. Our version 1.0 runtime will return depth values if an object is in the cyan zone above. If the object is in the purple, brown, or white zones, the runtime will return a distinct value indicating the appropriate zone.
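The zone behavior described above can be sketched as a small function. This is an illustrative Python sketch of the logic, not the actual SDK API: the range boundaries follow the numbers in this post (800–4000 mm in default mode, 400–3000 mm in near mode), and the zone names are stand-ins for the distinct sentinel values the v1.0 runtime returns.

```python
# Hypothetical sketch of the v1.0 depth-zone logic described above.
# Range boundaries come from this post; names are illustrative only.

DEFAULT_RANGE = (800, 4000)  # millimeters, default mode
NEAR_RANGE = (400, 3000)     # millimeters, near mode

def classify_depth(depth_mm, near_mode=False):
    """Return ('depth', value) for a usable reading, or a zone name."""
    if depth_mm is None:
        return ("unknown", None)
    lo, hi = NEAR_RANGE if near_mode else DEFAULT_RANGE
    if depth_mm < lo:
        return ("too_near", None)
    if depth_mm > hi:
        return ("too_far", None)
    return ("depth", depth_mm)
```

So a reading of 600 mm is "too near" in default mode but a valid depth in near mode, which is exactly the difference the new firmware makes.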
Additionally, in version 1.0 of the runtime, near mode will have some skeletal support, although not full 20-joint skeletal tracking (ST). The table below outlines the differences between the default mode and near mode:
We believe that near mode, with its operational envelope of 40 centimeters to 3 meters, will enable many new classes of applications. While full 20-joint ST will not be supported in near mode with version 1.0 of the runtime, we will be working hard to support ST in near mode in the future!
Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately.
A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7 (counter-clockwise from top left): “push” to select, “grab” to scroll and pan, and wave to identify the primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.
Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here. Now you can untether yourself and move around a conference room naturally.
In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.
Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, customers can still see what the items look like on them.
Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.
Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.
Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.
The Kinect for Windows Team
Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.
On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:
Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.
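To give a feel for what a push-to-press interaction involves, here is a deliberately simplified Python sketch: fire a “press” when the tracked hand moves toward the sensor by more than a threshold within a short window of frames. The thresholds, frame format, and function names are assumptions for illustration; the real Kinect Interactions logic in SDK 1.7 is far more sophisticated and is provided ready-made by the SDK.

```python
# Illustrative "push-to-press" trigger, NOT the SDK's implementation:
# fire when the hand moves toward the sensor by a threshold distance
# within a short window of recent frames.

PRESS_DISTANCE_MM = 60  # hand must move ~6 cm toward the sensor
WINDOW_FRAMES = 15      # within roughly half a second at 30 fps

def detect_press(hand_depths_mm):
    """hand_depths_mm: recent hand-to-sensor distances in mm, oldest first."""
    window = hand_depths_mm[-WINDOW_FRAMES:]
    if len(window) < 2:
        return False
    # Depth decreases as the hand moves toward the sensor.
    return window[0] - min(window) >= PRESS_DISTANCE_MM
```

Even this toy version hints at why the SDK ships these interactions pre-built: tuning thresholds, smoothing jitter, and handling multiple users consistently across applications is exactly the “thousands of hours” of work described above.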
Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.
Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.
Kinect Fusion enables developers to create accurate 3-D renderings in real time.
The updated SDK also includes an enhanced developer toolkit and additional developer resources, including:
Seeing is believing
We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!
As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.
The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.
Bob Heddle, Director, Kinect for Windows
Last week, I had the privilege of giving attendees at the Microsoft event, BUILD 2012, a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.
Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.
In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.
Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.
Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single viewpoint. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
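The averaging idea can be sketched in a few lines. This greatly simplified Python illustration accumulates a per-pixel running average over many depth frames to reduce noise; the real Kinect Fusion pipeline additionally tracks the sensor’s pose and fuses frames into a volumetric model, which is omitted here.

```python
# Greatly simplified sketch of the averaging behind Kinect Fusion:
# average each pixel over many depth frames to beat per-frame noise.
# Real Fusion also estimates camera pose and fuses into a 3-D volume.

def fuse_frames(frames):
    """frames: list of equally sized depth frames (flat lists of mm
    values, 0 meaning 'no reading'). Returns per-pixel mean of valid
    readings, 0.0 where a pixel was never observed."""
    sums = [0.0] * len(frames[0])
    counts = [0] * len(frames[0])
    for frame in frames:
        for i, d in enumerate(frame):
            if d > 0:  # skip invalid readings
                sums[i] += d
                counts[i] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

Averaging three noisy readings of 1000, 1010, and 990 mm yields exactly 1000 mm, and pixels that were unreadable in some frames are simply filled in from the frames where they were visible, which is how moving the sensor recovers surfaces no single viewpoint can see.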
We look forward to seeing how our developer community and business partners will use the tool.
Chris White, Senior Program Manager, Kinect for Windows
To commemorate the one-year anniversary of Kinect and the Kinect Effect, I sent an email to my team earlier this week. I’d like to quote for you what I said to them, “It all started with a revolutionary sensor and amazing software that turned voice and movement into magic. With that magical combination, last year the Interactive Entertainment Business at Microsoft showed the world how to re-imagine gaming. This year, we’re showing the world how to re-imagine entertainment. Next year, with Kinect for Windows, we will help the world re-imagine everything else.”
To mark the milestone, the Kinect for Windows team is celebrating with our own milestones: we’re starting up this blog, launching the official Kinect for Windows website, and releasing beta 2 of the Kinect for Windows SDK. (And, yes, we will celebrate the anniversary more this evening – it’s been an amazing journey these past months!)
I know many of you are eagerly awaiting the Kinect for Windows commercial program coming in early 2012. My team is working hard to deliver a great product and I’m confident that it will be worth the wait.
We’ve already seen strong enthusiasm for Kinect among developers who have done amazing things with it in countless different ways, from education to healthcare, gaming to art installations, manufacturing to retail.
Currently, we have more than 200 companies taking part in our pilot program. They are telling us how Kinect for Windows will help them transform their products, their processes, their brands, and their businesses. Putting the power of Kinect + Windows into the hands of business leaders and technical visionaries will give them the tools they need to develop novel solutions for everything from training employees to visualizing data, from configuring a car to managing an assembly line.
The updated software development kit that we released today includes some great new features that bring us closer to realizing this vision, including faster skeletal tracking, improved accuracy in skeletal tracking and joint recognition, and the ability to plug and unplug your Kinect without losing work.
Every day, I come to work and learn about another amazing application that a partner or other developer is doing with Kinect for Windows. I look forward to next year, when the potential goes exponential and everyone’s ideas, including yours, are part of that equation.
If you haven’t done so already, download the SDK and re-imagine the world with us.
--Craig Eisler, General Manager, Kinect for Windows
I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.
Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.
The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:
Extended sensor data access
Access to all this data means new experiences are possible: Whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. Building solutions that work in low-light settings becomes a reality with IR stream exposure, such as in theaters and light-controlled museums. And developers can tailor applications to work in different environments with the numerous color camera settings, which enhance an application’s ability to work perfectly for end users.
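As one concrete example of working with the raw sensor data, here is a hedged Python sketch of unpacking a raw 16-bit Kinect v1 depth sample: the low 3 bits carry a player index and the upper bits the depth in millimeters, and with extended-range access depth values beyond 4 meters are returned rather than discarded. Treat the constant names and this framing as illustrative assumptions rather than actual SDK symbols.

```python
# Sketch of unpacking a packed 16-bit Kinect v1 depth sample.
# Low 3 bits: player index; remaining bits: depth in millimeters.
# Constants are illustrative stand-ins, not SDK symbols.

PLAYER_INDEX_BITS = 3
PLAYER_INDEX_MASK = 0x0007

def unpack_depth(raw):
    """Split a packed 16-bit depth sample into (depth_mm, player_index)."""
    depth_mm = raw >> PLAYER_INDEX_BITS
    player = raw & PLAYER_INDEX_MASK
    return depth_mm, player
```

For instance, a sample packed from a depth of 5500 mm and player index 2 unpacks back to (5500, 2) — a reading well beyond the old 4-meter cutoff, which is the kind of extended-range data the monitoring scenarios above rely on.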
One of the newly released samples, Basic Interactions – WPF, demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.
Improved developer tools
We are committed to continuing to make it easier and easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices that we described in the HIG and can be easily repurposed.
Greater support for operating systems
Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.
This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.
As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.
Revealed in November as a future addition to the Kinect for Windows SDK, Kinect Fusion made a big impression at the annual TechFest event hosted by Microsoft Research this week in Redmond, Washington.
Kinect Fusion pulls depth data that is generated by the Kinect for Windows sensor and, from the sequence of frames, constructs a highly detailed 3-D map of objects or environments. The tool averages readings over hundreds or even thousands of frames to create a rich level of detail.
Kinect Fusion, shown during TechFest 2013, enables high-quality scanning and reconstruction of 3-D models using just a handheld Kinect for Windows sensor and the Kinect for Windows SDK.
"The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly," said Shahram Izadi, senior researcher at Microsoft Research Cambridge. "Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them."
When scanning smaller objects, you also have the option to simply move the object instead of the sensor.
The Cambridge researchers and Kinect for Windows team collaborated closely on Kinect Fusion to construct a tool that can enable businesses and developers to devise new types of applications.
"This has been a wonderful example of collaboration between Microsoft Research and our product group," said Kinect for Windows Senior Program Manager Chris White. "We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting edge research, even beyond what was shown in the original Kinect Fusion paper."
"This kind of collaboration is one of the unique strengths of Microsoft, where we can bring together world-class researchers and world-class engineers to deliver real innovation," White added. "Kinect Fusion opens up a wide range of development possibilities—everything from gaming and augmented reality to industrial design. We're really excited to be able to include it in a future release of the Kinect for Windows SDK."
Kinect for Windows team
Today at Microsoft BUILD 2014, Microsoft made it official: the Kinect for Windows v2 sensor and SDK are coming this summer (northern hemisphere). With it, developers will be able to start creating Windows Store apps with Kinect for the first time. The ability to build such apps has been a frequent request from the developer community. We are delighted that it’s now on the immediate horizon—with the ability for developers to start developing this summer and to commercially deploy their solutions and make their apps available to Windows Store customers later this summer.
The ability to create Windows Store apps with Kinect for Windows not only fulfills a dream of our developer community, it also marks an important step forward in Microsoft’s vision of providing a unified development platform across Windows devices, from phones to tablets to laptops and beyond. Moreover, access to the Windows Store opens a whole new marketplace for business and consumer experiences created with Kinect for Windows.
The Kinect for Windows v2 has been re-engineered with major enhancements in color fidelity, video definition, field of view, depth perception, and skeletal tracking. In other words, the v2 sensor offers greater overall precision, improved responsiveness, and intuitive capabilities that will accelerate your development of voice and gesture experiences.
Specifically, the Kinect for Windows v2 includes 1080p HD video, which allows for crisp, high-quality augmented scenarios; a wider field of view, which means that users can stand closer to the sensor—making it possible to use the sensor in smaller rooms; improved skeletal tracking, which opens up even better scenarios for health and fitness apps and educational solutions; and new active infrared detection, which provides better facial tracking and gesture detection, even in low-light situations.
The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:
Developers who have been part of the Kinect for Windows v2 Developer Preview program praise the new sensor’s capabilities, which take natural, human computing to the next level. We are awed and humbled by what they’ve already been able to create.
Technologists from a few participating companies are on hand at BUILD, showing off the apps they have created by using the Kinect for Windows v2. See what two of them, Freak’n Genius and Reflexion Health, have already been able to achieve, and learn more about these companies.
The v2 sensor and SDK dramatically enhance the world of gesture and voice control that were pioneered in the original Kinect for Windows, opening up new ways for developers to create applications that transform how businesses and consumers interact with computers. If you’re using the original Kinect for Windows to develop natural voice- and gesture-based solutions, you know how intuitive and powerful this interaction paradigm can be. And if you haven’t yet explored the possibilities of building natural applications, what are you waiting for? Join us as we continue to make technology easier to use and more intuitive for everyone.
We are stoked to announce the immediate availability of our latest SDK and Developer Toolkit (v1.7)! It includes Kinect Interactions (grip and press), Kinect Fusion, new samples for MATLAB and OpenCV, and more.
Download the new hotness here.
Our product blog has all the details on what we believe is our biggest release since 1.0.