We’re thrilled at the sales we’ve seen for Kinect – 18 million sold in the past year – and we were honored to receive a Guinness World Record for the fastest-selling consumer electronics device ever. As consumers, we may take devices like Kinect for granted, but in fact electronic devices are the fruit of a great deal of behind-the-scenes ingenuity and experimentation. Kinect is a shining example of this. Instead of mimicking the handheld motion-sensing controllers already on the market, Microsoft shattered the existing controller paradigm by inventing a new natural user interface system that enables advanced human tracking, gesture recognition, voice control and more. Our answer to the “wand” controller was no controller at all, or as we say, “YOU are the controller.”
Getting there wasn’t easy. Without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and the deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. And as amazing a piece of hardware as Kinect is, it is much more than that. At the heart of the Kinect experience lies sophisticated software that meaningfully deciphers the images and gestures captured by the 3D sensor as well as the voice commands captured by the microphone array from someone much farther away than someone using a headset or a phone. More importantly, Kinect software can understand what each user means by a particular gesture or command across a wide range of possible shapes, sizes, and actions of real people.
The incredible amount of innovation on Kinect for Xbox 360 this past year shows the potential for Kinect as a platform for developers and businesses to build new and innovative offerings. Along with many others, we have only begun to explore the potential of this amazing technology. This proliferation of creative and imaginative new ideas for Kinect, which we call the Kinect Effect, will expand even further with our commercial release of Kinect for Windows.
Today, we are announcing that the new Kinect for Windows hardware and accompanying software will be available on February 1st, 2012 in 12 countries (United States, Australia, Canada, France, Germany, Ireland, Italy, Japan, Mexico, New Zealand, Spain, United Kingdom), at a suggested retail price of US $249. Kinect for Windows hardware will be available, in limited quantities at first, through a variety of resellers and distributors. The price includes a one-year warranty, access to ongoing software updates for both speech and human tracking, and our continued investment in Kinect for Windows-based software advancements. Later this year, we will offer special academic pricing (planned at US $149) for Qualified Educational Users.
We love the innovation we have seen built using Kinect for Xbox 360 – this has been a source of inspiration and delight for us and compelled us to create a team dedicated to serving this opportunity. We are proud to bring technology priced in the tens of thousands of dollars just a few years ago to the mainstream at extremely low consumer prices. And although Kinect for Windows is still value-priced for the technology, some will ask us why it isn’t the same price as Kinect for Xbox.
The ability to sell Kinect for Xbox 360 at its current price point is in large part subsidized by consumers buying a number of Kinect games, subscribing to Xbox LIVE, and making other transactions associated with the Xbox 360 ecosystem. In addition, the Kinect for Xbox 360 was built for and tested with the Xbox 360 console only, which is why it is not licensed for general commercial use, supported or under warranty when used on any other platform.
With Kinect for Windows, we are investing in creating a platform that is optimized for scenarios beyond the living room, and delivering new software features on an ongoing basis, starting with “near mode” (see my earlier blog post for more about this). In addition to support for Windows 7 and the Windows 8 developer preview (desktop apps only), Kinect for Windows will also support gesture and voice on Windows Embedded-based devices and will enhance how data is captured and accessed within intelligent systems across manufacturing, retail and many more industries. We are building the Kinect for Windows platform in a way that will allow other companies to integrate Kinect into their offerings and we have invested in an approach that allows them to develop in ways that are dependable and scalable.
We have chosen a hardware-only business model for Kinect for Windows, which means that we will not be charging for the SDK or the runtime; these will be available free to developers and end-users respectively. As an independent developer, IT manager, systems integrator, or ISV, you can innovate with confidence knowing that you will not pay license fees for the Kinect for Windows software or the ongoing software updates, and the Kinect for Windows hardware you and your customers use is supported by Microsoft.
Although we encourage all developers to understand and take advantage of the additional features and updates available with the new Kinect for Windows hardware and accompanying software, those developers using our SDK and the Kinect for Xbox 360 hardware may continue to use these in their development activities if they wish. However, non-commercial deployments using Kinect for Xbox 360 that were allowed using the beta SDK are not permitted with the newly released software. Non-commercial deployments using the new runtime and SDK will require the fully tested and supported Kinect for Windows hardware and software platform, just as commercial deployments do. Existing non-commercial deployments using our beta SDK may continue using the beta and the Kinect for Xbox 360 hardware; to accommodate this, we are extending the beta license for three more years, to June 16, 2016.
We expect that as Kinect for Windows hardware becomes readily available, developers will shift their development efforts to Kinect for Windows hardware in conjunction with the latest SDK and runtime. The combination of Kinect for Windows hardware and software creates a superior development platform for Windows and will yield a higher quality, better performing experience for end users.
We are excited for the new possibilities that Kinect will enable on the Windows platform, and to see how businesses and developers reimagine their processes and their products, and the many different ways each Kinect could enrich lives and make using technology more natural for everyone.
Craig Eisler
General Manager, Kinect for Windows
By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.
Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.
Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.
Some of the key capabilities of the new Kinect sensor include:
The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.
The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This opens up a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.
I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.
A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.
Bob Heddle
Director, Kinect for Windows
Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images
Since announcing a few weeks ago that the Kinect for Windows commercial program will launch in early 2012, we’ve been asked whether there will also be new Kinect hardware especially for Windows. The answer is yes; building on the existing Kinect for Xbox 360 device, we have optimized certain hardware components and made firmware adjustments which better enable PC-centric scenarios. Coupled with the numerous upgrades and improvements our team is making to the Software Development Kit (SDK) and runtime, the new hardware delivers features and functionality that Windows developers and Microsoft customers have been asking for.
Simple changes include shortening the USB cable to ensure reliability across a broad range of computers and including a small dongle to improve coexistence with other USB peripherals. Of particular interest to developers will be the new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters. “Near Mode” will enable a whole new class of “close up” applications, beyond the living room scenarios for Kinect for Xbox 360. This is one of the most requested features from the many developers and companies participating in our Kinect for Windows pilot program and folks commenting on our forums, and we’re pleased to deliver this, and more, at launch.
Another thing we’ve heard from our pilot customers is that companies exploring commercial uses of Kinect want to operate with the assurance of support and future innovation from Microsoft. As part of Microsoft’s deep commitment to NUI, we designed the Kinect for Windows commercial program to give licensed customers access to ongoing updates in both speech and human tracking (where Microsoft has been investing for years), in addition to providing fully supported Kinect hardware for Windows. We’ve been captivated by the countless creative ways companies worldwide envision how their businesses and industries can be revolutionized with Kinect, and are proud to be helping those companies to explore the profound implications NUI has for the future.
Microsoft also has just launched a new initiative, the Kinect Accelerator incubation project run by Microsoft BizSpark. I will be serving as a Mentor for this program, along with a number of other folks from Microsoft. BizSpark helps software startups through access to Microsoft software development tools, connection to key industry players (including investors) and by providing marketing visibility. The Kinect Accelerator will give 10 tech-oriented companies using Kinect (on either Windows or Xbox 360) an investment of $20,000 each, plus a number of other great perks. Applications are being accepted now through January 25th, 2012. At the end of the program, each company will have an opportunity to present at an Investor Demo Day to angel investors, venture capitalists, Microsoft executives (including me), media, and industry influencers. I can’t wait to see what they (and maybe you?) come up with!
I am pleased to announce that we released the Kinect for Windows software development kit (SDK) 1.8 today. This is the fourth update to the SDK since we first released it commercially one and a half years ago. Since then, we’ve seen numerous companies using Kinect for Windows worldwide, and more than 700,000 downloads of our SDK.
We build each version of the SDK with our customers in mind—listening to what the developer community and business leaders tell us they want and traveling around the globe to see what these dedicated teams do, how they do it, and what they most need out of our software development kit.
The new background removal API is useful for advertising, augmented reality gaming, training and simulation, and more.
Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:
We also have updated our Human Interface Guidelines (HIG) with guidance to complement the new Adaptive UI sample, including the following:
Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.
Design UI where users can accomplish all tasks for each goal within a single range.
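The single-range principle above can be pictured as picking one UI variant from the user's distance to the sensor, so that every task for a goal is reachable without the user moving. This is a rough sketch only; the tier names and distance thresholds below are hypothetical, not values from the Human Interface Guidelines:

```python
def ui_tier(distance_m):
    """Pick a hypothetical UI variant from the user's distance (meters).

    Closer users can read smaller text and hit smaller targets; distant
    users need larger targets so every task stays achievable within a
    single range.
    """
    if distance_m < 0.4:
        return "out-of-range"  # too close for the depth camera to see
    elif distance_m < 1.5:
        return "near"          # small text, dense controls
    elif distance_m < 3.0:
        return "mid"           # larger buttons, simplified layout
    else:
        return "far"           # oversized targets, voice-first

print(ui_tier(1.0))  # near
print(ui_tier(2.2))  # mid
```

An adaptive transition between tiers would then reveal or hide detail while keeping the same anchor points on screen.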
My team and I believe that communicating naturally with computers means being able to gesture and speak, just like you do when communicating with people. We believe this is important to the evolution of computing, and are committed to helping this future come faster by giving our customers the tools they need to build truly innovative solutions. There are many exciting applications being created with Kinect for Windows, and we hope these new features will make those applications better and easier to build. Keep up the great work, and keep us posted!
Bob Heddle
Director, Kinect for Windows
On January 9th, Steve Ballmer announced at CES that we would be shipping Kinect for Windows on February 1st. I am very pleased to report that today version 1.0 of our SDK and runtime were made available for download, and distribution partners in our twelve launch countries are starting to ship Kinect for Windows hardware, enabling companies to start to deploy their solutions. The suggested retail price is $249, and later this year, we will offer special academic pricing of $149 for Qualified Educational Users.
In the three months since we released Beta 2, we have made many improvements to our SDK and runtime, including:
More details can be found here.
As I mentioned in an earlier blog post, without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. Shipping Kinect for Windows was another cross-Microsoft effort: not only did the hardware and software teams work closely together to create an integrated solution, but our support, manufacturing, supply chain, reverse logistics, and account teams have all been working hard to prepare for today’s launch. In addition, our research, speech, and Xbox NUI teams have contributed to making Kinect for Windows a better product. Microsoft’s ability to make these kinds of deep investments makes Kinect for Windows a product that companies can deploy with confidence, knowing you have our support and our ongoing commitment to make Kinect for Windows the best it can be.
Looking towards the future, we are planning on releasing updates to our SDK and runtime 2-3 times per year – in fact, the team is already hard at work on the next release. We are continuing to invest in programs like our Testing and Adoption Program and the Kinect Accelerator, and will work to create new programs in the future to help support our developer and partner ecosystem. We will also continue to listen to our developer community and business customers for the kinds of features and capabilities they need, as they re-imagine the future of computing using the power of Kinect.
As we continue the march toward the upcoming launch of Kinect for Windows v2, we’re excited to share the hardware’s final look.
The sensor closely resembles the Kinect for Xbox One, except that it says “Kinect” on the top panel, and the Xbox Nexus—the stylized green “x”—has been changed to a simple, more understated power indicator:
Kinect for Windows v2 sensor
Hub and power supply
The sensor requires a couple of other components to work: the hub and the power supply. Tying everything together is the hub (top item pictured below), which accepts three connections: the sensor, USB 3.0 output to PC, and power. The power supply (bottom item pictured below) does just what its name implies: it supplies all the power the sensor requires to operate. The power cables will vary by country or region, but the power supply itself supports voltages from 100–240 volts.
Kinect for Windows v2 hub (top) and power supply (bottom)
As this first look at the Kinect for Windows v2 hardware indicates, we're getting closer and closer to launch. So stay tuned for more updates on the next generation of Kinect for Windows.
Kinect for Windows Team
There has been a lot of speculation on what near mode is since we announced it. As I mentioned in the original post, the Kinect for Windows device has new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.
The lenses on the Kinect for Windows sensor are the same as the Kinect for Xbox 360 sensor, so near mode does not change the field of view as some people have been speculating. As some have observed, the Kinect for Xbox 360 sensor was already technically capable of seeing down to 50 centimeters – but with the caveat “as long as the light is right”.
That caveat turned out to be a pretty big one. The Kinect for Windows team spent many months developing a way to overcome it so the sensor would properly detect close-up objects in more general lighting conditions. This resulted not only in the need for new firmware, but also in changes to the way the devices are tested on the manufacturing line. In addition to allowing the sensor to see objects as close as 40 centimeters, these changes make the sensor less sensitive to more distant objects: when the sensor is in near mode, it has full accuracy and precision for objects up to 2 meters away, with graceful degradation out to 3 meters. Here is a handy chart one of our engineers made that shows the types of depth values returned by the runtime:
In Beta 2, the runtime returned a depth value for objects 800–4000 millimeters from the sensor, and returned 0 regardless of whether the detected depth was unknown, too near, or too far. Our version 1.0 runtime returns depth values for objects in the cyan zone above; if an object falls in the purple, brown, or white zones, the runtime returns a distinct value indicating the appropriate zone.
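A rough sketch of how an application might interpret v1.0 depth readings, using the millimeter ranges described above (roughly 400–3000 mm in near mode, 800–4000 mm in default mode). The function and zone names here are my own, for illustration; the actual SDK exposes its own types and sentinel constants:

```python
TOO_NEAR, NORMAL, TOO_FAR, UNKNOWN = "too_near", "normal", "too_far", "unknown"

def classify_depth(mm, near_mode=True):
    """Classify a raw depth reading into the zones the v1.0 runtime reports.

    In Beta 2 everything outside the valid range collapsed to 0; the v1.0
    runtime instead distinguishes too-near, too-far, and unknown readings.
    """
    if mm is None or mm <= 0:
        return UNKNOWN
    lo, hi = (400, 3000) if near_mode else (800, 4000)
    if mm < lo:
        return TOO_NEAR
    if mm > hi:
        return TOO_FAR
    return NORMAL

print(classify_depth(500))                   # normal (near mode)
print(classify_depth(500, near_mode=False))  # too_near (default mode)
```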
Additionally, in version 1.0 of the runtime, near mode will have some skeletal support, although not full 20-joint skeletal tracking (ST). The below table outlines the differences between the default mode and near mode:
We believe that near mode, with its operational envelope of 40 centimeters to 3 meters, will enable many new classes of applications. While full 20-joint ST will not be supported in near mode with version 1.0 of the runtime, we will be working hard to support ST in near mode in the future!
I am pleased to announce that today we have released version 1.5 of the Kinect for Windows runtime and SDK. Additionally, Kinect for Windows hardware is now available in Hong Kong, South Korea, Singapore, and Taiwan. Starting next month, Kinect for Windows hardware will be available in 15 additional countries: Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, South Africa, Sweden, Switzerland and the United Arab Emirates. When this wave of expansion is complete, Kinect for Windows will be available in 31 countries around the world. Go to our Kinect for Windows website to find a reseller in your region.
We have added more capabilities to help developers build amazing applications, including:
We have continued to expand and improve our skeletal tracking capabilities in this release:
We have made performance and data quality enhancements, which improve the experience of all Kinect for Windows applications using the RGB camera or needing RGB and depth data to be mapped together (“green screen” applications are a common example):
New capabilities to enable avatar animation scenarios, making it easier for developers to build applications that control a 3D avatar, such as Kinect Sports.
Finally, as I mentioned in my Sneak Peek Blog post, we released four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we released new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.
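The “green screen” mapping mentioned above boils down to using the depth image as a mask: keep the color pixel where the depth says something is within the foreground range, otherwise show a background. A toy sketch of that compositing step, assuming the RGB and depth images have already been mapped into the same coordinate space (real applications would use the SDK's mapping support for that alignment, and the range values below are arbitrary):

```python
def green_screen(color, depth, background, near_mm=800, far_mm=2500):
    """Composite foreground over background using a depth mask.

    color, background: 2-D grids of pixel values; depth: matching grid
    of millimeter readings. A pixel is foreground when its depth falls
    in [near_mm, far_mm].
    """
    out = []
    for y, row in enumerate(depth):
        out.append([
            color[y][x] if near_mm <= d <= far_mm else background[y][x]
            for x, d in enumerate(row)
        ])
    return out

depth = [[500, 1200], [3000, 900]]   # mm readings for a 2x2 image
color = [["C", "C"], ["C", "C"]]     # foreground pixels
bg    = [["B", "B"], ["B", "B"]]     # background pixels
print(green_screen(color, depth, bg))  # [['B', 'C'], ['B', 'C']]
```

The quality of the result depends directly on how well the RGB and depth data are mapped together, which is exactly what the 1.5 enhancements improve.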
As we have worked with customers large and small over the past months, we’ve seen the value in having a fully integrated approach: the Kinect software and hardware are designed together; audio, video, and depth are all fully supported and integrated; our sensor, drivers, and software work together to provide world class echo cancellation; our approach to human tracking, which is designed in conjunction with the Kinect sensor, works across a broad range of people of all shapes, sizes, clothes, and hairstyles, etc. And because we design the hardware and software together, we are able to make changes that open up exciting new areas for innovation, like Near Mode.
Furthermore, because Kinect for Windows is from Microsoft, our support, distribution, and partner network are all at a global scale. For example, the Kinect for Windows hardware and software are tested together and supported as a unit in every country we are in (31 countries by June!), and we will continue to add countries over time. Microsoft’s developer tools are world class, and our SDK is built to fully integrate with Visual Studio. Especially important for our global business customers is Microsoft’s ability to connect them to partners and experts who can help them use Kinect for Windows to re-imagine their brands, their products, and their processes.
It is exciting for us to have built and shipped such a significantly enhanced version of the Kinect for Windows SDK less than 16 weeks after launch. But we are even more excited about our plans for the future – both in country expansion for the sensor, and in enhanced capabilities of our runtime and SDK. We believe the best is yet to come, and we can’t wait to see what developers will build with this!
Today, we began shipping thousands of Kinect for Windows v2 sensors to developers worldwide. And more sensors will leave the warehouse in coming weeks, as we work to fill orders as quickly as possible. Additionally, Microsoft publicly released a preview version of the Kinect for Windows SDK 2.0 this morning—meaning that developers everywhere can now take advantage of Kinect’s latest enhancements and improved capabilities. The SDK is free of cost and there are no fees for runtime licenses of commercial applications developed with the SDK.
The new sensor can track as many as six complete skeletons and 25 joints per person.
We will be releasing a final version of the SDK 2.0 in a few months, but with so many of you eagerly awaiting access, we wanted to make the SDK available as early as possible. For those of you who were unable to take part in our developer preview program, now you can roll up your sleeves and start developing. And for anyone else out there who has been waiting—well, the wait is over!
The new sensor’s key features include:
With the ability to track new joints for hand tips and thumbs—as well as improved understanding of the soft connective tissue and body positioning—you get more anatomically correct positions for crisp interactions.
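One way to picture what the hand-tip and thumb joints enable: an application can estimate whether a hand is open or closed purely from joint distances. This is an illustrative sketch, not the SDK's API; the joint names, coordinate convention, and threshold are all assumptions:

```python
import math

def hand_open(joints, threshold_m=0.08):
    """Guess hand state from how far the hand tip sits from the wrist.

    joints: dict of joint name -> (x, y, z) position in meters. An
    extended hand puts the tip farther from the wrist than a closed
    fist does.
    """
    return math.dist(joints["hand_tip"], joints["wrist"]) > threshold_m

open_hand   = {"wrist": (0, 0, 0), "hand_tip": (0, 0.15, 0)}
closed_fist = {"wrist": (0, 0, 0), "hand_tip": (0, 0.05, 0)}
print(hand_open(open_hand))    # True
print(hand_open(closed_fist))  # False
```

With 25 joints per body and up to six bodies tracked at once, the same distance-based reasoning scales to richer multi-user gestures.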
In addition to the new sensor’s key features, the Kinect for Windows SDK 2.0 includes:
When the final version of the SDK is available, people will be able to start submitting their apps to the Windows Store, and companies will be able to make their v2 solutions available commercially. We look forward to seeing what everyone does with the new NUI.
The new SDK 2.0 public preview includes Unity support, giving developers a faster, more cost-efficient path to high-quality cross-platform development and enabling them to build their apps for the Windows Store using tools they already know.
We’ve already shown you what several partners are working on, including Reflexion Health and Freak n’ Genius. Most recently, Walt Disney Studios Motion Pictures has developed an interactive experience to help promote its upcoming movie, Planes 2: Fire & Rescue. One of seven experience kiosks will debut in London at the end of the week in time for school holidays. Disney is confident it will receive an enthusiastic reception from users of all ages, creating an engaging experience associated with the Disney brand and, of course, sparking interest in the movie, which opens nationwide on August 8. Read more.
We will showcase more partner solutions here in coming months, so stay tuned. In the meantime, order your new sensor, download the SDK 2.0 public preview, and start developing your NUI apps. And please join our Microsoft Virtual Academy to learn from our experts and jump start your development.
The Kinect for Windows Team
Last week, I had the privilege of giving attendees at the Microsoft event, BUILD 2012, a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.
Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.
In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.
Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.
Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single view point. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
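The noise-reduction part of that pipeline can be sketched in a few lines: keep a per-pixel average of the depth readings across frames, so random sensor noise cancels out. Real Kinect Fusion does far more (camera pose tracking and volumetric integration among it), but the averaging principle looks roughly like this:

```python
def fuse_frames(frames):
    """Average a sequence of depth frames pixel-by-pixel.

    frames: list of same-sized 2-D grids of depth readings (millimeters).
    Zero readings (invalid pixels) are skipped so they don't drag the
    mean toward zero.
    """
    h, w = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            valid = [f[y][x] for f in frames if f[y][x] > 0]
            fused[y][x] = sum(valid) / len(valid) if valid else 0.0
    return fused

# Three noisy readings of the same 1x2 scene; pixel (0,1) drops out once.
frames = [[[1000, 2000]], [[1010, 0]], [[990, 2010]]]
print(fuse_frames(frames))  # [[1000.0, 2005.0]]
```

Averaging hundreds or thousands of frames this way is what lets the reconstruction achieve far more detail than any single frame contains.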
We look forward to seeing how our developer community and business partners will use the tool.
Chris White
Senior Program Manager, Kinect for Windows