Since our announcement of Kinect for Windows version 1.5 in “What’s Ahead: A Sneak Peek,” a few questions have come up that I want to answer.
Some folks have thought that 1.5 includes new hardware. Version 1.5 is our new software release, which is coming out in the same timeframe in which we launch the current Kinect for Windows hardware in 19 additional countries. We will upgrade our software at a faster rate than we refresh our hardware.
We have built version 1.5 of our software with 1.0 compatibility at top of mind. Applications built using 1.0 will work on the same machine as applications built using 1.5 – something we plan to maintain going forward, ensuring that solutions built using older runtimes can always run side by side with solutions using newer runtimes. Furthermore, we have maintained API compatibility for developers – applications that are currently being built using the 1.0 SDK can be recompiled using the 1.5 SDK without any changes required. No one has to wait for 1.5 to get a Kinect for Windows sensor or to start coding using the current SDK!
I love the enthusiasm for the 1.5 SDK and runtime, the new speech languages, and for the new countries we’re launching in – we can’t wait to deliver it to you.
Craig Eisler
General Manager, Kinect for Windows
The momentum continues for Kinect for Windows. I am pleased to announce that we will be launching Kinect for Windows in nineteen more countries in the coming months. We will have availability in Hong Kong, South Korea, and Taiwan in late May. In June, Kinect for Windows will be available in Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, Singapore, South Africa, Sweden, Switzerland and the United Arab Emirates.
We are also hard at work on our 1.5 release, which will be available at the end of May. Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, play back, and debug clips of users engaging with their applications. Also coming is what we call “seated” or “10-joint” skeletal tracking, which provides the capability to track the head, neck, and arms of either a seated or standing user. What is extra exciting to me about this functionality is that it will work in both default and near mode!
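As a rough sketch of what “10-joint” tracking covers, the upper-body joints below are an illustrative enumeration based on the head-neck-and-arms description above; the exact joint names come from the SDK’s skeleton model, so treat this listing as an assumption rather than the official API surface.

```python
# Illustrative enumeration of the ten upper-body joints that "seated"
# (10-joint) skeletal tracking is described as covering. The names here
# are assumptions modeled on the SDK's skeleton joints, not SDK constants.
SEATED_MODE_JOINTS = [
    "Head", "ShoulderCenter",          # head and neck area
    "ShoulderLeft", "ShoulderRight",   # shoulders
    "ElbowLeft", "ElbowRight",         # arms
    "WristLeft", "WristRight",
    "HandLeft", "HandRight",
]

# Seated mode tracks ten joints, versus twenty in full skeletal tracking.
assert len(SEATED_MODE_JOINTS) == 10
print(SEATED_MODE_JOINTS[0])  # Head
```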
Also included in our 1.5 release will be four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we will be releasing new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain and Spanish/Mexico.
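To make the region-pack idea concrete, here is a minimal sketch of how an application might pick one of the packs listed above from a culture code, with a fallback to the base language. The `select_language_pack` helper and the culture-code keys are illustrative assumptions, not part of the Kinect for Windows SDK.

```python
# Hypothetical sketch: choosing one of the 1.5 region-specific speech
# packs from a BCP-47-style culture code. The helper and mapping are
# illustrative; the SDK exposes its own language-pack mechanism.

LANGUAGE_PACKS = {
    "en-GB": "English/Great Britain",
    "en-IE": "English/Ireland",
    "en-AU": "English/Australia",
    "en-NZ": "English/New Zealand",
    "en-CA": "English/Canada",
    "fr-FR": "French/France",
    "fr-CA": "French/Canada",
    "it-IT": "Italian/Italy",
    "ja-JP": "Japanese/Japan",
    "es-ES": "Spanish/Spain",
    "es-MX": "Spanish/Mexico",
}

def select_language_pack(culture: str) -> str:
    """Return the pack for an exact culture code, falling back to the
    first pack for the same base language before giving up."""
    if culture in LANGUAGE_PACKS:
        return LANGUAGE_PACKS[culture]
    base = culture.split("-")[0]
    for code, pack in LANGUAGE_PACKS.items():
        if code.startswith(base + "-"):
            return pack
    raise KeyError(f"No speech pack available for {culture}")

print(select_language_pack("fr-CA"))  # French/Canada
```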
In a future blog post, I’ll discuss the features and capabilities we are releasing in more detail. We are excited by the enthusiasm for Kinect for Windows, and will continue to work on bringing Kinect for Windows to more countries, supporting more languages with our speech engine, and continuing to evolve our human tracking capabilities.
Craig Eisler
General Manager, Kinect for Windows
The writer Mark Twain once said “We are alike, on the inside.” On the outside, however, few people are the same. While two people might be the same height and wear the same size, the way their clothing fits their bodies can vary dramatically. As a result, up to 40% of clothing purchased both online and in person is returned because of poor fit.
Finding the perfect fit so clothing conforms to a person’s unique body shape is at the heart of the Bodymetrics Pod. Developed by Bodymetrics, a London-based pioneer in 3D body-mapping, the Bodymetrics Pod was introduced to American shoppers for the first time today during Women’s Denim Days at Bloomingdale’s in Century City, Los Angeles. This is the first time Kinect for Windows has been used commercially in the United States for body mapping in a retail clothing environment.
Bloomingdale’s, a leader in retail innovation, has one of the largest offerings in premium denim from fashion-forward brands like J Brand, 7 For All Mankind, Citizens of Humanity, AG, and Paige. The Bodymetrics service allows customers to get their body mapped and find jeans that fit and flatter their unique shape from the hundreds of different jeans styles that Bloomingdale’s stocks.
During Bloomingdale’s Denim Days, March 15 – 18, customers will be able to get their body mapped, and also become a Bodymetrics member. This free service enables customers to access an online account and order jeans based on their body shape.
“We’re very excited about bringing Bodymetrics to US shoppers,” explains Suran Goonatilake, CEO of Bodymetrics. “Once we 3D map a customer’s body, we classify their shape into three categories - emerald, sapphire and ruby. A Bodymetrics Stylist will then find jeans that exactly match the body shape of the customer from jean styles that Bloomingdale’s stocks.”
The process starts with a customer creating a Bodymetrics account. They are then directed to the Bodymetrics Pod, a secure, private space, where their body is scanned by 8 Kinect for Windows sensors arranged in a circle. Bodymetrics’ proprietary software produces a 3D map of the customer’s body, and then calculates the shape of the person, taking hundreds of measurements and contours into account. The body-mapping process takes less than 5 seconds.
Helping women shop for best-fitting jeans in department stores is just the start of what Bodymetrics envisions for their body-mapping technologies. The company is working on a solution that can be used at home. Individuals will be able to scan their body, and then go online to select, virtually try on, and purchase clothing that matches their body shape.
Goonatilake explains, “Body-mapping is in its infancy. We’re just starting to explore what’s possible in retail stores and at home. Stores are increasingly looking to provide experiences that entice shoppers into their stores, and then allow a seamless journey from stores to online. And we all want shopping experiences that are personalized to us – our size, shape and style.”
Even though people may not be identical on the outside, we desire clothing that fits well and complements our body shapes. The Kinect for Windows-enabled Bodymetrics Pod offers a retail-ready solution that makes the perfect fit beautifully simple.
Kinect for Windows Team
Students, teachers, researchers, and other educators have been quick to embrace Kinect’s natural user interface (NUI), which makes it possible to interact with computers using movement, speech, and gestures. In fact, some of the earliest Kinect for Windows applications to emerge were projects done by students, including several at last year’s Imagine Cup.
One project, from an Imagine Cup team in Italy, created an application for people with severe disabilities that enables them to communicate, learn, and play games on computers using a Kinect sensor instead of a traditional mouse or keyboard. Another innovative Imagine Cup project, done by university students in Russia, used the Kinect natural user interface to fold, rotate, and examine online origami models.
To encourage students, educators, and academic researchers to continue innovating with Kinect for Windows, special academic pricing on Kinect for Windows sensors is now available in the United States. The academic price is $149.99 through Microsoft Stores.
If you are an educator or faculty member at an accredited school, such as a university, community college, vocational school, or K-12 institution, you can purchase a Kinect for Windows sensor at this price.
Find out if you qualify, and then purchase online or visit a Microsoft store in your area.
Kinect for Windows team
This year’s Microsoft TechForum provided an opportunity for Craig Mundie, Microsoft Chief Research and Strategy Officer, to discuss the company’s vision for the future of technology as well as showcase two early examples of third-party Kinect for Windows applications in action.
Mundie was joined by Don Mattrick, President of the Microsoft Interactive Entertainment Business, and his Chief of Staff, Aaron Greenberg, who demonstrated both of the third-party Kinect for Windows applications, including the Pathfinder Kinect Experience. This application enables users to stand in front of a large monitor, and use movement, voice, and gestures to walk around the 2013 Nissan Pathfinder Concept, examining the exterior, bending down and inspecting the wheels, viewing the front and back, and then stepping inside to experience the upholstery, legroom, dashboard, and other details.
Nissan worked with IdentityMine and Critical Mass to create the Kinect-enabled virtual experience, which was initially shown at the Chicago Auto Show in early February. The application is continuing to be refined, taking advantage of the Kinect natural user interface to enable manufacturers to showcase their vehicles in virtual showrooms.
“Using motion, speech, and gestures, people will be able to get computers to do more for them,” explains Greenberg. “You can imagine this Pathfinder solution being applied in different ways in the future - at trade shows, online, or even at dealerships - where someone might be able to test drive a physical car, while also being able to visualize and experience different configurations of the car through its virtual twin, accessorizing it, changing the upholstery, et cetera.”
Also demonstrated at TechForum was a new kind of shopping cart experience, which was developed by mobile application studio Chaotic Moon. This application mounts a Kinect for Windows sensor on a shopping cart, enabling the cart to follow a shopper - stopping, turning, and moving where and when the shopper does.
Chaotic Moon has tested their solution at Whole Foods in Austin, Texas, but the application is an early experiment and no plans are in place for this application to be introduced in stores anytime soon. Conceivably, Kinect-enabled carts at grocery stores, shopping malls, or airports could make it easier for people to navigate and perform tasks hands free. “Imagine how an elderly shopper or a parent with a stroller might be assisted by something like this,” notes Greenberg.
“The Kinect natural user interface has the potential to revolutionize products and processes in the home, at work, and in public places, like retail stores,” continues Greenberg. “It’s exciting to see what is starting to emerge.”
On January 9th, Steve Ballmer announced at CES that we would be shipping Kinect for Windows on February 1st. I am very pleased to report that today version 1.0 of our SDK and runtime were made available for download, and distribution partners in our twelve launch countries are starting to ship Kinect for Windows hardware, enabling companies to start to deploy their solutions. The suggested retail price is $249, and later this year, we will offer special academic pricing of $149 for Qualified Educational Users.
In the three months since we released Beta 2, we have made many improvements to our SDK and runtime; more details can be found here.
As I mentioned in an earlier blog post, without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. Shipping Kinect for Windows was another cross-Microsoft effort: not only did the hardware and software teams work closely together to create an integrated solution, but our support, manufacturing, supply chain, reverse logistics, and account teams have all been working hard to prepare for today’s launch. As well, our research, speech, and Xbox NUI teams have contributed to making Kinect for Windows a better product. Microsoft’s ability to make these kinds of deep investments makes Kinect for Windows a product that companies can deploy with confidence, knowing you have our support and our ongoing commitment to make Kinect for Windows the best it can be.
Looking towards the future, we are planning on releasing updates to our SDK and runtime 2-3 times per year – in fact, the team is already hard at work on the next release. We are continuing to invest in programs like our Testing and Adoption Program and the Kinect Accelerator, and will work to create new programs in the future to help support our developer and partner ecosystem. We will also continue to listen to our developer community and business customers for the kinds of features and capabilities they need, as they re-imagine the future of computing using the power of Kinect.
There has been a lot of speculation on what near mode is since we announced it. As I mentioned in the original post, the Kinect for Windows device has new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.
The lenses on the Kinect for Windows sensor are the same as on the Kinect for Xbox 360 sensor, so near mode does not change the field of view, as some people have been speculating. As some have observed, the Kinect for Xbox 360 sensor was already technically capable of seeing down to 50 centimeters – but with the caveat “as long as the light is right.”
That caveat turned out to be a pretty big caveat. The Kinect for Windows team spent many months developing a way to overcome this so the sensor would properly detect close-up objects in more general lighting conditions. This resulted not only in new firmware but also in changes to the way the devices are tested on the manufacturing line. In addition to allowing the sensor to see objects as close as 40 centimeters, these changes make the sensor less sensitive to more distant objects: when the sensor is in near mode, it has full accuracy and precision for objects 2 meters away, with graceful degradation out to 3 meters. Here is a handy chart one of our engineers made that shows the types of depth values returned by the runtime:
In Beta 2, the runtime returned a depth value for objects 800-4000 millimeters from the sensor, and returned 0 whether the detected depth was unknown, too near, or too far. Our version 1.0 runtime will return depth values if an object is in the above cyan zone. If the object is in the purple, brown, or white zones, the runtime will return a distinct value indicating the appropriate zone.
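The zone behavior described above can be sketched roughly as follows. This is a minimal illustration assuming near mode’s 400-3000 mm envelope and default mode’s 800-4000 mm range; the zone names and the `classify_depth` helper are illustrative, and the real runtime reports its own distinct sentinel values rather than these strings.

```python
# A minimal sketch of the 1.0 runtime's depth-zone behavior, assuming
# near mode covers 400-3000 mm and default mode covers 800-4000 mm.
# Zone names and this helper are illustrative, not SDK constants.

NEAR_MODE_RANGE = (400, 3000)     # millimeters
DEFAULT_MODE_RANGE = (800, 4000)  # millimeters

def classify_depth(depth_mm, near_mode=False):
    """Map a raw reading to the zone the runtime would report."""
    if depth_mm is None:
        return "unknown"          # depth could not be determined
    lo, hi = NEAR_MODE_RANGE if near_mode else DEFAULT_MODE_RANGE
    if depth_mm < lo:
        return "too_near"         # distinct value, not 0 as in Beta 2
    if depth_mm > hi:
        return "too_far"
    return "normal"               # runtime returns the depth value itself

print(classify_depth(600, near_mode=True))   # normal
print(classify_depth(600, near_mode=False))  # too_near
```

Unlike Beta 2, where all three failure cases collapsed into a single 0, an application can now distinguish why a pixel has no usable depth.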
Additionally, in version 1.0 of the runtime, near mode will have some skeletal support, although not full 20-joint skeletal tracking (ST). The table below outlines the differences between default mode and near mode:
We believe that near mode, with its operational envelope of 40 centimeters to 3 meters, will enable many new classes of applications. While full 20-joint ST will not be supported in near mode with version 1.0 of the runtime, we will be working hard to support ST in near mode in the future!
When we launched Kinect for Xbox 360 on November 4th, 2010, something amazing happened: talented Open Source hackers and enthusiasts around the world took the Kinect and let their imaginations run wild. We didn’t know what we didn’t know about Kinect on Windows when we shipped Kinect for Xbox 360, and these early visionaries showed the world what was possible. What we saw was so compelling that we created the Kinect for Windows commercial program.
Our commercial program is designed to allow our partners—companies like Toyota, Mattel, American Express, Telefonica, and United Health Group—to deploy solutions to their customers and employees. It is also designed to allow early adopters and newcomers alike to take their ideas and release them to the world on Windows, with hardware that’s supported by Microsoft. At the same time, we wanted to let our early adopters keep working on the hardware they’d previously purchased. That is why our SDK continues to support the Kinect for Xbox 360 as a development device.
As I reflect back on the past eleven months since Microsoft announced we were bringing Kinect to Windows, one thing is clear: The efforts of these talented Open Source hackers and enthusiasts helped inspire us to develop Kinect for Windows faster. And their continued ambition and drive will help the world realize the benefits of Kinect for Windows even faster still. From all of us on the Kinect for Windows team: thank you.
We’re thrilled at the sales we’ve seen for Kinect - 18 million units sold in the past year - and we were honored to receive a Guinness World Record for the fastest-selling consumer electronics device ever. As consumers, we may take devices like Kinect for granted, but in fact electronic devices are the fruit of a great deal of behind-the-scenes ingenuity and experimentation. Kinect is a shining example of this. Instead of mimicking the handheld motion-sensing controllers already on the market, Microsoft shattered the existing controller paradigm by inventing a new natural user interface system that enables advanced human tracking, gesture recognition, voice control, and more. Our answer to the “wand” controller was no controller at all, or as we say, “YOU are the controller.”
Getting there wasn’t easy. Without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and the deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. And as amazing a piece of hardware as Kinect is, it is much more than that. At the heart of the Kinect experience lies sophisticated software that meaningfully deciphers the images and gestures captured by the 3D sensor as well as the voice commands captured by the microphone array from someone much further away than someone using a headset or a phone. More importantly, Kinect software can understand what each user means by a particular gesture or command across a wide range of possible shapes, sizes, and actions of real people.
The incredible amount of innovation on Kinect for Xbox 360 this past year shows the potential for Kinect as a platform for developers and businesses to build new and innovative offerings. Along with many others, we have only begun to explore the potential of this amazing technology. This proliferation of creative and imaginative new ideas for Kinect, which we call the Kinect Effect, will expand even further with our commercial release of Kinect for Windows.
Today, we are announcing that the new Kinect for Windows hardware and accompanying software will be available on February 1st, 2012 in 12 countries (United States, Australia, Canada, France, Germany, Ireland, Italy, Japan, Mexico, New Zealand, Spain, United Kingdom), at a suggested retail price of US $249. Kinect for Windows hardware will be available, in limited quantities at first, through a variety of resellers and distributors. The price includes a one-year warranty, access to ongoing software updates for both speech and human tracking, and our continued investment in Kinect for Windows-based software advancements. Later this year, we will offer special academic pricing (planned at US $149) for Qualified Educational Users.
We love the innovation we have seen built using Kinect for Xbox 360 – this has been a source of inspiration and delight for us and compelled us to create a team dedicated to serving this opportunity. We are proud to bring technology priced in the tens of thousands of dollars just a few years ago to the mainstream at extremely low consumer prices. And although Kinect for Windows is still value-priced for the technology, some will ask us why it isn’t the same price as Kinect for Xbox.
The ability to sell Kinect for Xbox 360 at its current price point is in large part subsidized by consumers buying a number of Kinect games, subscribing to Xbox LIVE, and making other transactions associated with the Xbox 360 ecosystem. In addition, the Kinect for Xbox 360 was built for and tested with the Xbox 360 console only, which is why it is not licensed for general commercial use, supported or under warranty when used on any other platform.
With Kinect for Windows, we are investing in creating a platform that is optimized for scenarios beyond the living room, and delivering new software features on an ongoing basis, starting with “near mode” (see my earlier blog post for more about this). In addition to support for Windows 7 and the Windows 8 developer preview (desktop apps only), Kinect for Windows will also support gesture and voice on Windows Embedded-based devices and will enhance how data is captured and accessed within intelligent systems across manufacturing, retail and many more industries. We are building the Kinect for Windows platform in a way that will allow other companies to integrate Kinect into their offerings and we have invested in an approach that allows them to develop in ways that are dependable and scalable.
We have chosen a hardware-only business model for Kinect for Windows, which means that we will not be charging for the SDK or the runtime; these will be available free to developers and end-users respectively. As an independent developer, IT manager, systems integrator, or ISV, you can innovate with confidence knowing that you will not pay license fees for the Kinect for Windows software or the ongoing software updates, and the Kinect for Windows hardware you and your customers use is supported by Microsoft.
Although we encourage all developers to understand and take advantage of the additional features and updates available with the new Kinect for Windows hardware and accompanying software, those developers using our SDK and the Kinect for Xbox 360 hardware may continue to use these in their development activities if they wish. However, non-commercial deployments using Kinect for Xbox 360 that were allowed using the beta SDK are not permitted with the newly released software. Non-commercial deployments using the new runtime and SDK will require the fully tested and supported Kinect for Windows hardware and software platform, just as commercial deployments do. Existing non-commercial deployments using our beta SDK may continue using the beta and the Kinect for Xbox 360 hardware; to accommodate this, we are extending the beta license for three more years, to June 16, 2016.
We expect that as Kinect for Windows hardware becomes readily available, developers will shift their development efforts to Kinect for Windows hardware in conjunction with the latest SDK and runtime. The combination of Kinect for Windows hardware and software creates a superior development platform for Windows and will yield a higher quality, better performing experience for end users.
We are excited for the new possibilities that Kinect will enable on the Windows platform, and to see how businesses and developers reimagine their processes and their products, and the many different ways each Kinect could enrich lives and make using technology more natural for everyone.
Getting technology to do what you want can be challenging. Imagine building a remote-controlled robot in 6 weeks, from pre-defined parts, which can perform various tasks in a competitive environment. That’s the challenge presented to 2,500 teams of students who will be participating in the FIRST (For Inspiration and Recognition of Science and Technology) Robotics Competition.
The worldwide competition, open to students in grades 9-12, kicks off this morning with a NASA-televised event, including pre-recorded opening remarks from Presidents Clinton and G.W. Bush, Dean Kamen, founder of FIRST and inventor of the Segway, and Alex Kipman, General Manager, Hardware Incubation, Xbox.
Last year, several FIRST teams experimented with the Kinect natural user interface capabilities to control their robots. The difference this year is that the Kinect hardware and software will be included in the parts kits teams receive to build their robots. Teams will be able to control their robots not only with joysticks, but also with gestures and possibly spoken commands.
The first part of the competition is the autonomous period, in which robots can be controlled only by sensor input and commands. This is when the depth and speech capabilities of Kinect will prove extremely useful.
To help teams understand how to incorporate Kinect technologies into the design of their robot controls for the 2012 competition, workshops are being held around the country. Students will be using C# or C++ to program the drive stations of their robots to recognize and respond to gestures and poses.
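As a flavor of the kind of pose recognition a drive station might implement, here is a toy sketch that detects a “hand raised” pose from tracked joint positions. The joint names, coordinate convention (meters, y pointing up), and threshold are all illustrative assumptions, not FIRST or SDK specifics, and teams would write the real version in C# or C++ against the SDK’s skeleton data.

```python
# Toy sketch of a drive-station pose check: detect a "hand raised"
# pose from joint positions given as (x, y, z) in meters, y up.
# Joint names and the 0.2 m margin are illustrative assumptions.

def hand_raised(joints, margin=0.2):
    """True if either hand is at least `margin` meters above the head."""
    head_y = joints["Head"][1]
    return any(joints[hand][1] > head_y + margin
               for hand in ("HandLeft", "HandRight"))

# Example skeleton frame: left hand raised well above the head.
pose = {"Head": (0.0, 1.6, 2.0),
        "HandLeft": (-0.3, 1.9, 2.0),
        "HandRight": (0.3, 1.2, 2.0)}
print(hand_raised(pose))  # True
```

A real drive station would debounce a check like this over several frames before issuing a robot command, so a momentary arm swing isn’t misread as a deliberate gesture.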
In addition, Microsoft Stores across the country are teaming up with FIRST robotics teams to provide Kinect tools, technical support, and assistance.
While winning teams get bragging rights, all participants gain real-world experience by working with professional engineers to build their team’s robot, using sophisticated hardware and software, such as the Kinect for Windows SDK. Team members also gain design, programming, project management, and strategic thinking experience. Last but not least, over $15 million in college scholarships will be awarded throughout the competition.
“The ability to utilize Kinect technologies and capabilities to transform the way people interact with computers already has sparked the imagination of thousands of developers, students, and researchers from around the world,” notes FIRST founder Dean Kamen. “We look forward to seeing how FIRST students utilize Kinect in the design and manipulation of their robots, and are grateful to Microsoft for participating in the competition as well as offering their support and donating thousands of Kinect sensors.”
This morning’s kick-off of the 2012 FIRST Robotics Competition was a highly anticipated day. Approximately 2,500 teams worldwide were given a kit of 600-700 discrete parts including a Kinect sensor and the Kinect for Windows software development kit (SDK), along with the details and rules for this year’s game, Rebound Rumble. Learn how Kinect for Windows will play a role in this year’s game by watching the game animation.