Twelve weeks ago, I announced that the Microsoft Accelerator for Kinect had opened its doors and the 11 participating teams had arrived in Seattle. Yesterday, the program concluded with Demo Day—an all-day event attended by more than 150 investors and journalists—where each of the startups presented their business plans and applications.
From the beginning, we believed this program was going to be amazing: we had hoped to receive 100 to 150 applications, but ended up with nearly 500 from more than 60 countries. There were so many amazing, creative ideas from a whole range of talented, successful people. As I said in a previous post, getting to the finalists was super challenging.
The teams who came here to Seattle—leaving jobs, families, university, and the comforts of their daily lives—did not disappoint. Their energy, drive, and innovative thinking were a constant source of inspiration to me and the folks across Microsoft that worked with them.
There were a lot of great moments at Demo Day; here are just a few of many:
I think all the Kinect Accelerator companies have done an outstanding job the past 12 weeks and have bright futures ahead. These 11 teams are helping accelerate and push the boundary of what’s possible with Kinect for Windows, and inspiring others to think creatively about what the future looks like when Kinect-enabled, touch-free NUI experiences are commonplace.
Thanks to all of the teams that participated in the Accelerator and to the many others who applied. Keep up the great work!
Craig Eisler
General Manager, Kinect for Windows
Back in May, we released the Kinect for Windows SDK/Runtime v1.5 in a modular manner, to make it easier to refresh parts of the Developer Toolkit (tools, components, and samples) without the need to update the SDK (driver, runtime, and basic compilation support).
Today, we have realized that vision with the Developer Toolkit update v1.5.1. This update boosts Kinect Studio performance and stability, improves face tracking, and introduces offline documentation support. If you have already installed the SDK, simply download the new v1.5.1 Developer Toolkit Update. If you are new to Kinect for Windows, you will want to download both Kinect for Windows SDK v1.5 and Developer Toolkit v1.5.1.
Rob Relyea
Program Manager, Kinect for Windows
I am pleased to announce that today we have released version 1.5 of the Kinect for Windows runtime and SDK. Additionally, Kinect for Windows hardware is now available in Hong Kong, South Korea, Singapore, and Taiwan. Starting next month, Kinect for Windows hardware will be available in 15 additional countries: Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, South Africa, Sweden, Switzerland and the United Arab Emirates. When this wave of expansion is complete, Kinect for Windows will be available in 31 countries around the world. Go to our Kinect for Windows website to find a reseller in your region.
We have added more capabilities to help developers build amazing applications, including:
We have continued to expand and improve our skeletal tracking capabilities in this release:
We have made performance and data quality enhancements, which improve the experience of all Kinect for Windows applications using the RGB camera or needing RGB and depth data to be mapped together (“green screen” applications are a common example):
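As an illustration of the depth-based compositing that such "green screen" applications perform, here is a minimal sketch in plain Python with tiny synthetic frames. Nothing here is Kinect API code; a real application would pull aligned depth and color frames from the sensor via the SDK's depth-to-color coordinate mapping.

```python
# A toy "green screen" composite: keep color pixels whose depth falls in a
# near band and replace everything else with a background color. The 3x3
# frames below are synthetic stand-ins for real sensor data.
depth_mm = [
    [800, 900, 3000],
    [850, 2500, 3200],
    [4000, 4000, 4000],
]
color = [[(0, 200, 0)] * 3 for _ in range(3)]  # dummy RGB rows
BACKGROUND = (0, 0, 0)

def composite(depth, rgb, near=400, far=1500):
    """Return rgb where near <= depth < far (mm), BACKGROUND elsewhere."""
    return [
        [px if near <= d < far else BACKGROUND for d, px in zip(drow, crow)]
        for drow, crow in zip(depth, rgb)
    ]

frame = composite(depth_mm, color)  # near pixels kept, far pixels blanked
```

The same thresholding idea underlies background removal in most depth-camera applications; the hard part the SDK handles is registering the depth and color images to each other.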
New capabilities to enable avatar animation scenarios, which makes it easier for developers to build applications that control a 3D avatar, such as Kinect Sports.
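Avatar control of this kind ultimately turns pairs of tracked joints into bone directions or rotations that drive a 3D rig. The sketch below uses made-up joint positions and hand-rolled math purely as an illustration of the idea; it is not the SDK's avateering API.

```python
import math

# Hypothetical camera-space joint positions in meters.
elbow = (0.2, 1.1, 2.0)
wrist = (0.2, 1.5, 2.0)

def bone_direction(parent, child):
    """Unit vector pointing from a parent joint to its child joint."""
    v = tuple(c - p for p, c in zip(parent, child))
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

direction = bone_direction(elbow, wrist)
# Angle between the forearm bone and straight-up; clamp guards against
# floating-point values a hair outside [-1, 1].
angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, direction[1]))))
```

A real avatar pipeline repeats this per bone and converts the directions into joint rotations for the character's skeleton.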
Finally, as I mentioned in my Sneak Peek Blog post, we released four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we released new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.
As we have worked with customers large and small over the past months, we’ve seen the value of a fully integrated approach: the Kinect software and hardware are designed together; audio, video, and depth are all fully supported and integrated; our sensor, drivers, and software work together to provide world-class echo cancellation; and our human tracking, designed in conjunction with the Kinect sensor, works across people of a broad range of shapes, sizes, clothing, and hairstyles. And because we design the hardware and software together, we are able to make changes that open up exciting new areas for innovation, like Near Mode.
Furthermore, because Kinect for Windows is from Microsoft, our support, distribution, and partner network are all at a global scale. For example, the Kinect for Windows hardware and software are tested together and supported as a unit in every country we are in (31 countries by June!), and we will continue to add countries over time. Microsoft’s developer tools are world class, and our SDK is built to fully integrate with Visual Studio. Especially important for our global business customers is Microsoft’s ability to connect them to partners and experts who can help them use Kinect for Windows to re-imagine their brands, their products, and their processes.
It is exciting for us to have built and shipped such a significantly enhanced version of the Kinect for Windows SDK less than 16 weeks after launch. But we are even more excited about our plans for the future – both in country expansion for the sensor, and in enhanced capabilities of our runtime and SDK. We believe the best is yet to come, and we can’t wait to see what developers will build with this!
Last week, winners of GeekWire’s fourth annual Seattle 2.0 Startup Awards were announced. Seattle has a vibrant startup community, and this is a very popular event within that community. I attended the awards ceremony, and it was amazing to see how much energy and excitement was in the room – the EMP (Experience Music Project), where the event was held, was packed.
There were entrepreneurs of all types and levels of experience (I even ran into Ray Ozzie!), venture capitalists, CEOs, and so much IQ and passion in one place it was a rush – the same rush I feel every Monday when I spend time at the Kinect Accelerator in Microsoft’s Westlake office.
We were thrilled when Kinect for Windows was named “Innovation of the Year” – not because we won (which is great!), but because it was the popular vote of the startup community. This is the same community that is continuing to deliver so many amazingly different ideas and products that have Kinect for Windows at the center. In so many ways, Kinect for Windows is “Innovation of the Year” because of the innovators who are using it: Thank you.
Learn more about all of the winners of the Seattle 2.0 Startup Awards. And thanks to all of you who voted for Kinect for Windows!
Earlier this month, GeekWire announced the nominees for the 2012 Seattle 2.0 Startup Awards. We're honored that Kinect for Windows was selected in the “Innovation of the Year” category, which recognizes technologies that are setting a course for “where the world is going and the way of the future.”
Other nominees in this category are Symform, ExtraHop, LaserMotive, and Vioguard.
If you’re developing with the Kinect for Windows SDK and sensor, or simply a fan of the technology, cast your vote and help us become Seattle’s startup innovation of the year.
Voting ends Monday, April 23rd. Winners will be announced at the Seattle 2.0 Startup Awards bash on May 3 at the Experience Music Project (EMP) in Seattle.
Kinect for Windows Team
Most developers, including myself, are natural tinkerers. We hear of a new technology and want to try it out, explore what it can do, dream up interesting uses, and push the limits of what’s possible. Most recently, the Channel 9 team incorporated Kinect for Windows into two projects: BoxingBots and Project Detroit.
The life-sized BoxingBots made their debut in early March at SXSW in Austin, Texas. Each robot is equipped with an on-board computer, which receives commands from two Kinect for Windows sensors and computers. The robots are controlled by two individuals whose movements – punching, rotating, stepping forward and backward – are interpreted and relayed to the robots, which in turn slug it out until one is struck and its pneumatically controlled head springs up.
The use of Kinect for Windows for telepresence applications, like controlling a robot or other mechanical device, opens up a number of interesting possibilities. Imagine a police officer using gestures and word commands to remotely control a robot, exploring a building that may contain explosives. In the same vein, Kinect telepresence applications using robots could be used in the manufacturing, medical, and transportation industries.
Project Detroit asked the question, what do you get when you combine the world’s most innovative technology with a classic American car? The answer is a 2012 Ford Mustang with a 1967 fastback replica body, and everything from Windows Phone integration to built-in Wi-Fi, a Viper SmartStart security system, cloud services, augmented reality, Ford SYNC, an Xbox-enabled entertainment system, a Windows 8 slate, and Kinect for Windows cameras built into the tail and headlights.
One of the key features we built for Project Detroit was the ability to read Kinect data – including a video stream, depth data, skeletal joint data, and audio streams – over the network using sockets (available here as an open source project). These capabilities could make it possible to receive an alert on your phone when someone gets too close to your car. You could then switch to a live video/audio stream from the Kinect over the network to see what they were doing. Using your phone, you could talk to them, asking politely that they “look, but not touch.”
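Streaming per-frame sensor data over a socket generally comes down to framing: the receiver needs to know where one frame ends and the next begins. Below is a generic length-prefixed framing sketch in Python; it is not the wire format of the open source project mentioned above, just an illustration of the technique.

```python
import json
import socket
import struct

# Length-prefixed framing: each message is a 4-byte big-endian length
# followed by a JSON payload (here, a mock skeletal-joint frame).

def send_frame(sock, frame):
    payload = json.dumps(frame).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes, looping because recv may return fewer."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return json.loads(recv_exact(sock, length))

# Demo over an in-process socket pair instead of a real network hop.
sender, receiver = socket.socketpair()
send_frame(sender, {"frame": 1, "head": [0.0, 1.6, 2.1]})
received = recv_frame(receiver)
```

For high-bandwidth streams like video and depth you would swap JSON for a binary encoding, but the framing pattern stays the same.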
While these technologies may not show up in production cars in the coming months (or years), Kinect for Windows technologies are well suited for use in cars: seeing objects such as pedestrians and cyclists behind and in front of the vehicle, making it easier to ease into tight parking spots, and controlling built-in electronic devices with the wave of a hand or a voice command.
It’s an exciting time to not only be a developer, but a business, organization or consumer who will have the opportunity to benefit from the evolving uses and limitless possibilities of the Kinect for Windows natural user interface.
Dan Fernandez
Senior Director, Microsoft Channel 9
I grew up in the UK, and my female cousins all had Barbie. In fact, Barbies – they had lots of Barbie dolls and a ton of accessories that they were obsessed with. I was more of a BMX kind of kid and thought my days of Barbie education were long behind me, but with a young daughter I’m beginning to realize that I have plenty more Barbie ahead of me, littered around the house like landmines. This time around, though, I’m genuinely interested, thanks to a Kinect-enabled application.
This week, Barbie lovers in Sydney, Australia, are being given the chance to do more than fantasize about how they’d look in their favorite Barbie outfit. Thanks to Mattel, Gun Communications, Adapptor, and Kinect for Windows, Barbie The Dream Closet is here.
The application invites users to take a walk down memory lane and select from 50 years of Barbie fashions. Standing in front of Barbie’s life-sized augmented reality “mirror,” fans can choose from several outfits in her digital wardrobe—virtually trying them on for size.
The solution, built with the Kinect for Windows SDK and using the Kinect for Windows sensor, tracks users’ movements and gestures enabling them to easily browse through the closet and select outfits that strike their fancy. Once an outfit is selected, the Kinect for Windows skeletal tracking determines the position and orientation of the user. The application then rescales Barbie’s clothes, rendering them over the user in real time for a custom fit.
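One simple way to fit photographed outfits to a tracked user is to scale the garment art by the ratio of the user's shoulder span to the span the artwork was sized for. The sketch below illustrates that idea with hypothetical joint values and a hypothetical reference span; the production application's actual scaling logic isn't public.

```python
import math

# Shoulder span (meters) the outfit artwork is assumed to be sized for.
REFERENCE_SPAN_M = 0.30

def outfit_scale(left_shoulder, right_shoulder, reference=REFERENCE_SPAN_M):
    """Scale factor derived from tracked shoulder-to-shoulder distance."""
    span = math.dist(left_shoulder, right_shoulder)
    return span / reference

# An adult with a 0.45 m shoulder span gets the outfit drawn 1.5x larger.
scale = outfit_scale((-0.225, 1.4, 2.0), (0.225, 1.4, 2.0))
```

The same ratio can drive menu and navigation-control sizing, which is how an experience like this can adapt to users from small children to adults.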
One of the most interesting aspects of this solution is the technology’s ability to scale – with menus, navigation controls, and clothing all dynamically adapting so that everyone from a little girl to a grown woman (and cough, yes, even a committed father) can enjoy the experience. To make this possible, each outfit was photographed on a Barbie doll, cut into multiple parts, and then rebuilt individually in the application.
Of course, the experience wouldn’t be complete without the ability to memorialize it. A photo is taken and, with approval/consent from those photographed, is uploaded and displayed in a gallery on the Barbie Australian Facebook page. (Grandparents can join in the fun from afar!)
I spoke with Sarah Sproule, Director at Gun Communications, about the genesis of the idea. She told me, “We started working on Barbie The Dream Closet six months ago, working with our development partner Adapptor. Everyone has been impressed by the flexibility and innovation Microsoft has poured into Kinect for Windows. Kinect technology has provided Barbie with a rich and exciting initiative that's proving to delight fans of all ages. We're thrilled with the result, as is Mattel – our client.”
Barbie’s Dream Closet opened to the public at Westfield Parramatta in Sydney today and will be there through April 15. On its first day, it drew enthusiastic crowds, with around 100 people experiencing Barbie The Dream Closet, and it's expected to draw even larger crowds over the holidays. It’s set to visit Melbourne and Brisbane later this year.
Meanwhile, the Kinect for Windows team is just as excited about it as my daughter:
“The first time I saw Barbie’s Dream Closet in action, I knew it would strike a chord,” notes Kinect for Windows Communications Manager, Heather Mitchell. “It’s such a playful, creative use of the technology. I remember fantasizing about wearing Barbie’s clothes when I was a little girl. Disco Ken was a huge hit in my household back then…Who didn’t want to match his dance moves with their own life-sized Barbie disco dress? I think tens of thousands of grown girls have been waiting for this experience for years…Feels like a natural.”
That’s the beauty of Kinect – it enables amazingly natural interactions with technology and hundreds of companies are out there building amazing things; we can’t wait to see what they continue to invent.
Steve Clayton
Editor, Next at Microsoft
I am pleased to announce that the finalists for our Kinect Accelerator have arrived in ever-sunny Seattle and today are launching into a three-month program to build new products and businesses using Kinect. I can’t wait to see what they come up with – using Kinect, these teams have the ability to reimagine the way products are used, and perhaps even revolutionize entire industries along the way.
Kinect Accelerator is powered by TechStars, in close collaboration with the Microsoft BizSpark program; my team and I have been working closely with the BizSpark team and others in the Interactive Entertainment Business to help develop and bring this program to life. The response to the Kinect Accelerator has been phenomenal and we expect to see remarkable innovation coming out of the program.
We were hoping to receive 100 to 150 applications, with a goal of selecting the best ten. But the worldwide entrepreneurial community completely surprised us by submitting almost five hundred applications with concepts spanning nearly 20 different industries, including healthcare, education, retail, entertainment, and more.
There were so many clever and innovative ideas and so many great teams that it was super challenging to narrow things down – we spent many, many hours in a rigorous and highly energetic review process. We finally landed on 11 finalists from five countries, chosen based on their experience, qualifications, and the potential benefit that could result from their participation in the Kinect Accelerator. The finalists are:
Each team will be mentored by entrepreneurs and venture capitalists as well as leaders from Kinect for Windows, Xbox, Microsoft Studios, Microsoft Research and other Microsoft organizations. The teams will spend the first several weeks ideating and refining their business concepts with input and advice from their mentors, followed by several weeks of design and development. They will present their results at an event at the end of June.
We were so amazed by the quality, caliber, and uniqueness of the applications and teams that we decided to reward the top 100 applicants that didn’t make it into the program with a complimentary Kinect for Windows sensor. I believe we are going to see great things from many of the folks that applied to the program and we wish them all the best.
We will share more information about the Kinect Accelerator teams and their applications on this blog in coming months. And for more information on the Kinect Accelerator program in general, go to KinectAccelerator.com.
Craig Eisler
General Manager, Kinect for Windows
Since our announcement of Kinect for Windows version 1.5 in “What’s Ahead: A Sneak Peek” there have been a few questions that have come up that I wanted to answer.
Some folks have thought that 1.5 includes new hardware. Version 1.5 is our new software release, coming out in the same timeframe in which we launch the current Kinect for Windows hardware in 19 additional countries. We will upgrade our software at a faster rate than we refresh our hardware.
We have built version 1.5 of our software with 1.0 compatibility at top of mind. Applications built using 1.0 will work on the same machine with an application built using 1.5 – this is something that we plan to do always, ensuring that solutions built using older runtimes can always run side by side with solutions using new runtimes. Furthermore, we have maintained API compatibility for developers – applications that are currently being built using the 1.0 SDK can be recompiled using the 1.5 SDK without any changes required. No one has to wait for 1.5 to get a Kinect for Windows sensor or to start coding using the current SDK!
I love the enthusiasm for the 1.5 SDK and runtime, the new speech languages, and for the new countries we’re launching in – we can’t wait to deliver it to you.
Craig Eisler
General Manager, Kinect for Windows
The momentum continues for Kinect for Windows. I am pleased to announce that we will be launching Kinect for Windows in nineteen more countries in the coming months. We will have availability in Hong Kong, South Korea, and Taiwan in late May. In June, Kinect for Windows will be available in Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, Singapore, South Africa, Sweden, Switzerland and the United Arab Emirates.
We are also hard at work on our 1.5 release, which will be available at the end of May. Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, play back, and debug clips of users engaging with their applications. Also coming is what we call “seated” or “10-joint” skeletal tracking, which provides the capability to track the head, neck, and arms of either a seated or standing user. What is extra exciting to me about this functionality is that it will work in both default and near mode!
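The seated-mode joint set can be pictured as a filter over the full skeleton. A rough sketch follows; the joint names follow the v1.x SDK's joint types, but the positions are invented and a real application would of course read joints from the sensor rather than a dictionary.

```python
# The ten upper-body joints reported in seated mode.
UPPER_BODY = {
    "Head", "ShoulderCenter", "ShoulderLeft", "ShoulderRight",
    "ElbowLeft", "ElbowRight", "WristLeft", "WristRight",
    "HandLeft", "HandRight",
}

def seated_joints(skeleton):
    """Keep only the joints that seated (10-joint) mode tracks."""
    return {name: pos for name, pos in skeleton.items() if name in UPPER_BODY}

full = {
    "Head": (0.0, 1.6, 2.0),
    "HandLeft": (-0.3, 1.0, 2.0),
    "FootLeft": (-0.1, 0.1, 2.0),  # lower-body joint, dropped in seated mode
}
upper = seated_joints(full)
```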
Also included in our 1.5 release will be four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we will be releasing new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain and Spanish/Mexico.
In a future blog post, I’ll discuss the features and capabilities we are releasing in more detail. We are excited by the enthusiasm for Kinect for Windows, and will continue to work on bringing Kinect for Windows to more countries, supporting more languages with our speech engine, and continuing to evolve our human tracking capabilities.