• Kinect for Windows Product Blog

    Kinect for Windows expands its developer preview program


    Last June, we announced that we would be hosting a limited, exclusive developer preview program for Kinect for Windows v2 prior to its general availability in the summer (northern hemisphere) of 2014. And a few weeks ago, we began shipping Kinect for Windows v2 Developer Preview kits to thousands of participants all over the world.

    It’s been exciting to hear from so many developers as they take their maiden voyage with Microsoft’s new generation NUI technology. We’ve seen early unboxing videos that were recorded all over the world, from London to Tokyo. We’ve heard about some promising early experiments that are taking advantage of the higher resolution data and the ability to see six people.  People have told us about early success with the new sensor’s ability to track the tips of hands and thumbs.  And some developers have even described how easy it’s been to port their v1 apps to the new APIs.

    Kinect for Windows v2 Developer Preview kit (Photo courtesy of Vladimir Kolesnikov [@vladkol], a developer preview program participant)

    But we’ve also heard from many people who were not able to secure a place in the program and are eager to get their hands on the Kinect for Windows v2 sensor and SDK as soon as possible. For everyone who has been hoping and waiting, we’re pleased to announce that we are expanding the program so that more of you can participate!

    We are creating 500 additional developer preview kits for people who have great ideas they want to bring to life with the Kinect for Windows sensor and SDK. Like before, the program is open to professional developers, students, researchers, artists, and other creative individuals.

    The program fee is US$399 (or local equivalent) and offers the following benefits:

    • Direct access to the Kinect for Windows engineering team via a private forum and exclusive webcasts
    • Early SDK access (alpha, beta, and any updates along the way to release)
    • Private access to all API and sample documentation
    • A pre-release version of the new generation sensor
    • A final, released sensor at launch next summer (northern hemisphere)

    Applications must be completed and submitted by January 31, 2014, at 9:00 A.M. (Pacific Time), but don’t wait until then to apply! We will award positions in the program on a rolling basis to qualified applicants. Once all 500 kits have been awarded, the application process will be closed.

    Learn more and apply now

    The Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    BUILDing business with Kinect for Windows v2


    BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.

    Left: A student uses Freak’n Genius, which lets anyone become an animator with Kinect for Windows v2, to choose a character and animate it in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health, can track a patient performing physical therapy exercises at home and give her immediate feedback on her execution while also transmitting the results to her therapist.

    Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

    But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head by taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.

    What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.

    Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.

    While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be to the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of their recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling  to perform the exercises correctly at home—or simply skipping them altogether.

    Reflexion Health, based in San Diego, uses Kinect for Windows to augment their physical therapy program and give the therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Their application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allow the software to capture the patient’s exercise moves in far more precise detail. It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
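
    Reflexion Health hasn’t published its comparison logic, but a rough sketch shows the kind of measurement the richer joint data makes possible. The JointType names below come from the public Kinect for Windows v2 API; the angle math and method names are purely illustrative:

    using System;
    using Microsoft.Kinect;   // Kinect for Windows v2 SDK

    static class ExerciseMath
    {
        // Angle (in degrees) at joint "b" formed by the segments b->a and b->c,
        // e.g. the knee angle computed from the hip, knee, and ankle positions.
        public static double AngleAt(CameraSpacePoint a, CameraSpacePoint b, CameraSpacePoint c)
        {
            double ux = a.X - b.X, uy = a.Y - b.Y, uz = a.Z - b.Z;
            double vx = c.X - b.X, vy = c.Y - b.Y, vz = c.Z - b.Z;
            double dot = (ux * vx) + (uy * vy) + (uz * vz);
            double mag = Math.Sqrt((ux * ux) + (uy * uy) + (uz * uz)) * Math.Sqrt((vx * vx) + (vy * vy) + (vz * vz));
            return mag < 1e-6 ? 0.0 : Math.Acos(dot / mag) * 180.0 / Math.PI;
        }

        // "body" is a tracked Body delivered by a BodyFrameReader.
        public static double LeftKneeAngle(Body body)
        {
            return AngleAt(
                body.Joints[JointType.HipLeft].Position,
                body.Joints[JointType.KneeLeft].Position,
                body.Joints[JointType.AnkleLeft].Position);
        }
    }

    A therapy application could compare such an angle, frame by frame, against the range prescribed for an exercise and flag repetitions that fall short.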

    Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health. 

    Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.”  As a developer, he finds the improved skeletal tracking the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.”  And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited for the new color camera and depth sensor, which together provide a much better image for the physical therapist to review.  “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.

    But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor.  Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”

    From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

    The Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    Nissan Pathfinder Virtual Showroom is Latest Auto Industry Tool Powered by Kinect for Windows


    Automotive companies Audi, Ford, and Nissan are adopting Kinect for Windows as the newest way to put a potential driver into a vehicle. Most car buyers want to get "hands on" with a car before they are ready to buy, so automobile manufacturers have invested in tools such as online car configurators and 360-degree image viewers that make it easier for customers to visualize the vehicle they want.

    Now, Kinect's unique combination of camera, body tracking capability, and audio input can put the car buyer into the driver's seat in more immersive ways than have been previously possible—even before the vehicle is available on the retail lot!

    The most recent example of this automotive trend is the 2013 Nissan Pathfinder application powered by Kinect for Windows, which was originally developed to demonstrate the new Pathfinder at auto shows before there was a physical car available.

    Nissan quickly recognized the value of this application for building buzz at local dealerships, piloting it in 16 dealerships in 13 states nationwide.

    "The Pathfinder application using Kinect for Windows is a game changer in terms of the way we can engage with consumers," said John Brancheau, vice president of marketing at Nissan North America. "We're taking our marketing to the next level, creating experiences that enhance the act of discovery and generate excitement about new models before they're even available. It's a powerful pre-sales tool that has the potential to revolutionize the dealer experience."

    Digital marketing agency Critical Mass teamed with interactive experience developer IdentityMine to design and build the Kinect-enabled Pathfinder application for Nissan. "We're pioneering experiences like this one for two reasons: the ability to respond to natural human gestures and voice input creates a rich experience that has broad consumer appeal," notes Critical Mass President Chris Gokiert. "Additionally, the commercial relevance of an application like this can fulfill a critical role in fueling leads and actually helping to drive sales on site."

    Each dealer has a kiosk that includes a Kinect for Windows sensor, a monitor, and a computer that’s running the Pathfinder application built with the Kinect for Windows SDK. Since the Nissan Pathfinder application first debuted at the Chicago Auto Show in February 2012, developers have made several enhancements, including a new pop-up tutorial and interface improvements, such as larger interaction icons and instructional text along the bottom of the screen, so a customer with no Kinect experience can jump right in. "In the original design for the auto show, the application was controlled by a trained spokesperson. That meant aspects like discoverability and ease-of-use for first-time users were things we didn’t need to design for," noted IdentityMine Research Director Evan Lang.

    Now, shoppers who approach the Kinect-based showroom are guided through an array of natural movements—such as extending their hands, stepping forward and back, and leaning from side to side—to activate hotspots on the Pathfinder model, allowing them to inspect the car inside and out.

    The project was not, however, without a few challenges. The detailed Computer-Aided Design (CAD) model data provided by Nissan, while ideal for commercials and other post-rendered uses, did not lend itself easily to a real-time engine. "A lot of rework was necessary that involved 'retopologizing' the mesh," reported IdentityMine’s 3D Design Lead Howard Schargel. "We used the original as a template and traced over to get a cleaner, more manageable polygon count. We were able to remove much more than half of the original polygons, allowing for more fluid interactions and animations while still retaining the fidelity of the client's original model."

    And then, the development team pushed further. "The application uses a dedicated texture to provide a dynamic, scalable level of detail to the mesh by adding or removing polygons, depending on how close it is to the camera,” explained Schargel. “It may sound like mumbo jumbo—but when you see it, you won't believe it."

    You can see the Nissan Pathfinder app in action at one of the 16 participating dealerships or by watching our video case study.

    Kinect for Windows Team

    Key Links

  • Kinect for Windows Product Blog

    Microsoft’s Kinect Accelerator Begins Today


    I am pleased to announce that the finalists for our Kinect Accelerator have arrived in ever-sunny Seattle and today are launching into a three-month program to build new products and businesses using Kinect. I can’t wait to see what they come up with – using Kinect, these teams have the ability to reimagine the way products are used, and perhaps even revolutionize entire industries along the way.

    Kinect Accelerator is powered by TechStars, in close collaboration with the Microsoft BizSpark program; my team and I have been working closely with the BizSpark team and others in the Interactive Entertainment Business to help develop and bring this program to life. The response to the Kinect Accelerator has been phenomenal and we expect to see remarkable innovation coming out of the program.

    Craig Eisler and other executives from Microsoft and TechStars met in February to review program applications.

    We were hoping to receive 100 to 150 applications, with a goal of selecting the best ten. But the worldwide entrepreneurial community completely surprised us by submitting almost five hundred applications with concepts spanning nearly 20 different industries, including healthcare, education, retail, entertainment, and more.

    There were so many clever and innovative ideas and so many great teams that it was super challenging to narrow things down – we spent many, many hours in a rigorous and highly energetic review process. We finally landed on 11 finalists from five countries, chosen based on their experience, qualifications, and the potential benefit that could result from their Kinect Accelerator projects. The finalists are:

    • Freak'n Genius – Seattle, WA
    • GestSure Technologies – Toronto, Canada
    • IKKOS – Seattle, WA
    • Kimetric – Buenos Aires, Argentina
    • Jintronix Inc.  – Montreal, Canada
    • Manctl – Lyon, France
    • NConnex – Hadley, MA
    • Styku - Los Angeles, CA
    • übi interactive – Munich, Germany
    • VOXON – New York, NY
    • Zebcare – Boston, MA

    The Kinect Accelerator will be held in Microsoft’s state-of-the-art facility in Seattle’s vibrant South Lake Union neighborhood.

    Each team will be mentored by entrepreneurs and venture capitalists, as well as leaders from Kinect for Windows, Xbox, Microsoft Studios, Microsoft Research, and other Microsoft organizations. The teams will spend the first several weeks ideating and refining their business concepts with input and advice from their mentors, followed by several weeks of design and development. They will present their results at an event at the end of June.

    We were so amazed by the quality, caliber, and uniqueness of the applications and teams that we decided to reward the top 100 applicants that didn’t make it into the program with a complimentary Kinect for Windows sensor. I believe we are going to see great things from many of the folks that applied to the program and we wish them all the best.

    We will share more information about the Kinect Accelerator teams and their applications on this blog in coming months. And for more information on the Kinect Accelerator program in general, go to KinectAccelerator.com.

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Inside the Newest Kinect for Windows SDK—Infrared Control


    The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

    IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left, there is an IR emitter, which transmits a factory calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and can help the Kinect for Windows system software sense objects and people along with their skeletal tracking data.

    One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
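
    As a minimal sketch of how an application might use the new property (the sensor discovery and stream setup below are ordinary SDK boilerplate rather than anything specific to this release):

    using System.Linq;
    using Microsoft.Kinect;

    class EmitterControlSample
    {
        static void Main()
        {
            // Grab the first connected sensor (assumes at least one is attached).
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null) return;

            sensor.DepthStream.Enable();
            sensor.Start();

            // Suppress this sensor's IR dot pattern so another sensor observing the
            // same space can read its own pattern without interference.
            sensor.ForceInfraredEmitterOff = true;

            // ... capture depth frames from the other sensor here ...

            // Restore the emitter when this sensor needs depth and skeletal data again.
            sensor.ForceInfraredEmitterOff = false;
            sensor.Stop();
        }
    }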

    Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”

    Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher definition color camera for enhanced green screen capabilities. This will enable them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine."

    Kinect for Windows team

    Key Links

  • Kinect for Windows Product Blog

    An MVP’s look at the Kinect for Windows v2 developer preview


    A few months ago, Microsoft Most Valuable Professional (MVP) James Ashley, a leader in developing with Kinect for Windows, wrote a very perceptive blog about Kinect for Windows v2 entitled Kinect for Windows v2 First Look. James’ blog was so insightful that we wanted to check in with him after his first three months in the Developer Preview program to learn more about his experiences with the preview sensor and his advice to fellow Kinect for Windows developers. Here’s our Q&A with James:

    Microsoft: As a participant in the developer preview program, what cool things have you been doing with the Kinect for Windows v2 sensor and SDK over the past few months? Which features have you used, and what did you do with them?

    James: My advanced technology group at Razorfish has been very interested in developing mixed-media and mixed-technology stories with the Kinect for Windows v2 sensor. We recently did a proof-of-concept digital store with the Windows 8 team for the National Retail Federation (aka “Retail’s BIG Show”) in New York. You've heard of pop-up stores? We took this a step further by pre-loading a shipping container with digital screens, high-lumen projectors, massive arrays of Microsoft Surface tablets, and Perceptive Pixel displays and having a tractor-trailer deposit it in the Javits Center in New York City. When you opened the container, you had an instant retail store. We used the Kinect for Windows v2 sensor and SDK to drive an interactive soccer game built in Unity’s 3D toolset, in which 3D soccer avatars were controlled by the player's full body movements: when you won a game, a signal was sent by using Arduino components to drop a drink from a vending machine.

    Watch the teaser for Razorfish's interactive soccer game

    We also used Kinect for Windows v2 to allow people to take pictures with digital items they designed on the Perceptive Pixel. We then dropped a beach scene they selected into the background of the picture, which was printed out on the spot as well as emailed and pushed to their social networks if they wanted. In creating this experience, the new time-of-flight depth camera in Kinect for Windows v2 proved to be leagues better than anything we were able to do with the original Kinect for Windows sensor; we were thrilled with how well it worked. [Editor’s note: You can learn more about these retail applications in this blog post.]

    Much closer to the hardware, we have also been working with a client on using Kinect for Windows v2 to do precise measurements, to see if the Kinect for Windows v2 sensor can be used in retail to help people get fitted precisely—for instance with clothing and other wearables. Kinect for Windows v2 promises accuracy of 2.5 cm at even 4 meters, so this is totally feasible and could transform how we shop.

    Microsoft: Which features do you find the most useful and/or the most exciting, and why?

    James: Right now, I'm most interested in the depth camera. It has a much higher resolution than some standard time-of-flight cameras currently selling for $8,000 or $9,000. Even though the Kinect for Windows v2 final pricing hasn't been announced yet, we can expect it to be much, much less than that. It's stunning that Microsoft was able to pull off this technical feat, providing both improved quality and improved value in one stroke.

    Microsoft: Have you heard from other developers, and if so, what are they saying about your applications and/or their impressions of Kinect for Windows v2?

    James: I'm on both the MVP list and the developer preview program's internal list, so I've had a chance to hear a lot of really great feedback. Basically, we all had to learn a lot of tricks to make things work the way we wanted with the original Kinect for Windows. With v2, it feels like we are finally getting all the hardware performance we've wanted and then some. Of course, the SDK is still under development and we're obviously still early on with the preview program. People need to be patient.

    Microsoft: Any words of advice or encouragement for other developers about using Kinect for Windows v2?

    James: If you are a C# developer and you haven't made the plunge, now is a good time to start learning Visual C++. All of the powerful interaction and visually intensive things you might want to do are taking advantage of C++ libraries like Cinder, openFrameworks, PCL, and OpenCV. It requires being willing to feel stupid again for about six months, but at the end of that time, you'll be glad you made the effort.

    Our thanks to James for taking time to share his insights and experience with us. And as mentioned at the top of this post, you should definitely read James’ Kinect for Windows v2 First Look blog.

    Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    Kinect for Windows: Developer Toolkit Update (v1.5.1)


    Back in May, we released the Kinect for Windows SDK/Runtime v1.5 in a modular manner, to make it easier to refresh parts of the Developer Toolkit (tools, components, and samples) without the need to update the SDK (driver, runtime, and basic compilation support).

    Today, we have realized that vision with the Developer Toolkit update v1.5.1. This update boosts Kinect Studio performance and stability, improves face tracking, and introduces offline documentation support. If you have already installed the SDK, simply download the new v1.5.1 Developer Toolkit Update. If you are new to Kinect for Windows, you will want to download both Kinect for Windows SDK v1.5 and Developer Toolkit v1.5.1.

    Key Links:

    Rob Relyea
    Program Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Kinect for Windows Helps Girls Everywhere Dress Like Barbie


    I grew up in the UK, and my female cousins all had Barbie. In fact, Barbies – they had lots of Barbie dolls and a ton of accessories that they were obsessed with. I was more of a BMX kind of kid and thought my days of Barbie education were long behind me, but with a young daughter I’m beginning to realize that I have plenty more Barbie ahead of me, littered around the house like landmines. This time around, though, I’m genuinely interested, thanks to a Kinect-enabled application.

    The outfits from Barbie The Dream Closet not only scale to fit users, but also enable them to turn sideways to see how they look from various angles.

    This week, Barbie lovers in Sydney, Australia, are being given the chance to do more than fantasize about how they’d look in their favorite Barbie outfit. Thanks to Mattel, Gun Communications, Adapptor, and Kinect for Windows, Barbie The Dream Closet is here.

    The application invites users to take a walk down memory lane and select from 50 years of Barbie fashions. Standing in front of Barbie’s life-sized augmented reality “mirror,” fans can choose from several outfits in her digital wardrobe—virtually trying them on for size.

    The solution, built with the Kinect for Windows SDK and using the Kinect for Windows sensor, tracks users’ movements and gestures, enabling them to easily browse through the closet and select outfits that strike their fancy. Once an outfit is selected, the Kinect for Windows skeletal tracking determines the position and orientation of the user. The application then rescales Barbie’s clothes, rendering them over the user in real time for a custom fit.
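
    The developers haven’t shared their code, but a simplified sketch of that rescaling idea, using the skeletal data from the Kinect for Windows SDK, could look like the following (the reference width is an invented calibration constant, not a value from the actual application):

    using System;
    using Microsoft.Kinect;

    static class OutfitScaling
    {
        // Shoulder span, in meters, that the outfit artwork was authored for (illustrative value).
        const float ReferenceShoulderWidthMeters = 0.35f;

        // Returns a scale factor for the outfit overlay based on the tracked user's shoulder span.
        public static float ScaleFor(Skeleton skeleton)
        {
            SkeletonPoint left = skeleton.Joints[JointType.ShoulderLeft].Position;
            SkeletonPoint right = skeleton.Joints[JointType.ShoulderRight].Position;

            float dx = right.X - left.X;
            float dy = right.Y - left.Y;
            float dz = right.Z - left.Z;
            float shoulderWidth = (float)Math.Sqrt((dx * dx) + (dy * dy) + (dz * dz));

            return shoulderWidth / ReferenceShoulderWidthMeters;
        }
    }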

    One of the most interesting aspects of this solution is the technology’s ability to scale - with menus, navigation controls and clothing all dynamically adapting so that everyone from a little girl to a grown woman (and cough, yes, even a committed father) can enjoy the experience. To facilitate these advancements, each outfit was photographed on a Barbie doll, cut into multiple parts, and then built individually via the application. 

    Of course, the experience wouldn’t be complete without the ability to memorialize it. A photo is taken and, with approval/consent from those photographed, is uploaded and displayed in a gallery on the Barbie Australian Facebook page. (Grandparents can join in the fun from afar!)

    I spoke with Sarah Sproule, Director at Gun Communications, about the genesis of the idea. She told me, "We started working on Barbie The Dream Closet six months ago, working with our development partner Adapptor. Everyone has been impressed by the flexibility and innovation Microsoft has poured into Kinect for Windows. Kinect technology has provided Barbie with a rich and exciting initiative that's proving to delight fans of all ages. We're thrilled with the result, as is Mattel - our client."

    Barbie enthusiasts of all ages can enjoy trying on and posing in outfits.

    Barbie The Dream Closet opened to the public at Westfield Parramatta in Sydney today and will be there through April 15. On its first day, it drew enthusiastic crowds, with around 100 people experiencing Barbie The Dream Closet, and it’s expected to draw even larger crowds over the holidays. The experience is set to travel to Melbourne and Brisbane later this year.

     Meantime, the Kinect for Windows team is just as excited about it as my daughter:

    “The first time I saw Barbie’s Dream Closet in action, I knew it would strike a chord,” notes Kinect for Windows Communications Manager, Heather Mitchell. “It’s such a playful, creative use of the technology. I remember fantasizing about wearing Barbie’s clothes when I was a little girl. Disco Ken was a huge hit in my household back then…Who didn’t want to match his dance moves with their own life-sized Barbie disco dress? I think tens of thousands of grown girls have been waiting for this experience for years…Feels like a natural.”

    That’s the beauty of Kinect – it enables amazingly natural interactions with technology, and hundreds of companies are out there building amazing things; we can’t wait to see what they continue to invent.

    Steve Clayton
    Editor, Next at Microsoft

  • Kinect for Windows Product Blog

    Build-A-Bear Selects Kinect for Windows for "Store of the Future"


    Build-A-Bear Workshop stores have been delivering custom-tailored experiences to children for 15 years in the form of make-your-own stuffed animals, but the company recently recognized that its target audience was gravitating toward digital devices. So it has begun advancing its in-store experiences to match the preferences of its core customers by incorporating digital screens throughout the stores—from the entrance to the stations where the magic of creating new fluffy friends happens.

    A key part of Build-A-Bear's digital shift is their interactive storefront that's powered by Kinect for Windows. It enables shoppers to play digital games on either a screen adjacent to the store entrance or directly through the storefront window simply by using their bodies and natural gestures to control the game.

    Children pop virtual balloons in a Kinect for Windows-enabled game at this Build-A-Bear store's front window.

    "We're half retail, half theme park," said Build-A-Bear Director of Digital Ventures Brandon Elliott. The Kinect for Windows platform instantly appealed to Build-A-Bear as "a great enabler for personalized interactivity."

    The Kinect for Windows application, launched at six pilot stores, uses skeletal tracking to enable two players (four hands) to pop virtual balloons (up to five balloons simultaneously) by waving their hands or by touching the screen directly. While an increasing number of retail stores use digital signage these days, Elliott noted: "What they're not doing is building a platform for interactive use."
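
    Build-A-Bear hasn’t published the game’s source, but the core interaction can be sketched as projecting each tracked hand into screen coordinates and testing it against a balloon’s bounds. The balloon rectangle and the chosen color format below are illustrative assumptions:

    using System.Windows;      // Rect, Point
    using Microsoft.Kinect;

    static class BalloonHitTest
    {
        // Returns true if either of the player's hands lands inside the balloon's
        // on-screen rectangle. "sensor" is the running KinectSensor driving the display.
        public static bool HandPopsBalloon(KinectSensor sensor, Skeleton player, Rect balloonOnScreen)
        {
            foreach (JointType hand in new[] { JointType.HandLeft, JointType.HandRight })
            {
                SkeletonPoint position = player.Joints[hand].Position;

                // Project the 3D hand position into color-image (screen) coordinates.
                ColorImagePoint screen = sensor.CoordinateMapper.MapSkeletonPointToColorPoint(
                    position, ColorImageFormat.RgbResolution640x480Fps30);

                if (balloonOnScreen.Contains(new Point(screen.X, screen.Y)))
                {
                    return true;
                }
            }

            return false;
        }
    }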

    "We wanted something that we could build on, that's a platform for ever-improving experiences," Elliott said. "With Kinect for Windows, there’s no learning curve. People can interact naturally with technology by simply speaking and gesturing the way they do when communicating with other people. The Kinect for Windows sensor sees and hears them."

    "Right now, we're just using the skeletal tracking, but we could use voice recognition components or transform the kids into on-screen avatars," he added. "The possibilities are endless."

    Part of Build-A-Bear's vision is to create Kinect for Windows apps that tie into the seasonal marketing themes that permeate the stores. Elliott said that Build-A-Bear selected the combination of the Microsoft .NET Framework, Kinect for Windows SDK, and Kinect for Windows sensor specifically so that they can take advantage of existing developer platforms to build these new apps quickly.

    “We appreciate that the Kinect for Windows SDK is developing so rapidly. We appreciate the investment Microsoft is making to continue to open up features within the Kinect for Windows sensor to us,” Elliott said. "The combination of Kinect for Windows hardware and software unlocks a world of new UI possibilities for us."

    Microsoft developer and technical architect Todd Van Nurden and others at the Minneapolis-based Microsoft Technology Center helped Build-A-Bear with an early consultation that led to prototyping apps for the project.

    "The main focus of my consult was to look for areas beyond simple screen-tied interactions to create experiences where Kinect for Windows activates the environment. Screen-based interactions are, of course, the easiest but less magical then environmental," Van Nurden said. "We were going for magical."

    The first six Build-A-Bear interactive stores launched in October and November 2012 in St. Louis, Missouri; Pleasanton, California; Annapolis, Maryland; Troy, Michigan; Fairfax, Virginia, and Indianapolis, Indiana (details). Four of the stores have gesture-enhanced interactive signs at the entrance, while two had to be placed behind windows to comply with mall rules. Kinect for Windows can work through glass with the assistance of a capacitive sensor that enables the window to work as a touch screen, and an inductive driver that turns glass into a speaker.

    So far, Build-A-Bear has been thrilled with what Elliott calls "fantastic" results. "Kids get it," he said. "We have a list of apps we want to build over the next couple of years. We can literally write an app for one computer in the store, and put it anywhere."

    Kinect for Windows team

    Key Links

  • Kinect for Windows Product Blog

    Mysteries of Kinect for Windows Face Tracking output explained


    Since the release of Kinect for Windows version 1.5, developers have been able to use the Face Tracking software development kit (SDK) to create applications that can track human faces in real time. Figure 1, an illustration from the Face Tracking documentation, displays 87 of the points used to track the face. Thirteen points are not illustrated here—more on those points later.

    Figure 1: Tracked points

    You have questions...

    Based on feedback we received via comments and forum posts, it is clear there is some confusion regarding the face tracking points and the data values found when using the SDK sample code. The managed sample, FaceTrackingBasics-WPF, demonstrates how to visualize mesh data by displaying a 3D model representation on top of the color camera image.

    Figure 2: Screenshot from FaceTrackingBasics-WPF

    By exploring this sample source code, you will find a set of helper functions under the Microsoft.Kinect.Toolkit.FaceTracking project, in particular GetProjected3DShape(). Many have found that this function returns a collection whose length is 121 values. Additionally, some have also found an enumeration, called "FeaturePoint", that includes 70 items.

    We have answers...

    As you can see, we have two main sets of numbers that don't seem to add up. This is because these are two sets of values that are provided by the SDK:

    1. 3D Shape Points (mesh representation of the face): 121
    2. Tracked Points: 87 + 13

    The 3D Shape Points (121 of them) are the mesh vertices that make a 3D face model based on the Candide-3 wireframe.

    Figure 3: Wireframe of the Candide-3 model (http://www.icg.isy.liu.se/candide/img/candide3_rot128.gif)

    These vertices are morphed by the FaceTracking APIs to align with the face. The GetProjected3DShape method returns the vertices as an array of  Vector3DF[]. These values can be enumerated by name using the "FeaturePoint" list. For example, TopSkull, LeftCornerMouth, or OuterTopRightPupil. Figure 4 shows these values superimposed on top of the color frame. 

    Figure 4: Feature Point index mapped on mesh model
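
    As a quick illustration, a couple of lines against the sample’s FaceTrackFrame show this named lookup; treat the indexing detail as an assumption to verify against the toolkit source:

    // Sketch only: "frame" is assumed to be a successfully tracked FaceTrackFrame,
    // and the FeaturePoint value is assumed to be usable as an index into the
    // returned collection, as Figure 4 suggests.
    var shape = frame.GetProjected3DShape();              // the 121 mesh vertices
    var topSkull = shape[(int)FeaturePoint.TopSkull];     // look one vertex up by name
    Console.WriteLine("TopSkull maps to ({0}, {1}) in the color frame", topSkull.X, topSkull.Y);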

    To get the 100 tracked points mentioned above, we need to dive more deeply into the APIs. The managed APIs provide an FtInterop.cs file that defines an interface, IFTResult, which contains a Get2DShapePoints function. FtInterop is a wrapper for the native library that exposes its functionality to managed languages. Users of the unmanaged C++ API may have already seen this and figured it out. Get2DShapePoints is the function that will provide the 100 tracked points.

    If we have a look at the function, it doesn’t seem to be useful to a managed code developer:

    // STDMETHOD(Get2DShapePoints)(THIS_ FT_VECTOR2D** ppPoints, UINT* pPointCount) PURE;
    void Get2DShapePoints(out IntPtr pointsPtr, out uint pointCount);

    To get a better idea of how you can get a collection of points from IntPtr, we need to dive into the unmanaged function:

    /// <summary>
    /// Returns 2D (X,Y) coordinates of the key points on the aligned face in video frame coordinates.
    /// </summary>
    /// <param name="ppPoints">Array of 2D points (as FT_VECTOR2D).</param>
    /// <param name="pPointCount">Number of elements in ppPoints.</param>
    /// <returns>If the method succeeds, the return value is S_OK. If the method fails, the return value can be E_POINTER.</returns>
    STDMETHOD(Get2DShapePoints)(THIS_ FT_VECTOR2D** ppPoints, UINT* pPointCount) PURE; 

    The function will give us a pointer to the FT_VECTOR2D array. To consume the data from the pointer, we have to create a new function for use with managed code.

    The managed code

    First, you need to create an array to contain the data that is copied to managed memory. Since FT_VECTOR2D is an unmanaged structure, to marshal the data to the managed wrapper, we must have an equivalent data type to match. The managed version of this structure is PointF (structure that uses floats for x and y).

    Now that we have a data type, we need to convert IntPtr to PointF[]. Searching the code, we see that the FaceTrackFrame class wraps the IFTResult object. This also contains the GetProjected3DShape() function we used before, so this is a good candidate to add a new function, GetShapePoints. It will look something like this:

    // populates an array with the 2D shape points for the tracked face
    public void GetShapePoints(ref Vector2DF[] vector2DF)
    {
        // get the 2D tracked shapes
        IntPtr ptBuffer = IntPtr.Zero;
        uint ptCount = 0;
        this.faceTrackingResultPtr.Get2DShapePoints(out ptBuffer, out ptCount);
        if (ptCount == 0)
        {
            vector2DF = null;
            return;
        }

        // create a managed array to hold the values
        if (vector2DF == null || vector2DF.Length != ptCount)
        {
            vector2DF = new Vector2DF[ptCount];
        }

        // copy each unmanaged FT_VECTOR2D out of the native buffer
        long sizeInBytes = Marshal.SizeOf(typeof(Vector2DF));
        for (int i = 0; i < ptCount; i++)
        {
            IntPtr current = new IntPtr(ptBuffer.ToInt64() + (i * sizeInBytes));
            vector2DF[i] = (Vector2DF)Marshal.PtrToStructure(current, typeof(Vector2DF));
        }
    }

    To ensure we are using the data correctly, we refer to the documentation on Get2DShapePoints:

    IFTResult::Get2DShapePoints Method gets the (x,y) coordinates of the key points on the aligned face in video frame coordinates.

    The returned values represent positions mapped onto the color image. Since we know the data matches the color frame, there is no need to apply any additional coordinate mapping. You can call the function to get the data, which should align to the color image coordinates.
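
    As an illustration, a hypothetical snippet that consumes the new wrapper might look like this (the frame variable and the null check are assumptions; in the sample this logic lives inside the frame-ready handler):

    // Hypothetical usage of the GetShapePoints wrapper added above;
    // "frame" is again an actively tracked FaceTrackFrame.
    Vector2DF[] trackedPoints = null;
    frame.GetShapePoints(ref trackedPoints);

    if (trackedPoints != null)
    {
        // Each entry is already in color-image coordinates, so it can be drawn
        // directly on top of the color frame with no additional mapping.
        foreach (Vector2DF point in trackedPoints)
        {
            Console.WriteLine("({0}, {1})", point.X, point.Y);
        }
    }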

    The sample code

    The modified version of FaceTrackingBasics-WPF is available in the sample code that can be downloaded from CodePlex. It has been modified to allow you to display the feature points (by name or by index value) and toggle the mesh drawing. Because of the way WPF renders, performance can suffer on machines with lower-end graphics cards, so I recommend that you enable these features only one at a time. If your UI becomes unresponsive, you can block the sensor with your hand to prevent face tracking data from being captured. Since the application will not detect any tracked face data, it will not render any points, giving you the opportunity to reset the features you enabled by using the UI controls.

    Figure 5: ShapePoints mapped around the face

    As you can see in Figure 5, the additional 13 points are the center of the eyes, the tip of the nose, and the areas above the eyebrows on the forehead. Once you enable a feature and tracking begins, you can zoom into the center and see the values more clearly.

    A summary of the changes:

    • UI changes to enable slider and draw selections

    FaceTrackingViewer class changes:

    • Added a Grid control – used for the UI elements
    • Modified the constructor to initialize the grid
    • Modified the OnAllFramesReady event
      • For any tracked skeleton, create a canvas and add it to the grid; use that canvas as the parent for the label controls

    public partial class FaceTrackingViewer : UserControl, IDisposable
    {
        private Grid grid;

        public FaceTrackingViewer()
        {
            // ...

            // add the grid to the layout
            this.grid = new Grid();
            this.grid.Background = Brushes.Transparent;
            this.Content = this.grid;
        }

        private void OnAllFramesReady(object sender, AllFramesReadyEventArgs allFramesReadyEventArgs)
        {
            // ...

            // We want to keep a record of any skeleton, tracked or untracked.
            if (!this.trackedSkeletons.ContainsKey(skeleton.TrackingId))
            {
                // create a new canvas for each tracker
                Canvas canvas = new Canvas();
                canvas.Background = Brushes.Transparent;
                this.grid.Children.Add(canvas);
                this.trackedSkeletons.Add(skeleton.TrackingId, new SkeletonFaceTracker(canvas));
            }
        }
    }

    SkeletonFaceTracker class changes:

    • New properties and fields: DrawFaceMesh, DrawShapePoints, DrawFeaturePoints, featurePoints, lastDrawFeaturePoints, shapePoints, labelControls, Canvas
    • New functions: FindTextControl, UpdateTextControls, RemoveAllFromCanvas, SetShapePointsLocations, SetFeaturePointsLocations
    • Added a constructor to keep track of the parent control
    • Changed the DrawFaceModel function to draw based on what data was selected
    • Updated the OnFrameReady event to recalculate the positions of the drawn elements
      • If DrawShapePoints is selected, then we call our new function

    private class SkeletonFaceTracker : IDisposable
    {
        // properties to toggle rendering of the 3D mesh, shape points, and feature points
        public bool DrawFaceMesh { get; set; }
        public bool DrawShapePoints { get; set; }
        public DrawFeaturePoint DrawFeaturePoints { get; set; }

        // defined array for the feature points
        private Array featurePoints;
        private DrawFeaturePoint lastDrawFeaturePoints;

        // array of 2D points used for shape point rendering
        private Vector2DF[] shapePoints;

        // map to hold the label controls for the overlay
        private Dictionary<string, Label> labelControls;

        // canvas control for new text rendering
        private Canvas Canvas;

        // the canvas is passed in for every instance
        public SkeletonFaceTracker(Canvas canvas)
        {
            this.Canvas = canvas;
        }

        public void DrawFaceModel(DrawingContext drawingContext)
        {
            // only draw if selected
            if (this.DrawFaceMesh && this.facePoints != null)
            {
                // ...
            }
        }

        internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage, Skeleton skeletonOfInterest)
        {
            // ...

            if (this.lastFaceTrackSucceeded)
            {
                if (this.DrawFaceMesh || this.DrawFeaturePoints != DrawFeaturePoint.None)
                {
                    this.facePoints = frame.GetProjected3DShape();
                }

                // get the shape points array
                if (this.DrawShapePoints)
                {
                    frame.GetShapePoints(ref this.shapePoints);
                }
            }

            // draw/remove the components
            // ...
        }
    }

    Pulling it all together...

    As we have seen, there are two main sets of data points available from the Face Tracking SDK, plus an enumeration that names the mesh vertices:

    • Shape Points: the 2D points used to track the face
    • Mesh Data: vertices of the 3D model, returned by the GetProjected3DShape() function
    • FeaturePoints: named vertices on the 3D model that play a significant role in face tracking

    To get the shape point data, we have to extend the current managed wrapper with a new function that will handle the interop with the native API.

    Carmine Sirignano
    Developer Support Escalation Engineer
    Kinect for Windows

    Additional resources

