Automotive companies Audi, Ford, and Nissan are adopting Kinect for Windows as the newest way to put a potential driver into a vehicle. Most car buyers want to get "hands on" with a car before they are ready to buy, so automobile manufacturers have invested in tools such as online car configurators and 360-degree image viewers that make it easier for customers to visualize the vehicle they want.
Now, Kinect's unique combination of camera, body tracking capability, and audio input can put the car buyer into the driver's seat in more immersive ways than have been previously possible—even before the vehicle is available on the retail lot!
The most recent example of this automotive trend is the 2013 Nissan Pathfinder application powered by Kinect for Windows, which was originally developed to demonstrate the new Pathfinder at auto shows before there was a physical car available.
Nissan quickly recognized the value of this application for building buzz at local dealerships, piloting it in 16 dealerships in 13 states nationwide.
"The Pathfinder application using Kinect for Windows is a game changer in terms of the way we can engage with consumers," said John Brancheau, vice president of marketing at Nissan North America. "We're taking our marketing to the next level, creating experiences that enhance the act of discovery and generate excitement about new models before they're even available. It's a powerful pre-sales tool that has the potential to revolutionize the dealer experience."
Digital marketing agency Critical Mass teamed with interactive experience developer IdentityMine to design and build the Kinect-enabled Pathfinder application for Nissan. "We're pioneering experiences like this one for two reasons: the ability to respond to natural human gestures and voice input creates a rich experience that has broad consumer appeal," notes Critical Mass President Chris Gokiert. "Additionally, the commercial relevance of an application like this can fulfill a critical role in fueling leads and actually helping to drive sales on site."
Each dealer has a kiosk that includes a Kinect for Windows sensor, a monitor, and a computer that’s running the Pathfinder application built with the Kinect for Windows SDK. Since the Nissan Pathfinder application debuted at the Chicago Auto Show in February 2012, developers have made several enhancements, including a new pop-up tutorial and interface improvements, such as larger interaction icons and instructional text along the bottom of the screen, so that a customer with no Kinect experience can jump right in. "In the original design for the auto show, the application was controlled by a trained spokesperson. That meant aspects like discoverability and ease-of-use for first-time users were things we didn’t need to design for," noted IdentityMine Research Director Evan Lang.
Now, shoppers who approach the Kinect-based showroom are guided through an array of natural movements—such as extending their hands, stepping forward and back, and leaning from side to side—to activate hotspots on the Pathfinder model, allowing them to inspect the car inside and out.
The project was not, however, without a few challenges. The detailed Computer-Aided Design (CAD) model data provided by Nissan, while ideal for commercials and other post-rendered uses, did not lend itself easily to a real-time engine. "A lot of rework was necessary that involved 'retopologizing' the mesh," reported IdentityMine’s 3D Design Lead Howard Schargel. "We used the original as a template and traced over to get a cleaner, more manageable polygon count. We were able to remove much more than half of the original polygons, allowing for more fluid interactions and animations while still retaining the fidelity of the client's original model."
And then, the development team pushed further. "The application uses a dedicated texture to provide a dynamic, scalable level of detail to the mesh by adding or removing polygons, depending on how close it is to the camera,” explained Schargel. “It may sound like mumbo jumbo—but when you see it, you won't believe it."
You can see the Nissan Pathfinder app in action at one of the 16 participating dealerships or by watching our video case study.
Kinect for Windows Team
The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is enhanced control over the sensor's infrared (IR) sensing capabilities, which opens up a world of new possibilities for developers.
IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left, there is an IR emitter, which transmits a factory calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and can help the Kinect for Windows system software sense objects and people along with their skeletal tracking data.
One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
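To illustrate, here is a minimal sketch in C# of toggling the emitter with that API, assuming Kinect for Windows SDK 1.6 or later and a single connected sensor; error handling and sensor-status plumbing are omitted for brevity.

```csharp
// A minimal sketch of controlling the IR emitter with the new API, assuming
// Kinect for Windows SDK 1.6 or later and a single connected sensor.
using System.Linq;
using Microsoft.Kinect;

class EmitterControlSample
{
    static void Main()
    {
        // Grab the first connected sensor.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.DepthStream.Enable();
        sensor.SkeletonStream.Enable();
        sensor.Start();

        // Turn the emitter off, for example while a second sensor observes the same space.
        sensor.ForceInfraredEmitterOff = true;

        // ...and back on when this sensor should resume projecting its dot pattern.
        sensor.ForceInfraredEmitterOff = false;

        sensor.Stop();
    }
}
```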
Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”
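As a rough sketch of the "clean IR" scenario, the snippet below assumes SDK 1.6 or later, where the raw infrared stream is exposed through the color stream as a 640x480, 16-bit-per-pixel format; the external IR lamp, rather than the onboard emitter, provides the illumination.

```csharp
// A sketch of the "clean IR" night-vision idea: force the onboard emitter off and read
// the raw infrared stream, relying on an external IR lamp for illumination.
using Microsoft.Kinect;

class CleanIrSample
{
    static void Run(KinectSensor sensor)
    {
        sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
        sensor.Start();
        sensor.ForceInfraredEmitterOff = true;   // rely on the external lamp instead

        sensor.ColorFrameReady += (s, e) =>
        {
            using (ColorImageFrame frame = e.OpenColorImageFrame())
            {
                if (frame == null) return;
                byte[] pixels = new byte[frame.PixelDataLength];
                frame.CopyPixelDataTo(pixels);   // two bytes per pixel of raw IR intensity
                // ...display or analyze the IR image here.
            }
        };
    }
}
```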
Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher definition color camera for enhanced green screen capabilities. This will enable them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine."
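The projection step of such a green-screen pipeline might look like the following sketch. Everything here is illustrative: the rotation, translation, and intrinsics are placeholder values that would come from your own OpenCV calibration of the external HD camera against the Kinect depth camera, and the ExternalCameraMask and TryProject names are invented for this example.

```csharp
// An illustrative sketch of projecting Kinect camera-space points into a calibrated
// external HD camera's image to build a foreground mask. R, T, and the intrinsics are
// placeholders from your own calibration; none of these values come from the Kinect SDK.
using Microsoft.Kinect;

class ExternalCameraMask
{
    // Placeholder extrinsics: rotation (row-major 3x3) and translation (meters)
    // from Kinect depth-camera space to the external camera's space.
    static readonly double[,] R = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
    static readonly double[] T = { 0.05, 0.0, 0.0 };

    // Placeholder intrinsics of the external 1920x1080 camera.
    const double Fx = 1400, Fy = 1400, Cx = 960, Cy = 540;

    // Projects one Kinect camera-space point into the external camera's image;
    // returns false if the point falls behind the camera or outside the frame.
    static bool TryProject(SkeletonPoint p, out int u, out int v)
    {
        double x = R[0, 0] * p.X + R[0, 1] * p.Y + R[0, 2] * p.Z + T[0];
        double y = R[1, 0] * p.X + R[1, 1] * p.Y + R[1, 2] * p.Z + T[1];
        double z = R[2, 0] * p.X + R[2, 1] * p.Y + R[2, 2] * p.Z + T[2];

        u = 0;
        v = 0;
        if (z <= 0) return false;      // behind the external camera

        u = (int)(Fx * x / z + Cx);
        v = (int)(Fy * y / z + Cy);
        return u >= 0 && u < 1920 && v >= 0 && v < 1080;
    }
}
```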
Kinect for Windows team
In October, we shipped the public release of the Kinect for Windows v2 sensor and its software development kit (SDK 2.0). The availability of the v2 sensor and SDK 2.0 means that we will be phasing out the sale of the original Kinect for Windows sensor in 2015.
The move to v2 marks the next stage in our journey toward more natural human computing. The new sensor provides a host of new and improved features, including enhanced body tracking, greater depth fidelity, full 1080p high-definition video, new active infrared capabilities, and an expanded field of view. Likewise, SDK 2.0 offers scores of updates and enhancements, not the least of which is the ability to create and publish Kinect-enabled apps in the Windows Store. At the same time that we publicly released the v2 sensor and its SDK, we also announced the availability of the Kinect Adapter for Windows, which lets developers create Kinect for Windows applications by using a Kinect for Xbox One sensor. The response of the developer community to Kinect v2 has been tremendous: every day, we see amazing apps built on the capabilities of the new sensor and SDK, and since we released the public beta of SDK 2.0 in July, the community has been telling us that porting their original solutions over to v2 is smoother and faster than expected.
The original Kinect for Windows sensor was a milestone achievement in the world of natural human computing. It allowed developers to create solutions that broke through the old barriers of mouse and keyboard interactions, opening up entirely new commercial experiences in multiple industries, including retail, education, healthcare, and manufacturing. The original Kinect let preschoolers play educational games by simply moving their arms; it coached patients through physical rehabilitation; it gave shoppers new ways to engage with merchandise and even try on clothes. The list of innovative solutions powered by the original Kinect for Windows goes on and on.
We hope everyone will embrace the latest Kinect technology as soon as possible, but we understand that some business customers have commitments to the original sensor and SDK. If you’re one of them and need a significant number of original Kinect for Windows sensors, please contact us as soon as possible. We will do our best to fill your orders, but no more original sensors will be manufactured after the current stock sells out.
All of us on the Kinect for Windows team are grateful to all of you in the community who jumped on this technology and showed us what it could do. We know that your proven track record doing great things with the original technology will only get better with v2—the improvements in quality from the original Kinect for Windows sensor to the v2 device are truly immense. And so, we’re cheered by the prospect of seeing all the amazing solutions you’ll create with the new and improved Kinect for Windows.
The Kinect for Windows Team
BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.
Left: A student is choosing a Freak’n Genius character to animate in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health, can track a patient performing physical therapy exercises at home and give her immediate feedback on her execution while also transmitting the results to her therapist.
Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.
But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head, taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.
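For a sense of the raw material such an application works with, here is a minimal C# sketch of reading joint positions with the Kinect for Windows v2 SDK; the mapping of joints onto a cartoon character is only hinted at in a comment, and this is not Freak’n Genius’s actual code.

```csharp
// A minimal sketch of reading the v2 sensor's joint data with SDK 2.0.
using Microsoft.Kinect;

class BodyReaderSample
{
    static void Run()
    {
        KinectSensor sensor = KinectSensor.GetDefault();
        BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
        Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
        sensor.Open();

        reader.FrameArrived += (s, e) =>
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(bodies);
                foreach (Body body in bodies)
                {
                    if (!body.IsTracked) continue;
                    CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
                    // Drive the character's hand from (hand.X, hand.Y, hand.Z), and so on
                    // for every other tracked joint.
                }
            }
        };
    }
}
```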
What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.
Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.
While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be to the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of their recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling to perform the exercises correctly at home—or simply skipping them altogether.
Reflexion Health, based in San Diego, uses Kinect for Windows to augment its physical therapy program and give therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Its application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allow the software to capture the patient’s exercise moves in far more precise detail. The app provides patients with a model for how to do each exercise correctly and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
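As an illustration of the kind of real-time comparison such an application can make (and not Reflexion Health’s actual implementation), the sketch below computes a knee angle from three v2 skeleton joints and checks it against a prescribed target range; the joint choice and tolerance are examples only.

```csharp
// An illustrative sketch of comparing a tracked movement against a prescription:
// compute the right-knee angle from v2 joint positions and check a target range.
using System;
using Microsoft.Kinect;

static class ExerciseCheck
{
    // Angle at joint b (in degrees) formed by the segments b->a and b->c.
    static double AngleAt(CameraSpacePoint a, CameraSpacePoint b, CameraSpacePoint c)
    {
        double ux = a.X - b.X, uy = a.Y - b.Y, uz = a.Z - b.Z;
        double vx = c.X - b.X, vy = c.Y - b.Y, vz = c.Z - b.Z;
        double dot = ux * vx + uy * vy + uz * vz;
        double lu = Math.Sqrt(ux * ux + uy * uy + uz * uz);
        double lv = Math.Sqrt(vx * vx + vy * vy + vz * vz);
        return Math.Acos(dot / (lu * lv)) * 180.0 / Math.PI;
    }

    public static bool KneeBendWithinTarget(Body body, double targetDegrees, double tolerance)
    {
        double knee = AngleAt(
            body.Joints[JointType.HipRight].Position,
            body.Joints[JointType.KneeRight].Position,
            body.Joints[JointType.AnkleRight].Position);
        return Math.Abs(knee - targetDegrees) <= tolerance;
    }
}
```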
Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health.
Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.” As a developer, he finds the improved skeletal tracking the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.” And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited for the new color camera and depth sensor, which together provide a much better image for the physical therapist to review. “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.
But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor. Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”
From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.
A few months ago, Microsoft Most Valuable Professional (MVP) James Ashley, a leader in developing with Kinect for Windows, wrote a very perceptive blog about Kinect for Windows v2 entitled Kinect for Windows v2 First Look. James’ blog was so insightful that we wanted to check in with him after his first three months in the Developer Preview program to learn more about his experiences with the preview sensor and his advice to fellow Kinect for Windows developers. Here’s our Q&A with James:
Microsoft: As a participant in the developer preview program, what cool things have you been doing with the Kinect for Windows v2 sensor and SDK over the past few months? Which features have you used, and what did you do with them?
James: My advanced technology group at Razorfish has been very interested in developing mixed-media and mixed-technology stories with the Kinect for Windows v2 sensor. We recently did a proof-of-concept digital store with the Windows 8 team for the National Retail Federation (aka “Retail’s BIG Show”) in New York. You've heard of pop-up stores? We took this a step further by pre-loading a shipping container with digital screens, high-lumen projectors, massive arrays of Microsoft Surface tablets, and Perceptive Pixel displays and having a tractor-trailer deposit it in the Javits Center in New York City. When you opened the container, you had an instant retail store. We used the Kinect for Windows v2 sensor and SDK to drive an interactive soccer game built in Unity’s 3D toolset, in which 3D soccer avatars were controlled by the player's full body movements: when you won a game, a signal was sent by using Arduino components to drop a drink from a vending machine.
Watch the teaser for Razorfish's interactive soccer game
We also used Kinect for Windows v2 to allow people to take pictures with digital items they designed on the Perceptive Pixel. We then dropped a beach scene they selected into the background of the picture, which was printed out on the spot as well as emailed and pushed to their social networks if they wanted. In creating this experience, the new time-of-flight depth camera in Kinect for Windows v2 proved to be leagues better than anything we were able to do with the original Kinect for Windows sensor; we were thrilled with how well it worked. [Editor’s note: You can learn more about these retail applications in this blog post.]

Much closer to the hardware, we have also been working with a client on using Kinect for Windows v2 to do precise measurements, to see if the Kinect for Windows v2 sensor can be used in retail to help people get fitted precisely—for instance with clothing and other wearables. Kinect for Windows v2 promises accuracy of 2.5 cm at even 4 meters, so this is totally feasible and could transform how we shop.

Microsoft: Which features do you find the most useful and/or the most exciting, and why?

James: Right now, I'm most interested in the depth camera. It has a much higher resolution than some standard time-of-flight cameras currently selling for $8,000 or $9,000. Even though the Kinect for Windows v2 final pricing hasn't been announced yet, we can expect it to be much, much less than that. It's stunning that Microsoft was able to pull off this technical feat, providing both improved quality and improved value in one stroke.

Microsoft: Have you heard from other developers, and if so, what are they saying about your applications and/or their impressions of Kinect for Windows v2?

James: I'm on both the MVP list and the developer preview program's internal list, so I've had a chance to hear a lot of really great feedback. Basically, we all had to learn a lot of tricks to make things work the way we wanted with the original Kinect for Windows. With v2, it feels like we are finally getting all the hardware performance we've wanted and then some. Of course, the SDK is still under development and we're obviously still early on with the preview program. People need to be patient.

Microsoft: Any words of advice or encouragement for other developers about using Kinect for Windows v2?

James: If you are a C# developer and you haven't made the plunge, now is a good time to start learning Visual C++. All of the powerful interaction and visually intensive things you might want to do are taking advantage of C++ libraries like Cinder, openFrameworks, PCL, and OpenCV. It requires being willing to feel stupid again for about six months, but at the end of that time, you'll be glad you made the effort.
Our thanks to James for taking time to share his insights and experience with us. And as mentioned at the top of this post, you should definitely read James’ Kinect for Windows v2 First Look blog.
Back in May, we released the Kinect for Windows SDK/Runtime v1.5 in a modular manner, to make it easier to refresh parts of the Developer Toolkit (tools, components, and samples) without the need to update the SDK (driver, runtime, and basic compilation support).
Today, we have realized that vision with the Developer Toolkit update v1.5.1. This update boosts Kinect Studio performance and stability, improves face tracking, and introduces offline documentation support. If you have already installed the SDK, simply download the new v1.5.1 Developer Toolkit Update. If you are new to Kinect for Windows, you will want to download both Kinect for Windows SDK v1.5 and Developer Toolkit v1.5.1.
Rob Relyea
Program Manager, Kinect for Windows
I grew up in the UK, and my female cousins all had Barbie dolls; in fact, they had lots of Barbie dolls and a ton of accessories that they were obsessed with. I was more of a BMX kind of kid and thought my days of Barbie education were long behind me, but with a young daughter I’m beginning to realize that I have plenty more Barbie ahead of me, littered around the house like landmines. This time around, though, I’m genuinely interested, thanks to a Kinect-enabled application.
This week, Barbie lovers in Sydney, Australia, are being given the chance to do more than fantasize about how they’d look in their favorite Barbie outfit. Thanks to Mattel, Gun Communications, Adapptor, and Kinect for Windows, Barbie The Dream Closet is here.
The application invites users to take a walk down memory lane and select from 50 years of Barbie fashions. Standing in front of Barbie’s life-sized augmented reality “mirror,” fans can choose from several outfits in her digital wardrobe—virtually trying them on for size.
The solution, built with the Kinect for Windows SDK and using the Kinect for Windows sensor, tracks users’ movements and gestures enabling them to easily browse through the closet and select outfits that strike their fancy. Once an outfit is selected, the Kinect for Windows skeletal tracking determines the position and orientation of the user. The application then rescales Barbie’s clothes, rendering them over the user in real time for a custom fit.
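As a purely hypothetical sketch of that rescaling step (not the Dream Closet application's code), the snippet below uses the tracked skeleton's shoulder joints, mapped into the depth image by Kinect for Windows SDK 1.6 or later, to estimate how wide a garment overlay should be drawn; the OutfitScaling class and the reference width parameter are invented for illustration.

```csharp
// A hypothetical sketch: estimate a garment overlay's horizontal scale from the
// tracked skeleton's shoulder joints (original Kinect for Windows SDK, 1.6+).
using System;
using Microsoft.Kinect;

static class OutfitScaling
{
    // Returns a horizontal scale factor for a garment image that was authored at a
    // reference shoulder width, measured in depth-image pixels.
    public static double ShoulderScale(Skeleton skeleton, KinectSensor sensor,
                                       double referenceShoulderWidthPixels)
    {
        DepthImagePoint left = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
            skeleton.Joints[JointType.ShoulderLeft].Position,
            DepthImageFormat.Resolution640x480Fps30);
        DepthImagePoint right = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
            skeleton.Joints[JointType.ShoulderRight].Position,
            DepthImageFormat.Resolution640x480Fps30);

        double shoulderWidthPixels = Math.Abs(right.X - left.X);
        return shoulderWidthPixels / referenceShoulderWidthPixels;
    }
}
```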
One of the most interesting aspects of this solution is the technology’s ability to scale: menus, navigation controls, and clothing all adapt dynamically so that everyone from a little girl to a grown woman (and, cough, yes, even a committed father) can enjoy the experience. To make this possible, each outfit was photographed on a Barbie doll, cut into multiple parts, and then rebuilt individually in the application.
Of course, the experience wouldn’t be complete without the ability to memorialize it. A photo is taken and, with approval/consent from those photographed, is uploaded and displayed in a gallery on the Barbie Australian Facebook page. (Grandparents can join in the fun from afar!)
I spoke with Sarah Sproule, Director at Gun Communications, about the genesis of the idea. She told me, “We started working on Barbie The Dream Closet six months ago, working with our development partner Adapptor. Everyone has been impressed by the flexibility and innovation Microsoft has poured into Kinect for Windows. Kinect technology has provided Barbie with a rich and exciting initiative that's proving to delight fans of all ages. We're thrilled with the result, as is Mattel, our client.”
Barbie’s Dream Closet opened to the public at Westfield Parramatta in Sydney today and will be there through April 15. On its first day, it drew enthusiastic crowds, with around 100 people experiencing Barbie The Dream Closet, and it's expected to draw even larger crowds over the holidays. It’s set to visit Melbourne and Brisbane later this year.
Meantime, the Kinect for Windows team is just as excited about it as my daughter:
“The first time I saw Barbie’s Dream Closet in action, I knew it would strike a chord,” notes Kinect for Windows Communications Manager Heather Mitchell. “It’s such a playful, creative use of the technology. I remember fantasizing about wearing Barbie’s clothes when I was a little girl. Disco Ken was a huge hit in my household back then…Who didn’t want to match his dance moves with their own life-sized Barbie disco dress? I think tens of thousands of grown girls have been waiting for this experience for years…Feels like a natural.”
That’s the beauty of Kinect – it enables amazingly natural interactions with technology and hundreds of companies are out there building amazing things; we can’t wait to see what they continue to invent.
Steve Clayton
Editor, Next at Microsoft
As highlighted during the Build 2015 Conference, Microsoft is more committed than ever to delivering innovative software, services, and devices that are changing the way people use technology and opening up new scenarios for developers. Perhaps no software reflects that commitment better than the RoomAlive Toolkit, whose release was announced Thursday, April 30, in a Build 2015 talk. The toolkit is now available for download on GitHub.
The RoomAlive Toolkit enables developers to network one or more Kinect sensors to one or more projectors and, by so doing, to project interactive experiences across the surfaces of an entire room. The toolkit provides everything needed to do interactive projection mapping, which enables an entirely new level of engagement, in which interactive content can come to virtual life on the walls, the floor, and the furniture. Imagine turning a living room into a holodeck or a factory floor—the RoomAlive Toolkit makes such scenarios possible.
This video shows the RoomAlive Toolkit calibration process in action.
The most basic setup for the toolkit requires one projector linked to one of the latest Kinect sensors. But why limit yourself to just one of each? Experiences become larger and more immersive with the addition of more Kinect sensors and projectors, and the RoomAlive Toolkit provides what you need to get everything set up and calibrated.
While the most obvious use for the RoomAlive Toolkit is the creation of enhanced gaming experiences, its almost magical capabilities could be a game-changer in retail displays, art installations, and educational applications. The toolkit derives from the IllumiRoom and RoomAlive projects developed by Microsoft Research.
Over the next several weeks, we will be releasing demo videos that show developers how to calibrate the data from multiple Kinect sensors and how to use the software and tools to create their own projection mapping scenarios. In the meantime, you can get a sense of the creative potential of the RoomAlive Toolkit in the video on Wobble, which shows how a room’s objects can be manipulated for special effects, and the video on 3D Objects, which shows how virtual objects can be added to the room. Both of these effects are part of the toolkit’s sample app. And please share your feedback, issues, and suggestions over at the project’s home on GitHub.
Build-A-Bear Workshop stores have been delivering custom-tailored experiences to children for 15 years in the form of make-your-own stuffed animals, but the company recently recognized that its target audience was gravitating toward digital devices. So it has begun advancing its in-store experiences to match the preferences of its core customers by incorporating digital screens throughout the stores—from the entrance to the stations where the magic of creating new fluffy friends happens.
A key part of Build-A-Bear's digital shift is their interactive storefront that's powered by Kinect for Windows. It enables shoppers to play digital games on either a screen adjacent to the store entrance or directly through the storefront window simply by using their bodies and natural gestures to control the game.
Children pop virtual balloons in a Kinect for Windows-enabled game at this Build-A-Bear store's front window.
"We're half retail, half theme park," said Build-A-Bear Director of Digital Ventures Brandon Elliott. The Kinect for Windows platform instantly appealed to Build-A-Bear as "a great enabler for personalized interactivity."
The Kinect for Windows application, launched at six pilot stores, uses skeletal tracking to enable two players (four hands) to pop virtual balloons (up to five balloons simultaneously) by waving their hands or by touching the screen directly. While an increasing number of retail stores use digital signage these days, Elliott noted: "What they're not doing is building a platform for interactive use."
"We wanted something that we could build on, that's a platform for ever-improving experiences," Elliott said. "With Kinect for Windows, there’s no learning curve. People can interact naturally with technology by simply speaking and gesturing the way they do when communicating with other people. The Kinect for Windows sensor sees and hears them."
"Right now, we're just using the skeletal tracking, but we could use voice recognition components or transform the kids into on-screen avatars," he added. "The possibilities are endless." Part of the Build-A-Bear's vision is to create Kinect for Windows apps that tie into the seasonal marketing themes that permeate the stores. Elliott said that Build-A-Bear selected the combination of the Microsoft .NET Framework, Kinect for Windows SDK, and Kinect for Windows sensor specifically so that they can take advantage of existing developer platforms to build these new apps quickly.
“We appreciate that the Kinect for Windows SDK is developing so rapidly. We appreciate the investment Microsoft is making to continue to open up features within the Kinect for Windows sensor to us,” Elliott said. "The combination of Kinect for Windows hardware and software unlocks a world of new UI possibilities for us."
Microsoft developer and technical architect Todd Van Nurden and others at the Minneapolis-based Microsoft Technology Center helped Build-A-Bear with an early consultation that led to prototyping apps for the project.
"The main focus of my consult was to look for areas beyond simple screen-tied interactions to create experiences where Kinect for Windows activates the environment. Screen-based interactions are, of course, the easiest but less magical then environmental," Van Nurden said. "We were going for magical."
The first six Build-A-Bear interactive stores launched in October and November 2012 in St. Louis, Missouri; Pleasanton, California; Annapolis, Maryland; Troy, Michigan; Fairfax, Virginia; and Indianapolis, Indiana (details). Four of the stores have gesture-enhanced interactive signs at the entrance, while two had to be placed behind windows to comply with mall rules. Kinect for Windows can work through glass with the assistance of a capacitive sensor that enables the window to work as a touch screen, and an inductive driver that turns glass into a speaker.
So far, Build-A-Bear has been thrilled with what Elliott calls "fantastic" results. "Kids get it," he said. "We have a list of apps we want to build over the next couple of years. We can literally write an app for one computer in the store, and put it anywhere."
This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room that is featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.
Twenty Bloomingdale's stores across the United States are featuring Swivel this week—including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.
Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.
Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences at retail stores: they featured another Kinect for Windows solution last March at their Century City store in Los Angeles, just six weeks after the launch of the technology. That solution by Bodymetrics uses shoppers’ body measurements to help them find the best fitting jeans. The Bodymetrics body mapping technology is currently being used at the Bloomingdale’s store in Palo Alto, California.
"Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."
Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is nearly instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of yourself dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.
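A hand-wave selection gesture like the one described above could, in principle, be detected from v1 skeletal data along the lines of the sketch below; Swivel's actual gesture logic belongs to FaceCake, and the SwipeDetector class and its thresholds are invented for illustration.

```csharp
// A simplified, illustrative sketch of detecting a left-to-right hand-wave: the right
// hand must travel a minimum horizontal distance within a short time window.
using System;
using Microsoft.Kinect;

class SwipeDetector
{
    private float? startX;
    private DateTime startTime;

    // Call once per skeleton frame; returns true when a swipe completes.
    public bool Update(Skeleton skeleton)
    {
        SkeletonPoint hand = skeleton.Joints[JointType.HandRight].Position;

        if (startX == null || (DateTime.UtcNow - startTime).TotalSeconds > 1.0)
        {
            // Start (or restart) the gesture window from the current hand position.
            startX = hand.X;
            startTime = DateTime.UtcNow;
            return false;
        }

        if (hand.X - startX.Value > 0.4f)   // roughly 40 cm of travel to the right
        {
            startX = null;                  // reset so the next wave starts fresh
            return true;
        }
        return false;
    }
}
```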
Since Bloomingdale’s piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move further for the camera) and improved detection of different-sized people so that it can display more accurately how the garment would look if fitted to the customer.
Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:
With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.