Swivel Close-Up, a Kinect for Windows-based kiosk from FaceCake, lets customers visualize themselves in small accessories such as makeup, sunglasses, and jewelry.
Microsoft Kinect for Windows has been playing an increasingly important role in retail, from interactive kiosks at stores such as Build-A-Bear Workshop, to virtual dressing rooms at fashion leaders like Bloomingdale's, to virtual showrooms at Nissan dealerships. This year's National Retail Federation (NRF) Convention and Expo, which took place earlier this week, showcased several solutions that provide retailers with new ways to drive customer engagement, sales, and loyalty.
Trend watchers have noted significant shifts in how consumers shop—often blending online and in-store investigation by using phones, tablets, kiosks, and computers in addition to good old-fashioned salesperson interaction. Brick-and-mortar stores, which are facing vigorous competition from online resellers, are embracing new technologies like Kinect for Windows to help drive sales and retention—and to delight and surprise customers with fun, interactive shopping experiences. Even better, customers can get more accurate and personalized information about whether a specific product is right for them—whether it's an article of clothing or a piece of furniture—reducing dissatisfaction and inconvenient returns.
"This past holiday season, we’ve seen retailers get much more tech savvy in how they engage customers and offer more flexibility in how they shop," said Kinect for Windows Senior Channel Development Manager Michael Fry. "As the lines between traditional and digital shopping channels continue to blur, retailers must seek new ways to deliver the most value and earn loyalty through compelling, seamless experiences across all touch points with their customers. Technologies like Kinect for Windows help retailers engage customers with interactive shopping experiences that are not only fun, but also increase important bottom-line business results—increasing engagement, awareness, and brand value while making it easier to select the best products for them."
At a hospitality event during NRF, Kinect for Windows partner Avanade showed one such innovation: their "shoppable storefront," created for my-wardrobe.com in Norway. Customers can walk up to the showroom window and—even after business hours—interact with the Kinect for Windows sensor to browse the store catalog, view pricing, and scan a Quick Response (QR) code to quickly purchase the product online via mobile phone. See a video of how it works.
"Consider the possibilities within the store, they're almost endless with a technology like Kinect for Windows," said John Konczal, director of service line marketing at Avanade. "You could build a guide for customers to find more information about products and quickly locate them in the store. If an item is not available, order it for shipment and pick-up at the nearest store. The interactivity, simplicity, and responsiveness of this technology can really help retailers differentiate their stores from the competition."
Avanade also demonstrated Natural User Observation of Retail Displays (NUO), which provides a cost-effective solution for retailers by gathering real-time customer response and behavior. This allows retail managers to do things like determine where customers are spending their time in the store, identify trends, and gather demographic and customer behavior as they interact with store displays. Avanade reports that the solution integrates into existing store and back-office IT systems and provides dashboards and data-rich reporting for improving product placement, marketing effectiveness, and overall display performance.
Another of our partners, FaceCake Marketing Technologies, Inc., which developed Swivel, the 3-D virtual dressing room that's been featured at Bloomingdale's, showed NRF attendees the newest enhancements to their Swivel software. The enhancements, which work in conjunction with the latest Kinect for Windows SDK, include face-tracking and a feature called real-time Compare, which allows you to contrast two looks in a full-motion visualization of yourself in two dresses (or any type of clothing) side-by-side. Sizing is now even more accurate, and FaceCake also added multi-user functionality that allows, for example, a bride to see herself, virtually, in various wedding dresses at the same time as her bridesmaids see themselves in their bridesmaid dresses.
We also featured another exciting new product from FaceCake in our booth: Swivel Close-Up. This Kinect for Windows-based kiosk, which operates within a two-foot environment, lets customers virtually try on items much smaller than clothing, including makeup, sunglasses, and jewelry. Earrings dangle and twist beautifully as a shopper tries them on, and consumers can try on a limitless number of lip colors without lipstick ever touching their lips.
"We can now provide an extended Try-On solution that is real-time, 3-D, and full motion as opposed to just uploading a static image and then modifying it," said FaceCake CEO Linda Smith. "The result is a lifelike representation that's just like looking in a mirror—your dream dressing room mirror powered by Swivel and Kinect for Windows! It's both efficient and fun for the customer."
One of the key themes of this year's NRF event was putting customers at the center of retail marketing, something that Kinect for Windows accomplishes readily, thanks to its ability to quickly entice customers into virtual shopping spaces within actual storefronts, making it easier than ever for them to find, experience, and purchase products that are right for them.
"Staying competitive in retail today means putting customers at the heart of the business and seeking new ways to deliver value in the store," Fry said. "A Kinect for Windows retail display immediately puts the focus on the shopper, delivering uniquely personalized results that drive both sales and customer satisfaction."
Kinect for Windows team
Build-A-Bear Workshop stores have been delivering custom-tailored experiences to children for 15 years in the form of make-your-own stuffed animals, but the company recently recognized that its target audience was gravitating toward digital devices. So it has begun advancing its in-store experiences to match the preferences of its core customers by incorporating digital screens throughout the stores—from the entrance to the stations where the magic of creating new fluffy friends happens.
A key part of Build-A-Bear's digital shift is its interactive storefront, powered by Kinect for Windows. It enables shoppers to play digital games on either a screen adjacent to the store entrance or directly through the storefront window, simply by using their bodies and natural gestures to control the game.
Children pop virtual balloons in a Kinect for Windows-enabled game at this Build-A-Bear store's front window.
"We're half retail, half theme park," said Build-A-Bear Director of Digital Ventures Brandon Elliott. The Kinect for Windows platform instantly appealed to Build-A-Bear as "a great enabler for personalized interactivity."
The Kinect for Windows application, launched at six pilot stores, uses skeletal tracking to enable two players (four hands) to pop virtual balloons (up to five balloons simultaneously) by waving their hands or by touching the screen directly. While an increasing number of retail stores use digital signage these days, Elliott noted: "What they're not doing is building a platform for interactive use."
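For developers curious what such an app involves, here is a minimal sketch of the core pattern: reading tracked skeletons from the SDK and hit-testing hand joints against on-screen targets. The Balloon type and its hit radius are illustrative stand-ins, not Build-A-Bear's actual code; the skeleton-reading pattern is the standard SDK one.

```csharp
// Sketch: popping balloons with skeletal tracking (Kinect for Windows SDK v1.x).
using System.Collections.Generic;
using Microsoft.Kinect;

class Balloon
{
    public SkeletonPoint Center;   // balloon position in skeleton space
    public float Radius = 0.15f;   // hit radius in meters (illustrative)
    public bool Popped;

    public bool Contains(SkeletonPoint p)
    {
        float dx = p.X - Center.X, dy = p.Y - Center.Y;
        return !Popped && dx * dx + dy * dy < Radius * Radius;
    }
}

class BalloonGame
{
    readonly List<Balloon> balloons = new List<Balloon>();

    public void Start(KinectSensor sensor)
    {
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);

            foreach (Skeleton s in skeletons)
            {
                if (s.TrackingState != SkeletonTrackingState.Tracked) continue;
                // Two players, four hands: test both hands of each tracked player.
                TryPop(s.Joints[JointType.HandLeft]);
                TryPop(s.Joints[JointType.HandRight]);
            }
        }
    }

    void TryPop(Joint hand)
    {
        if (hand.TrackingState != JointTrackingState.Tracked) return;
        foreach (Balloon b in balloons)
            if (b.Contains(hand.Position)) b.Popped = true;
    }
}
```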
"We wanted something that we could build on, that's a platform for ever-improving experiences," Elliott said. "With Kinect for Windows, there’s no learning curve. People can interact naturally with technology by simply speaking and gesturing the way they do when communicating with other people. The Kinect for Windows sensor sees and hears them."
"Right now, we're just using the skeletal tracking, but we could use voice recognition components or transform the kids into on-screen avatars," he added. "The possibilities are endless." Part of the Build-A-Bear's vision is to create Kinect for Windows apps that tie into the seasonal marketing themes that permeate the stores. Elliott said that Build-A-Bear selected the combination of the Microsoft .NET Framework, Kinect for Windows SDK, and Kinect for Windows sensor specifically so that they can take advantage of existing developer platforms to build these new apps quickly.
“We appreciate that the Kinect for Windows SDK is developing so rapidly. We appreciate the investment Microsoft is making to continue to open up features within the Kinect for Windows sensor to us,” Elliott said. "The combination of Kinect for Windows hardware and software unlocks a world of new UI possibilities for us."
Microsoft developer and technical architect Todd Van Nurden and others at the Minneapolis-based Microsoft Technology Center helped Build-A-Bear with an early consultation that led to prototyping apps for the project.
"The main focus of my consult was to look for areas beyond simple screen-tied interactions to create experiences where Kinect for Windows activates the environment. Screen-based interactions are, of course, the easiest but less magical then environmental," Van Nurden said. "We were going for magical."
The first six Build-A-Bear interactive stores launched in October and November 2012 in St. Louis, Missouri; Pleasanton, California; Annapolis, Maryland; Troy, Michigan; Fairfax, Virginia; and Indianapolis, Indiana (details). Four of the stores have gesture-enhanced interactive signs at the entrance, while two had to be placed behind windows to comply with mall rules. Kinect for Windows can work through glass with the assistance of a capacitive sensor that enables the window to work as a touch screen, and an inductive driver that turns glass into a speaker.
So far, Build-A-Bear has been thrilled with what Elliott calls "fantastic" results. "Kids get it," he said. "We have a list of apps we want to build over the next couple of years. We can literally write an app for one computer in the store, and put it anywhere."
The October release of the Kinect for Windows software development kit (SDK) was a pivotal update with a number of key improvements. One of the most important is enhanced control of the sensor's infrared (IR) sensing capabilities, which opens a world of new possibilities for developers.
IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restricted in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which projects a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people, along with their skeletal tracking data.
One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
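In code, the new control is a one-line toggle. A minimal sketch of alternating emitters between two sensors observing the same space (sensor discovery and error handling omitted):

```csharp
// Toggling the IR emitter at run time (Kinect for Windows SDK 1.6+).
// With two sensors watching the same space, suppressing one emitter at a
// time keeps the dot patterns from degrading each other's depth data.
using Microsoft.Kinect;

KinectSensor sensorA = KinectSensor.KinectSensors[0];
KinectSensor sensorB = KinectSensor.KinectSensors[1];
sensorA.Start();
sensorB.Start();

// Let sensor A capture clean depth while B's pattern is suppressed...
sensorB.ForceInfraredEmitterOff = true;
// ...then swap.
sensorB.ForceInfraredEmitterOff = false;
sensorA.ForceInfraredEmitterOff = true;
```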
Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”
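Here is a sketch of what that dual-mode idea might look like, using the SDK's IR stream, which is exposed through the color stream as a 16-bit grayscale format (rendering code omitted):

```csharp
// Clean IR night vision: read the raw IR stream and, with an external IR
// lamp lighting the scene, turn the emitter's dot pattern off. Flip the
// flag back on whenever depth and skeletal tracking are needed again.
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
sensor.ColorFrameReady += (s, e) =>
{
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame == null) return;
        var pixels = new byte[frame.PixelDataLength];  // 2 bytes per IR pixel
        frame.CopyPixelDataTo(pixels);
        // Display 'pixels' as Gray16 (for example, via a WPF WriteableBitmap).
    }
};
sensor.Start();

sensor.ForceInfraredEmitterOff = true;   // clean IR: lamp-lit scene, no dots
// ... on demand:
sensor.ForceInfraredEmitterOff = false;  // restore depth and skeletal tracking
```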
Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher definition color camera for enhanced green screen capabilities. This will enable them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine."
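Whatever camera supplies the color image, the heart of a Kinect green-screen effect is the player segmentation carried in the depth stream. A sketch of extracting that mask follows; compositing against the separately calibrated HD camera is left out:

```csharp
// Extracting the player mask for green-screen compositing. When skeletal
// tracking is enabled, each 16-bit depth value carries a 3-bit player index
// in its low bits: 0 means background, 1-6 identify tracked people.
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.SkeletonStream.Enable();   // player indices require skeletal tracking

sensor.DepthFrameReady += (s, e) =>
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;
        var depth = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(depth);

        var isPlayer = new bool[depth.Length];
        for (int i = 0; i < depth.Length; i++)
            isPlayer[i] = (depth[i] & DepthImageFrame.PlayerIndexBitmask) != 0;

        // Composite: keep color pixels where isPlayer[i] is true; draw the
        // virtual background elsewhere (after mapping depth to color space).
    }
};
sensor.Start();
```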
A unique clinic for treating children with cancer and blood disorders, alex’s place is designed to be a warm, open, communal space. The center—which is located in Miami, Florida—helps put its patients at ease by engaging them with interactive screens that allow them to be transported into different environments—where they become a friendly teddy bear, frog, or robot and control their character’s movements in real time.
"As soon as they walk in, technology is embracing them," said Dr. Julio Barredo, chief of pediatric services at alex's place in The Sylvester Comprehensive Cancer Center, University of Miami Health Systems.
The clinic—which opened its doors in May 2012—was conceived of and designed with this in mind, and the Kinect for Windows digital experience was part of the vision from day one. Created by Snibbe Interactive, Character Mirror was designed to fit naturally within this innovative, unconventional treatment environment. The goal is to help reinforce patients' mind-body connection with engaging play and entertainment, as well as to potentially reduce their fear of technology and the treatments they face. As an added benefit, nurses can observe a child's natural range of movement during play and more easily draw out answers to key diagnostic questions.
"I find the gestural interactive experiences we created for alex's place in Miami among the most worthwhile and satisfying in our history," said Scott Snibbe, founder and CEO of Snibbe Interactive. "Kids in hospitals are feeling lonely, scared, and bored, not to mention sick. Partnering with Alex Daly and Dr. Barredo, we created a set of magical experiences that encourage healthy, social, and physical activity among the kids.
"Kids found these experiences so pleasing that they actually didn't want to leave after their treatments were complete," Snibbe added. "We are very excited to roll out these solutions to more hospitals, and transform healthcare through natural user interfaces that promote social play and spontaneous physical therapy."
Styku, a Kinect Accelerator startup, set out to transform clothes shopping for retailers, using the Kinect for Windows sensor and software development kit to develop its Smart Fitting Room quickly, a new case study reports.
The technology will soon be used by Brooks Brothers, IM-Label, and other fashion retailers. Styku hopes to improve the shopping experience—reducing the problem of shoppers returning up to 40 percent of their online purchases and offering a faster, less expensive body scanning solution. Additionally, military apparel contractors appreciate the improved measurement capability of Kinect for Windows with the Styku software—estimated to be up to 400 percent more accurate—which could save soldiers' lives, thanks to better-fitting body armor.
Customers can quickly visualize the fit and fabric characteristics of garments over digital renderings of their bodies, created by scanning their body with the Kinect for Windows sensor. The scan lasts only one second—reducing the risk that a fidgety customer will compromise the scan's accuracy. Clothing is rendered in 3-D, and customers can use gestures to rotate the model, view a custom-fit color map, and compare multiple sizes.
"Kinect for Windows had exactly the sensors that we needed, in a small package," said Pierre Du Charme, vice president of Software Engineering for Styku. "The SDK was easy to learn and gave us the tools to quickly implement a full-featured application."
Last week, I had the privilege of giving attendees at Microsoft's BUILD 2012 conference a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.
Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.
In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.
Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.
Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from any single reading. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
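Kinect Fusion's real pipeline tracks the camera's pose and fuses frames into a volumetric model, but the benefit of averaging many noisy readings can be shown with a much simpler toy: a per-pixel running average over a stationary scene. This is an illustration of the principle only, not the Fusion algorithm:

```csharp
// Toy illustration of multi-frame depth integration: a per-pixel running
// average over a stationary scene with a stationary sensor. Kinect Fusion
// itself does far more (camera-pose tracking, volumetric fusion), but the
// principle that many noisy readings produce a cleaner surface is the same.
using Microsoft.Kinect;

const int PixelCount = 640 * 480;
var sum = new double[PixelCount];
var count = new int[PixelCount];

KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.DepthFrameReady += (s, e) =>
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;
        var data = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(data);
        for (int i = 0; i < data.Length; i++)
        {
            int mm = data[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            if (mm <= 0) continue;   // skip pixels with no depth reading
            sum[i] += mm;
            count[i]++;
        }
    }
};
sensor.Start();
// After a few hundred frames, sum[i] / count[i] is a denoised depth map.
```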
We look forward to seeing how our developer community and business partners will use the tool.
Chris White
Senior Program Manager, Kinect for Windows
Earlier this week at the 2012 Seattle Interactive Conference, Oscar Murillo, user experience architect for Kinect for Windows, kicked off a six-person panel discussion about the transformational power of voice and gesture technology with a demonstration that showed participants how much the Kinect sensor has grown beyond its gaming roots.
"Kinect for Windows is a premier technology that enables users to interact with systems without touching a user interface" noted Murillo. "Human-to-human interactions are fluid and multimodal. With Kinect for Windows, we see human-computer interactions that are coming closer to mirroring the way humans naturally interact: effortless, transparent, and contextual communication between users and technology—by using voice and gesture—are now becoming possible. We see interactions that are as natural as human beings themselves." Murillo illustrated how, by using Kinect for Windows, he can control an environment with his body and voice. The sensor changed his appearance and even placed him in different environments by using "real-time green screening" to provide a museum setting and an abstract landscape of cubes and spheres with which he could interact. Murillo tracked 100 different points on his face and showed both thermal and radiant scanning. The use of these and other emerging techniques provides "a novel way for users to interact with products, brands, environments, services, and each other," Murillo added.
After Murillo’s presentation, Steve Clayton, editor of Next at Microsoft blog, moderated a panel of NUI thought leaders from around the world who are using Kinect for Windows in their work.
Academy Award-winning visual effects designer John Gaeta, who developed the "bullet time" effects in The Matrix Trilogy, also worked on Kinect for Windows in its early stages. His company, Float Hybrid Entertainment, develops interactive displays and participates in the Kinect for Windows advisory board.
"The thing that is interesting is the human interface part of it," observed Gaeta. "To allow people to have some sort of method to reflect themselves back, and that there can be a two-way relationship between the average person and a machine."
Scott Snibbe—founder of Snibbe Interactive and a world-renowned interactive media entrepreneur, researcher, and artist—is also an early pioneer with Kinect, which he has been working with since 2006. "With NUI, we can finally put the person in control of the computer instead of the computer controlling the person," he explained. "Humans are first and foremost social; Kinect for Windows can power social NUIs that respond to gesture and voice—the same way humans communicate with each other."
Matt Von Trott, digital director and partner at Assembly Ltd., noted the growing appeal of Kinect for Windows to the advertising industry. "In advertising these days, you need to make something that does more than make people talk about it," he said.
James Ashley, presentation layer architect on the Emerging Experiences team at the international digital agency Razorfish and a Microsoft Most Valuable Professional (MVP) for the Kinect sensor, remarked that he was skeptical the first time he saw the early concept videos for the project that eventually became Kinect for Windows. He didn't believe it could really work. But it did, and the results were magical. "People want it and clients want it," he explained.
David Kung, vice president of business development at Oblong Industries, Inc. and a former Disney Imagineer, noted that the bar for entry is much lower than with previous technological advances. "What's most exciting is how the developer community is adopting at a very low investment," he said. "Not just at a highly expensive R&D level."
"We still have a while to go before we get to true multi-modal—we are crawling still for sure," Kung added. "We can envision a time where technology could potentially answer a child's question while looking out the car window, 'What is that?' With NUI, GPS, and other advancements, such scenarios are possible."
This year's Seattle Interactive Conference (October 29 and 30) connected about 4,000 entrepreneurs, developers, and online business professionals, all aspiring to explore the latest online technology and emerging trends. "We're straight-up geeks who just love technology," noted SIC co-founder Mark Peterson. "So, partnering with Microsoft and Kinect for Windows just made sense."
It all started with a couple of kids and a remarkable idea, which eventually spawned two terrifying demon dogs and their master. This concept is transforming the haunt industry and could eventually change how theme parks and other entertainment businesses approach animated mechanical electronics (animatronics). Here's the behind-the-scenes story of how this all came to be:
The boys, 6-year-old Mark and 10-year-old Jack, fell in love with Travel Channel's Making Monsters, a TV program that chronicles the creation of lifelike animatronic creatures. After seeing their dad's work with Kinect for Windows at the Minneapolis-based Microsoft Technology Center, they connected the dots and dreamed up the concept: wouldn't it be awesome if Dad could use his expertise with the Kinect for Windows motion sensor to make better and scarier monsters?
So “Dad”—Microsoft developer and technical architect Todd Van Nurden—sent an email to Distortions Unlimited in Greeley, Colorado, offering praise of their work sculpting monsters out of clay and adjustable metal armatures. He also threw in his boys' suggestion on how they might take things to the next level with Kinect for Windows: Imagine how much cooler and more realistic these monsters could be if they had the ability to see you, hear you, anticipate your behavior, and respond to it. Imagine what it means to this industry now that monster makers can take advantage of the Kinect for Windows gesture and voice capabilities.
Two months passed. Then one day, Todd received a voice mail message from Distortions CEO Ed Edmunds expressing interest. The result: nine months of off-and-on work, culminating with the debut of a Making Monsters episode detailing the project on Travel Channel earlier today, October 21 (check local listings for show times, including repeat airings). The full demonic installation can also be experienced firsthand at The 13th Floor haunted house in Denver, Colorado, now through November 10.
To get things started, Distortions sent Van Nurden maquettes—scale models about one-quarter of the final size—so he could build prototypes of two demon dogs and their demon master. Van Nurden worked with Parker, a company that specializes in robotics, to develop movement based on random path manipulation: more fluid than a typical robot's, reactive, and only loosely scripted. The maquettes were wired to Kinect for Windows with skeletal tracking, audio tracking, and voice control functionality as a proof of concept to suggest a menu of possible options.
Distortions was impressed. "Ed saw everything it could do and said, 'I want all of them. We need to blow this out’," recalled Van Nurden.
Todd Van Nurden prepares to install the Kinect for Windows sensor in the demon's belt.
The full-sized dogs are four feet high, while the demon master stands nearly 14 feet. A Kinect for Windows sensor connected to a ruggedized Lenovo M92 workstation is embedded in the demon's belt; after interpreting the tracking data, the system sends commands over wired Ethernet to control the demon and the dogs. Custom software, built by using the Kinect for Windows SDK, provides a drag-and-drop interface for laying out character placement and other configurable settings. It also provides a top-down view for the attraction's operator, displaying where the guests are and how the creatures are tracking them.
"We used a less common approach to processing the data: we leveraged the Reactive Extensions for .NET to set up push-based LINQ subscriptions," Van Nurden revealed. "The drag-and-drop features enable the operator to control the place-space configuration, as well as when certain behaviors begin. We used most of the Kinect for Windows SDK managed API, with the exception of raw depth data."
The dogs are programmed to react very differently if approached by an adult (which might elicit a bark or growl) versus a child (which could prompt fast panting or a soft whimper). Scratching behind a hound's ears provokes a "happy dog" response—assuming you can overcome your fear and get close enough to actually touch one! Each action or mood includes its own set of kinesthetic actions and vocal cues. The sensor quietly tracks groups of people, alternating between a loose tracking algorithm that quickly estimates relative height when figures are farther away and full skeletal tracking when someone approaches a dog or demon and more detailed data is needed to drive the beasts' reactions.
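A sketch of how such a pipeline might look: Reactive Extensions turn the SDK's skeleton events into an observable that LINQ operators can filter, and a height check picks the reaction. The 1.4-meter cutoff and the reaction stubs are illustrative, not the installation's actual values:

```csharp
// Push-based skeleton pipeline with Reactive Extensions (Rx) and LINQ.
using System;
using System.Reactive.Linq;
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.SkeletonStream.Enable();

IObservable<Skeleton> tracked = Observable
    .FromEventPattern<SkeletonFrameReadyEventArgs>(
        h => sensor.SkeletonFrameReady += h,
        h => sensor.SkeletonFrameReady -= h)
    .Select(evt =>
    {
        using (SkeletonFrame frame = evt.EventArgs.OpenSkeletonFrame())
        {
            if (frame == null) return new Skeleton[0];
            var data = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(data);
            return data;
        }
    })
    .SelectMany(data => data)   // one notification per skeleton
    .Where(s => s.TrackingState == SkeletonTrackingState.Tracked);

// Adults get a growl, children a friendly pant, keyed off head height.
IDisposable subscription = tracked.Subscribe(s =>
{
    float headY = s.Joints[JointType.Head].Position.Y;  // meters, sensor-relative
    if (headY > 1.4f) Growl(s); else Pant(s);
});

sensor.Start();

void Growl(Skeleton s) { /* drive the hound's "menace" sequence */ }
void Pant(Skeleton s)  { /* drive the hound's "friendly" sequence */ }
```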
The end product was so delightfully scary that Van Nurden had to reassure his own sons when they were faced with a life-sized working model of one of the dogs. "I programmed him, he's not going to hurt you," he comforted them.
Fortunately, it is possible to become the demons' master. If you perform a secret voice and movement sequence, they will actually bow to you.
Lisa Tanzer, executive producer for Making Monsters, has been following creature creation for two years while shooting the show at Distortions Unlimited. She was impressed by how much more effective the Kinect for Windows interactivity is than the traditional looped audio and fully scripted movements of regular animatronics: "Making the monsters themselves is the same process—you take clay, sculpt it over an armature, mold it, paint it, all the same steps," she said. "The thing that made this project Distortions did for 13th Floor so incredible and fascinating was the Kinect for Windows technology.”
"It can be really scary," Tanzer reported. "The dogs and demon creature key into people and actually track them around the room. The dog turns, looks at you and whimpers; you go 'Oh, wow, is this thing going to get me?' It's just like a human actor latching on to somebody in a haunted house but there's no human, only this incredible technology.”
"Incorporating Kinect for Windows into monster making is very new to the haunt industry," she added. "In terms of the entertainment industry, it's a huge deal. I think it's a really cool illustration of where things are going."
Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means to developers.
Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.
Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?
Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!
Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.
In the SDK update, you can do more things than you could before, there's more documentation, and there's a specific sample called Basic Interactions that's a follow-on to our Human Interface Guidelines (HIG). The Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.
This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies the best practices that we described in the HIG and can be easily and quickly repurposed by developers.
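The idea behind the PhIZ is easy to picture in code: a comfortable, reach-sized box of physical space is mapped onto the full screen, so modest hand movements travel the whole display. A simplified sketch follows; the shipped component adds smoothing and ergonomics tuning, and the box dimensions here are illustrative:

```csharp
// Simplified PhIZ-style mapping: normalize the hand position within a
// shoulder-anchored physical box, then scale to screen coordinates.
using System;
using Microsoft.Kinect;

static class Phiz
{
    // A reach-sized box in front of the user (meters; illustrative values).
    const float BoxWidth = 0.40f, BoxHeight = 0.30f;

    public static (double X, double Y) HandToScreen(
        SkeletonPoint hand, SkeletonPoint shoulderCenter,
        double screenWidth, double screenHeight)
    {
        // 0..1 position of the hand within the box.
        double nx = (hand.X - shoulderCenter.X + BoxWidth / 2) / BoxWidth;
        double ny = (shoulderCenter.Y - hand.Y) / BoxHeight; // screen Y grows downward

        // Clamp so the cursor parks at the edge when the hand leaves the box.
        nx = Math.Max(0, Math.Min(1, nx));
        ny = Math.Max(0, Math.Min(1, ny));

        return (nx * screenWidth, ny * screenHeight);
    }
}
```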
Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.
Bob: There's no downside to upgrading, so everyone should do it today! There are no breaking changes; it's fully compatible with previous releases of the SDK, it gives you broader operating system support, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.
Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:
What are your top three favorite features in the latest release of the SDK and why?
Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!
Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!
New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).
Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.
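A sketch combining the two points above: pinning the color camera's exposure for a known environment and reading depth samples past the old 4-meter cutoff (the threshold and settings are illustrative):

```csharp
// Extended-range depth plus color camera settings (SDK 1.6).
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.KinectSensors[0];
sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

// Tune the color camera for a fixed, known lighting environment; with
// auto-exposure off, exposure, gain, and white balance can be set explicitly.
sensor.ColorStream.CameraSettings.AutoExposure = false;

sensor.DepthFrameReady += (s, e) =>
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return;
        var data = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(data);
        for (int i = 0; i < data.Length; i++)
        {
            int mm = data[i] >> DepthImageFrame.PlayerIndexBitmaskWidth;
            // Readings past 4000 mm are now delivered instead of clamped.
            // They're noisier, but usable for coarse, long-range sensing.
            if (mm > 4000) { /* "off-roading" territory */ }
        }
    }
};
sensor.Start();
```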
What's it like working together?
Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.
Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.
Peter: My team is organized by the areas of technical affinity. I have developers focused on:
Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.
Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!
What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?
Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.
We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.
Peter Zatloukal, Engineering Manager
Bob Heddle, Group Program Manager
Kinect for Windows
I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.
Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.
The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. It includes:
Extended sensor data access
Access to all this data makes whole new scenarios possible, such as monitoring manufacturing processes with extended-range depth data. Building solutions that work in low-light settings, such as theaters and light-controlled museums, becomes a reality with IR stream exposure. And developers can use the numerous color camera settings to tailor applications to different environments, enhancing how well an application works for end users.
One of the new samples, Basic Interactions – WPF, demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.
Improved developer tools
We are committed to making it easier and easier for developers to create amazing applications. That's why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so that the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices described in the HIG and can be easily repurposed.
Greater support for operating systems
Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.
This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.
As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.
Craig Eisler
General Manager, Kinect for Windows