Last week, I had the privilege of giving attendees at the Microsoft event, BUILD 2012, a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.
Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K. As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.
In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.
Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.
Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single view point. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
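Kinect Fusion itself integrates frames into a volumetric model, but the core noise-reduction idea can be illustrated simply. Here is a hedged Python sketch (not the actual Kinect Fusion algorithm) that averages per-pixel depth readings across frames, skipping the zero values the sensor emits where depth is unknown:

```python
def fuse_depth_frames(frames):
    """Average a sequence of depth frames (millimeters) pixel by pixel,
    skipping zero readings, which the sensor emits where depth is unknown.
    A toy stand-in for Kinect Fusion's volumetric integration."""
    height, width = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for frame in frames:
        for y in range(height):
            for x in range(width):
                d = frame[y][x]
                if d > 0:  # ignore dropped readings
                    fused[y][x] += d
                    counts[y][x] += 1
    for y in range(height):
        for x in range(width):
            fused[y][x] = fused[y][x] / counts[y][x] if counts[y][x] else 0.0
    return fused

# Three noisy readings of a 1x2 frame; the second pixel drops out once.
frames = [
    [[1000, 2000]],
    [[1002, 0]],
    [[998, 2004]],
]
print(fuse_depth_frames(frames))  # [[1000.0, 2002.0]]
```

Averaging hundreds of such frames is what lets the tool recover detail no single reading contains.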
We look forward to seeing how our developer community and business partners will use the tool.
Chris White
Senior Program Manager, Kinect for Windows
Earlier this week at the 2012 Seattle Interactive Conference, Oscar Murillo, user experience architect for Kinect for Windows, kicked off a six-person panel discussion about the transformational power of voice and gesture technology with a demonstration that showed participants how much the Kinect sensor has grown beyond its gaming roots.
"Kinect for Windows is a premier technology that enables users to interact with systems without touching a user interface," noted Murillo. "Human-to-human interactions are fluid and multimodal. With Kinect for Windows, we see human-computer interactions that are coming closer to mirroring the way humans naturally interact: effortless, transparent, and contextual communication between users and technology—by using voice and gesture—is now becoming possible. We see interactions that are as natural as human beings themselves." Murillo illustrated how, by using Kinect for Windows, he can control an environment with his body and voice. The sensor changed his appearance and even placed him in different environments by using "real-time green screening" to provide a museum setting and an abstract landscape of cubes and spheres with which he could interact. Murillo tracked 100 different points on his face and showed both thermal and radiant scanning. The use of these and other emerging techniques provides "a novel way for users to interact with products, brands, environments, services, and each other," Murillo added.
After Murillo’s presentation, Steve Clayton, editor of Next at Microsoft blog, moderated a panel of NUI thought leaders from around the world who are using Kinect for Windows in their work.
Academy Award-winning visual effects designer John Gaeta, who developed the "bullet time" effects in The Matrix Trilogy, also worked on Kinect for Windows in its early stages. His company, Float Hybrid Entertainment, develops interactive displays and participates in the Kinect for Windows advisory board.
"The thing that is interesting is the human interface part of it," observed Gaeta. "To allow people to have some sort of method to reflect themselves back, and that there can be a two-way relationship between the average person and a machine."
Scott Snibbe—founder of Snibbe Interactive and a world-renowned interactive media entrepreneur, researcher, and artist—is also an early pioneer with Kinect, which he has been working with since 2006. "With NUI, we can finally put the person in control of the computer instead of the computer controlling the person," he explained. "Humans are first and foremost social; Kinect for Windows can power social NUIs that respond to gesture and voice—the same way humans communicate with each other."
Matt Von Trott, digital director and partner at Assembly Ltd., noted the growing appeal of Kinect for Windows to the advertising industry. "In advertising these days, you need to make something that does more than make people talk about it," he said.
James Ashley, presentation layer architect on the Emerging Experiences team at the international digital agency Razorfish and a Microsoft Most Valued Professional for the Kinect sensor, remarked that he was skeptical the first time he saw the early concept videos for the project that eventually became Kinect for Windows. He didn’t believe it could really work. But it did, and the results were magical. "People want it and clients want it," he explained.
David Kung, vice president of business development at Oblong Industries, Inc. and former Disney Imagineer, notes that the bar for entry is much lower than with previous technological advances. "What's most exciting is how the developer community is adopting at a very low investment," he said. "Not just at a highly expensive R&D level."
"We still have a while to go before we get to true multi-modal—we are crawling still for sure," Kung added. "We can envision a time where technology could potentially answer a child's question while looking out the car window, 'What is that?' With NUI, GPS, and other advancements, such scenarios are possible."
This year's Seattle Interactive Conference (October 29 and 30) connected about 4,000 entrepreneurs, developers, and online business professionals who are all aspiring to explore the latest online technology and emerging trends. "We're straight-up geeks who just love technology," noted SIC co-founder Mark Peterson. "So, partnering with Microsoft and Kinect for Windows just made sense."
Kinect for Windows team
It all started with a couple of kids and a remarkable idea, which eventually spawned two terrifying demon dogs and their master. This concept is transforming the haunt industry and could eventually change how theme parks and other entertainment businesses approach animated mechanical electronics (animatronics). Here's the behind-the-scenes story of how this all came to be:
The boys, 6-year-old Mark and 10-year-old Jack, fell in love with Travel Channel's Making Monsters, a TV program that chronicles the creation of lifelike animatronic creatures. After seeing their dad's work with Kinect for Windows at the Minneapolis-based Microsoft Technology Center, they connected the dots and dreamed up the concept: wouldn't it be awesome if Dad could use his expertise with the Kinect for Windows motion sensor to make better and scarier monsters?
So “Dad”—Microsoft developer and technical architect Todd Van Nurden—sent an email to Distortions Unlimited in Greeley, Colorado, offering praise of their work sculpting monsters out of clay and adjustable metal armatures. He also threw in his boys' suggestion on how they might take things to the next level with Kinect for Windows: Imagine how much cooler and more realistic these monsters could be if they had the ability to see you, hear you, anticipate your behavior, and respond to it. Imagine what it means to this industry now that monster makers can take advantage of the Kinect for Windows gesture and voice capabilities.
Two months passed. Then one day, Todd received a voice mail message from Distortions CEO Ed Edmunds expressing interest. The result: nine months of off-and-on work, culminating with the debut of a Making Monsters episode detailing the project on Travel Channel earlier today, October 21 (check local listings for show times, including repeat airings). The full demonic installation can also be experienced firsthand at The 13th Floor haunted house in Denver, Colorado, now through November 10.
To get things started, Distortions sent Van Nurden maquettes—scale models about one-quarter of the final size—to build prototypes of two demon dogs and their demon master. Van Nurden worked with Parker, a company that specializes in robotics, to develop movement using random path manipulation, which is more fluid than a typical robot's motion and is reactive and only loosely scripted. The maquettes were wired to Kinect for Windows with skeletal tracking, audio tracking, and voice control functionality as a proof of concept to suggest a menu of possible options.
Distortions was impressed. "Ed saw everything it could do and said, 'I want all of them. We need to blow this out,'" recalled Van Nurden.
Todd Van Nurden prepares to install the Kinect for Windows sensor in the demon's belt
The full-sized dogs are four feet high, while the demon master stands nearly 14 feet. A Kinect for Windows sensor connected to a ruggedized Lenovo M92 workstation is embedded in the demon's belt and, after interpreting tracking data, sends commands to control itself and the dogs via wired Ethernet. Custom software, built by using the Kinect for Windows SDK, provides the operators with a drag-and-drop interface for laying out character placement and other configurable settings. It also provides a top-down view for the attraction's operator, displaying where the guests are and how the creatures are tracking them.
"We used a less common approach to processing the data as we leveraged the Reactive Extensions for .NET to basically set up push-based Linq subscriptions," Van Nurden revealed. "The drag-and-drop features enable the operator to control the place-space configuration, as well as when certain behaviors begin. We used most of the Kinect for Windows SDK managed API with the exception of raw depth data."
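The installation itself uses the Reactive Extensions for .NET; as a rough illustration of what a push-based subscription pipeline looks like, here is a minimal Python sketch (the class and event fields are hypothetical, not the real Rx API):

```python
class Observable:
    """Minimal push-based event stream, loosely in the spirit of the
    Reactive Extensions pattern the team describes (not the real Rx API)."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler, predicate=lambda e: True):
        # Each subscription pairs a filter with a handler, like a
        # push-based LINQ Where(...).Subscribe(...) chain.
        self._subscribers.append((predicate, handler))

    def push(self, event):
        for predicate, handler in self._subscribers:
            if predicate(event):
                handler(event)

# Hypothetical tracking events: route close-range skeletons to the
# full-tracking handler and distant figures to a looser estimator.
skeletons = Observable()
log = []
skeletons.subscribe(lambda e: log.append(("full", e["id"])),
                    predicate=lambda e: e["distance_m"] < 2.0)
skeletons.subscribe(lambda e: log.append(("loose", e["id"])),
                    predicate=lambda e: e["distance_m"] >= 2.0)

skeletons.push({"id": 1, "distance_m": 1.2})
skeletons.push({"id": 2, "distance_m": 4.5})
print(log)  # [('full', 1), ('loose', 2)]
```

The appeal of the push model is that sensor frames drive the behaviors directly, with no polling loop in between.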
The dogs are programmed to react very differently if approached by an adult (which might elicit a bark or growl) versus a child (which could prompt a fast pant or soft whimper). Scratching behind a hound's ears provokes a "happy dog" response—assuming you can overcome your fear and get close enough to actually touch one! Each action or mood includes its own set of kinesthetic actions and vocal cues. The sensor quietly tracks groups of people, alternating between a loose tracking algorithm that can calculate relative height quickly when figures are further away and full skeletal tracking when someone approaches a dog or demon, requiring more detailed data to drive the beasts' reactions.
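As a sketch of how that kind of behavior selection might look, here is a toy Python decision function; the height and distance thresholds are illustrative assumptions, not the installation's actual values:

```python
def choose_reaction(height_m, distance_m, petting=False):
    """Pick a creature response from rough tracking data.
    Thresholds are illustrative, not the installation's actual values."""
    if petting:
        return "happy dog"         # someone braved scratching behind the ears
    if distance_m > 4.0:
        return "idle wander"       # nobody close enough to key on
    if height_m < 1.4:
        return "soft whimper"      # likely a child
    return "low growl"             # likely an adult

print(choose_reaction(1.1, 2.0))                 # soft whimper
print(choose_reaction(1.8, 1.5))                 # low growl
print(choose_reaction(1.8, 1.0, petting=True))   # happy dog
```

In the real system, each selected mood then unlocks its own set of kinesthetic actions and vocal cues.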
The end product was so delightfully scary that Van Nurden had to reassure his own sons when they were faced with a life-sized working model of one of the dogs. "I programmed him, he's not going to hurt you," he comforted them.
Fortunately, it is possible to become the demons' master. If you perform a secret voice and movement sequence, they will actually bow to you.
Lisa Tanzer, executive producer for Making Monsters, has been following creature creation for two years while shooting the show at Distortions Unlimited. She was impressed by how much more effective the Kinect for Windows interactivity is than the traditional looped audio and fully scripted movements of regular animatronics: "Making the monsters themselves is the same process—you take clay, sculpt it over an armature, mold it, paint it, all the same steps," she said. "What made the project Distortions did for 13th Floor so incredible and fascinating was the Kinect for Windows technology."
"It can be really scary," Tanzer reported. "The dogs and demon creature key into people and actually track them around the room. The dog turns, looks at you and whimpers; you go 'Oh, wow, is this thing going to get me?' It's just like a human actor latching on to somebody in a haunted house but there's no human, only this incredible technology.”
"Incorporating Kinect for Windows into monster making is very new to the haunt industry," she added. "In terms of the entertainment industry, it's a huge deal. I think it's a really cool illustration of where things are going."
Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means to developers.
Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.
Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?
Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!
Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.
In the SDK update, you can do more things than you could before, and there's more documentation, including a specific sample called Basic Interactions that's a follow-on to our Human Interface Guidelines (HIG). Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample, demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.
This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies best practices that we described in the HIG and can be repurposed by developers easily and quickly.
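The essence of a PhIZ-style mapping is a normalize-and-scale from a physical box to screen pixels. Here is a hedged Python sketch; the zone bounds are invented for illustration, whereas the actual SDK sample derives the zone from the user's body position:

```python
def phiz_to_screen(hand_x, hand_y, zone, screen_w, screen_h):
    """Map a hand position (meters, skeleton space) inside a physical
    interaction zone to screen pixels. Zone bounds here are illustrative;
    the real PhIZ is sized and anchored relative to the user's body."""
    left, top, right, bottom = zone
    # Normalize into [0, 1] and clamp so the cursor stays on screen
    # even when the hand wanders outside the zone.
    nx = min(max((hand_x - left) / (right - left), 0.0), 1.0)
    ny = min(max((hand_y - top) / (bottom - top), 0.0), 1.0)
    return round(nx * screen_w), round(ny * screen_h)

zone = (-0.3, -0.2, 0.3, 0.4)   # meters, relative to the shoulder (assumed)
print(phiz_to_screen(0.0, 0.1, zone, 1920, 1080))  # (960, 540)
```

Clamping at the zone edges is what makes the cursor feel controllable: small, comfortable arm motions cover the whole screen.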
Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.
Bob: There’s no downside to upgrading, so everyone should do it today! There are no breaking changes; it’s fully compatible with previous releases of the SDK, it gives you broader operating system support, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.
Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:
What are your top three favorite features in the latest release of the SDK and why?
Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!
Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!
New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).
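Since far-range readings are available but noisier, an application might bucket raw depth values by confidence rather than discard them. A small Python sketch, using the sensor's documented default range (roughly 800 mm to 4,000 mm) as approximate thresholds:

```python
NEAR_LIMIT_MM = 800        # approximate default-mode near limit
RELIABLE_LIMIT_MM = 4000   # earlier SDKs clipped depth here

def classify_depth(depth_mm):
    """Bucket a raw depth reading. With extended-range access you now
    receive values past 4 m and decide yourself how much to trust them.
    Thresholds are approximate, taken from the default sensor range."""
    if depth_mm <= 0:
        return "unknown"       # sensor could not resolve this pixel
    if depth_mm < NEAR_LIMIT_MM:
        return "too_near"
    if depth_mm <= RELIABLE_LIMIT_MM:
        return "reliable"
    return "extended"          # usable, but expect more noise

readings = [0, 500, 2500, 6000]
print([classify_depth(d) for d in readings])
# ['unknown', 'too_near', 'reliable', 'extended']
```

An off-roading application can then smooth or downweight the "extended" bucket instead of losing it entirely.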
Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.
What's it like working together?
Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.
Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.
Peter: My team is organized by areas of technical affinity. I have developers focused on:
Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.
Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!
What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?
Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.
We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.
Peter Zatloukal, Engineering Manager
Bob Heddle, Group Program Manager
Kinect for Windows
I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.
Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.
The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:
Extended sensor data access
Access to all this data means new experiences are possible: Whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. Building solutions that work in low-light settings becomes a reality with IR stream exposure, such as in theaters and light-controlled museums. And developers can tailor applications to work in different environments with the numerous color camera settings, which enhance an application’s ability to work perfectly for end users.
One of the new samples, Basic Interactions – WPF, demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.
Improved developer tools
We are committed to continuing to make it easier and easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices that we described in the HIG and can be easily repurposed.
Greater support for operating systems
Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.
This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.
As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.
Craig Eisler
General Manager, Kinect for Windows
This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room that is featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.
Twenty Bloomingdale's stores across the United States are featuring Swivel this week—including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.
Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.
Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences at retail stores: they featured another Kinect for Windows solution last March at their Century City store in Los Angeles, just six weeks after the launch of the technology. That solution by Bodymetrics uses shoppers’ body measurements to help them find the best fitting jeans. The Bodymetrics body mapping technology is currently being used at the Bloomingdale’s store in Palo Alto, California.
"Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."
Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is nearly instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of you dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.
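One way to recognize a hand wave from skeleton data is to count direction reversals in the hand's horizontal position. The following Python sketch is a toy heuristic with invented thresholds, not FaceCake's actual gesture recognizer:

```python
def is_wave(hand_x_samples, min_swings=3, min_travel=0.1):
    """Detect a simple hand wave from successive hand x positions
    (meters): count direction reversals whose travel exceeds a minimum.
    A toy heuristic with assumed thresholds, not Swivel's real logic."""
    swings = 0
    direction = 0
    start = hand_x_samples[0]
    for x in hand_x_samples[1:]:
        delta = x - start
        if abs(delta) >= min_travel:
            new_direction = 1 if delta > 0 else -1
            if new_direction != direction:
                swings += 1
                direction = new_direction
            start = x
    return swings >= min_swings

wave = [0.0, 0.15, 0.0, 0.15, 0.0]    # hand sweeping back and forth
still = [0.0, 0.02, 0.01, 0.03]       # small jitter, no gesture
print(is_wave(wave), is_wave(still))  # True False
```

Requiring several large swings keeps ordinary browsing motions from accidentally changing the outfit.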
Since Bloomingdale’s piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move further for the camera) and improved detection of different-sized people so that it can display more accurately how the garment would look if fitted to the customer.
Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:
With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.
Kinect for Windows Team
The Kinect for Windows team has been hard at work this summer, and I have some very exciting developments to share with you regarding our roadmap between now and the end of the year.
On October 8, Kinect for Windows is coming to China. China is a leader in business and technology innovation. We are very excited to make Kinect for Windows available in China so that developers and businesses there can innovate with Kinect for Windows and transform experiences through touch-free solutions.
Kinect for Windows hardware will be available in seven additional markets later this fall: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico.
In addition to making Kinect for Windows hardware available in eight new markets this fall, we will be releasing an update to the Kinect for Windows runtime and software development kit (SDK) on October 8. This release has numerous new features that deliver additional power to Kinect for Windows developers and business customers. We will share the full details when it’s released on October 8, but in the meantime here are a few highlights:
It has been a little more than seven months since we first launched Kinect for Windows in 12 markets. By the end of the year, Kinect for Windows will be available in 38 markets and we will have shipped two significant updates to the SDK and runtime beyond the initial release—and this is just the beginning. Microsoft has had a multi-decade commitment to natural user interface (NUI), and my team and I look forward to continuing to be an important part of that commitment. In coming years, I believe that we will get to experience an exciting new era where computing becomes invisible and all of us will be able to interact intuitively and naturally with the computers around us.
Automotive companies Audi, Ford, and Nissan are adopting Kinect for Windows as the newest way to put a potential driver into a vehicle. Most car buyers want to get "hands on" with a car before they are ready to buy, so automobile manufacturers have invested in tools such as online car configurators and 360-degree image viewers that make it easier for customers to visualize the vehicle they want.
Now, Kinect's unique combination of camera, body tracking capability, and audio input can put the car buyer into the driver's seat in more immersive ways than have been previously possible—even before the vehicle is available on the retail lot!
The most recent example of this automotive trend is the 2013 Nissan Pathfinder application powered by Kinect for Windows, which was originally developed to demonstrate the new Pathfinder at auto shows before there was a physical car available.
Nissan quickly recognized the value of this application for building buzz at local dealerships, piloting it in 16 dealerships in 13 states nationwide.
"The Pathfinder application using Kinect for Windows is a game changer in terms of the way we can engage with consumers," said John Brancheau, vice president of marketing at Nissan North America. "We're taking our marketing to the next level, creating experiences that enhance the act of discovery and generate excitement about new models before they're even available. It's a powerful pre-sales tool that has the potential to revolutionize the dealer experience."
Digital marketing agency Critical Mass teamed with interactive experience developer IdentityMine to design and build the Kinect-enabled Pathfinder application for Nissan. "We're pioneering experiences like this one for two reasons: the ability to respond to natural human gestures and voice input creates a rich experience that has broad consumer appeal," notes Critical Mass President Chris Gokiert. "Additionally, the commercial relevance of an application like this can fulfill a critical role in fueling leads and actually helping to drive sales on site."
Each dealer has a kiosk that includes a Kinect for Windows sensor, a monitor, and a computer that’s running the Pathfinder application built with the Kinect for Windows SDK. Since the Nissan Pathfinder application debuted at the Chicago Auto Show in February 2012, developers have made several enhancements, including a new pop-up tutorial, and interface improvements, such as larger interaction icons and instructional text along the bottom of the screen, so a customer with no Kinect experience can jump right in. "In the original design for the auto show, the application was controlled by a trained spokesperson. That meant aspects like discoverability and ease-of-use for first-time users were things we didn’t need to design for," noted IdentityMine Research Director Evan Lang.
Now, shoppers who approach the Kinect-based showroom are guided through an array of natural movements—such as extending their hands, stepping forward and back, and leaning from side to side—to activate hotspots on the Pathfinder model, allowing them to inspect the car inside and out.
The project was not, however, without a few challenges. The detailed Computer-Aided Design (CAD) model data provided by Nissan, while ideal for commercials and other post-rendered uses, did not lend itself easily to a real-time engine. "A lot of rework was necessary that involved 'retopologizing' the mesh," reported IdentityMine’s 3D Design Lead Howard Schargel. "We used the original as a template and traced over it to get a cleaner, more manageable polygon count. We were able to remove well over half of the original polygons, allowing for more fluid interactions and animations while still retaining the fidelity of the client's original model."
And then, the development team pushed further. "The application uses a dedicated texture to provide a dynamic, scalable level of detail to the mesh by adding or removing polygons, depending on how close it is to the camera,” explained Schargel. “It may sound like mumbo jumbo—but when you see it, you won't believe it."
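A distance-driven level-of-detail scheme boils down to picking a polygon budget from the camera distance. Here is a simplified Python sketch; the distances and budgets are invented for illustration, and the actual Pathfinder app drives its mesh detail from a dedicated texture rather than a lookup table:

```python
def pick_lod(distance, lods):
    """Choose a mesh polygon budget for the current camera distance.
    `lods` maps a maximum distance to a polygon count. Illustrative
    numbers only; the real app refines the mesh via a dedicated texture."""
    for max_dist, polygons in sorted(lods.items()):
        if distance <= max_dist:
            return polygons
    return min(lods.values())  # beyond every band: cheapest mesh

lods = {2.0: 200_000, 5.0: 80_000, 10.0: 20_000}
print(pick_lod(1.0, lods), pick_lod(4.0, lods), pick_lod(30.0, lods))
# 200000 80000 20000
```

Spending polygons only where the camera is close is what keeps the interactions fluid without visibly degrading the model.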
You can see the Nissan Pathfinder app in action at one of the 16 participating dealerships or by watching our video case study.
Traditional digital animation techniques can be costly and time-consuming. But KinÊtre—a new Kinect for Windows project developed by a team at Microsoft Research Cambridge—makes the process quick and simple enough that anyone can be an animator who brings inanimate objects to life.
KinÊtre uses the skeletal tracking technology in the Kinect for Windows software development kit (SDK) for input, scanning an object as the Kinect sensor is slowly panned around it. The KinÊtre team then applied their expertise in cutting-edge 3-D image processing algorithms to turn the object into a flexible mesh that is manipulated to match user movements tracked by the Kinect sensor.
Microsoft has made deep investments in Kinect hardware and software. This enables innovative projects like KinÊtre, which is being presented this week at SIGGRAPH 2012, the International Conference and Exhibition on Computer Graphics and Interactive Techniques. Rather than targeting professional computer graphics (CG) animators, KinÊtre is intended to bring mesh animation to a new audience of novice users.
Shahram Izadi, one of the tool's creators at Microsoft Research Cambridge, told me that the goal of this research project is to make this type of animation much more accessible than it's been—historically requiring a studio full of trained CG animators to build these types of effects. "KinÊtre makes creating animations a more playful activity," he said. "With it, we demonstrate potential uses of our system for interactive storytelling and new forms of physical gaming."
This incredibly cool prototype reinforces the world of possibilities that Kinect for Windows can bring to life and even, perhaps, do a little dance.
Peter Zatloukal, Kinect for Windows Engineering Manager
Kinect for Windows partners are finding new business opportunities by helping to develop new custom applications and ready-made solutions for various commercial customers, such as the Coca-Cola Company, and vertical markets, including the health care industry.
Several of these solutions were on display at the Microsoft Worldwide Partner Conference (WPC) in Toronto, Canada, where Kinect for Windows took the stage with two amazing demos as well as strong booth showings at the Solutions Innovation Center.
"Being part of the WPC 2012 event was a great opportunity to showcase our Kinect-based 3-D scanner, and the response was incredibly awesome, both on stage when the audience would spontaneously clap and cheer in the middle of the scan, and in the Kinect for Windows trade show area where people would stand in line to get scanned," said Nicolas Tisserand, co-founder of the France-based Manctl, one of the 11 companies in the Microsoft Accelerator for Kinect program.
Manctl's Skanect scanner software uses the Kinect sensor to build high quality 3-D digital models of people and objects, which can be sent to a 3-D printer to create detailed plastic extruded sculptures. "Kinect for Windows is a fantastic device, capable of so much more than just game control. It's making depth sensing a commodity," Tisserand added.
A demo from übi interactive in Germany uses the Kinect sensor to turn virtually any surface into a 3-D touchscreen that can control interfaces, apps, and games. "Kinect for Windows is a great piece of hardware and it works perfect[ly] with our software stack," reported übi co-founder David Hajizadeh. "As off-the-shelf hardware, it massively reduced our costs and we see lots of opportunities for business applications that offer huge value for our customers."
Snibbe Interactive created its SocialMirror Coke Kiosk to deliver a Kinect-based game in which players aim a stream of soda into a glass and then share videos of the experience with their social networks. "We were extremely excited to show off our unique Coca-Cola branded interactive experience and its unique ability to create instant ROI [return on investment] through our viral marketing component," reported Alan Shimoide, director of engineering at Snibbe.
InterKnowlogy developed KinectHealth to assist doctors with motion-controlled access to patient records and surgery planning tools. "A true game changer, Kinect for Windows allows our designers and developers to think differently about business cases across many verticals," noted Kevin Custer, the director of strategic marketing and partnerships at InterKnowlogy. "Kinect for Windows is not just how we interact with computers, but it offers unique ways to add gesture and voice to our natural user-interface designed software—the combination of which is changing lives of customers and users alike."

"Avanade has already delivered several innovative solutions using Kinect, and we expect that demand to keep growing," said Ben Reierson, innovation manager at Avanade, whose Kinect for Virtual Healthcare includes video chat for connecting clinics to remote doctors for online appointments. "Customers and partners are clearly getting more serious about the possibilities of Kinect and natural user interfaces."