I grew up in the UK and my female cousins all had Barbie. In fact, Barbies – they had lots of Barbie dolls and a ton of accessories that they were obsessed with. I was more of a BMX kind of kid and thought my days of Barbie education were long behind me, but with a young daughter I’m beginning to realize that I have plenty more Barbie ahead of me, littered around the house like landmines. This time around, though, I’m genuinely interested, thanks to a Kinect-enabled application.
This week, Barbie lovers in Sydney, Australia, are being given the chance to do more than fantasize about how they’d look in their favorite Barbie outfit. Thanks to Mattel, Gun Communications, Adapptor, and Kinect for Windows, Barbie The Dream Closet is here.
The application invites users to take a walk down memory lane and select from 50 years of Barbie fashions. Standing in front of Barbie’s life-sized augmented reality “mirror,” fans can choose from several outfits in her digital wardrobe—virtually trying them on for size.
The solution, built with the Kinect for Windows SDK and using the Kinect for Windows sensor, tracks users’ movements and gestures, enabling them to easily browse through the closet and select outfits that strike their fancy. Once an outfit is selected, Kinect for Windows skeletal tracking determines the position and orientation of the user. The application then rescales Barbie’s clothes, rendering them over the user in real time for a custom fit.
One of the most interesting aspects of this solution is the technology’s ability to scale - with menus, navigation controls, and clothing all dynamically adapting so that everyone from a little girl to a grown woman (and cough, yes, even a committed father) can enjoy the experience. To make this possible, each outfit was photographed on a Barbie doll, cut into multiple parts, and then rebuilt individually in the application.
Of course, the experience wouldn’t be complete without the ability to memorialize it. A photo is taken and, with approval/consent from those photographed, is uploaded and displayed in a gallery on the Barbie Australian Facebook page. (Grandparents can join in the fun from afar!)
I spoke with Sarah Sproule, Director, Gun Communications about the genesis of the idea who told me, “We started working on Barbie The Dream Closet six months ago, working with our development partner Adapptor. Everyone has been impressed by the flexibility, and innovation Microsoft has poured into Kinect for Windows. Kinect technology has provided Barbie with a rich and exciting initiative that's proving to delight fans of all ages. We're thrilled with the result, as is Mattel - our client."
Barbie’s Dream Closet opened to the public at the Westfield Parramatta in Sydney today and will be there through April 15. On its first day, it drew enthusiastic crowds, with around 100 people experiencing Barbie The Dream Closet. It's expected to draw even larger crowds over the holidays. It’s set to be in Melbourne and Brisbane later this year.
Meanwhile, the Kinect for Windows team is just as excited about it as my daughter:
“The first time I saw Barbie’s Dream Closet in action, I knew it would strike a chord,” notes Kinect for Windows Communications Manager, Heather Mitchell. “It’s such a playful, creative use of the technology. I remember fantasizing about wearing Barbie’s clothes when I was a little girl. Disco Ken was a huge hit in my household back then…Who didn’t want to match his dance moves with their own life-sized Barbie disco dress? I think tens of thousands of grown girls have been waiting for this experience for years…Feels like a natural.”
That’s the beauty of Kinect – it enables amazingly natural interactions with technology and hundreds of companies are out there building amazing things; we can’t wait to see what they continue to invent.
Steve Clayton
Editor, Next at Microsoft
Back in May, we released the Kinect for Windows SDK/Runtime v1.5 in a modular manner, to make it easier to refresh parts of the Developer Toolkit (tools, components, and samples) without the need to update the SDK (driver, runtime, and basic compilation support).
Today, we have realized that vision with the Developer Toolkit update v1.5.1. This update boosts Kinect Studio performance and stability, improves face tracking, and introduces offline documentation support. If you have already installed the SDK, simply download the new v1.5.1 Developer Toolkit Update. If you are new to Kinect for Windows, you will want to download both Kinect for Windows SDK v1.5 and Developer Toolkit v1.5.1.
Rob Relyea
Program Manager, Kinect for Windows
The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.
IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat restrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left, there is an IR emitter, which transmits a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and can help the Kinect for Windows system software sense objects and people along with their skeletal tracking data.
One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
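In code, controlling the emitter is a single property on the sensor. The following is a minimal sketch, assuming a Kinect for Windows sensor (the property is not supported on Kinect for Xbox 360 hardware) and the v1.6-era SDK:

```csharp
// Minimal sketch: suppressing the IR emitter on one of several sensors
// observing the same space, to reduce depth interference between them.
using Microsoft.Kinect;

class EmitterControl
{
    static void Main()
    {
        // Grab the first connected sensor and start depth sensing.
        KinectSensor sensor = KinectSensor.KinectSensors[0];
        sensor.DepthStream.Enable();
        sensor.Start();

        // Turn the emitter off so its dot pattern no longer overlaps
        // other sensors. (Throws InvalidOperationException on
        // Kinect for Xbox 360 hardware.)
        sensor.ForceInfraredEmitterOff = true;

        // Later, re-enable it to restore active depth sensing.
        sensor.ForceInfraredEmitterOff = false;
    }
}
```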
Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”
Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher definition color camera for enhanced green screen capabilities. This will enable them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine."
Kinect for Windows team
This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room that is featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.
Twenty Bloomingdale's stores across the United States are featuring Swivel this week— including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.
Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.
Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences at retail stores: they featured another Kinect for Windows solution last March at their Century City store in Los Angeles, just six weeks after the launch of the technology. That solution by Bodymetrics uses shoppers’ body measurements to help them find the best fitting jeans. The Bodymetrics body mapping technology is currently being used at the Bloomingdale’s store in Palo Alto, California.
"Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."
Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is nearly instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of you dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.
Since Bloomingdale’s piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move to accommodate the camera) and has improved detection of different-sized people, so the application can more accurately display how a garment would look when fitted to the customer.
Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:
With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.
Kinect for Windows Team
In addition to being a great day for Xbox One, today is also a great day for Kinect for Windows. We have started delivering Kinect for Windows v2 Developer Preview kits to program participants. The Developer Preview includes a pre-release Kinect for Windows v2 sensor, access to the new generation Kinect for Windows software development kit (SDK), as well as ongoing updates and access to private program forums. Participants will also receive a Kinect for Windows v2 sensor when they become available next summer (northern hemisphere).
Microsoft is committed to making the Kinect for Windows sensor and SDK available early to qualifying developers and designers so they can prepare to have their new-generation applications ready in time for general availability next summer. We continue to see a groundswell for Kinect for Windows. We received thousands of applications for this program and selected participants based on the applicants’ expertise, passion, and the raw creativity of their ideas. We are impressed by the caliber of the applications we received and look forward to seeing the innovative NUI experiences our Developer Preview customers will create.
The new Kinect for Windows v2 sensor will feature the core capabilities of the new Kinect for Xbox One sensor. With the first version of Kinect for Xbox 360, developers and businesses saw the potential to apply the technology beyond gaming—in many different computing environments. Microsoft believes that the opportunities for revolutionizing computing experiences will be even greater with this new sensor. The benefits will raise the bar and accelerate the development of NUI applications across multiple industries, from retail and manufacturing to healthcare, education, communications, and more:
Real Vision
Kinect Real Vision technology dramatically expands the sensor’s field of view for greater line of sight. An all-new active IR camera enables it to see in the dark. And by using advanced three-dimensional geometry, it can even tell if you’re standing off balance.
Real Motion
Kinect Real Motion technology tracks even the slightest gestures. So a simple squeeze of your hand results in precise control over an application, whether you’re standing up or sitting down.
Real Voice
Kinect Real Voice technology focuses on the sounds that matter. Thanks to an all-new multi-microphone array, the advanced noise isolation capability lets the sensor know who to listen to, even in a crowded space.
2014 will be exciting, to say the least. We will keep you updated as the Developer Preview program evolves and we get closer to the Kinect for Windows v2 worldwide launch next summer. Additionally, follow the progress of the early adopter community by keeping an eye on them (#k4wdev) and by following us (@kinectwindows).
The Kinect for Windows Team
It all started with a couple of kids and a remarkable idea, which eventually spawned two terrifying demon dogs and their master. This concept is transforming the haunt industry and could eventually change how theme parks and other entertainment businesses approach animated mechanical electronics (animatronics). Here's the behind-the-scenes story of how this all came to be:
The boys, 6-year-old Mark and 10-year-old Jack, fell in love with Travel Channel's Making Monsters, a TV program that chronicles the creation of lifelike animatronic creatures. After seeing their dad's work with Kinect for Windows at the Minneapolis-based Microsoft Technology Center, they connected the dots and dreamed up the concept: wouldn't it be awesome if Dad could use his expertise with the Kinect for Windows motion sensor to make better and scarier monsters?
So “Dad”—Microsoft developer and technical architect Todd Van Nurden—sent an email to Distortions Unlimited in Greeley, Colorado, offering praise of their work sculpting monsters out of clay and adjustable metal armatures. He also threw in his boys' suggestion on how they might take things to the next level with Kinect for Windows: Imagine how much cooler and more realistic these monsters could be if they had the ability to see you, hear you, anticipate your behavior, and respond to it. Imagine what it means to this industry now that monster makers can take advantage of the Kinect for Windows gesture and voice capabilities.
Two months passed. Then one day, Todd received a voice mail message from Distortions CEO Ed Edmunds expressing interest. The result: nine months of off-and-on work, culminating with the debut of a Making Monsters episode detailing the project on Travel Channel earlier today, October 21 (check local listings for show times, including repeat airings). The full demonic installation can also be experienced firsthand at The 13th Floor haunted house in Denver, Colorado, now through November 10.
To get things started, Distortions sent Van Nurden maquettes—scale models about one-quarter of the final size—so he could build prototypes of two demon dogs and their demon master. Van Nurden worked with Parker, a company that specializes in robotics, to develop movement using random path manipulation, which is more fluid than typical robotic motion and is reactive and only loosely scripted. The maquettes were wired to Kinect for Windows with skeletal tracking, audio tracking, and voice control functionality as a proof of concept to suggest a menu of possible options.
Distortions was impressed. "Ed saw everything it could do and said, 'I want all of them. We need to blow this out,'" recalled Van Nurden.
Todd Van Nurden prepares to install the Kinect for Windows sensor in the demon's belt
The full-sized dogs are four feet high, while the demon master stands nearly 14 feet. A Kinect for Windows sensor connected to a ruggedized Lenovo M92 workstation is embedded in the demon's belt and, after interpreting tracking data, sends commands to control itself and the dogs via wired Ethernet. Custom software, built by using the Kinect for Windows SDK, provides the operators with a drag-and-drop interface for laying out character placement and other configurable settings. It also provides a top-down view for the attraction's operator, displaying where the guests are and how the creatures are tracking them.
"We used a less common approach to processing the data as we leveraged the Reactive Extensions for .NET to basically set up push-based Linq subscriptions," Van Nurden revealed. "The drag-and-drop features enable the operator to control the place-space configuration, as well as when certain behaviors begin. We used most of the Kinect for Windows SDK managed API with the exception of raw depth data."
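The push-based approach Van Nurden describes can be sketched roughly as follows: wrap the sensor's frame-ready events in a Reactive Extensions observable so behaviors become LINQ queries over a stream of skeletons. This is an illustrative sketch only (the installation's actual code is not published), assuming the v1.x managed SDK and the Rx libraries:

```csharp
// Sketch: turning Kinect skeleton events into a push-based Rx stream,
// then expressing a behavior as a LINQ query over that stream.
using System;
using System.Linq;
using System.Reactive.Linq;
using Microsoft.Kinect;

static class SkeletonStream
{
    public static IDisposable SubscribeToSkeletons(KinectSensor sensor)
    {
        // Wrap the SkeletonFrameReady event in an observable sequence.
        var frames = Observable
            .FromEventPattern<SkeletonFrameReadyEventArgs>(
                h => sensor.SkeletonFrameReady += h,
                h => sensor.SkeletonFrameReady -= h)
            .Select(e =>
            {
                using (var frame = e.EventArgs.OpenSkeletonFrame())
                {
                    if (frame == null) return new Skeleton[0];
                    var skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);
                    return skeletons;
                }
            });

        // Example query: react only when someone is actively tracked.
        return frames
            .Select(s => s.FirstOrDefault(
                k => k.TrackingState == SkeletonTrackingState.Tracked))
            .Where(s => s != null)
            .Subscribe(s =>
                Console.WriteLine("Tracked skeleton at depth {0:F2} m",
                                  s.Position.Z));
    }
}
```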
The dogs are programmed to react very differently if approached by an adult (which might elicit a bark or growl) versus a child (which could prompt a fast pant or soft whimper). Scratching behind a hound's ears provokes a "happy dog" response—assuming you can overcome your fear and get close enough to actually touch one! Each action or mood includes its own set of kinesthetic actions and vocal cues. The sensor quietly tracks groups of people, alternating between a loose tracking algorithm that can calculate relative height quickly when figures are farther away and full skeletal tracking when someone approaches a dog or demon, requiring more detailed data to drive the beasts' reactions.
The end product was so delightfully scary that Van Nurden had to reassure his own sons when they were faced with a life-sized working model of one of the dogs. "I programmed him, he's not going to hurt you," he comforted them.
Fortunately, it is possible to become the demons' master. If you perform a secret voice and movement sequence, they will actually bow to you.
Lisa Tanzer, executive producer for Making Monsters, has been following creature creation for two years while shooting the show at Distortions Unlimited. She was impressed by how much more effective the Kinect for Windows interactivity is than the traditional looped audio and fully scripted movements of regular animatronics: "Making the monsters themselves is the same process—you take clay, sculpt it over an armature, mold it, paint it, all the same steps," she said. "The thing that made this project Distortions did for 13th Floor so incredible and fascinating was the Kinect for Windows technology.”
"It can be really scary," Tanzer reported. "The dogs and demon creature key into people and actually track them around the room. The dog turns, looks at you and whimpers; you go 'Oh, wow, is this thing going to get me?' It's just like a human actor latching on to somebody in a haunted house but there's no human, only this incredible technology.”
"Incorporating Kinect for Windows into monster making is very new to the haunt industry," she added. "In terms of the entertainment industry, it's a huge deal. I think it's a really cool illustration of where things are going."
Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means to developers.
Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.
Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?
Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!
Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.
In the SDK update, you can do more things than you could before, there’s more documentation, plus there’s a specific sample called Basic Interactions that’s a follow-on to our Human Interface Guidelines (HIG). Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample, demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.
This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies the best practices that we described in the HIG and can be repurposed by developers easily and quickly.
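The idea behind the PhIZ can be sketched in a few lines: a box in sensor space, anchored to the user's body, is mapped to normalized screen coordinates. The zone dimensions and anchoring below are hypothetical (the shipped component computes them per user); this is only an illustration of the mapping, not the toolkit's implementation:

```csharp
// Illustrative sketch of a Physical Interaction Zone style mapping:
// a hand position inside an assumed 0.4 m x 0.3 m box near the shoulder
// is mapped to normalized screen coordinates in [0, 1].
using System;
using Microsoft.Kinect;

static class Phiz
{
    public static void MapHandToScreen(
        SkeletonPoint hand, SkeletonPoint shoulder,
        out double screenX, out double screenY)
    {
        const double zoneWidth = 0.4, zoneHeight = 0.3;  // meters (assumed)

        // Center the zone on the shoulder; Y grows upward in sensor space,
        // so flip it for screen coordinates.
        double x = (hand.X - shoulder.X + zoneWidth / 2) / zoneWidth;
        double y = (shoulder.Y - hand.Y + zoneHeight / 2) / zoneHeight;

        // Clamp so the cursor stays on screen at the zone's edges.
        screenX = Math.Min(1.0, Math.Max(0.0, x));
        screenY = Math.Min(1.0, Math.Max(0.0, y));
    }
}
```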
Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.
Bob: There’s no downside to upgrading, so everyone should do it today! There are no breaking changes; it’s fully compatible with previous releases of the SDK, it gives you broader operating system support, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.
Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:
What are your top three favorite features in the latest release of the SDK and why?
Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!
Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!
New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).
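Reading the extended-range data looks much like ordinary depth processing. The sketch below assumes the v1.6-era SDK, which introduced the DepthImagePixel structure; beyond roughly 4,000 mm the values are noisier, as Bob notes, but they are now reported rather than marked unknown:

```csharp
// Sketch: counting depth pixels beyond the sensor's nominal 4 m limit,
// which earlier SDK releases reported as "unknown" depth.
using Microsoft.Kinect;

static class ExtendedDepth
{
    public static int CountFarPixels(DepthImageFrame frame)
    {
        var pixels = new DepthImagePixel[frame.PixelDataLength];
        frame.CopyDepthImagePixelDataTo(pixels);

        int far = 0;
        foreach (var p in pixels)
        {
            // Depth is in millimeters; past ~4000 mm the data is noisy
            // but, with the updated SDK, available.
            if (p.IsKnownDepth && p.Depth > 4000) far++;
        }
        return far;
    }
}
```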
Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.
What's it like working together?
Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.
Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.
Peter: My team is organized by areas of technical affinity. I have developers focused on:
Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.
Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!
What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?
Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.
We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.
Peter Zatloukal, Engineering Manager
Bob Heddle, Group Program Manager
Kinect for Windows
This year’s Microsoft TechForum provided an opportunity for Craig Mundie, Microsoft Chief Research and Strategy Officer, to discuss the company’s vision for the future of technology as well as showcase two early examples of third-party Kinect for Windows applications in action.
Mundie was joined by Don Mattrick, President of the Microsoft Interactive Entertainment Business, and his Chief of Staff, Aaron Greenberg, who demonstrated both of the third-party Kinect for Windows applications, including the Pathfinder Kinect Experience. This application enables users to stand in front of a large monitor, and use movement, voice, and gestures to walk around the 2013 Nissan Pathfinder Concept, examining the exterior, bending down and inspecting the wheels, viewing the front and back, and then stepping inside to experience the upholstery, legroom, dashboard, and other details.
Nissan worked with IdentityMine and Critical Mass to create the Kinect-enabled virtual experience, which was initially shown at the Chicago Auto Show in early February. The application is continuing to be refined, taking advantage of the Kinect natural user interface to enable manufacturers to showcase their vehicles in virtual showrooms.
“Using motion, speech, and gestures, people will be able to get computers to do more for them,” explains Greenberg. “You can imagine this Pathfinder solution being applied in different ways in the future - at trade shows, online, or even at dealerships - where someone might be able to test drive a physical car, while also being able to visualize and experience different configurations of the car through its virtual twin, accessorizing it, changing the upholstery, et cetera.”
Also demonstrated at TechForum was a new kind of shopping cart experience, which was developed by mobile application studio Chaotic Moon. This application mounts a Kinect for Windows sensor on a shopping cart, enabling the cart to follow a shopper - stopping, turning, and moving where and when the shopper does.
Chaotic Moon has tested their solution at Whole Foods in Austin, Texas, but the application is an early experiment and no plans are in place for this application to be introduced in stores anytime soon. Conceivably, Kinect-enabled carts at grocery stores, shopping malls, or airports could make it easier for people to navigate and perform tasks hands free. “Imagine how an elderly shopper or a parent with a stroller might be assisted by something like this,” notes Greenberg.
“The Kinect natural user interface has the potential to revolutionize products and processes in the home, at work, and in public places, like retail stores,” continues Greenberg. “It’s exciting to see what is starting to emerge.”
Getting technology to do what you want can be challenging. Imagine building a remote-controlled robot in 6 weeks, from pre-defined parts, which can perform various tasks in a competitive environment. That’s the challenge presented to 2,500 teams of students who will be participating in the FIRST (For Inspiration and Recognition of Science and Technology) Robotics Competition.
The worldwide competition, open to students in grades 9-12, kicks off this morning with a NASA-televised event, including pre-recorded opening remarks from Presidents Clinton and G.W. Bush, Dean Kamen, founder of FIRST and inventor of the Segway, and Alex Kipman, General Manager, Hardware Incubation, Xbox.
Last year, several FIRST teams experimented with the Kinect natural user interface capabilities to control their robots. The difference this year is that the Kinect hardware and software will be included in the parts kits teams receive to build their robots. Teams will be able to control their robots not only with joysticks, but also with gestures and possibly spoken commands.
The first part of the competition is the autonomous period, in which robots can be controlled only by sensor input and commands. This is when the depth and speech capabilities of Kinect will prove extremely useful.
To help teams understand how to incorporate Kinect technologies into the design of their robot controls for the 2012 competition, workshops are being held around the country. Students will be using C# or C++ to program the drive stations of their robots to recognize and respond to gestures and poses.
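As a flavor of what such a drive-station program might look like, here is a hypothetical pose check in C#: treating "right hand raised above the head" as a drive command. The helper name and the rule itself are illustrative, not part of the competition kit:

```csharp
// Hypothetical example of a pose a team might map to a robot command:
// is the player's right hand raised above their head?
using Microsoft.Kinect;

static class RobotGestures
{
    public static bool IsRightHandRaised(Skeleton skeleton)
    {
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            return false;

        Joint hand = skeleton.Joints[JointType.HandRight];
        Joint head = skeleton.Joints[JointType.Head];

        // Require both joints to be tracked, then compare heights.
        return hand.TrackingState == JointTrackingState.Tracked
            && head.TrackingState == JointTrackingState.Tracked
            && hand.Position.Y > head.Position.Y;
    }
}
```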
In addition, Microsoft Stores across the country are teaming up with FIRST robotics teams to provide Kinect tools, technical support, and assistance.
While winning teams get bragging rights, all participants gain real-world experience by working with professional engineers to build their team’s robot, using sophisticated hardware and software such as the Kinect for Windows SDK. Team members also gain design, programming, project management, and strategic thinking experience. Last but not least, over $15 million in college scholarships will be awarded throughout the competition.
“The ability to utilize Kinect technologies and capabilities to transform the way people interact with computers already has sparked the imagination of thousands of developers, students, and researchers from around the world,” notes FIRST founder Dean Kamen. “We look forward to seeing how FIRST students utilize Kinect in the design and manipulation of their robots, and are grateful to Microsoft for participating in the competition as well as offering their support and donating thousands of Kinect sensors.”
This morning’s kick-off of the 2012 FIRST Robotics Competition was a highly anticipated day. Approximately 2,500 teams worldwide were given a kit of 600-700 discrete parts including a Kinect sensor and the Kinect for Windows software development kit (SDK), along with the details and rules for this year’s game, Rebound Rumble. Learn how Kinect for Windows will play a role in this year’s game by watching the game animation.
Traditional digital animation techniques can be costly and time-consuming. But KinÊtre—a new Kinect for Windows project developed by a team at Microsoft Research Cambridge—makes the process quick and simple enough that anyone can be an animator who brings inanimate objects to life.
KinÊtre uses the skeletal tracking technology in the Kinect for Windows software development kit (SDK) for input, scanning an object as the Kinect sensor is slowly panned around it. The KinÊtre team then applied their expertise in cutting-edge 3-D image processing algorithms to turn the object into a flexible mesh that is manipulated to match user movements tracked by the Kinect sensor.
Microsoft has made deep investments in Kinect hardware and software. This enables innovative projects like KinÊtre, which is being presented this week at SIGGRAPH 2012, the International Conference and Exhibition on Computer Graphics and Interactive Techniques. Rather than targeting professional computer graphics (CG) animators, KinÊtre is intended to bring mesh animation to a new audience of novice users.
Shahram Izadi, one of the tool's creators at Microsoft Research Cambridge, told me that the goal of this research project is to make this type of animation much more accessible than it's been—historically requiring a studio full of trained CG animators to build these types of effects. "KinÊtre makes creating animations a more playful activity," he said. "With it, we demonstrate potential uses of our system for interactive storytelling and new forms of physical gaming."
This incredibly cool prototype reinforces the world of possibilities that Kinect for Windows can bring to life; it might even, perhaps, do a little dance.
Peter Zatloukal, Kinect for Windows Engineering Manager