Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately.
A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.
Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here. Now you can untether yourself and move around a conference room naturally.
In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.
Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.
Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.
Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.
Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.
The Kinect for Windows Team
Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.
On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:
Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.
Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.
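As a rough illustration of the first of these interactions, here is a minimal sketch of how a "push-to-press" gesture might be detected from per-frame hand-depth samples. The function, thresholds, and numbers are hypothetical; the SDK's real interaction stream does far more smoothing and disambiguation than this.

```python
# Hypothetical "push-to-press" detector: a press fires when the hand
# moves toward the sensor by a threshold within a short frame window.
# Threshold and window size are invented for illustration.

PRESS_DELTA = 0.10  # hand must move ~10 cm toward the sensor
WINDOW = 5          # frames considered

def detect_press(depth_samples):
    """Return True if the hand moved toward the sensor by PRESS_DELTA
    within the last WINDOW frames (depth decreases as the hand advances)."""
    recent = depth_samples[-WINDOW:]
    if len(recent) < 2:
        return False
    return max(recent) - recent[-1] >= PRESS_DELTA

print(detect_press([0.80, 0.78, 0.74, 0.70, 0.68]))  # steady push -> True
print(detect_press([0.80, 0.80, 0.79, 0.80, 0.80]))  # idle hand -> False
```

A real detector would also gate on hand state and velocity so that casual arm movement doesn't trigger presses.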
Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.
Kinect Fusion enables developers to create accurate 3-D renderings in real time.
The updated SDK also includes an enhanced developer toolkit and additional developer resources.
Seeing is believing
We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!
As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.
The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.
Bob Heddle, Director, Kinect for Windows
Shortly after the commercial release of Kinect for Windows in early 2012, Microsoft announced the availability of academic pricing for the Kinect for Windows sensor to higher education faculty and students for $149.99 at the Microsoft Store in the United States. We are now pleased to announce that we have broadened the availability of academic pricing through Microsoft Authorized Educational Resellers (AERs).
Most of these resellers have the capability to offer academic pricing directly to educational institutions; academic researchers; and students, faculty, and staff of public or private K-12 schools, vocational schools, junior colleges, colleges, universities, and scientific or technical institutions. In the United States, eligible institutions are accredited by associations that are recognized by the US Department of Education and/or the State Board of Education. Academic pricing on the Kinect for Windows sensor is currently available through AERs in the United States, Taiwan, and Hong Kong SAR.
Within the academic community, the potential of Kinect for Windows in the classroom is generating a lot of excitement. Researchers and academics in higher education collaborate with Microsoft Research on a variety of projects that involve educational uses of Kinect for Windows. The educator-driven community resource KinectEDucation encourages developers, teachers, students, enthusiasts, and other education stakeholders to help transform classrooms with accessible technology. One such development is a new product from Kaplan Early Learning Company, the Inspire-NG Move, which is bundled with the Kinect for Windows sensor. The bundle includes four educational programs for children ages three and older that make learning fun through purposeful, hands-on, kinesthetic play. It currently sells for US$499.
“We’re excited about the new learning models that are enabled by Kinect for Windows,” stated Chris Gerblick, vice president of IT and Professional Services at Kaplan Early Learning Company. “We see the Inspire NG-Move family of products as excellent learning tools for both the classroom and the home.”
With the availability of academic pricing, we look forward to many developments from the academic community that integrate Kinect for Windows into interactive educational experiences.
Michael Fry, Business Development, Strategic Alliances, Kinect for Windows
Revealed in November as a future addition to the Kinect for Windows SDK, Kinect Fusion made a big impression at the annual TechFest event hosted by Microsoft Research this week in Redmond, Washington.
Kinect Fusion pulls depth data that is generated by the Kinect for Windows sensor and, from the sequence of frames, constructs a highly detailed 3-D map of objects or environments. The tool averages readings over hundreds or even thousands of frames to create a rich level of detail.
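The averaging idea can be sketched in a few lines. This toy version keeps a simple per-pixel mean of depth frames; the real Kinect Fusion pipeline integrates frames into a volumetric model rather than a flat image, but the noise-reduction intuition is the same.

```python
# Toy illustration of why fusing many noisy depth frames sharpens detail:
# averaging per-pixel readings cancels sensor noise. Kinect Fusion's real
# integration uses a volumetric representation, not a flat average.

def integrate(frames):
    """Average a sequence of equal-length depth frames pixel-wise."""
    n = len(frames)
    fused = [0.0] * len(frames[0])
    for frame in frames:
        for i, depth in enumerate(frame):
            fused[i] += depth / n
    return fused

# Three noisy readings of the same flat surface at ~1.0 m:
frames = [[1.02, 0.97, 1.01], [0.99, 1.03, 0.98], [0.99, 1.00, 1.01]]
print(integrate(frames))  # each pixel settles near 1.0
```

With hundreds or thousands of frames, as the post describes, the residual noise per pixel shrinks dramatically.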
Kinect Fusion, shown during TechFest 2013, enables high-quality scanning and reconstruction of 3-D models using just a handheld Kinect for Windows sensor and the Kinect for Windows SDK.
"The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly," said Shahram Izadi, senior researcher at Microsoft Research Cambridge. "Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them."
When scanning smaller objects, you also have the option to simply move the object instead of the sensor.
The Cambridge researchers and Kinect for Windows team collaborated closely on Kinect Fusion to construct a tool that can enable businesses and developers to devise new types of applications.
"This has been a wonderful example of collaboration between Microsoft Research and our product group," said Kinect for Windows Senior Program Manager Chris White. "We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting-edge research, even beyond what was shown in the original Kinect Fusion paper."
"This kind of collaboration is one of the unique strengths of Microsoft, where we can bring together world-class researchers and world-class engineers to deliver real innovation," White added. "Kinect Fusion opens up a wide range of development possibilities—everything from gaming and augmented reality to industrial design. We're really excited to be able to include it in a future release of the Kinect for Windows SDK."
Kinect for Windows team
Much like Build-A-Bear Workshop, Mattel has been watching the trends and finding that children are embracing digital media. How can the company keep a toy like the Barbie doll, launched in 1959, relevant in a world where tablet computers and smartphones dominate kids' wishlists?
Once again, Kinect for Windows has proved a formidable ally in bridging the gap between digital entertainment and traditional toys. In a six-month project for Mattel, Gun Communications and creative applications developer Adapptor built Barbie: the Dream Closet, which lets enthusiasts of all ages across Australia virtually try on a variety of Barbie outfits from different decades by using a Kinect for Windows-enabled "magic mirror." Have you ever wondered what you’d look like in one of Barbie's ball gowns, or as an astronaut, or a race car driver? With the Dream Closet, it's possible. You can also save and share photos over social media, or even take a photo home.
To build the application, each outfit was photographed on a Barbie doll, trimmed into its component parts, and then reconstructed dynamically on Barbie fans by the custom Dream Closet application, built in Microsoft XNA. The Kinect for Windows sensor and software development kit (SDK) make it easy to accurately determine the size of the user so that the virtual clothes and selection menus can be fitted to match.
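The fitting step might look something like the following sketch, which scales the outfit artwork by the ratio of the user's tracked shoulder span to a reference span. The joint values and reference constant are invented for illustration; the actual Dream Closet code is not public.

```python
# Hypothetical sizing step: scale outfit artwork to the user's tracked
# shoulder span. The reference span and joint positions are invented;
# the real SDK reports skeleton joints as 3-D positions in sensor space.

REFERENCE_SHOULDER_SPAN = 0.34  # meters, the span the artwork assumes

def outfit_scale(shoulder_left, shoulder_right):
    """shoulder_* are (x, y, z) joint positions in meters."""
    span = abs(shoulder_right[0] - shoulder_left[0])
    return span / REFERENCE_SHOULDER_SPAN

# A broader-shouldered user gets a proportionally larger overlay:
print(outfit_scale((-0.20, 1.4, 2.0), (0.21, 1.4, 2.0)))  # ~1.21
```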
"If we had had to write code from the ground up [versus using code provided in the SDK], it would have taken much longer, and the end result wouldn’t have been nearly as impactful," said Adapptor Managing Director Mark Loveridge. "The Kinect for Windows SDK doubled our development speed."
The result of Barbie: the Dream Closet? Increased customer brand loyalty and media coverage yielding more than 25 million impressions, a new case study reports.
"The impact of Kinect for Windows on the public and the Barbie brand is incredible," notes Mattel Marketing Director Amanda Allegos. "Kinect for Windows has given us a new way to reach existing Barbie fans and attract new ones in a way that's contemporary, interactive, and bridges both the digital and physical worlds."
Almost two years ago, Microsoft announced its intent to take Kinect beyond gaming and make it possible for developers and businesses to innovate with Kinect on computers. The Kinect for Windows team was born.
Shortly after that, I joined the team to oversee Program Management, and over the past year, we’ve shipped the Kinect for Windows sensor as well as multiple updates to the Kinect for Windows software development kit (SDK). Throughout it all, Craig Eisler has been leading our business.
This month, Craig is moving on to do other important work at Microsoft, and I am stepping in to lead the Kinect for Windows team. I am excited to maintain the amazing momentum we’ve seen in industries like healthcare, retail, education, and automotive. There have been more than 500,000 downloads of our free SDK, and the Kinect for Windows sensor can be purchased in 39 regions today.
Such rapid growth would not have been possible without the community embracing the technology. Thanks to all of you—business leaders, technical leaders, creative visionaries, and developers—Kinect for Windows has been deployed across the globe. The community is developing new ways for consumers to shop for clothing and accessories, interesting digital signage that delights and inspires customers, remote monitoring tools that make physical therapy easier, more immersive training and simulation applications across multiple industries, and touch-free computing tools that enable surgeons to view patient information without having to leave the operating room. The list goes on and on…and the list is growing every day.
We launched Kinect for Windows nearly one year ago—pioneering a commercial technology category that didn’t previously exist. I look forward to continuing to be at the forefront of touch-free computing and helping our partners develop innovative solutions that take the natural user interface vision even further. We’ve said it before and I’ll say it again: this is just the beginning. I’m thrilled to continue the great foundational work we did in 2012 and look forward to a very productive 2013.
Bob Heddle, Director, Kinect for Windows
Swivel Close-Up, a Kinect for Windows-based kiosk from FaceCake, lets customers visualize themselves in small accessories such as makeup, sunglasses, and jewelry.
Microsoft Kinect for Windows has been playing an increasingly important role in retail, from interactive kiosks at stores such as Build-A-Bear Workshop, to virtual dressing rooms at fashion leaders like Bloomingdale's, to virtual showrooms at Nissan dealerships. This year's National Retail Federation (NRF) Convention and Expo, which took place earlier this week, showcased several solutions that provide retailers with new ways to drive customer engagement, sales, and loyalty.
Trend watchers have noted significant shifts in how consumers shop—often blending online and in-store investigation by using phones, tablets, kiosks, and computers in addition to good old-fashioned salesperson interaction. Brick-and-mortar stores, which are facing vigorous competition from online resellers, are embracing new technologies like Kinect for Windows to help drive sales and retention—and to delight and surprise customers with fun, interactive shopping experiences. Even better, customers can get more accurate and personalized information about whether a specific product is right for them—whether it's an article of clothing or a piece of furniture—reducing dissatisfaction and inconvenient returns.
"This past holiday season, we’ve seen retailers get much more tech savvy in how they engage customers and offer more flexibility in how they shop," said Kinect for Windows Senior Channel Development Manager Michael Fry. "As the lines between traditional and digital shopping channels continue to blur, retailers must seek new ways to deliver the most value and earn loyalty through compelling, seamless experiences across all touch points with their customers. Technologies like Kinect for Windows help retailers engage customers with interactive shopping experiences that are not only fun, but also increase important bottom-line business results—increasing engagement, awareness, and brand value while making it easier to select the best products for them."
At a hospitality event during NRF, Kinect for Windows partner Avanade showed one such innovation: their "shoppable storefront," created for my-wardrobe.com in Norway. Customers can walk up to the showroom window and—even after business hours—interact with the Kinect for Windows sensor to browse the store catalog, view pricing, and scan a Quick Response (QR) code to quickly purchase the product online via mobile phone. See a video of how it works.
"Consider the possibilities within the store; they're almost endless with a technology like Kinect for Windows," said John Konczal, director of service line marketing at Avanade. "You could build a guide for customers to find more information about products and quickly locate them in the store. If an item is not available, they could order it for shipment and pick-up at the nearest store. The interactivity, simplicity, and responsiveness of this technology can really help retailers differentiate their stores from the competition."
Avanade also demonstrated Natural User Observation of Retail Displays (NUO), a cost-effective solution that gathers real-time data on customer response and behavior. It lets retail managers determine where customers spend their time in the store, identify trends, and collect demographic and behavioral data as shoppers interact with store displays. Avanade reports that the solution integrates into existing store and back-office IT systems and provides dashboards and data-rich reporting for improving product placement, marketing effectiveness, and overall display performance.
Another of our partners, FaceCake Marketing Technologies, Inc., which developed Swivel, the 3-D virtual dressing room that's been featured at Bloomingdale's, showed NRF attendees the newest enhancements to their Swivel software. The enhancements, which work in conjunction with the latest Kinect for Windows SDK, include face-tracking and a feature called real-time Compare, which allows you to contrast two looks in a full-motion visualization of yourself in two dresses (or any type of clothing) side-by-side. Sizing is now even more accurate, and FaceCake also added multi-user functionality that allows, for example, a bride to see herself, virtually, in various wedding dresses at the same time as her bridesmaids see themselves in their bridesmaid dresses.
We also featured another exciting new product from FaceCake in our booth: Swivel Close-Up. This Kinect for Windows-based kiosk, which operates within a two-foot environment, lets customers try on accessories much smaller than clothing, including makeup, sunglasses, and jewelry. Earrings dangle and twist beautifully as a shopper tries them on virtually, and consumers now have the opportunity to try on a limitless number of lip colors without lipstick ever touching their lips.
"We can now provide an extended Try-On solution that is real-time, 3-D, and full motion as opposed to just uploading a static image and then modifying it," said FaceCake CEO Linda Smith. "The result is a lifelike representation that's just like looking in a mirror—your dream dressing room mirror powered by Swivel and Kinect for Windows! It's both efficient and fun for the customer."
One of the key themes of this year's NRF event was putting customers at the center of retail marketing, something that Kinect for Windows accomplishes readily, thanks to its ability to quickly entice customers into virtual shopping spaces within actual storefronts, making it easier than ever for them to find, experience, and purchase products that are right for them.
"Staying competitive in retail today means putting customers at the heart of the business and seeking new ways to deliver value in the store," Fry said. "A Kinect for Windows retail display immediately puts the focus on the shopper, delivering uniquely personalized results that drive both sales and customer satisfaction."
Build-A-Bear Workshop stores have been delivering custom-tailored experiences to children for 15 years in the form of make-your-own stuffed animals, but the company recently recognized that its target audience was gravitating toward digital devices. So it has begun advancing its in-store experiences to match the preferences of its core customers by incorporating digital screens throughout the stores—from the entrance to the stations where the magic of creating new fluffy friends happens.
A key part of Build-A-Bear's digital shift is their interactive storefront that's powered by Kinect for Windows. It enables shoppers to play digital games on either a screen adjacent to the store entrance or directly through the storefront window simply by using their bodies and natural gestures to control the game.
Children pop virtual balloons in a Kinect for Windows-enabled game at this Build-A-Bear store's front window.
"We're half retail, half theme park," said Build-A-Bear Director of Digital Ventures Brandon Elliott. The Kinect for Windows platform instantly appealed to Build-A-Bear as "a great enabler for personalized interactivity."
The Kinect for Windows application, launched at six pilot stores, uses skeletal tracking to enable two players (four hands) to pop virtual balloons (up to five balloons simultaneously) by waving their hands or by touching the screen directly. While an increasing number of retail stores use digital signage these days, Elliott noted: "What they're not doing is building a platform for interactive use."
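A hit test like the one this game needs can be sketched as follows, assuming hand joints have already been projected to screen coordinates. The function names and hit radius are invented for illustration; the actual Build-A-Bear application is not public.

```python
# Hypothetical balloon-pop hit test: a balloon pops when any tracked
# hand position comes within a fixed radius of it on screen. All names
# and numbers are invented for this sketch.

import math

def pop_hits(hands, balloons, radius=40):
    """Return indices of balloons touched by any hand.
    hands and balloons are (x, y) screen positions in pixels."""
    popped = []
    for i, (bx, by) in enumerate(balloons):
        if any(math.hypot(hx - bx, hy - by) <= radius for hx, hy in hands):
            popped.append(i)
    return popped

hands = [(100, 120), (300, 200)]  # e.g., two hands of one player
balloons = [(110, 130), (500, 80), (310, 190)]
print(pop_hits(hands, balloons))  # -> [0, 2]
```

Because skeletal tracking supplies each player's hands independently, the same check naturally extends to the two-player, four-hand case the post describes.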
"We wanted something that we could build on, that's a platform for ever-improving experiences," Elliott said. "With Kinect for Windows, there’s no learning curve. People can interact naturally with technology by simply speaking and gesturing the way they do when communicating with other people. The Kinect for Windows sensor sees and hears them."
"Right now, we're just using the skeletal tracking, but we could use voice recognition components or transform the kids into on-screen avatars," he added. "The possibilities are endless." Part of Build-A-Bear's vision is to create Kinect for Windows apps that tie into the seasonal marketing themes that permeate the stores. Elliott said that Build-A-Bear selected the combination of the Microsoft .NET Framework, Kinect for Windows SDK, and Kinect for Windows sensor specifically so that they can take advantage of existing developer platforms to build these new apps quickly.
“We appreciate that the Kinect for Windows SDK is developing so rapidly. We appreciate the investment Microsoft is making to continue to open up features within the Kinect for Windows sensor to us,” Elliott said. "The combination of Kinect for Windows hardware and software unlocks a world of new UI possibilities for us."
Microsoft developer and technical architect Todd Van Nurden and others at the Minneapolis-based Microsoft Technology Center helped Build-A-Bear with an early consultation that led to prototyping apps for the project.
"The main focus of my consult was to look for areas beyond simple screen-tied interactions to create experiences where Kinect for Windows activates the environment. Screen-based interactions are, of course, the easiest, but less magical than environmental ones," Van Nurden said. "We were going for magical."
The first six Build-A-Bear interactive stores launched in October and November 2012 in St. Louis, Missouri; Pleasanton, California; Annapolis, Maryland; Troy, Michigan; Fairfax, Virginia; and Indianapolis, Indiana. Four of the stores have gesture-enhanced interactive signs at the entrance, while two had to be placed behind windows to comply with mall rules. Kinect for Windows can work through glass with the assistance of a capacitive sensor that enables the window to work as a touch screen, and an inductive driver that turns the glass into a speaker.
So far, Build-A-Bear has been thrilled with what Elliott calls "fantastic" results. "Kids get it," he said. "We have a list of apps we want to build over the next couple of years. We can literally write an app for one computer in the store, and put it anywhere."
The October release of the Kinect for Windows software development kit (SDK) was a pivotal update with a number of key improvements. One of the most important is enhanced control over the sensor's infrared (IR) sensing capabilities, which opens up a world of new possibilities for developers.
IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat constrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which transmits a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people, along with their skeletal tracking data.
One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”
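The dual-mode idea Smith describes can be modeled in a few lines. The sensor stub below is purely illustrative (the real switch is the `KinectSensor.ForceInfraredEmitterOff` API mentioned above, in the C# SDK); the point is just the toggling logic between clean IR and illuminated tracking.

```python
# Illustrative model of dual-mode IR operation: watch passively with the
# emitter off ("clean IR"), and turn the emitter on only when the scene
# needs full skeletal tracking. SensorStub is a stand-in, not the SDK.

class SensorStub:
    def __init__(self):
        self.force_infrared_emitter_off = False  # mirrors the SDK flag's role

    def mode(self):
        return "clean IR" if self.force_infrared_emitter_off else "skeletal tracking"

def set_mode(sensor, motion_detected):
    """Illuminate the scene for tracking only when motion is detected;
    otherwise monitor passively with the emitter off."""
    sensor.force_infrared_emitter_off = not motion_detected
    return sensor.mode()

s = SensorStub()
print(set_mode(s, motion_detected=False))  # -> clean IR
print(set_mode(s, motion_detected=True))   # -> skeletal tracking
```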
Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher definition color camera for enhanced green screen capabilities. This will enable them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine."
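The registration step behind this technique can be sketched as follows: back-project a depth pixel to a 3-D point, translate it into the external color camera's frame, and re-project. All camera parameters here are invented, and a real calibration (from OpenCV, for example) would also recover rotation and lens distortion, which this sketch ignores.

```python
# Sketch of depth-to-color registration for the green screen scenario.
# Assumes a pure horizontal offset between cameras (no rotation); every
# numeric parameter below is invented for illustration.

def depth_to_color(u, v, z, fx_d, fy_d, cx_d, cy_d,
                   fx_c, fy_c, cx_c, cy_c, baseline_x):
    """Map depth pixel (u, v) at depth z (meters) to a color-image pixel."""
    # Depth pixel -> 3-D point in the depth camera's frame
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d
    # Translate into the color camera's frame
    x_c = x - baseline_x
    # 3-D point -> color pixel (pinhole projection)
    uc = fx_c * x_c / z + cx_c
    vc = fy_c * y / z + cy_c
    return uc, vc

# A point at the depth camera's principal point, 2 m away, seen by a
# higher-resolution color camera mounted 5 cm to the side:
uc, vc = depth_to_color(320, 240, 2.0, 580, 580, 320, 240,
                        1050, 1050, 640, 360, 0.05)
print(uc, vc)  # roughly 613.75 360.0
```

Once each depth pixel can be located in the color image, the person mask from the depth stream can cut a matte out of the higher-resolution color frame without sacrificing frame rate.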
A unique clinic for treating children with cancer and blood disorders, alex’s place is designed to be a warm, open, communal space. The center—which is located in Miami, Florida—helps put its patients at ease by engaging them with interactive screens that allow them to be transported into different environments—where they become a friendly teddy bear, frog, or robot and control their character’s movements in real time.
"As soon as they walk in, technology is embracing them," said Dr. Julio Barredo, chief of pediatric services at alex's place in The Sylvester Comprehensive Cancer Center, University of Miami Health Systems.
The clinic—which opened its doors in May 2012—was conceived of and designed with this in mind, and the Kinect for Windows digital experience was part of the vision from day one. Created by Snibbe Interactive, Character Mirror was designed to fit naturally within this innovative, unconventional treatment environment. The goal is to help reinforce patients' mind-body connection with engaging play and entertainment, as well as to potentially reduce their fear of technology and the treatments they face. As an added benefit, nurses can observe a child's natural range of movement during play and more easily draw out answers to key diagnostic questions.
"I find the gestural interactive experiences we created for alex's place in Miami among the most worthwhile and satisfying in our history," said Scott Snibbe, founder and CEO of Snibbe Interactive. "Kids in hospitals are feeling lonely, scared, and bored, not to mention sick. Partnering with Alex Daly and Dr. Barredo, we created a set of magical experiences that encourage healthy, social, and physical activity among the kids.
"Kids found these experiences so pleasing that they actually didn't want to leave after their treatments were complete," Snibbe added. "We are very excited to roll out these solutions to more hospitals, and transform healthcare through natural user interfaces that promote social play and spontaneous physical therapy."