As we promised earlier this week, Kinect for Windows information for developers has migrated. You’ll now find it in the Windows Dev Center, where it’s in some pretty snazzy company. (Hello, Cortana. Hi there, Windows 10 and Universal Windows Apps.)
Parlez-vous français? Sprechen Sie Deutsch? 한국어를 말 하나요? We’ve got you covered, since our Dev Center pages are now available in 23 languages. So cruise on over to our new home and pay us a visit. You’ll find information about everything Kinect for Windows, from hardware specs and feature descriptions to information on creating innovative apps that let users interact naturally with computers and computing devices. And you won’t need a bilingual dictionary to read it!
The Kinect for Windows Team
Starting August 27, you will find the information you need for developing with Kinect for Windows in a new location.
Kinect for Windows information for developers will soon reside in the Windows Dev Center. This is where you’ll go to learn about the features of the latest sensor and the free Kinect for Windows SDK 2.0, and where you’ll have ready access to information about creating innovative apps that let users interact naturally with computers and computing devices. You’ll also have developer-related information about building apps on the Windows platform right at hand. Think of it as one-stop shopping.
Meanwhile, information about purchasing the latest Kinect hardware—including the Kinect for Xbox One sensor and the Kinect Adapter for Windows—will be available in the Microsoft Store.
We will continue to showcase solutions from our MVPs (Microsoft Most Valuable Professionals) and other partners in this blog. In fact, we would love to feature more of the innovative work being done by our partner community. So partners, let’s hear from you: send us email about your solution.
One more benefit to the change of address: the Kinect for Windows web content is becoming more multilingual. Although our current website is available in just two languages (English and Chinese), the content we move to the Windows Dev Center will be published in 23 languages (including English and Chinese). So if English or Chinese isn’t your preferred tongue, chances are pretty good that you can find a better linguistic fit now.
Bottom line: rest easy; your Kinect for Windows information is just moving—to some pretty upscale neighborhoods no less.
Dementia: the statistics are sobering. According to the World Health Organization, 47.5 million people worldwide suffer from Alzheimer’s disease, the most common form of dementia, with 7.7 million new cases occurring every year. In the United States alone, there are currently more than 5 million people with Alzheimer’s, and that number is predicted to double by mid-century due to an aging population.
But statistics are just numbers—what’s really heartbreaking is to see how Alzheimer’s affects a loved one. The declines in memory and reasoning may be slight at first, but eventually Alzheimer’s can rob its victims of their memories, intellect, and personality—in short, everything that made them who they were. Until you’ve experienced it, it’s hard to imagine the pain of seeing your loved one’s frustration as they lose their memory and speech, your parent unable to remember which of their children you are, or your spouse incapable of recognizing you at all.
Researchers around the world are searching for ways to prevent or cure Alzheimer’s and other dementias, but the underlying causes are still not completely understood. And with drug therapies offering only modest hope, treatments that stimulate mental activity have emerged as one of the more promising therapeutic areas. And that is where Memore—and Kinect for Windows—come in.
Memore is a suite of video games with a very serious purpose: to help improve the quality of life and cognitive function of elderly people who have or are at risk of developing dementia. Developed by RetroBrain R&D, an award-winning healthcare startup based in Hamburg, Germany, Memore successfully translates years of evidence-based research from leading neurologists, rehabilitation specialists, and psychological experts—some of whom are members of RetroBrain R&D's advisory board—into real-time, second-by-second decisions embedded within the games.
The games thus incorporate proven therapies, which are based on the plasticity of the brain’s neurons—their ability to grow new connections—with a goal of delaying the effects of dementia and mild cognitive impairment. For example, in Memore’s motorcycle racing game, the player navigates through a soothing virtual landscape of leafy trees, clear skies, and pleasant weather. But he or she also confronts real challenges, such as avoiding obstacles and refueling the motorcycle, while racing toward the final destination. These scaled and adaptable cognitive tasks prime users’ reactions, balance, and spatial orientation, building skills that translate into positive real-world impacts—for example, decreasing the probability of the falls and fall-related injuries that plague many seniors. Such traumatic incidents can be especially devastating to dementia sufferers, worsening their disease progression while increasing the complexity and costs of their nursing care.
Moreover, thanks to Kinect for Windows, RetroBrain R&D’s Memore games are completely gesture based, which means that players need not master the hand-eye coordination required to use a traditional controller. This user-friendly approach makes the games more attractive to elderly people, many of whom have had little or no prior experience with computer games and are, therefore, reluctant to attempt gamified therapy.
As Manouchehr Shamsrizi, one of the co-founders of RetroBrain R&D, explains, “Microsoft's Kinect for Xbox One sensor allows for new, innovative control concepts, based on precise recognition of gestures and movements, thus creating a controller-free way of controlling video games. By using gesture-based controls, we avoid the main reason why older people often reject video games and computers: complex, tricky, and unintuitive controls.”
RetroBrain R&D’s developers took full advantage of the latest Kinect sensor and the free Kinect for Windows software development kit (SDK 2.0), using the system’s enhanced body tracking for heuristic recognition of simple gestures, Visual Gesture Builder for recognition of complex gestures, and the 1080p color camera to capture pictures of the user. In addition, the team used Kinect Studio to record and replay gestures, allowing them to refine their coding without being tethered to the Kinect sensor.
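To give a feel for what “heuristic recognition of simple gestures” means in practice, here is a minimal Python sketch (not RetroBrain R&D’s actual code): a gesture such as “hand raised” can be detected by comparing two tracked joint positions each frame. The joint names, coordinate convention, and 10 cm threshold are all illustrative assumptions.

```python
def is_hand_raised(joints, threshold=0.10):
    """Heuristic gesture check: the hand counts as 'raised' when it sits
    a fixed margin above the head along the vertical (Y) axis.

    `joints` maps joint names to (x, y, z) camera-space coordinates in
    metres, mirroring the per-frame skeleton data a body-tracking SDK
    would supply.
    """
    hand_y = joints["hand_right"][1]
    head_y = joints["head"][1]
    return hand_y > head_y + threshold

# Example frame: the right hand is 25 cm above the head
frame = {"head": (0.0, 0.60, 2.0), "hand_right": (0.15, 0.85, 2.0)}
print(is_hand_raised(frame))  # True
```

Running such a check on every frame, with a little smoothing across frames to reject jitter, is essentially what heuristic gesture recognition amounts to; more complex gestures are where a tool like Visual Gesture Builder takes over.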
To date, RetroBrain R&D’s therapeutic games have shown great promise in case-by-case tests with dementia sufferers, and the team has been selected as one of the most innovative and promising startups by the City of Hamburg. Pending the results of a larger, controlled study that is scheduled to begin in 2016, the company will seek certification of Memore as a therapeutic system. In the meantime, RetroBrain R&D's team will start selling to businesses and consumers and is currently looking for investment to scale up sales and continue marketing the games as a platform to enhance general wellbeing and quality of life for senior citizens.
They probably won’t get a ticker tape parade like the U.S. national women’s soccer team did after winning the 2015 World Cup, but the winners of the recent 2015 RoboCup certainly deserve their moment in the sun. This international competition—it’s been held annually since 1997—features soccer-playing robots of different sizes and configurations. And while it looks like all fun and games, the RoboCup competition has a serious purpose: to advance the fields of robotics and artificial intelligence (AI).
“Soccer-bots” are more than an entertaining novelty: designing these complex machines helps advance robotics and artificial intelligence research. Plus, they can vacuum the artificial playing surface (or at least they should!).
The 2015 RoboCup, which took place from July 17 to 23 in Hefei, China, drew more than 100 teams from 76 countries. That’s a lot of robots, and most—over 90 percent—featured motion sensors, 70 percent of which were Kinect sensors. (Those figures are courtesy of Beijing Trainsfer Technology Development, a Chinese high-tech company whose personnel were at the games.) The theme of this year’s RoboCup was advances in inexpensive but high-quality robotic platforms and hardware, so the prominence of Kinect-enabled robots made perfect sense. After all, the latest Kinect sensor is prized for coupling low cost with outstanding capabilities.
Just in case any of the assembled researchers needed more incentive to use the Kinect sensor, Guobin Wu of Microsoft Research Asia and Browning Wan of Trainsfer gave an invited talk in which they demonstrated how the Kinect for Xbox One sensor can enable robots to recognize gestures, body movements, and voice commands.
So, how did our Kinect-enabled robots do? Well, of the 12 teams competing in the Middle Size League, nine used the Kinect sensor in their goalkeepers, including the champion team from Beijing Information Science and Technology University (BISTU). The BISTU developers tested a number of different motion sensors and chose the latest Kinect sensor because its color camera provided an expanded field of vision, enabling the goalkeeper to detect the ball from as far away as 7.5 meters, and its transmission rate of 30 frames per second gave the goalkeeper more time to get into position to block an incoming shot.
This goalkeeper robot has a distinct advantage: a Kinect sensor (surrounded by a protective gray-colored frame).
And while robots playing soccer are surprisingly entertaining, the real intent of the RoboCup, as mentioned earlier, is to promote serious research into AI and robotics, and we’re happy that the latest Kinect sensor is helping in these efforts.
We’re used to seeing applications in which Kinect for Windows tracks the movements of users—after all, that’s what the Kinect sensor is designed to do. But what if you picked up the Kinect sensor and moved it around, so that instead of the sensor tracking your movements, you could track the position and rotation of the sensor through three-dimensional space?
That’s exactly what filmmaker Sam Maliszewski has set out to do with NextStage, a Kinect for Windows application that effectively turns the Kinect for Xbox One sensor into a real-time virtual production camera. Maliszewski places retroreflective markers throughout the scene he intends to film, and then he physically moves the Kinect sensor around the set, using its onboard cameras to record the action while data from the reflective markers instantly and accurately tracks the sensor’s 3D position.
The resulting video footage can then be combined with virtual objects and sets, with no need for frame-by-frame processing. Moreover, since NextStage provides depth-based keying, it lets filmmakers separate live-action subjects from the background, and it allows them to place live-action actors or objects on a virtual set without using green-screen techniques. Alternatively, depth mattes created in NextStage can provide a high-quality “garbage” matte for green-screen overlays.
Maliszewski has developed two versions of NextStage, both currently in beta: NextStage Lite, a free download that captures the video by using the Kinect sensor’s color and depth cameras, and NextStage Pro, which enables filmmakers to sync the tracking data to an external camera and to export it to such applications as Blender and Maya.
To movie lovers, the image of the late John Belushi cavorting around the Delta Tau Chi fraternity house in a toga is comedy gold. But while the period costuming was played for laughs in Animal House, the recreation of virtual Roman attire is serious business to researchers Erkan Bostanci, Nadia Kanwal, and Adrian Clark, who are using Kinect for Windows to create augmented reality (AR) experiences that immerse participants in the cultural heritage of the ancient world. Their system not only recreates virtual representations of ancient Roman architecture, but it also virtually dresses participants in togas and Roman helmets, and even equips them with a virtual Roman sword.
Like previous AR endeavors, theirs combines real-world imagery with computer-generated virtual content. But instead of creating mythical worlds of fantastical beasts, the researchers hope to apply their system (currently in the prototype stage) to recreate archeological treasures at the very sites of the ancient remains, thus allowing visitors to experience the wonders of antiquity with minimal disturbance to the real-world artifacts.
As shown here in a laboratory demonstration, the AR system recognizes flat surfaces and augments them with the virtual recreations of architectural features.
Their AR system uses the Kinect sensor’s color and depth cameras to determine the camera’s position and then to find and virtually augment flat surfaces in the real world in real time. So, for example, the system recognizes rectangular shapes as columns and augments them accordingly. This differs from earlier systems that attempt to augment reality by overlaying synthetic objects on camera images, a process that requires an offline phase to create a map of the environment. The Kinect-enabled system thus eliminates the need for programming a graphics processing unit, which makes it attractive for use with lower-power and mobile computers.
By using the Kinect sensor's body tracking capabilities, the system virtually clothes participants in Roman togas and helmets.
But what really captured our imagination was the use of the Kinect sensor’s body tracking capabilities to virtually clothe participants in historically correct garb. As the researchers note, by augmenting the appearance of the participants, the system deepens their immersion into the ancient culture. The prototype system uses Kinect tracking data of a participant’s head, torso, and right hand to virtually superimpose a Roman helmet, toga, and sword. After all, if you’re strolling around an AR recreation of the Roman Forum, your immersion in the virtual world can be ruined by the sight of your fellow explorers in cargo shorts and t-shirts.
So many blogs, so little time. Ever feel that way when you realize it’s been a long time since you’ve checked in with one of your favorite bloggers? Well, that’s how we’re feeling today, after reading this four-month-old blog post from Microsoft Most Valuable Professional (MVP) James Ashley. It seems that while we were busy watching demos of Microsoft HoloLens and wondering what’s next for Cortana, Ashley was busy sussing out the possibilities for using the Kinect for Xbox One sensor (along with the Kinect Adapter for Windows) with the March 2015 release of Unity 5.
Kinect MVP James Ashley captures himself in a Kinect-enabled Unity 5 app.
With his usual attention to detail, Ashley takes the reader through nine steps to build a Kinect-enabled application in Unity 5 by using plug-in support available in Unity’s free Personal Edition. As he explains, this plug-in makes it “very easy to start building otherwise complex experiences like point cloud simulations that would otherwise require a decent knowledge of C++.”
Unity has long been the preferred platform for game developers, but until now, plug-in support was only available to those who purchased a costly Unity Pro license. With such support now included with the free Unity Personal license, there’s no cost barrier to creating Kinect-enabled Unity apps.
Let the games begin!
Major League Baseball’s 2015 All-Star Game is in the record books, with the victorious American League earning bragging rights and home-field advantage in this year’s World Series. But the American Leaguers weren’t the only winners at Cincinnati’s Great American Ball Park: lots of fans of either league came away with a winning fit in All-Star apparel, thanks to Virtual Mirrors powered by Zugara’s virtual dressing room software and the latest Kinect sensor.
Virtual Mirror stations were strategically placed near the entrance to the Major League Baseball store at the All-Star FanFest, which was located at the nearby Duke Energy Convention Center. These stations looked like boxes with dressing room mirrors, but above the mirrored surface was a Kinect sensor that captured the shopper’s image. The setup allowed fans of all sizes and ages (men, women and children) to try on All-Star branded jerseys, T-shirts and other baseball-related apparel—virtually.
Virtual Mirror at the MLB All-Star Game FanFest
The latest Kinect sensor’s advanced body tracking capabilities are key to the Virtual Mirror, providing an accurate body map, which the software then uses to automatically size and scale the selected clothing to fit the shopper. What’s more, the Kinect sensor’s ability to accurately follow the movements of 25 skeletal joints enables enhanced motion and gesture tracking, allowing the software to synchronize the apparel to the would-be purchaser’s movements. Add in background subtraction and the insertion of a high-definition virtual background image, and the shopper needn’t imagine how she’d look in that new National League jersey; she can see it right on the screen!
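As a rough illustration of how a body map can drive garment sizing, the sketch below derives a scale factor from the tracked shoulder span. The joint names and the 0.38 m reference width are illustrative assumptions, not details of Zugara’s implementation.

```python
import math

def garment_scale(joints, reference_shoulder_width=0.38):
    """Scale factor for a virtual garment: the ratio of the shopper's
    tracked shoulder span to the shoulder width the garment art was
    authored for. `joints` maps joint names to (x, y, z) positions in
    metres, as a body-tracking SDK would report them."""
    span = math.dist(joints["shoulder_left"], joints["shoulder_right"])
    return span / reference_shoulder_width

# A shopper whose shoulders span exactly the reference width
frame = {"shoulder_left": (-0.19, 1.4, 2.0),
         "shoulder_right": (0.19, 1.4, 2.0)}
print(round(garment_scale(frame), 2))  # 1.0
```

Recomputing this factor every frame, joint by joint, is what keeps the overlaid jersey glued to the shopper as she turns and moves.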
Although the 2015 All-Star Game is history, the Virtual Mirror technology is poised to take off, as more and more retailers use it to engage with customers. So maybe you don’t want an American League warm-up jacket—how about a nice Oxford shirt and matching tie? Step up to the virtual dressing room and see how they look.
Imagine you’re wearing multiple hats: you’re a business owner, a software developer, and a devoted spouse and parent. If you’re like Anup Chathoth, this scenario requires no imagining whatsoever. A husband and the father of two young children, Anup is also the CEO and main software developer at UBI Interactive, a Seattle-based startup whose products use the Kinect for Windows v2 sensor to project an interactive computer image onto any surface—walls, tabletops, windows, you name it.
Running a growing startup company is a full-time job, and as any parent knows, so is raising a family. So with two full-time gigs already on his plate, how does Anup find time to develop the next features for Ubi’s software?
“That’s a real problem,” he acknowledges. “During the day, I’m busy being a CEO, looking at cash flow, scheduling projects, meeting with potential customers. And at night I want to spend as much time as possible with my wife and kids. It doesn’t leave much time for writing code.”
Using Kinect Studio to develop Ubi without a sensor plugged in
Luckily for Anup, Kinect Studio makes coding for Kinect for Windows applications a lot easier to pack into a crowded day. Kinect Studio, which is included in the free Kinect for Windows SDK 2.0, allows a developer to record all the data that’s coming into an application through a Kinect sensor. This means that you can capture the data on a series of body movements or hand gestures and then use that data over and over again to debug or enhance your code. Instead of being moored to a Kinect for Windows setup and having to repeatedly act out the user scenario, you have a faithful record of the color image, the depth data, and the three-dimensional relationships. With this data uploaded to your handy laptop, you can squeeze in a little—or a lot—of Kinect coding whenever time permits.
Let’s take a quick look at the main features of Kinect Studio. As shown below, it features four windows: a color viewer, a depth viewer, a 3D viewer, and a control window that lets you record and play back the captured scenario.
The four windows in Kinect Studio, clockwise from top: control window, color viewer, depth viewer, and 3D viewer
The color viewer shows exactly what you’d expect: a faithful color image of the user scenario. The depth viewer shows the distance of people and objects in the scene using color: near objects appear red; distant ones are blue; and objects in-between show up in various shades of orange, yellow, and green. The 3D viewer gives you a three-dimensional wire-frame model of the scene, which you can rotate to explore from different perspectives.
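The depth viewer’s near-red, far-blue coloring amounts to bucketing each depth reading along a ramp from warm to cool colors. Here is a minimal sketch of that idea, with illustrative band boundaries rather than the SDK’s exact ramp:

```python
def depth_to_band(depth_mm, near_mm=500, far_mm=4500):
    """Bucket a depth reading (millimetres) into the colour bands the
    depth viewer uses: red for near, blue for far, and warm-to-cool
    shades in between. Band boundaries here are illustrative."""
    if depth_mm <= near_mm:
        return "red"
    if depth_mm >= far_mm:
        return "blue"
    # Evenly split the remaining range into the intermediate shades.
    bands = ["orange", "yellow", "green"]
    t = (depth_mm - near_mm) / (far_mm - near_mm)
    return bands[min(int(t * len(bands)), len(bands) - 1)]

for d in (400, 1500, 2500, 3500, 5000):
    print(d, depth_to_band(d))
```

Mapping every pixel of a depth frame through a function like this, once per frame, is all it takes to turn raw distance data into the at-a-glance picture the depth viewer provides.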
The control, or main, window in Kinect Studio is what brings all the magic together. Here’s where you find the controls to record, save, and play back the captured scenario. You can stop and start the recording by moving the cursor along a timeline, and you can select and save sections.
Once you’ve recorded the user scenario and saved it to your laptop in Kinect Studio, you can play it over and over while you modify the code. The developers at Ubi, for instance, employ Kinect Studio to record usability sessions, during which end users act out various scenarios employing Ubi software. They can replay, stop, and start the scenarios frame by frame, to make sure their code is behaving exactly as they want. And since the recordings are accessible from a laptop, developers can test and modify their Kinect for Windows application code just about anywhere.
Using Kinect Studio to perform and analyze user experience studies
For Anup, it means that he can code during the bus ride home or in bed, after his children have gone to sleep. “Kinect Studio doesn’t actually increase the number of hours in the day,” he says, “but it sure feels like it.”
You go to a large grocery store to buy some Fuji apples. The signs tell you which are organic and which aren’t, but that (and the price per weight) is about all the information you’ll get. Now imagine a shopping scenario where you could learn much more about those apples: the orchard where they were grown, what chemicals they were treated with, how long ago they were picked, where they were warehoused, and more. And imagine you could learn all this just by pointing your finger at the fruit.
Visitors to the Future Food District at Milan Expo 2015 needn’t imagine such a shopping trip—they experienced it in the Grocery Store of the Future, a pavilion that functioned as a real grocery store, complete with 1,500 products on display. What the shoppers might not have noticed were the 200 Kinect for Xbox One sensors strategically placed near the product displays. But while the shoppers might not have seen the Kinect sensors, the sensors certainly saw them, capturing their body images and measuring their distance from the product display.
Two hundred strategically placed Kinect for Xbox One sensors enabled shoppers to obtain detailed product information simply by pointing their finger.
Then, when a Kinect sensor detected a shopper pointing at an item, an overhead, mirrored display presented an array of detailed product information, extracted from a Microsoft Azure cloud database. The gesture recognition resulted from a custom WPF application created by business solutions providers Accenture and Avanade. The developers chose the Kinect for Xbox One sensor because it provides precise body tracking and has the ability to detect shoppers at medium depth from the product displays. They also valued the high reliability of the sensors, which were mounted in a crowded environment and functioning 24 hours a day.
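Conceptually, detecting what a shopper is pointing at comes down to extending the elbow-to-hand ray until it meets the plane of the product display. The sketch below illustrates that geometry in Python; the coordinates and the display plane at z = 0 are assumptions for illustration, not details of the Accenture and Avanade application.

```python
def pointing_target(elbow, hand, display_z=0.0):
    """Extend the elbow-to-hand ray until it crosses the vertical plane
    of the product display (assumed here at z = display_z) and return
    the (x, y) point it hits, or None if the arm points away from it.
    Coordinates are (x, y, z) in metres, z increasing away from the
    display toward the shopper."""
    ex, ey, ez = elbow
    hx, hy, hz = hand
    dz = hz - ez
    if dz == 0:
        return None                      # arm parallel to the display
    t = (display_z - ez) / dz
    if t <= 0:
        return None                      # pointing away from the display
    return (ex + t * (hx - ex), ey + t * (hy - ey))

# A shopper 2 m from the display, arm pointing forward and slightly left
print(pointing_target(elbow=(0.3, 1.3, 2.0), hand=(0.2, 1.4, 1.6)))
```

Once the intersection point is known, looking up which shelf region it falls in tells the application which product’s information to put on the mirrored display.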
So the next time you’re in the produce aisle puzzling over which apple to buy, think about those lucky patrons at the Milan Expo. And be comforted by the knowledge that the technology that powered the Grocery Store of the Future is here today.