In a 1930 jazz classic, singer Lee Morse advised that “t’aint no sin to take off your skin and dance around in your bones.” Well, this past Halloween, Microsoft software engineer Snorri Gislason let the neighborhood kids do just that—in a graveyard scene, no less—with some help from Kinect for Windows, the free personal edition of Unity, and the Microsoft Garage community.
Snorri’s clever Halloween app used the latest Kinect sensor to capture the gyrations of trick-or-treaters as they cavorted on the driveway. The app relied on the sensor’s ability to track multiple users at once and 25 body joints per person—perfect for making skeletons dance. Then, employing the RUIS Kinect plug-in for Unity 5, the app transferred the kids’ dance moves to skeletal avatars projected on a screen hung in front of the garage door, so the delighted trick-or-treaters could, virtually, take off their skin and dance around in their bones.
The Kinect for Windows Team
Back in June, we featured a story about Reflexion Health and its innovative application that uses Kinect for Windows to help physical therapy patients. As we described at that time, the application, called Vera, was being tested at five medical centers. Now we are pleased to report that Reflexion Health has received 510(k) clearance from the US Food and Drug Administration (FDA) for its Vera system.
Vera uses the depth sensing and body tracking capabilities of the latest Kinect sensor to capture a patient’s exercise movements in precise detail. The system provides patients with a model for how to perform the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. Anyone who’s struggled to follow the traditional stick-figure drawings of rehab exercises will appreciate Vera’s precise, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, who can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
FDA clearance marks an important milestone for Reflexion Health. “We are thrilled to be one of a growing number of digital medicine companies to receive FDA clearance to use innovative tools and methods, such as Vera, to deliver care in a more engaging and efficient way,” says Spencer Hutchins, CEO and co-founder of Reflexion Health. “We look forward to continuing to demonstrate Vera’s positive impact on patients, doctors, and therapists.”
What might you do with a robotic arm? Well, if you’re part of the maker community, the possibilities are almost endless. Hook it up to an Arduino or Raspberry Pi controller, and you’ll have a companion that can paint, sort objects, play games, and generally amaze your friends and relatives. All you need is a pile of cash and some serious coding skills. Or do you?
A recent Kickstarter campaign is promoting the 7Bot, a robotic arm with six axes, a 17-inch reach, and big ambitions—coupled with a small price tag of $350. And better yet, you can program it without writing a single line of code, just by using your own arm to model the desired functions under the watchful eye of a Kinect for Xbox One sensor.
The 7Bot desktop robotic arm
By using capabilities enabled by the Kinect for Windows SDK 2.0, the sensor captures the 3D coordinates of your arm joints, after which a set of inverse kinematics algorithms (actually, a series of multiplication operations between matrices and vectors, which the 7Bot team intends to open-source) calculates the angle of each servo that is required to duplicate the movements modeled by your arm. Those data are then sent to the 7Bot via Bluetooth, and, voilà, you’ve programmed your robot without first getting an engineering degree. This video shows the process in action:
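To get a feel for what those inverse kinematics calculations involve, here’s a minimal sketch—not the 7Bot team’s actual code, and simplified to a two-link planar arm rather than six axes—that recovers the shoulder and elbow servo angles needed to place the arm’s tip at a target point:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Inverse kinematics for a planar two-link arm reaching (x, y).

    l1, l2 are the link lengths; returns (shoulder, elbow) angles in
    radians (elbow-down solution). Raises ValueError if out of reach.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend directly.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to the target, minus the offset that
    # the bent second link introduces.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

A real six-axis solver chains several such relationships together (hence the matrix–vector multiplications the team mentions), but the core idea—turning a tracked 3D position into servo angles—is the same.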
Many of the posts in this blog demonstrate how the latest Kinect sensor and software development kit (SDK 2.0) comprise a powerful platform that you can use for developing interactive experiences—but as any developer knows, creating great apps requires hard work, no matter how potent the platform is. Anything that eases that workload should come as welcome news—which is why Kinect for Windows developers might want to take a look at Vitruvius.
Vitruvius provides a framework that simplifies many aspects of Kinect for Windows app development. For example, it eases the process of avateering, allowing you to animate 3D models by using your own body and a single line of code. It even provides a pair of rigid models, one female and one male, along with multiple textures. Check it out in the video below.
The product also alleviates a lot of the heavy math associated with implementing motion tracking in advanced Kinect apps. Vitruvius lets you easily calculate joint angles and find the heights of tracked users, the length of specified segments in the 3D space, and the rotation of the body in the X, Y, and Z axes.
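As an illustration of the kind of math Vitruvius hides from you—this is not the library’s own code, just a hypothetical stand-alone sketch—here is how a joint angle falls out of three tracked joint positions via the dot product:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 3D joint positions a-b-c,
    e.g. shoulder-elbow-wrist gives the elbow flexion angle."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(v1[i] * v2[i] for i in range(3))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp against floating-point drift before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

With Vitruvius, a calculation like this becomes a single call on the tracked body—no vector algebra required on your part.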
Vitruvius also provides simplified bitmap manipulation and lets you take full advantage of the latest Kinect for Windows facial tracking prowess. And its gesture detector makes it easy to add waves, swipes, and other movements that facilitate Kinect for Windows’ human-computer interactions.
Created by a team of engineers led by Vangos Pterneas, a Microsoft Kinect Most Valuable Professional, Vitruvius comes in different configurations, with different price points, ranging from free to a “platinum” package that goes for $1,099. But even the free version, which you can find on GitHub, comes loaded with features that speed up Kinect app development, including bitmap generators, background removal tools, angle calculators, and gesture detection. The platinum package adds a host of advanced features, such as support for Unity 3D, high-definition face extensions, and avateering tools, and it includes free updates, phone support, 24-hour response time, and an hour of consulting with Pterneas himself. Priced in between the free and platinum versions are an academic version, at $199, and a premium edition, at $399. View the complete list of features for each version on the Vitruvius website.
We’ve seen Kinect for Windows used in some amazing applications, from educational games to physical therapy to interactive shopping, but none has packed greater emotional appeal than the interactive display that debuted in Bristol, England, this autumn. The result of volunteer efforts by Andrew Spooner and Mike Taulty, both of Microsoft UK, the display uses Kinect for Windows technology to engage passersby and inform them about the needs of people in rural Africa—and, in the process, it encourages them to make a micro-loan to help finance small businesses in developing countries.
Spooner, whose wife works for Deki, a UK nonprofit that provides microfinance to entrepreneurs in underdeveloped African countries, came up with the idea. He enlisted his colleague Taulty, and together they created a mockup of an African store that is actually run by one of Deki’s loan recipients. The display was set up in the atrium of the Engine Shed, a hub for tech startups in Bristol, England, where its colorful graphics grab the attention of passersby, inviting them to take a quick quiz about the economic realities in developing countries like Uganda.
The interactive display is housed in a mockup of an African store that is actually run by one of Deki’s loan recipients.
That’s where Kinect for Windows comes into play. About six feet in front of the display are three large squares on the floor, marked A, X, and B. The user is instructed to stand on the center square (the one marked X). The virtual storekeeper then asks the user to choose answers A or B in response to a series of brief questions pertaining to economic conditions in rural Africa. The user indicates his or her answers by moving to either square A or B, and the ever-watchful Kinect sensor detects the response and provides appropriate feedback. The BBC ran the following report on the interactive display.
In reality, the correct answers are pretty obvious: the intent isn’t to actually test the user’s knowledge but rather to drive home some key facts about what it’s like to support a family in the developing world. For example, users learn that although women in Africa make up 60 percent of the rural workforce and produce 80 percent of the food, they receive only 10 percent of the region’s income. Or that households in the developing world spend up to 20 percent of their income on primary education.
The use of Kinect technology is key to engaging users, all of whom are busy tech entrepreneurs scurrying through the atrium on their way to somewhere else. Few would be likely to stop and spend a few minutes answering questions on a touchscreen display. But a simple game of hopscotch? That can get the attention of even the most harried would-be tech mogul.
The display highlights some key facts about what it’s like to support a family in the developing world.
The quiz game ends with one final question: “Do you want to find out how you can change a life?” Those who move to square A (that is, answer “yes”) are invited to submit their email address by SMS so that Deki can get in touch with them.
Deki has been delighted with the response to the Kinect-powered display, noting that both lending and website traffic have gone up since its launch. And the BBC news coverage provided a public relations bonanza for Deki and its mission to support entrepreneurs in developing countries.
It’s also worth noting that Spooner and Taulty donated their time under a Microsoft UK program that lets employees spend three days a year on charitable work.
Technology has offered tantalizing prospects to replace the white cane—the traditional low-tech accessory that people who are blind use to detect obstacles in their path. Researchers have experimented with systems based on sonar and on lasers, and while some of these have shown promise, they all require highly specialized and often costly hardware. Now, a multinational team of researchers—Nadia Kanwal of Pakistan, Erkan Bostanci of Turkey, and Keith Currie and Adrian F. Clark, both from the UK—has developed a prototype system that uses the Kinect sensor, a battery, and a laptop PC, a readily available and comparatively inexpensive combination.
The prototype system is strapped to the user’s body; for the production model, the researchers envision a smaller apparatus.
The new system uses the Kinect sensor’s infrared depth sensing capabilities and its RGB video camera to provide information on objects and their distance from a person who is blind or has low vision. It works like this: once its infrared and video sensors have been calibrated, the Kinect sensor’s RGB camera detects the corners of objects in its field of view. This data is correlated with data from the depth sensor, and the result provides information about the presence of an obstacle ahead and its distance from the user.
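The correlation step itself is conceptually simple. Here’s a minimal sketch—an illustration of the idea, not the researchers’ implementation—of looking up depth values at the detected corner pixels and reporting the distance to the nearest obstacle (the 500–4500 mm window reflects the typical reliable range of the original Kinect depth sensor):

```python
def nearest_obstacle(corners, depth, min_mm=500, max_mm=4500):
    """Given detected corner pixels [(x, y), ...] and a depth frame
    indexed as depth[y][x] in millimetres, return the distance to the
    closest obstacle ahead, or None if nothing is in range."""
    distances = []
    for x, y in corners:
        d = depth[y][x]
        if min_mm <= d <= max_mm:  # discard noise / out-of-range readings
            distances.append(d)
    return min(distances) if distances else None
```

In the actual system, the corner positions come from the RGB camera and must first be mapped into the depth camera’s coordinate space via the calibration step described above.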
Lead researcher Kanwal, an assistant professor of computer science at Lahore College for Women University, has spent years exploring the use of adaptive technologies to assist people with special needs. Speaking about the potential value of the Kinect-based system, she says, “A blind person wants to adapt anything that makes them more independent in mobility; therefore, a system that can identify and warn about potential hazards would be much appreciated.”
The Kinect-based system checks for two types of obstacles: those, such as walls, that can completely block a person’s progress, and smaller ones, such as chairs, that can be avoided by changing direction. When an obstruction is detected, the system warns the user to either stop (before they run into a wall) or move right or left (so they can navigate safely around a piece of furniture).
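That stop-or-sidestep decision can be sketched as a simple rule over the nearest obstacle in the left, center, and right portions of the field of view—again, a hypothetical illustration rather than the researchers’ code:

```python
def guidance(left_mm, center_mm, right_mm, stop_mm=1200):
    """Choose a warning from per-region nearest-obstacle distances in
    millimetres; None means that region is clear."""
    if center_mm is not None and center_mm < stop_mm:
        # Something directly ahead within stopping distance:
        # prefer sidestepping toward a clear region.
        if left_mm is None:
            return "move left"
        if right_mm is None:
            return "move right"
        return "stop"  # blocked on all sides, e.g. a wall
    return "clear"
```

A production system would smooth these decisions over several frames so a single noisy depth reading doesn’t trigger a spurious warning.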
The system is carried on the user’s body; much of the equipment is transported in a backpack or shoulder bag, while the Kinect sensor is strapped to the front of the user’s body. While the researchers acknowledge that this setup is cumbersome, they envision a smaller, more portable production model.
The research team tested the prototype with a blindfolded, normally sighted subject and then with a subject who has been blind since birth. Both participants successfully navigated through a test environment filled with various objects. Interestingly, the blind participant felt that the system would be most useful when combined with the traditional white cane, whose sweeping action can detect smaller objects and changes in the walking surface that the Kinect-based system misses. Conversely, the Kinect system can detect large objects sooner and thus can help the user move more smoothly through a crowded space.
The researchers used the Kinect for Xbox 360 in their prototype, but are eager to incorporate the latest sensor, the Kinect for Xbox One, which offers improvements in both depth sensing and video resolution.
Although the researchers have more work to do on the system before it is production ready, the ability to use Kinect for Windows to effectively and inexpensively combine video and depth information offers great potential to help users move more freely through their environment. It might not replace the white cane, but it could go a long way toward complementing it and making the world more navigable for people who are blind.
Are you developing a Unity app and wish you could easily add gesture control? Well, just like Glinda from The Wonderful Wizard of Oz, Kristina Rothe can grant you your wish. However, Kristina, a game development evangelist at Microsoft Germany, uses a video camera instead of a wand, and her magic lies in this detailed video:
Follow Kristina’s video tutorial and you’ll be on your way to creating Kinect-enabled gesture controls for your Unity app—or interactive games. You won’t even need to click your heels three times (though you can if you want to). What you will need is the latest Kinect sensor, the free Kinect for Windows software development kit (SDK 2.0), and the free Kinect plug-in for Unity. Plus, of course, Unity 5, and as Kristina notes in her video, you don’t have to pay for the Professional Edition of Unity 5, as the free Personal Edition provides the necessary engine capabilities.
Once you have the hardware and software, Kristina’s tutorial will take you through each step in the process of creating and adding gesture control to a Unity project. Her patient and detailed explanations will be a wish-come-true for anyone eager to build a gesture-controlled interactive app.
Remember, “there’s no place like home”—and as long as that home includes the latest Kinect sensor and Unity 5, it will be a happily interactive one indeed.
Take the latest Kinect sensor, a PC, a high-definition monitor, and an onscreen avatar—and what do you have? The newest first-person shooter game for Xbox? Nope. What you have is one of the most carefully designed physical therapy (PT) systems available: Kinapsys.
Created by RM ingénierie, a French company that designs and develops software for the healthcare industry, Kinapsys uses game-based exercises to provide comprehensive functional rehabilitation of PT patients. Patients simply stand or sit in front of the Kinect sensor and the monitor while they play games that entail movements that are tailored to each patient’s therapeutic needs. For example, a patient who has undergone knee ligament repair can play a game on a virtual walking trail. As the onscreen avatar strolls along, the patient must squat and move laterally to help the avatar avoid objects that hang from above or protrude from the side. Those movements are beneficial for restoring knee function.
The Kinect sensor captures the patient’s movements and transfers them to the avatar. More importantly, the sensor precisely tracks the position of the patient’s joints and compares his or her range of movement against prescribed goals that the therapist has entered into the system. At the end of the game, the patient receives a score that allows both patient and therapist to measure therapeutic progress accurately.
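The scoring idea can be illustrated with a small sketch—a hypothetical simplification, not Kinapsys code—that compares the peak joint angle of each repetition against the range the therapist prescribed:

```python
def session_score(peak_angles, goal_min, goal_max):
    """Fraction of exercise repetitions whose peak joint angle (degrees)
    fell within the therapist's prescribed range."""
    if not peak_angles:
        return 0.0
    hits = sum(1 for a in peak_angles if goal_min <= a <= goal_max)
    return hits / len(peak_angles)
```

A score trending upward across sessions gives both patient and therapist an objective measure of recovery, which is exactly what the joint-tracking data makes possible.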
While the games are at the heart of the therapy, Kinapsys offers other interactive modalities that ensure that patients perform the exercises correctly. Here, too, Kinect-enabled interaction is an essential component. In every case, patients see their own image on screen, with their tracked joints superimposed. Depending on the therapist’s choice, patients might see a mirror image of themselves, which allows them to practice the moves and receive immediate feedback on their performance. Alternatively, they might see an avatar that provides feedback on the speed and rhythm of their exercise movements, or they might interact directly with a game interface.
Programmed with more than 400 specific exercises, Kinapsys allows the therapist to create a regimen customized for each patient at every stage of their therapy—from the earliest stages of rehabilitation through the reinforcement of reacquired skills. The system provides exercises and games that facilitate such goals as improved joint movement, muscle toning, gesture reprogramming, flexibility, and cardiovascular fitness. It also features programs that address the specific PT needs of patients with back problems and neurological damage from strokes. In addition, Kinapsys provides group therapy modules, and the system can be purchased in a mobile configuration, complete with a cart that lets the physical therapist transport Kinapsys to assisted living facilities, community centers, or anywhere that it’s needed.
But what really sets Kinapsys apart from traditional physical therapy is the Kinect sensor’s ability to track body movements precisely. This enables the system to measure and chart patient progress with far greater accuracy than ever, and allows the therapist to modify the regimen for maximum patient benefit.
In his novel Around the World in Eighty Days, Jules Verne celebrated the technological marvels of the nineteenth century—the railways, steamships, the Suez Canal—that made it possible to circumnavigate the globe in less than three months. This spring and summer, the Museum of Communication in Frankfurt, Germany, is honoring Verne’s classic tale in an exhibit entitled “Around the World in 80 Objects: the Jules Verne Code.” Among the objects capturing a lot of attention is a Kinect-enabled interactive map of the world, which allows museum goers to use intuitive gestures to pan, zoom, and switch between different maps and map views as they take their own trip around the globe.
Created by Aleksandr Shirokov—a graduate student at the Institute for Geoinformatics, University of Münster—the interactive digital map is displayed on a screen measuring 1.75 meters by 2.10 meters (roughly 6 feet by 7 feet). Museum visitors can pan the maps by moving one hand. They can zoom in and out by using the pinch gestures familiar to smartphone users—except that they use both hands rather than their thumb and index finger. They can also use a unique gesture—passing one hand behind an ear—to switch perspective and to view different map types.
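The two-handed pinch works on the same principle as its touchscreen cousin: the change in distance between the tracked hands drives the zoom. Here’s a minimal sketch of that mapping—an illustration of the idea, not the exhibit’s actual code:

```python
import math

def zoom_factor(prev_left, prev_right, left, right):
    """Map the change in distance between the two tracked hand
    positions (x, y) to a zoom factor: >1 zooms in (hands moving
    apart), <1 zooms out (hands moving together)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    before = dist(prev_left, prev_right)
    after = dist(left, right)
    return after / before if before > 0 else 1.0
```

Applied frame by frame to the Kinect’s hand-joint positions, this yields the smooth, continuous zooming that visitors find so natural.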
Museum goers can circle the world, like Phileas Fogg, the hero of Verne’s novel, but in seconds rather than weeks. Or they can focus on a particular location, even to the point of zooming in on their own residence—just by using simple hand gestures that Kinect for Windows captures and interprets. Users are captivated by the map’s wealth of information and by the simplicity of navigating its many views. “That was amazing,” enthused one recent museum visitor. “I never really thought about how easy and natural it would be to control something with my hands.”
The first iteration of the interactive map used the original Kinect sensor to capture users’ gestures. After the Kinect v2 sensor became available, the developers were eager to take advantage of its higher resolution camera and more detailed body tracking. These enhancements allowed far more precise recognition of users’ gestures, enabling users to control the map more naturally and smoothly. “We don’t even need to explain to people what to do. They naturally start scrolling, changing, and zooming the map,” notes a museum employee. And since the latest Kinect sensor can “see” even in a completely dark room, the new version of the app can accurately track users’ body joints—and hence their gestures—even if the lights are dim.
After Frankfurt, the exhibit is scheduled to be displayed at the Museum of Communication in Berlin, where its interactive map should engage and delight a whole new group of virtual globetrotters. Meanwhile, developer Shirokov plans to use Kinect for Windows’ ability to detect heart rate and facial expressions to measure users’ emotional response to the interactive experience and then enhance the application accordingly.
As we promised earlier this week, Kinect for Windows information for developers has migrated. You’ll now find it in the Windows Dev Center, where it’s in some pretty snazzy company. (Hello, Cortana. Hi there, Windows 10 and Universal Windows Apps.)
Parlez-vous français? Sprechen Sie Deutsch? 한국어를 말 하나요? We’ve got you covered, since our Dev Center pages are now available in 23 languages. So cruise on over to our new home and pay us a visit. You’ll find information about everything Kinect for Windows, from hardware specs and feature descriptions to information on creating innovative apps that let users interact naturally with computers and computing devices. And you won’t need a bilingual dictionary to read it!