• Kinect for Windows Product Blog

    Kinect helps detect PTSD in combat soldiers


    “War is hell.” So proclaimed U.S. General William Tecumseh Sherman, whose experiences during the American Civil War surely made him an expert on the subject. But for some soldiers and veterans, the hell doesn’t stop when combat ends. It lingers on as post-traumatic stress disorder (PTSD), a condition that can make it all but impossible to lead a normal daily life.

    According to the U.S. Department of Veterans Affairs, PTSD affects 11 to 20 percent of veterans who have served in the most recent conflicts in Afghanistan and Iraq. It’s no wonder, then, that DARPA (the Defense Advanced Research Projects Agency, part of the U.S. Department of Defense) wants to detect signs of PTSD in soldiers in order to provide treatment as soon as possible.

    One promising DARPA-funded PTSD project that has garnered substantial attention is SimSensei, a system that can detect the symptoms of PTSD while soldiers speak with a computer-generated “virtual human.” SimSensei is based on the premise that a person’s nonverbal communications—things like facial expressions, posture, gestures and speech patterns (as opposed to speech content)—are as important as what he or she says verbally in revealing signs of anxiety, stress and depression.

    The Kinect sensor plays a prominent role in SimSensei by tracking the soldier’s body and posture. So, when the on-screen virtual human (researchers have named her Ellie, by the way) asks the soldier how he is feeling, the Kinect sensor tracks his overall movement and changes in posture during his reply. These nonverbal signs can reveal stress and anxiety, even if the soldier’s verbal response is “I feel fine.”

    SimSensei interviews take place in a small, private room, with the subject sitting opposite the computer monitor. The Kinect sensor and other tracking devices are carefully arranged to capture all the nonverbal input. Ellie, who has been programmed with a friendly, nonjudgmental persona, asks questions in a quiet, even-tempered voice. The interview begins with fairly routine, nonthreatening queries, such as “Where are you from?” and then proceeds to more existential questions, like “When was the last time you were really happy?” Replies yield a host of verbal and nonverbal data, all of which is processed algorithmically to determine if the subject is showing the anxiety, stress and flat affect that can be signs of PTSD. If the system picks up such signals, Ellie has been programmed to ask follow-up questions that help determine if the subject needs to be seen by a human therapist.

    The algorithms that underlie SimSensei’s diagnostic abilities derive from MultiSense, an ambitious project of the Institute for Creative Technologies (ICT) at the University of Southern California. Created under the leadership of computer science researcher Louis-Philippe Morency, MultiSense aims to capture and model human nonverbal behaviors. Morency and psychologist Albert Rizzo, ICT’s director of medical virtual reality, then led a two-year, multidisciplinary effort that resulted in SimSensei. Rizzo points out that SimSensei’s value as an assessment tool is immensely important to the military, as there are too few trained human evaluators to screen all the troops and veterans who are at risk of PTSD.

    Giota Stratou, one of ICT’s key programmers of SimSensei, provided details on the role of the Kinect sensor. “We used the original Kinect sensor and SDKs 1.6 and 1.7, particularly to track the points and angles of rotation of skeletal joints, from which we constructed skeleton-based features for nonverbal behavior. We included in our analysis features encoded from the skeleton, focusing on head movement, hand movement and position, and we studied overall value by integrating [them] in our distress predictor models.”
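
    To make the idea concrete, here is a minimal sketch of skeleton-based features of the kind Stratou describes, assuming per-frame joint positions are available as (x, y, z) coordinates; the joint names, windowing and feature choices are illustrative assumptions, not SimSensei's actual code.

        # Sketch: skeleton-based nonverbal features (illustrative; not SimSensei's code).
        # Assumes `frames` is a list of dicts mapping joint names to (x, y, z) tuples,
        # captured by a Kinect body-tracking pipeline at roughly 30 fps.
        import numpy as np

        def movement_energy(frames, joint):
            """Mean frame-to-frame displacement of one joint (a rough activity measure)."""
            pts = np.array([f[joint] for f in frames])            # shape: (n_frames, 3)
            return np.linalg.norm(np.diff(pts, axis=0), axis=1).mean()

        def session_features(frames):
            """Aggregate a few per-interview features like those described above."""
            return {
                "head_movement": movement_energy(frames, "Head"),
                "hand_movement": (movement_energy(frames, "HandLeft") +
                                  movement_energy(frames, "HandRight")) / 2,
                # average hand height relative to the mid-spine as a crude posture cue
                "hand_height": np.mean([f["HandRight"][1] - f["SpineMid"][1]
                                        for f in frames]),
            }

    Features like these, pooled over an interview, would then be combined with facial and vocal cues in a distress predictor.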

    Although the current version of SimSensei uses the original Kinect sensor, Stratou and her colleagues plan to employ the Kinect for Xbox One sensor and the Kinect for Windows SDK 2.0 in the next version, eager to take advantage of the latest sensor’s enhanced body and facial tracking capabilities. Meanwhile, the first version of SimSensei has been tested with a small sample of soldiers in the field under funding from DARPA, and in a treatment study with survivors of sexual trauma in the military, funded by the Department of Defense. The results thus far have been promising, leading us to hope that someday soon we will see SimSensei widely used in both military and civilian settings, where its abilities to decode nonverbal communication could detect untreated psychological distress.

    The Kinect for Windows Team


    Kinect-enabled PC game scores big in 2015 Imagine Cup


    Team TerraBite: the name evokes mixed metaphors of earth (terra), predation (bite), and technology (terabyte). And it’s the perfect moniker for the squad from Abertay University, whose apocalyptically themed, Kinect-enabled PC game, Project Cyber, made it all the way to the World Semifinals of the 2015 Imagine Cup. On the way, they garnered accolades for the game’s look and feel, its clever narrative, and its creative use of the latest Kinect sensor.

    Project Cyber is a Kinect v2 experience built for the PC. The player takes on the role of a “white hat” hacker, who must go deeper and deeper into cyberspace to save the world from destruction. It features an episodic narrative; turn-based, first-person gameplay; futuristic 3D environments; and evolving, dynamic audio. Players progress through various levels, each of which is built like a puzzle that can be solved by observing enemy movements and by using programs that are earned during gameplay. One such program uninstalls an enemy, for example, while another creates new tiles that can bridge gaps in the geometric world of cyberspace. The game also features a companion app, which enables users to create their own levels and offers game-related lore and tips.

    Although the game can be controlled with a mouse and keyboard, it was designed for the latest Kinect sensor, and it’s at its best when played via Kinect gesture controls. One of the team’s primary objectives was to use Kinect to make a game that can be played while relaxing—physically, if not mentally. Thus they avoided overly complicated, extensive gestures or maneuvers, and instead opted for those that can be executed while seated.

    The team also wanted to give players the sense that they are protagonists in a sci-fi film, moving their hands through the air to navigate cyberspace. As a result, they made the most of the hand-tracking ability of the v2 sensor and SDK 2.0. By moving their hands in front of the sensor, players advance through a cyberspace composed of hexagonal tiles. Since the game is laid out in a hexagonal grid, turns are always at a set distance—turning your hand to the left, for example, lets you look at the tile one to the left.
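
    A simple way to implement that kind of fixed-step turning is to keep the player's facing as one of six hexagonal directions and step it by one on each hand turn; the sketch below is only a guess at the idea, not Team TerraBite's code.

        # Sketch: fixed-step turning and movement on a hex grid (illustrative only).
        # Axial hex coordinates; the six neighbor offsets, listed clockwise.
        HEX_DIRECTIONS = [(+1, 0), (+1, -1), (0, -1), (-1, 0), (-1, +1), (0, +1)]

        def turn(facing_index, hand_turn):
            """hand_turn is -1 (hand rotated left), 0, or +1 (hand rotated right)."""
            return (facing_index + hand_turn) % 6

        def tile_ahead(tile, facing_index):
            """The tile one step ahead in the current facing direction."""
            dq, dr = HEX_DIRECTIONS[facing_index]
            return (tile[0] + dq, tile[1] + dr)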

    SDK 2.0 helped the team get a prototype running quickly. The tools and samples in the SDK gave them a good understanding of what the latest Kinect sensor can do, while the Kinect Studio and Visual Gesture Builder utilities enabled them to create the custom gestures and motions that control gameplay. They took advantage of Kinect’s confidence ratings for gestures, using them to determine how accurately a player must execute a gesture to produce the desired gameplay effect. The team also made good use of the Kinect Unity wrapper, which allowed them to see results within the game immediately.
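
    In practice, gating actions on those confidence values can be as simple as the sketch below; the gesture names and thresholds are hypothetical, not the values Project Cyber uses.

        # Sketch: gating gameplay actions on gesture-detector confidence
        # (hypothetical gesture names and thresholds).
        THRESHOLDS = {
            "UninstallEnemy": 0.8,   # destructive action: demand a confident detection
            "PlaceTile": 0.6,        # frequent, low-risk action: be more forgiving
        }

        def handle_gesture(name, confidence):
            """Trigger a game action only when detector confidence clears the bar."""
            threshold = THRESHOLDS.get(name)
            if threshold is not None and confidence >= threshold:
                print(f"trigger {name} (confidence {confidence:.2f})")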

    Team TerraBite’s road to the Imagine Cup Semifinals began with their winning the gaming category in the People’s Choice awards sponsored by Stuff magazine, a UK publication devoted to technology gadgets and gear. That prize put them in the UK National Finals of the Imagine Cup, where they won the Games category—and where they met and networked with many other talented competitors.

    Core members of Team TerraBite, from left to right: Gerda Gutmane, David Hudson, Maximilian Mantz
    and Anthony Devlin. Associate members of the team, not shown, include Andrew Ewen, Dale Smith,
    Lewis Doran, Scott Brown, Tom Marcham and Yana Hristova.

    Then it was off to the Imagine Cup World Semifinals, competing against some of the best student talent from around the globe. Although Team TerraBite’s winning streak ended there, they received valuable feedback on their game and priceless experience in pitching their ideas to a wide audience—skills that will no doubt come in handy should they decide to refine their game and market it. So keep an eye out for the Kinect-enabled Project Cyber—and your chance to save the world, one gesture at a time.

    The Kinect for Windows Team


    Kinect for Windows clicks at Eyeo


    We recently attended the Eyeo Festival in Minneapolis, Minnesota, and what an eye-opening experience it was. For those who aren’t familiar with Eyeo, it’s a creative coding conference that brings together coders, data lovers, artists, designers and game developers. It features one-of-a-kind workshops, theater-style sessions and interaction with open-source activists and pioneers.

    This year marked the fifth anniversary of the Eyeo Festival, and the Kinect for Windows team was pleased to be among the workshop presenters. Our workshop, “Incorporating Kinect v2 into Creative Coding Experiences,” drew 35 participants—none of whom had worked with the latest Kinect sensor. In fact, only four of them had worked with the original Kinect sensor, so the workshop offered a great opportunity to acquaint these developers with the creative potential of Kinect.

    The event provided resources, links and training, preparing the participants to incorporate Kinect into their future endeavors. As part of their workshop fee, each received a Kinect for Xbox One sensor, a Kinect Adapter for Windows and a Windows To Go drive. As the workshop progressed, the participants got deeply engaged in the Kinect technology—experimenting, asking questions and writing playful scripts.

    Participants in the Kinect v2 workshop got down to some serious creative coding.

    Even more Eyeo attendees learned about Kinect for Windows and related technologies from James George, Jesse Kriss and Béatrice Lartigue, three of the festival’s featured speakers. George, one of the most influential artists in the coding community, spoke about the photography of the future and how he has used the Kinect sensor’s depth and color cameras to create images that can be explored in three dimensions. Kriss, who designs and builds tools for artists and scientists, discussed his work on introducing Microsoft HoloLens to the scientists in the Human Interfaces Group at NASA’s Jet Propulsion Lab. Lartigue, a new media artist and designer, talked about Les métamorphoses de Mr. Kali, her Kinect installation at London’s Barbican Centre, describing how she used Kinect for Windows to reduce the space between us and our environment in her interactive works. These compelling speakers left the audience eager to experiment with Kinect for Windows—and anxious for the release of Microsoft HoloLens.

    Eyeo provided a great opportunity to reach the creative coding community, showing them how Kinect for Windows can be a potent tool in their work.

    The Kinect for Windows Team


    Salon gives devs tips and tricks for Kinect for Windows


    On the evening of May 29, 2015, nearly 100 top developers and investors gathered in Beijing for Microsoft China’s Second Kinect for Windows Salon. This five-hour event presented advanced techniques for working with the latest Kinect sensor and the Kinect for Windows SDK 2.0, and helped attendees tackle complex coding problems. The gathering included developers from Samsung and other companies, those who work independently, and students from such prestigious schools as Beihang University (BUAA) and Tsinghua University, along with investors looking for projects or companies to support.

    Nearly 100 devs gathered to learn about using the latest features of Kinect for Windows.

    Attendees were especially excited by a demo of the RoomAlive toolkit, which had made its public debut only weeks earlier at Build 2015. Developers spoke enthusiastically about the potential to create immersive augmented reality experiences by using this open-source SDK, which will enable them to calibrate a network of multiple Kinect sensors and video projectors.

    A session on 3D Builder also generated intense interest, thanks in large part to the presence of 3D Impression Technology, a 3D printing studio whose representatives answered questions about using 3D Builder, creating 3D scans with the Kinect sensor, and printing out 3D models. Attendees even went home with keychains featuring freshly printed 3D models of the latest Kinect sensor.

    Scans of the latest Kinect sensor (top) were used to print 3D models that were attached to souvenir keychains.

    Perhaps the most enriching aspect of the salon was the opportunity to develop new relationships. With such a broad cross-section of the Kinect for Windows developer community in attendance, the salon provided fertile ground for networking, cooperative learning, and recruiting (much to the delight of the students). It also provided investors with leads on projects and companies worth funding. We look forward to seeing some creative Kinect for Windows applications as a result of these relationships.

    The Kinect for Windows Team


    Kinect helps apply color to complex surfaces


    We’ve talked about making 3D models from data captured by the Kinect sensor—we’ve even pointed you to a free Windows Store app that lets you do it easily. But one drawback of most home 3D printers is that their output is monochromatic. So that nifty 3D model of your dad that you wanted to give him on Father’s Day will have to be hand painted—unless you want your father memorialized as a pasty grey figurine.

    This video demonstrates the computational hydrographic printing process developed by researchers at Zhejiang University and Columbia University.

    Now, some ingenious researchers at Zhejiang University and Columbia University have come up with a relatively inexpensive way to apply color to your home-made 3D models, even when the model includes complex surface textures. And in a nice cost-effective coincidence, a key piece of their system is the very Kinect sensor that you can use to take the original 3D scans.

    Setup includes a "gripper" mechanism (shown here holding a 3D mask), a water basin to hold the hydrographic color film, and a Kinect sensor to enable precise registration of the colors to the model's surface.

    The researchers’ system employs hydrographic printing, a technique for transferring color inks to object surfaces. The setup includes a basin for holding the water and color film that are the heart of the hydrographic printing process, a mechanism for gripping and dipping the 3D model, and a Kinect sensor to measure the object's location and orientation. The researchers use Kinect Fusion to reconstruct a point cloud of the 3D model; they then run this data through an algorithm they’ve devised to provide precise registration of the colors to the object’s surface shapes and textures. The results, as seen in the video above, are nothing short of amazing.
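
    The researchers' registration pipeline is considerably more sophisticated, but the core idea of aligning the captured point cloud with the reference model can be illustrated with a standard rigid alignment over corresponding points (the Kabsch/SVD method), sketched below as an assumption rather than their actual algorithm.

        # Sketch: rigid alignment of a captured point cloud to a reference model
        # via the standard Kabsch/SVD method on corresponding points (illustrative).
        import numpy as np

        def rigid_align(source, target):
            """Return rotation R and translation t minimizing ||R @ p + t - q||
            over corresponding rows of the (N, 3) arrays `source` and `target`."""
            src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
            H = (source - src_c).T @ (target - tgt_c)        # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = tgt_c - R @ src_c
            return R, t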

    The Kinect for Windows Team


    Detecting heart rate with Kinect


    When the latest Kinect sensor was unveiled more than a year ago at Build 2014, demos showed how it could determine a user’s heart rate without attaching sensors or wires to his or her body. But that was old news to regular followers of D Goins Insperience, the personal blog of Dwight Goins, a Microsoft Kinect for Windows MVP and founder of Dwight Goins Inc. As Goins revealed in February 2014, he had already devised his own application for detecting a person’s heart rate with the preview version of the latest Kinect sensor.  

    Goins’ app, which he has subsequently refined, takes advantage of three of the latest sensor’s key features: its time-of-flight infrared data stream, its high-definition-camera color data stream, and face tracking. The infrared stream returns an array of infrared (IR) intensities from 0 to 65,535, the color stream returns RGB data pixels, and the face tracking provides real-time location and positioning of a person’s face. He thus knew how to capture a facial image, measure its infrared intensity, and gauge the RGB color brightness level in every pixel. The following video shows Goins' Kinect v2 heart rate detector in action.

    Goins’ app uses a blind source separation algorithm on the four sources of light—RGB (red, green, and blue) and IR—to obtain an estimated separation of components that contain a hidden frequency: the blood pulse signal. These color data streams from the Kinect sensor enable Goins’ app to calculate the changes in color brightness at 30 frames per second. And since the amount of color intensity that the face radiates changes when the heart contracts—as more arterial blood is pushed through the facial capillaries—the IR and RGB values change slightly over time as the heart contracts and relaxes. The frequency of these changes corresponds to the frequency of cardiac contractions—in other words, the pulse. The pulse is then calculated mathematically by separating the pulse frequency from other noise and color signals in the face, providing the user with a heart-rate estimate.
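
    As a rough illustration of that general approach (not Goins' implementation), the sketch below runs an off-the-shelf independent component analysis on per-frame averages of the R, G, B, and IR channels over the face region, then picks the strongest frequency in a plausible heart-rate band.

        # Sketch: blind source separation on face-region color/IR averages, then pick
        # the dominant in-band frequency. Illustrative only; parameters are assumptions.
        import numpy as np
        from sklearn.decomposition import FastICA

        FPS = 30.0  # Kinect v2 color/IR frame rate

        def estimate_bpm(signals):
            """signals: (n_frames, 4) array of per-frame mean R, G, B, IR over the face."""
            centered = signals - signals.mean(axis=0)
            components = FastICA(n_components=4, random_state=0).fit_transform(centered)

            freqs = np.fft.rfftfreq(len(components), d=1.0 / FPS)
            in_band = (freqs > 0.7) & (freqs < 3.0)          # roughly 42-180 beats/minute
            best_bpm, best_power = None, 0.0
            for k in range(components.shape[1]):
                spectrum = np.abs(np.fft.rfft(components[:, k])) ** 2
                i = int(np.argmax(spectrum * in_band))       # strongest in-band peak
                if in_band[i] and spectrum[i] > best_power:
                    best_power, best_bpm = spectrum[i], freqs[i] * 60.0
            return best_bpm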

    Want to try it out? Take a look at Goins’ community-release code.

    The Kinect for Windows Team


    Hand tracking with Kinect


    The latest Kinect sensor provides major enhancements in body tracking, including the ability to recognize more hand and finger poses. But as useful as this level of hand tracking is, it doesn’t begin to cover the enormous variety of hand motions we use every day.

    Imagine the possibilities if we could accurately track and interpret all the hand and finger gestures that are part of our nonverbal communications. Such precise hand tracking could lead to a new level of experience in VR gaming and open up almost limitless possibilities for controlling TVs, computers, robots—just about any device—with a flick of the hand or a crook of a finger. Not to mention the potential for understanding the “flying hands” of sign language speakers, a capability that could facilitate communications between people who are deaf or hard of hearing and the broader community.

    Researchers at Microsoft Research Cambridge are hard at work on perfecting just such precise hand-tracking technology, as their Handpose prototype demonstrates. By using data that is captured by the depth camera in the latest Kinect sensor, the Handpose team has devised algorithms that enable the system to reconstruct complex hand poses accurately. In the event that the camera misses a slight movement, the algorithm quickly and smoothly fills in the missing segment. And unlike past approaches, which focused on understanding front-facing, close-up gestures, Handpose offers the flexibility to track hands at distances of up to several meters, and it doesn’t require the user to wear cumbersome sensors or special gloves.
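
    Microsoft has not published the details of how Handpose bridges those missed frames; purely as an illustration of gap-filling, the sketch below linearly interpolates a tracked joint across frames where the detector returned nothing.

        # Sketch: filling short tracking gaps by linear interpolation (illustrative;
        # not Handpose's method). `track` is a list of per-frame (x, y, z) positions,
        # with None where the frame was missed.
        def fill_gaps(track):
            filled, last_good = list(track), None
            for i, p in enumerate(filled):
                if p is not None:
                    last_good = i
                    continue
                nxt = next((j for j in range(i + 1, len(filled))
                            if filled[j] is not None), None)
                if last_good is not None and nxt is not None:
                    a, b = filled[last_good], filled[nxt]
                    t = (i - last_good) / (nxt - last_good)
                    filled[i] = tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
                elif last_good is not None:
                    filled[i] = filled[last_good]   # hold the last known position
                elif nxt is not None:
                    filled[i] = filled[nxt]         # backfill before the first detection
            return filled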

    While it’s still in the prototype stage, Handpose demonstrates the feasibility of using the latest Kinect sensor to capture the incredibly complex and varied gestures that comprise our nonverbal communications. You can read more about the Handpose project at Next at Microsoft.

    The Kinect for Windows Team


    Kinect app for physical therapy put to the test


    A year ago, during the 2014 BUILD conference, we profiled Reflexion Health, a San Diego-based software startup that had developed a promising Kinect for Windows application to track physical therapy patients’ exercise sessions. We’re now pleased to report that this application, called Vera, is being piloted by five medical centers, including Brooks Rehabilitation, one of Florida’s leading providers of rehab services.

    Vera takes full advantage of Kinect’s depth sensing and body tracking capabilities, both of which were significantly enhanced in the latest Kinect sensor, using them to capture a patient’s exercise moves in precise detail. It provides patients with a model for how to perform the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. Vera thus offers patients immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit. The system even allows the patient and physical therapist to interact in real time through live video conferencing.
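
    Reflexion Health hasn't published Vera's comparison logic, but the basic idea of checking a tracked movement against a prescribed one can be sketched as a joint-angle test; the joint names, exercise and tolerances below are assumptions.

        # Sketch: comparing a tracked knee angle to a prescribed range (illustrative;
        # not Reflexion Health's implementation).
        import numpy as np

        def joint_angle(a, b, c):
            """Angle at joint b, in degrees, given three (x, y, z) joint positions."""
            v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def squat_feedback(hip, knee, ankle, target=(80, 100)):
            """Real-time cue if the knee angle leaves the prescribed range."""
            angle = joint_angle(hip, knee, ankle)
            if angle < target[0]:
                return "Too deep -- rise slightly"
            if angle > target[1]:
                return "Bend your knee a little more"
            return "Good form"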

    "One of the features I've been most impressed with is the system's ability to capture subtle deviations from optimal form. If a patient is trying to move their hip to the right but they twist their hip, the system will provide feedback on the screen telling them how to perform the exercise correctly. It picks up the nuances that are really important to a physical therapist," says Drew Kayser, PT, clinical orthopedic therapy coordinator at Brooks Rehabilitation in Jacksonville, Florida.

    Another enormous benefit of the Vera application is that it keeps patients on track with their rehab. As any physical therapist will tell you, once patients leave the clinic, many of them lose momentum, often struggling to perform their exercises correctly at home—or simply skipping them altogether. Vera gives patients detailed instructions and crucial feedback—and even counts their reps. Because Vera reports patient progress to the therapist, patients are less likely to cheat on the prescribed regimen.

    "It's been shown time and time again that adherence to a home exercise program generates better results, but only about 25% of patients adhere to their exercise program. If they have to log in, I can tell how much they are exercising and how accurate they are with their specific exercises. It provides a level of engagement and accountability that will ultimately benefit them in the long run," explains Kayser.

    Brooks Rehabilitation is testing Vera with patients who’ve recently had a knee or hip replacement and meet criteria to use the at-home system. They begin working with a therapist who sets up Vera in their home for the duration of their rehab. Feedback from Brooks will allow Reflexion to make additional, clinically based enhancements to the system—a boon to all future rehab patients.

    The Kinect for Windows Team


    Can Kinect improve customer relationship management?


    Kevin Hughes thinks it can—and we agree. A technical solution professional on the Microsoft Dynamics team in the UK, Hughes has prototyped a Kinect-enabled interface for Dynamics CRM. As Hughes demonstrates in the video below, this interface provides an entirely new way of visualizing and interacting with your customer relationship data in Dynamics CRM.

    Hughes has rendered the customer data trees as three-dimensional structures, making relationships among customers strikingly clear. But that’s just the beginning. A user can then easily manipulate the data with simple, intuitive hand gestures to rotate the tree and zoom in and out on any branch of data. And on the drawing board is another set of gestures that would allow the user to access detailed data on a given customer and render it within the 3D model.         

    As Hughes explains, the dynamic, Kinect-enabled interface makes it easier to model CRM data in different ways and promotes group collaboration. Thus, it could really improve meeting efficiency, particularly if linked to the upcoming Microsoft Surface Hub with its large pen-and-touch display. Imagine the impact of displaying this data in 3D on a gigantic interactive and writable screen, exploring relationships and brainstorming about new customer services and opportunities. Everyone in the room could participate—as could those in remote locations who are connected via Skype for Business—highlighting points of interest exposed by the 3D model and even annotating them with the Surface Hub pen.

    These two screen shots demonstrate how the Kinect application might allow users to drill down through customer data.

    The model relies on the body tracking capabilities of the latest Kinect sensor, so that the user can “peek” at the data simply by moving his or her head. By using two hands, users can rotate the three-dimensional database tree, promoting further exploration. They then can go deeper into the data by pointing to select and making a fist to grab particular customer data fields.
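
    Hughes hasn't shared the interface code, but one plausible way to turn two tracked hands into a rotation of the data tree is to track the orientation of the line between them, as sketched below with hypothetical names.

        # Sketch: mapping two tracked hands to a yaw rotation of a 3D model
        # (hypothetical; not the Dynamics CRM prototype's code).
        import math

        def hands_to_yaw(left_hand, right_hand):
            """Yaw (degrees) of the line between the hands, measured in the floor plane."""
            dx = right_hand[0] - left_hand[0]       # left/right axis
            dz = right_hand[2] - left_hand[2]       # toward/away from the sensor
            return math.degrees(math.atan2(dz, dx))

        def update_rotation(model_yaw, prev_hands, current_hands):
            """Apply the change in hand orientation to the model's rotation."""
            return model_yaw + hands_to_yaw(*current_hands) - hands_to_yaw(*prev_hands)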

    Hughes notes the ease of working with the latest Kinect sensor and its SDK. “Kinect for Windows enables developers to drop NUI into their applications with minimal development,” he says. “The SDK handles all of the complex body tracking with minimal latency, allowing the developer to focus on creating an experience which excites their users.”

    The Kinect for Windows Team


    Kinect-enabled VR puts users in space with Earthlight


    The following blog was guest-authored by Russell Grain, a development lead at Opaque Multimedia, a Melbourne (Australia)-based digital design studio specializing in the application of video game technologies in novel domains.

    Earthlight is a first-person exploration game where the players step into the shoes of an astronaut on the International Space Station (ISS). There, some 431 kilometers (about 268 miles) above the Earth, they look down on our planet from the comfort of their own spacesuit. Featuring the most realistic depiction yet of the ISS in an interactive virtual reality (VR) setting, Earthlight demonstrates the limits of what is visually achievable in consumer-oriented VR experiences.

    Opaque Multimedia’s Earthlight game enables players to explore the International Space Station in an interactive VR setting, thanks to the latest Kinect sensor.

    Our team at Opaque Multimedia developed Earthlight as a technical demo for our Kinect 4 Unreal plug-in, which exposes all the functionality of the latest Kinect sensor in Unreal Engine 4. Our goal was to create something visceral that demonstrated the power of Kinect as an input device—to show that Kinect could enable an experience that couldn’t be achieved with anything else.

    Players explore the ISS from a truly first-person perspective, in which the movement of their head translates directly into the viewpoint of a space suit-clad astronaut. To complete this experience, players interact with the environment entirely through a Kinect 4 Unreal powered avateering solution, pushing and pulling themselves along the surface of the ISS as they navigate a network of handles and scaffolds to reach the top of the communications array.
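
    Opaque Multimedia hasn't detailed the locomotion code, but the essence of pulling yourself along a handle can be sketched as moving the player opposite to the gripping hand's displacement while the grip is held; everything below is a hypothetical illustration.

        # Sketch: "pull yourself along" locomotion while a handle is gripped
        # (hypothetical; not the Earthlight implementation).
        def update_player_position(player_pos, hand_pos, prev_hand_pos, gripping):
            """While gripping, the handle stays fixed in the world, so the player moves
            by the opposite of the hand's displacement in body space."""
            if not gripping or prev_hand_pos is None:
                return player_pos
            return tuple(p - (h - ph)
                         for p, h, ph in zip(player_pos, hand_pos, prev_hand_pos))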

    Everyone behaves somewhat differently when presented with the Earth hanging below them. Some race straight to the top of the ISS, wanting to propel themselves to their goal as fast as possible. Others are taken with the details of the ISS’s machinery, and some simply relax and stare at the Earth. On average, players take about four minutes to ascend to the top of the station’s communications array.

    By using Kinect, Earthlight enables players to explore the ISS without disruptions to the immersive VR experience that a keyboard, mouse, or gamepad interface would create.

    As well as being a fantastic tool for building immersion in a virtual game world, Kinect is uniquely positioned to help solve some user interface challenges unique to the VR experience: you can’t see a keyboard, mouse, or gamepad while wearing any current generation virtual reality device. By using Kinect, not only can we overcome these issues, we also increase the depth of the experience.

    The enhanced experience offers a compelling new use case for the fantastic body-tracking capabilities of the Kinect for Windows v2 sensor and SDK 2.0: to provide natural and intuitive input to virtual reality games. The latest sensor’s huge increase in fidelity makes it possible to track the precise movement of the arms. Moreover, the Kinect-enabled interface is so intuitive that, despite the lack of haptic feedback, users still adopt the unique gait and arm movements of a weightless astronaut. They are so immersed in the experience that they seem to forget all about the existence of gravity.

    Earthlight has enjoyed a fantastic reception everywhere it’s been shown—from the initial demonstrations at GDC 2015, to the Microsoft Build conference, the Silicon Valley Virtual Reality meetup, and the recent appearance at We.Speak.Code. At each event, there was barely any reprieve from the constant lines of people waiting to try out the experience.

    We estimate more than 2,000 people have experienced Earthlight, and we’ve been thrilled with their reactions. When asked afterwards what they thought of Earthlight, the almost universal response was “amazing.” We look forward to engendering further amazement as we push VR boundaries with Kinect 4 Unreal.

    Russell Grain, Kinect 4 Unreal Lead, Opaque Multimedia
