Kinect for Windows Product Blog

    RoomAlive Toolkit unveiled at Build 2015


    As highlighted during the Build 2015 Conference, Microsoft is more committed than ever to delivering innovative software, services, and devices that are changing the way people use technology and opening up new scenarios for developers. Perhaps no software reflects that commitment better than the RoomAlive Toolkit, whose release was announced Thursday, April 30, in a Build 2015 talk. The toolkit is now available for download on GitHub.

    The RoomAlive Toolkit enables developers to network one or more Kinect sensors to one or more projectors and, by so doing, to project interactive experiences across the surfaces of an entire room. The toolkit provides everything needed to do interactive projection mapping, which enables an entirely new level of engagement, in which interactive content can come to virtual life on the walls, the floor, and the furniture. Imagine turning a living room into a holodeck or a factory floor—the RoomAlive Toolkit makes such scenarios possible.

    This video shows the RoomAlive Toolkit calibration process in action.

    The most basic setup for the toolkit requires one projector linked to one of the latest Kinect sensors. But why limit yourself to just one each? Experiences become larger and more immersive with the addition of more Kinect sensors and projectors, and the RoomAlive Toolkit provides what you need to get everything set up and calibrated.

    While the most obvious use for the RoomAlive Toolkit is the creation of enhanced gaming experiences, its almost magical capabilities could be a game-changer in retail displays, art installations, and educational applications. The toolkit derives from the IllumiRoom and RoomAlive projects developed by Microsoft Research.

    Over the next several weeks, we will be releasing demo videos that show developers how to calibrate the data from multiple Kinect sensors and how to use the software and tools to create their own projection mapping scenarios. In the meantime, you can get a sense of the creative potential of the RoomAlive Toolkit in the video on Wobble, which shows how a room’s objects can be manipulated for special effects, and the 3D Objects video, which shows how virtual objects can be added to the room. Both of these effects are part of the toolkit’s sample app. And please share your feedback, issues, and suggestions over at the project’s home on GitHub.

    The Kinect for Windows Team



    Encounters with Kinect


    Imagine an interactive digital art performance in which the audience becomes the performer, interacting not just with other audience members but also with lights, sounds, and professional dancers. That’s what visitors to SummerSalt, an outdoor arts festival in Melbourne, Australia, experienced. The self-choreographed event came courtesy of Encounters, an installation created by the Microsoft Research Centre for Social Natural User Interfaces (SocialNUI, for short), a joint research facility sponsored by Microsoft, the University of Melbourne, and the State Government of Victoria. Held in a special exhibition area on the grounds of the university’s Victorian College of the Arts, Encounters featured three Kinect for Windows v2 sensors installed overhead.

    The installation ran over four Saturday evenings and was attended by more than 1,200 people. Participants experienced dance performances from Victorian College of the Arts students who interacted with the crowd to create social interactions captured by the Kinect sensors. The results included spectacular visual and audio effects, as the participants came to recognize that their movements and gestures controlled the music and sound effects as well as the light displays on an enormous outdoor screen.

    Kinect sensors captured the crowd’s gestures and movements, using them to control audiovisual effects at Encounters, an investigation into social interactions facilitated by natural user interfaces.

    The arresting public art installation was powered by the three overhead Kinect v2 sensors, whose depth cameras provide the data critical to the special effects. Each sensor tracked a space approximately 4.5 meters by 5 meters (about 14.75 feet by 16.5 feet), from a height of 5 meters (about 16.5 feet). This enabled the sensors to track the movement of up to 15 people at a time on three axes. The sensors' depth cameras detected when people jumped, which was an important interaction mechanism for the installation. Using the depth camera also overcame the problem of varying lighting conditions.

    Feeding the depth-camera data into custom software revealed a surprising amount of information about people moving through the space: as already mentioned, it tracked the location of up to 15 people in three axes (X, Y, Z); in addition, it provided information on each participant’s area, their velocity (speed and direction), the length of time they were present, whether they were jumping, whether they were part of a group (and if so, how many people were in that group), and the overall space’s dispersion, crowdedness, and centroid. The technical team achieved this across three separate spaces and maintained frame rates of approximately 30 frames per second.
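
    As a small illustration of the whole-space metrics mentioned above, here is how a centroid and dispersion might be computed once per-person positions are available. This is a minimal sketch with hypothetical types and field names, not the Encounters code:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Numerics;

    // Hypothetical output of the blob-tracking stage (not the Encounters code).
    public class TrackedPerson
    {
        public Vector2 Position;   // floor-plane position, in meters
        public Vector2 Velocity;   // meters per second
        public float Area;         // approximate footprint area
        public bool IsJumping;
    }

    public static class CrowdMetrics
    {
        // Centroid: the mean position of everyone currently in the space.
        public static Vector2 Centroid(IList<TrackedPerson> people)
        {
            if (people.Count == 0) return Vector2.Zero;
            var sum = Vector2.Zero;
            foreach (var p in people) sum += p.Position;
            return sum / people.Count;
        }

        // Dispersion: mean distance from the centroid, a simple
        // measure of how spread out the crowd is.
        public static float Dispersion(IList<TrackedPerson> people)
        {
            if (people.Count == 0) return 0f;
            var centroid = Centroid(people);
            return people.Average(p => Vector2.Distance(p.Position, centroid));
        }
    }
    ```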

    From a high-level perspective, the end-to-end image-processing pipeline involved four steps (sketched in code below):

    • Receipt of the raw depth pixels from the Kinect sensor
    • Preliminary filtering and then construction of an image from the depth data
    • Application of OpenCV to recognize contours (blobs) that represented a first guess at where people were located
    • Calculation via a series of heuristics to derive all the information mentioned in the preceding paragraph
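
    In Kinect for Windows SDK 2.0 terms, the first two of those steps might look roughly like the following sketch. The threshold values and the OpenCV hand-off are illustrative placeholders, not the Encounters team’s actual code:

    ```csharp
    using Microsoft.Kinect;

    KinectSensor sensor = KinectSensor.GetDefault();
    DepthFrameReader depthReader = sensor.DepthFrameSource.OpenReader();
    sensor.Open();

    depthReader.FrameArrived += (s, e) =>
    {
        using (DepthFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;

            FrameDescription desc = frame.FrameDescription;
            ushort[] depth = new ushort[desc.Width * desc.Height];
            frame.CopyFrameDataToArray(depth);

            // Preliminary filtering: with an overhead sensor, anything sufficiently
            // closer to the camera than the floor is treated as a person.
            // (The distances here are illustrative, not the installation's calibration.)
            const ushort floorDistanceMm = 5000;
            byte[] mask = new byte[depth.Length];
            for (int i = 0; i < depth.Length; i++)
            {
                bool isPerson = depth[i] > 0 && depth[i] < floorDistanceMm - 300;
                mask[i] = isPerson ? (byte)255 : (byte)0;
            }

            // The binary mask would then be handed to OpenCV's contour detection to
            // extract blobs, and the heuristics described above would turn those
            // blobs into per-person and whole-space metrics.
        }
    };
    ```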

    The technical team experimented with the sensor in different configurations, at different heights, in different lighting conditions, with different flooring, with different sizes of people, and using different cameras in order to work this all out.

    “We really enjoyed working with the Kinect sensor,” says John Downs, a research fellow at the Microsoft Research Centre for SocialNUI and the leader of the technical team on Encounters. “The different types of cameras—RGB, infrared, and depth—gave us a lot of flexibility when we designed Encounters. And as we moved through to the development phase, we appreciated the level of control that the SDK provided, especially the flexibility to process the raw camera images in all sorts of interesting ways. We took advantage of this, and to great effect. Additionally, the entire development process was completed in only six weeks, which is a testament to how simple the SDK is to use.”

    The result of all this development creativity was more than just an amazing public art installation—it was also an intriguing social science investigation. Researchers from the SocialNUI Centre conducted qualitative interviews while members of the public interacted with their Kinect-generated effects, probing for insights into the social implications of the experience. As Frank Vetere, director of SocialNUI, explains, “The Centre explores the social aspects of natural user interfaces, so we are interested in the way people form, come together, and explore the public space. And we are interested in the way people might claim and re-orient the public space. This is an important part of starting to take technological developments outside of our lab and reaching out to the public and other groups within the University.”

    This unique, cross-disciplinary collaboration was a wonderful success, delighting not only the NUI developers and researchers, but the public as well.

    The Kinect for Windows Team



    Can Kinect technology improve brain imaging?


    Blurry images: they’re a nuisance when they wreck your holiday photographs or ruin that video of your sister’s wedding. But they’re much more than a nuisance in the world of medical imaging. Blurry medical images can result in costly repeat scans or, worse yet, can lead to misdiagnoses.

    As anyone who’s ever been x-rayed or scanned knows, it’s hard to hold still while a medical image is made. It’s harder yet for patients who suffer from dementia, many of whom cannot control their head movements during PET (positron emission tomography) scans of their brain. Such scans can be invaluable in determining brain function and the effectiveness of treatments, especially since the recent development of PET tracers that can image the characteristic protein plaques and inflammatory processes found in the brains of Alzheimer’s patients.

    PET brain scans reveal beta amyloid and tau protein structures characteristic of Alzheimer’s disease, but head movements can render the scans unusable. Researchers hope to use Kinect for Windows and computer vision software to remedy this problem.

    Now Imanova, a UK-based international imaging center, is looking to solve the problem of blurred PET scans—and the latest Kinect for Windows technology is a big part of their proposed solution. Britain’s Medical Research Council recently awarded Imanova and Imperial College London a grant to integrate the Kinect for Windows v2 sensor into PET scanners, in order to detect patient movement in real time during scans.

    The Kinect sensor will be mounted above the patient's head in the PET scanner.

    The latest Kinect sensor’s state-of-the-art 3D camera does not require special lighting or direct contact with the patient, but it can effectively capture even slight movements, the effects of which can then be removed by applying computer vision algorithms during the reconstruction of the diagnostic image. Coupled with the latest high-resolution medical scanners, the Kinect-enabled system should produce images uncontaminated by movement. Medical researchers will compile clinical data to demonstrate the accuracy and usability of the technology in a real-world imaging environment.

    This project demonstrates how Kinect technology can serve as a low-cost but highly sophisticated tool for medical use. Not only does the Imanova system promise significant benefit to dementia patients, it also has the potential to improve the understanding of a range of neurological conditions.

    The Kinect for Windows Team





    Kinect for Windows shows promise in helping seniors retain cognitive functions


    Declines in cognitive function are among the most debilitating problems that affect senior citizens. Various studies have reported that physical training designed to prevent falls can have a positive effect on cognition in older adults. With that in mind, Japanese researchers at Kyoto University’s medical school employed Kinect for Windows to create Dual-Task Tai Chi, a unique game concept in which elderly participants use various body movements and gestures—captured by the Kinect sensor and relayed to a stick figure projected on a screen—to select numbers and placements to complete an on-screen Sudoku puzzle. The game thus involves two tasks: the coordination task of controlling the stick figure, and the cognitive task of figuring out the puzzle pattern.

    The patient uses movements and gestures—which are detected by the Kinect sensor—to complete an on-screen Sudoku puzzle.

    Forty-one elderly participants were divided into a test group (26 individuals) and a control group (15 individuals). After 12 weeks, the test group showed improved cognitive function compared to the control group, as measured by the trail-making test—which measures visual attention and task switching abilities—and a verbal fluency test (see note). While the researchers caution that the results are preliminary, they demonstrate yet another way that Kinect for Windows can help improve health and well-being.

    Kinect for Windows Team


    ____________

    Note: As the investigators' research paper states, “Significant differences were observed between the two groups with significant group × time interactions for the executive cognitive functions measure, the delta-trail-making test (part B—part A; F1,36 = 4.94, P = .03; TG: pre mean 48.8 [SD 43.9], post mean 42.2 [SD 29.0]; CG: pre mean 49.5 [SD 51.8], post mean 64.9 [SD 54.7]).” Read the research paper for more details.



    Coffee, creativity, and Kinect: hacking London-style


    A marathon came to London early this year, but it wasn’t the usual 26 miles of pavement pounding. This was a different kind of endurance event, one involving 36 hours of coding with the Kinect v2 sensor.

    The event, which took place March 21–22, was organized by Dan Thomas of Moov2, who had approached my colleagues and me in the Microsoft UK Developer Experience team a few months earlier, wondering if we could help put together a Kinect v2 hackathon. Of course we said yes, and with assistance from quite a few friends, the London Kinect Hack was off and running.

    After many weeks of planning and hard work on the part of Dan and his team, the event came together and a site opened to distribute tickets. The site featured a snazzy logo (later emblazoned on T-shirts that were distributed at the event), and the hackathon’s allotted 100 tickets sold out within two days.

    The London hackathon featured a snazzy logo that adorned participants' T-shirts.

    Clearly there was a lot of interest, but we worried about actual turnout—always a risk with a weekend event, when other diversions compete for participants’ time. Moreover, we hoped that all the registrants appreciated that this was a coding event.

    So, did the developers come to Kinect Hack London? Did they code? Did they have fun and deliver some great work? Absolutely—see for yourself in this video:

    The hackathon was an unqualified success: more than 80 developers turned up for the weekend, coming from not only the UK but also France, Belgium, Holland, Germany, and even Mexico. In addition, a number of people came through on “spectator” tickets, eager to see what was happening.

    Over the course of the next two days, teams were formed, Kinect sensors were loaned, laptops were borrowed, bugs were squashed, and sleep was (mostly) ignored. Twitter got a serious workout as teams tweeted their progress, while burgers, curry, and pasta disappeared, along with much coffee and a little beer.

    Thirty-six hours later, the indefatigable hackers had produced a long list of projects to pitch during the show-and-tell that closed the event. This was a very relaxed, fun couple of hours, with participants getting to see and try out what the other teams had made. Here’s the full roll call of projects (many of which are featured in the video above):

    • Kinect Pong (Dave): A variation of the classic game controlled by doing exercises—squats or press-ups (push-ups, to you Yanks). You can see my colleague Andrew Spooner demonstrating this in the video above.

    • Sphero Slalom (Victoria, Matthew, Hannah, and Phaninder): Kinect-captured gestures steered a Sphero (a remote-controlled ball) through a challenging course.

    • Flight of Light (James): A four-player Unity game adapted to support Kinect input, with players spreading their wings and leaning to the left or right to control their avatars.

    • Vase Maker (a different James): Not happy with regular home accessories, this one-man team used the Kinect sensor’s camera and Open Frameworks to create a host of weird and wonderful psychedelic 3D vase visualisations from such props as a shopping bag.

    • Functional Movement Screen (Chris, Mustafa, Glenn, and Matthew): Like a watchful gym teacher, this app used Kinect body tracking to analyze how well participants performed a set of exercises.

    • Skelxplore (James and Leigh): This app used Kinect to explore a user’s skeletal system and musculature.

    • Do It For Walt (Joe and Sam): Developers from Disney prototyped an interactive theme park guide, featuring augmented reality that let users take on the roles of their favorite movie characters.

    • Kinect + Oculus (Tom): Impressive visuals and sensations ensued when the Kinect sensor brought the body into a view presented by the Oculus Rift, as real limbs combined with augmented challenges.

    • Flappy Box (Chi and Bryan): A Flappy Bird-like game that enabled up to six players to control a flying bird by jumping and crouching in front of the Kinect sensor.

    • Music Machine (Jon): In this multi-person experience, users’ bodies controlled the mix of a set of parts from a music track.

    • Skynect (Rick, Elizabeth, Tatiana, and Sankha): This app brought Kinect into the world of Skype calls.

    • Kinect Talks (Fernando): Intended as a tool to assist a five-year-old with cerebral palsy, this app used simple body movements to create voice outputs.

    • Hole in the Wall (James, Alex, Scott, and Andrew): In this Unity game, players used gestures to push shaped blocks into an advancing 3D wall.

    • Box Sizer (Alex, Michael, Tim, and Navid): Designed for use by shipping companies, Box Sizer uses the Kinect sensor’s camera and depth detection to measure the volume of cardboard boxes.

    • Multi-Kinect Server (Julien): This app combined output from multiple Kinect sensors over a network, creating a multi-sensor view of all the tracked bodies on a single monitor.

    • Bubblecatch (Sam, David, and Mark): In this WPF-powered multiplayer game, players had to catch bubbles and avoid explosives.

    • Kinect Juggling (Phil and Joe): A tool to teach juggling, this app used Kinect data to track the path of a juggled ball and analyze the accuracy of the juggler.

    • Kinect Kombat (Gareth, Yohann, and Rene): A prototype first-person game that let combatants hurl virtual fireballs.

    • Helicar & Lewis (Joel, James, and Thomas): Intended to help children visualise their imagined environments, this app placed 3D characters in a modelled 3D world.

    • Kinect Shooter (Kunal and Shabari): This app provided a gun-wielding shoot-‘em-up experience.

    • 3D Fuser (Claudio and Maruisz): Need to map Kinect sensor data onto 3D models? This app did it.


    The event was not organized as a competition, but Dan put together a small judging panel and three teams received special prizes at the end of the hackathon:

    • Julien received a Parrot AR Drone 2.0 for his work on bringing together multiple Kinect sensors
    • Phil and Joe received a Parrot Jumping Sumo for their innovative juggling application
    • Tom received a Sphero Ollie for his work on bringing together Kinect and Oculus Rift

    But really, everyone was a winner. With help from the US Kinect team, Dan had many additional prizes to give away during impromptu games. More than two dozen developers went home with a Kinect sensor of their own; others received Raspberry Pi 2 devices and starter kits or Spheros, the latter donated by—you guessed it—Sphero.

    This event demonstrated that the Kinect v2 sensor is an inspirational piece of hardware for hackers, and Dan’s team did a wonderful job of creating a “by community, for community” event. Everyone had a great time, as witnessed by the incredibly positive feedback at the event and on Twitter (search #KinectHackLondon).

    Here are a few of the participants’ write-ups; some even include the code they produced:

    Huge thanks to Dan Thomas and the team at Moov2 for putting this hackathon together. It was a great piece of work and a lot of fun to be involved in. Thanks also to the UK Microsoft colleagues who helped out, especially Paul Lo and Andrew Spooner.

    Above all, many thanks to all the participants who made this weekend so outstanding.

    Mike Taulty, Tech Evangelist, Microsoft UK Developer Experience Team



    Learn to build Kinect apps for the Windows Store


    As we discussed in a recent blog, the Kinect v2 sensor and SDK 2.0 enable developers to create Kinect-powered Windows Store apps, opening up an entirely new market for your Kinect for Windows applications. Now on GitHub you can find the Kinect 2 Hands on Labs, a tutorial series that teaches you, step by step, how to build a Windows Store 8.1 app that uses almost every feature of the new sensor.

    The lab is a complete introduction, covering everything from setting up the Kinect sensor to utilizing its major features. Included are hands-on lessons about using the infrared, color, and depth data; creating a body mask; displaying body data; removing backgrounds; using the face library; creating hand cursor interactions; employing Kinect Studio; building gestures; adding speech recognition; and tracking multiple users.

    The Kinect 2 Hands on Labs include lessons on every major feature of the latest sensor, complete with illustrations such as this one from the lesson on using depth data.

    Each lesson includes code to help you build samples, providing a true hands-on learning experience. For example, here is part of the code included in the lesson on how to assemble a body mask:

    Part of the code included in the lesson on how to assemble a body mask
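
    The code screenshot itself isn't reproduced here, but the general approach the lab teaches is to combine the color, depth, and body-index streams and keep only the color pixels that fall on a tracked body. A minimal sketch of that idea using the SDK 2.0 API (not the lab's actual code) might look like this:

    ```csharp
    using Microsoft.Kinect;

    KinectSensor sensor = KinectSensor.GetDefault();
    CoordinateMapper mapper = sensor.CoordinateMapper;
    MultiSourceFrameReader reader = sensor.OpenMultiSourceFrameReader(
        FrameSourceTypes.Color | FrameSourceTypes.Depth | FrameSourceTypes.BodyIndex);
    sensor.Open();

    reader.MultiSourceFrameArrived += (s, e) =>
    {
        MultiSourceFrame multiFrame = e.FrameReference.AcquireFrame();
        if (multiFrame == null) return;

        using (ColorFrame colorFrame = multiFrame.ColorFrameReference.AcquireFrame())
        using (DepthFrame depthFrame = multiFrame.DepthFrameReference.AcquireFrame())
        using (BodyIndexFrame bodyIndexFrame = multiFrame.BodyIndexFrameReference.AcquireFrame())
        {
            if (colorFrame == null || depthFrame == null || bodyIndexFrame == null) return;

            int colorWidth = colorFrame.FrameDescription.Width;
            int colorHeight = colorFrame.FrameDescription.Height;
            int depthWidth = depthFrame.FrameDescription.Width;
            int depthHeight = depthFrame.FrameDescription.Height;

            byte[] colorPixels = new byte[colorWidth * colorHeight * 4];   // BGRA
            ushort[] depthData = new ushort[depthWidth * depthHeight];
            byte[] bodyIndexData = new byte[depthWidth * depthHeight];
            DepthSpacePoint[] colorToDepth = new DepthSpacePoint[colorWidth * colorHeight];

            colorFrame.CopyConvertedFrameDataToArray(colorPixels, ColorImageFormat.Bgra);
            depthFrame.CopyFrameDataToArray(depthData);
            bodyIndexFrame.CopyFrameDataToArray(bodyIndexData);
            mapper.MapColorFrameToDepthSpace(depthData, colorToDepth);

            // Make every color pixel transparent unless it maps to a depth pixel that
            // the body-index frame says belongs to a tracked body (values 0-5);
            // 255 means "no body" at that depth pixel.
            for (int i = 0; i < colorToDepth.Length; i++)
            {
                float fx = colorToDepth[i].X;
                float fy = colorToDepth[i].Y;
                bool onBody = !float.IsNegativeInfinity(fx) && !float.IsNegativeInfinity(fy);
                if (onBody)
                {
                    int dx = (int)fx;
                    int dy = (int)fy;
                    onBody = dx >= 0 && dx < depthWidth && dy >= 0 && dy < depthHeight
                             && bodyIndexData[dy * depthWidth + dx] != 255;
                }
                if (!onBody)
                    colorPixels[i * 4 + 3] = 0;   // zero the alpha channel
            }
            // colorPixels now holds the body-masked image, ready to be written
            // into a WriteableBitmap for display.
        }
    };
    ```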

    If you’re thinking about tapping into the Windows Store market with your own Kinect app, this tutorial series is a great place to start.

    The Kinect for Windows Team



    Kinect for Windows game may help patients qualify for experimental treatments


    Watching Cole throw his arms and shoulders into playing a video game, you might never guess that he suffers from a severe muscular disease. But he does. Cole has Duchenne muscular dystrophy (DMD), a genetic disorder that results in progressive muscle degeneration. DMD patients, almost all of whom are boys, seldom live beyond early adulthood, and most are wheelchair bound by their early teens.

    Dedicated medical researchers are testing a host of experimental treatments that might slow or even halt the disease’s otherwise relentless progress. Currently, most clinical trials limit admission to patients who can walk unassisted for six straight minutes. The distance a boy can walk in six minutes is used as a baseline; if that distance increases during the course of treatment, it indicates that the experimental therapy is having a positive effect.

    Unfortunately, the six-minute-walk requirement rules out a lot of boys who still have considerable upper-body strength but cannot walk the requisite six minutes. Physical therapists Linda Lowes and Lindsay Alfano at Nationwide Children’s Hospital are working to get more boys accepted into clinical trials by developing a simple, reliable measure of upper body abilities that could be used as an alternative to the walk test. And Kinect for Windows v2 is playing a critical role in their efforts.

    ACTIVE-seated uses Kinect for Windows to measure upper-body muscle strength in boys with Duchenne muscular dystrophy.

    Lowes, Alfano, and their colleagues have devised a Kinect-enabled video game in which seated DMD patients control the action by vigorous arm and shoulder movements. Called ACTIVE-seated (the acronym stands for Ability Captured Through Interactive Video Evaluation), the game not only measures upper-extremity abilities but does so while motivating the patient to perform his best.

    ACTIVE-seated uses Kinect for Windows’ capabilities to record accurate data on the patient’s upper-extremity reach and range. The gamer—that is, the patient—is seated at a special table, some distance from a video monitor that displays the game. Taking advantage of the body tracking options in the Kinect software development kit (SDK), the researchers use the Kinect sensor’s infrared camera to track the position of the patient’s head, trunk, and arms as he plays the game. By identifying points on the head and sternum, both shoulders, and each arm, the researchers can measure the patient’s maximal upper-extremity movement in three planes: horizontal (left and right), vertical (table top to overhead), and depth (forward toward the camera).
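
    As a rough illustration of that approach (not the ACTIVE-seated code), a sketch that uses the SDK's body tracking to accumulate a hand's range of movement along the three planes might look like this:

    ```csharp
    using System;
    using Microsoft.Kinect;

    KinectSensor sensor = KinectSensor.GetDefault();
    BodyFrameReader bodyReader = sensor.BodyFrameSource.OpenReader();
    Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];
    sensor.Open();

    // Running extremes of the right hand's position, in camera space (meters).
    var min = new CameraSpacePoint { X = float.MaxValue, Y = float.MaxValue, Z = float.MaxValue };
    var max = new CameraSpacePoint { X = float.MinValue, Y = float.MinValue, Z = float.MinValue };

    bodyReader.FrameArrived += (s, e) =>
    {
        using (BodyFrame frame = e.FrameReference.AcquireFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);

            foreach (Body body in bodies)
            {
                if (!body.IsTracked) continue;

                Joint hand = body.Joints[JointType.HandRight];
                if (hand.TrackingState == TrackingState.NotTracked) continue;

                CameraSpacePoint p = hand.Position;
                min.X = Math.Min(min.X, p.X); max.X = Math.Max(max.X, p.X);   // horizontal
                min.Y = Math.Min(min.Y, p.Y); max.Y = Math.Max(max.Y, p.Y);   // vertical
                min.Z = Math.Min(min.Z, p.Z); max.Z = Math.Max(max.Z, p.Z);   // toward the camera
            }

            // Range of motion observed so far, in meters, for each of the three planes.
            float horizontalRange = max.X - min.X;
            float verticalRange   = max.Y - min.Y;
            float depthRange      = max.Z - min.Z;
        }
    };
    ```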

    Players can choose between two different games based on their interests. Both games were developed with input from the boys, who obviously know what pre-teen males enjoy. They overwhelmingly agreed that something “gross” would be best. Based on this recommendation, one game involves a spider invasion, in which the boys squish the spiders, which crunch realistically and ooze green innards. The second game, designed for the more squeamish, involves digging for jewels in a cave.

    “You should see the faces of new patients light up when they hear that they’re going to be playing a video game instead of undergoing another boring set of tests,” says Lowes. The allure of a video game increases the patients’ motivation, which, in turn, improves the reliability of the results. When patients are asked to perform uninspiring tests day after day, boredom sets in, and the desultory results don’t measure true functional ability. But when it comes to playing a video game, boredom isn’t a problem.

    ACTIVE-seated is currently in testing, and a recent study of 61 DMD patients found that scores in the game correlated highly with parent reports of daily activities and mobility. Lowes and her colleagues are hopeful that these results will help convince the U.S. Food and Drug Administration to use the game as an alternative test for admission to DMD clinical trials.

    The Kinect for Windows Team



    Kinetisense brings objectivity to range of motion therapy


    “Jane” had a problem: a so-called frozen shoulder, which made it painful to use her left arm. The pain, which had begun mysteriously eight months earlier, affected nearly every aspect of Jane’s life, making it difficult for her to perform routine tasks at her office job and at home.

    She had tried a number of traditional and alternative treatments, from massage therapy and stretching to acupuncture-like intramuscular stimulation and a soft-tissue treatment called myofascial release. None of these treatments provided meaningful relief, and Jane abandoned each out of disappointment. Emotionally exhausted by the seemingly incurable pain, Jane was prescribed antidepressants by her physician.

    Then, as what he called a “last resort,” Jane’s physician referred her to chiropractor Ryan Comeau, one of the founders of Kinetisense, a Canadian company that has pioneered the use of Kinect for Windows v2 to record and track progress during physiotherapy for joint and range-of-motion problems.

    Kinetisense’s software takes advantage of the Kinect v2 sensor’s ability to accurately record the exact position of body joints during therapeutic sessions. Unlike traditional methods of measuring joint angles, the Kinetisense system measures true joint values—based on the actual position of the bones—rather than approximating the angles formed by the external body parts.

    Kinetisense uses the Kinect v2 sensor to record the exact position of the body joints during therapy, providing an unparalleled level of accuracy.

    Kinetisense algorithms obtain the positions of the joints and calculate the exact angle of any given joint at any time. And the system does this in less than half a second, without resorting to imprecise hand tools, such as inclinometers and goniometers, or expensive wearable equipment. The patient simply stands or sits in front of the v2 sensor, and the Kinetisense software performs all of the necessary calculations with remarkable accuracy and speed. And because the sensor is measuring the true positions of the joints, Kinetisense provides accurate joint analysis even when patients unintentionally try to extend their range of motion by leaning, rather than relying solely on joint movement.
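
    Kinetisense's own algorithms are proprietary, but the underlying idea of computing a joint angle directly from the 3D joint positions the sensor reports can be sketched with the Kinect SDK, using the right elbow as an example (an illustrative sketch, not Kinetisense's code):

    ```csharp
    using System;
    using Microsoft.Kinect;

    public static class JointAngles
    {
        // Angle (in degrees) at the elbow, formed by the shoulder-elbow and
        // wrist-elbow segments, computed from true 3D joint positions.
        public static double RightElbowAngle(Body body)
        {
            CameraSpacePoint shoulder = body.Joints[JointType.ShoulderRight].Position;
            CameraSpacePoint elbow    = body.Joints[JointType.ElbowRight].Position;
            CameraSpacePoint wrist    = body.Joints[JointType.WristRight].Position;

            // Vectors from the elbow toward the shoulder and toward the wrist.
            double ax = shoulder.X - elbow.X, ay = shoulder.Y - elbow.Y, az = shoulder.Z - elbow.Z;
            double bx = wrist.X - elbow.X,    by = wrist.Y - elbow.Y,    bz = wrist.Z - elbow.Z;

            double dot = ax * bx + ay * by + az * bz;
            double magA = Math.Sqrt(ax * ax + ay * ay + az * az);
            double magB = Math.Sqrt(bx * bx + by * by + bz * bz);

            double cos = dot / (magA * magB);
            cos = Math.Max(-1.0, Math.Min(1.0, cos));   // guard against rounding error
            return Math.Acos(cos) * 180.0 / Math.PI;
        }
    }
    ```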

    The objective accuracy of the Kinetisense measurements allows the practitioner to adjust the treatment and reach a more realistic prognosis. What’s more, Kinetisense helps with patient compliance, which is a well-documented problem in physiotherapy. And while there are several reasons for noncompliance, Comeau notes that, like Jane, “Many patients with range of motion problems drop out of therapy when they fail to discern a meaningful lessening of their pain. But because pain is a very subjective matter, many people perceive it as an ‘all or nothing’ proposition—either the pain is gone or it has not lessened at all. People may, in fact, be experiencing real benefits from their therapy, but fail to realize it because they are not yet completely pain-free. Kinetisense helps by providing both the patient and the practitioner with graphs that demonstrate real progress in range of motion, even when the patient has yet to sense the improvement in terms of pain reduction. The realization that therapy is working is incredibly reinforcing to patients, who are then much more likely to continue their treatment.”

    The precise measurements of the joint angles enable Kinetisense to chart improvements in the patient's range of motion. The quantifiable therapeutic results allow patients to see irrefutable evidence of improvement.

    Kinetisense meets the longstanding need for objectivity and evidence-based rehabilitation care—a boon to both the patient and the practitioner. And as for Jane, she’s continuing her treatment and shows ongoing improvement. She’s been able to reduce her antidepressant dosage by half, and has referred several friends and family members to Comeau’s practice.

    The Kinect for Windows Team



    Microsoft to consolidate the Kinect for Windows experience around a single sensor


    At Microsoft, we are committed to providing more personal computing experiences. To support this, we recently extended Kinect’s value and announced the Kinect Adapter for Windows, enabling anyone with a Kinect for Xbox One to use it with their PCs and tablets. In an effort to simplify and create consistency for developers, we are focusing on that experience and, starting today, we will no longer be producing Kinect for Windows v2 sensors.

    Kinect for Xbox One sensor

    Over the past several months, we have seen unprecedented demand from the developer community for Kinect sensors and have experienced difficulty keeping up with requests in some markets. At the same time, we have seen the developer community respond positively to being able to use the Kinect for Xbox One sensor for Kinect for Windows app development, and we are happy to report that Kinect for Xbox One sensors and Kinect Adapter for Windows units are now readily available in most markets. You can purchase the Kinect for Xbox One sensor and Kinect Adapter for Windows in the Microsoft Store.

    Kinect Adapter for Windows

    The Kinect Adapter enables you to connect a Kinect for Xbox One sensor to Windows 8.0 and 8.1 PCs and tablets in the same way as you would a Kinect for Windows v2 sensor. And because both Kinect for Xbox One and Kinect for Windows v2 sensors are functionally identical, our Kinect for Windows SDK 2.0 works exactly the same with either.

    Microsoft remains committed to Kinect as a development platform on both Xbox and Windows. So while we are no longer producing the Kinect for Windows v2 sensor, we want to assure developers who are currently using it that our support for the Kinect for Windows v2 sensor remains unchanged and that they can continue to use their sensor.

    We are excited to continue working with the developer community to create and deploy applications that allow users to interact naturally with computers through gestures and speech, and continue to see the Kinect sensor inspire vibrant and innovative commercial experiences in multiple industries, including retail, education, healthcare, and manufacturing. To see the latest ways that developers are using Kinect, we encourage you to explore other stories in the Kinect for Windows blog.

    Michael Fry, Senior Technology Evangelist for Kinect for Windows, Microsoft



    GesturePak v2 simplifies creation of gesture-controlled apps


    What do you do after you’ve built a great app? You make it even better. That’s exactly what Carl Franklin, a Microsoft Most Valuable Professional (MVP), did with GesturePak. Actually, GesturePak is both a WPF app that lets you create your own gestures (movements) and store them as XML files, and a .NET API that can recognize when a user has performed one or more of your predefined gestures. It enables you to create gesture-controlled applications, which are perfect for situations where the user is not physically seated at the computer keyboard.

    GesturePak v2 simplifies the creation of gesture-controlled apps. This image shows the app in edit mode.

    Franklin’s first version of GesturePak was developed with the original Kinect for Windows sensor. For GesturePak v2, he utilized the Kinect for Windows v2 sensor and its related SDK 2.0 public preview, and as he did, he rethought and greatly simplified the whole process of creating and editing gestures. To create a gesture in the original GesturePak, you had to break the movement down into a series of poses, then hold each pose and say the word “snapshot,” at which point a frame of skeleton data was recorded. This process continued until you had captured each pose in the gesture, which could then be tested and used in your own apps.

    GesturePak v2 works very differently. You merely tell the app to start recording (with speech recognition), then you perform the gesture, and then tell it to stop recording. All of the frames are recorded. This gives you a way to play an animation of the gesture for your users.

    GesturePak v2 still uses the same matching technology as version 1, relying on key frames (called poses in v1) that the user matches in series. But with the new version, once you've recorded the entire gesture, you can use the mouse wheel to "scrub" through the movement and pick out key frames. You also can select which joints to match simply by clicking on them. It's a much easier and faster way to create a gesture than the interface of GesturePak v1, which required you to select poses by using voice and manual commands. 
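
    GesturePak's API isn't reproduced here, but the key-frame matching idea it relies on can be sketched: compare the live skeleton's selected joints against a recorded key frame and declare a match when every joint is within a tolerance, advancing through the key frames in order. A rough illustration with hypothetical types (not GesturePak's code):

    ```csharp
    using System;
    using System.Collections.Generic;
    using Microsoft.Kinect;

    // Hypothetical recorded key frame: the joints the author chose to match,
    // with positions captured during recording (stored relative to SpineBase
    // so the match is independent of where the user stands).
    public class KeyFrame
    {
        public Dictionary<JointType, CameraSpacePoint> JointPositions =
            new Dictionary<JointType, CameraSpacePoint>();
    }

    public static class GestureMatching
    {
        // True when every selected joint of the live body is within `tolerance`
        // meters of the recorded key-frame position.
        public static bool Matches(Body body, KeyFrame keyFrame, float tolerance = 0.15f)
        {
            CameraSpacePoint origin = body.Joints[JointType.SpineBase].Position;

            foreach (var kv in keyFrame.JointPositions)
            {
                CameraSpacePoint live = body.Joints[kv.Key].Position;
                float dx = (live.X - origin.X) - kv.Value.X;
                float dy = (live.Y - origin.Y) - kv.Value.Y;
                float dz = (live.Z - origin.Z) - kv.Value.Z;
                if (Math.Sqrt(dx * dx + dy * dy + dz * dz) > tolerance)
                    return false;
            }
            return true;
        }
    }
    ```

    A full gesture is then just an ordered list of such key frames, with a recognizer that advances to the next key frame each time the current one is matched.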


    Carl Franklin offered these words of technical advice for devs who are writing WPF apps:

    • If you want to capture video, use SharpAVI
      (http://sharpavi.codeplex.com/)
    • If you want to convert the AVI to other formats, use FFmpeg
      (http://ffmpeg.org/)
    • When building an app with multiple windows/pages/user controls that use the Kinect sensor, instantiate only one sensor and one reader, then bind them to the different windows
    • Initialize the Kinect sensor object and all readers in the Loaded event handler of a WPF window, not the constructor (see the sketch after this list)
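
    As a minimal sketch of those last two points (the names here are illustrative, not GesturePak's code), a WPF window might initialize and share a single sensor like this:

    ```csharp
    using System.Windows;
    using Microsoft.Kinect;

    public partial class MainWindow : Window
    {
        private KinectSensor sensor;
        private BodyFrameReader bodyReader;

        public MainWindow()
        {
            InitializeComponent();
            // Note: no Kinect initialization in the constructor.
            Loaded += MainWindow_Loaded;
            Closed += (s, e) =>
            {
                // Tidy up when the window closes.
                bodyReader?.Dispose();
                sensor?.Close();
            };
        }

        private void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            // Initialize the sensor and readers once the window has loaded,
            // and share this single instance with any other pages or controls.
            sensor = KinectSensor.GetDefault();
            bodyReader = sensor.BodyFrameSource.OpenReader();
            bodyReader.FrameArrived += BodyReader_FrameArrived;
            sensor.Open();
        }

        private void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
        {
            // Process body frames here, or forward them to the views that need them.
        }
    }
    ```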


    Another big change is the code itself. GesturePak v1 is written in VB.NET. GesturePak v2 was re-written in C#. (Speaking of coding, see the green box above for Franklin’s advice to devs who are writing WPF apps.)

    Franklin was surprised by how easy it was to adapt GesturePak to Kinect for Windows v2. He acknowledges there were some changes to deal with—for instance, “Skeleton” is now “Body” and there are new JointType additions—but he expected that level of change. “Change is the price we pay for innovation, and I don't mind modifying my code in order to embrace the future,” Franklin says.

    He finds the Kinect for Windows v2 sensor improved in all categories. “The fidelity is amazing. It can track your skeleton in complete darkness. It can track your skeleton from 50 feet away (or more), and with a much wider field of vision. It can tell whether your hands are open, closed, or pointing,” Franklin states, adding, “I took full advantage of the new hand states in GesturePak. You can now make a gesture in which you hold out your open hand, close it, move it across your chest, and open it again.” In fact, Franklin credits the improvements in fidelity with convincing customers who had been on the fence. “They’re now beating down my door, asking me to build them a next-generation Kinect-based app.”
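
    For reference, reading those hand states through the SDK 2.0 body-tracking API looks roughly like this; the Lasso state corresponds to the "pointing" hand:

    ```csharp
    using Microsoft.Kinect;

    // Given a tracked Body from a BodyFrame, the v2 SDK reports a state
    // for each hand along with a tracking confidence.
    void ReportHands(Body body)
    {
        HandState right = body.HandRightState;              // Open, Closed, Lasso, NotTracked, or Unknown
        TrackingConfidence confidence = body.HandRightConfidence;

        if (right == HandState.Closed && confidence == TrackingConfidence.High)
        {
            // e.g., treat a confidently closed right hand as a "grab".
        }
    }
    ```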

    The Kinect for Windows Team


