• Kinect for Windows Product Blog

    Kinect with soccer and suds


    What’s better than watching your team compete in the World Cup? Perhaps enjoying the game with a refreshing beer, especially if you purchased the brew at a nice discount. Thanks to a Kinect for Windows application, Argentine fans were able to do just that.

    The Kinect-based display rewarded Argentine soccer fans with discounts on beer.

    The application was created by Kimetric, an Argentine company that specializes in real-time customer analytics. It involved a special World Cup display, featuring a video of renowned Argentine footballer Oscar Ruggeri, placed in the beer aisle at several supermarkets in Buenos Aires. The Kinect for Windows sensor’s camera detected when customers engaged with the display, at which point Ruggeri’s video persona asked whether they were old enough to drink. If the customers answered yes, the system scanned them to confirm that they appeared to be over the legal drinking age. Then the sensor checked whether they were wearing an Argentine soccer jersey, identifying Argentine soccer attire by means of a machine-learning algorithm trained on more than 50,000 images of Argentine soccer jerseys and other apparel.
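    The post doesn’t say which model or image features Kimetric used, so the following is only a rough, hypothetical sketch of the general approach: train a binary classifier on labeled examples and apply it to each detected shopper. The feature vectors below are synthetic stand-ins for whatever features the real system extracted from its 50,000-plus training images.

```python
# Illustrative only: not Kimetric's model. A generic binary "jersey / not jersey"
# classifier trained on labeled feature vectors (synthetic here).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for per-shopper image features (e.g., color statistics of the torso region).
n_samples, n_features = 50_000, 64
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)  # 1 = Argentine jersey, 0 = other apparel
X[y == 1, :8] += 1.5                    # give the "jersey" class a separable signal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {classifier.score(X_test, y_test):.2f}")
```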

    So, what happened if the customer was wearing an Argentine jersey? He or she was rewarded with a discount coupon for the purchase of Quilmes Cristal, a popular Argentine beer. If the Kinect for Windows sensor detected just one jersey-clad customer, the discount was 10 percent. If the sensor detected two such customers, the discount rose to 15 percent, and if there were three or more customers posing in Argentine soccer apparel, the discount jumped to 25 percent. Bringing your soccer-crazy friends with you on a beer run really paid off.

    If the customer wasn’t wearing an appropriate jersey, he or she still got a shot at scoring a beer bargain. In those cases, Ruggeri asked a couple of questions about Argentine soccer, and customers who answered correctly were rewarded with a 10 percent discount coupon.

    The display was in stores from early May until the end of June 2014, during which time tens of thousands of customers scored a beer discount. One lucky, randomly chosen customer even received an all-expenses-paid trip to Brazil to attend one of the Argentine team’s matches. Of course, another winner was Quilmes, which saw a healthy increase in beer sales during the promotion.

    All in all, a win for everyone: soccer fans, beer lovers, Quilmes, and Kimetric.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Hackers put Kinect for Windows v2 through its paces


    The Kinect for Windows team recently traveled to New York City for a terrific hackathon. In partnership with NUI Central, we had the pleasure of sponsoring a 27-hour event, June 21–22, focused on creating innovative applications and experiences using the forthcoming Kinect for Windows v2 sensor and SDK 2.0.

    The event drew more than 100 participants, from as far away as the Czech Republic and Dubai. You will find a summary of the teams and projects below; we were blown away by what we saw. We were also impressed by the tremendous interest in the experimental near-field firmware that we brought. This firmware turns the Kinect for Windows v2 sensor into a near-field device that can be used at close range (from 10cm to 1m).

    Lots of participants told us they wanted to get their hands on this near-field firmware. Others said how pleased they were just to have access to our engineering team. In fact, there was so much interest and energy at this event that we have set up some additional ones this summer. We would love to see you at one of our upcoming in-person events, or to have you join us online for our virtual training event on Tuesday, July 15. Please see the in-person event information below.


    Upcoming in-person events

    All of these events will be attended by members of the Kinect for Windows engineering team and will feature prizes, food, and, best of all, access to Kinect for Windows v2 sensors (including experimental near-field).

    July 18–19, 2014: Dallas Entrepreneur Center (Dallas, TX, United States)

    Computer Visionaries is hosting; we are sponsoring. It’s going to be so hot outside that you’ll want to be hunkered down inside at our event.


    July 26–27, 2014: Microsoft Headquarters (Redmond, WA, United States)

    This event will be here on our main campus. Participants will have access to The Cube, a 4’ x 4’ x 4’ display with a Kinect for Windows v2 sensor on each side. The Cube is brand new and will be available for you to experiment with and use for your app.

     

    “The Cube” display: each side has a Kinect for Windows v2 sensor.

    August 8–10, 2014: Kitchener Studio Space (Kitchener, ON, Canada)

    We are sponsoring Deep Realities and hosting a weekend hackathon in the Waterloo Region.


    New York City hackathon summary

    New York City hackathon participants


    As noted earlier, this event was hosted by our partner, NUI Central, in Manhattan (New York). The top three teams were:

    First place: lightspeed. Their application, called K4B, used a Kinect for Windows v2 sensor to scan large ceiling spaces in order to map HVAC and electrical details so that renovators can get accurate, detailed as-built measurements.

    First-place winners, team lightspeed


    Second place: adabiits. Their application, Thing T. Thing, included a robotic hand that waves when someone walks by and can be controlled by using finger tracking.

    Second-place winners, team adabiits


    Third place: Body Labs. Their application, ScanAdHoc, combined multiple Kinect for Windows v2 sensors wirelessly over WebSockets, enabling accurate 3D body scans to be used for fitting clothes.

    Third-place winners, Body Labs


    Other teams that presented projects:

    • Augmented Travel presented “Augmented Travel,” which allows you to explore an area virtually, by using your body as the controller.
    • Cornell Tech presented “Body DJ,” which enables you to “rock out” by using body gestures.
    • Critical Mass presented “Sweeper,” which uses Kinect Ripple to raise awareness about the deadly nature of landmines. The team posted a video.
    • Gunters presented “Blox,” which combines the v2 sensor and Oculus Rift to create a personal world using drag and drop.
    • Landroids presented “Obstacle Avoidance Robot,” which used Kinect for Windows’ depth-sensing capabilities to aid navigation for a Lego Mindstorms robot.
    • Leftover presented “Touchless Controller,” which enables you to use gestures to control any Windows application or to scroll and navigate in a web browser or Visual Studio.
    • Sandwich Dance Party presented “Hue Light Controller,” which enables you to point at a physical light in the room to turn it on or off, and then to use a hand gesture to change the color of the light.
    • SLIP SLAP presented “Around the Cluck,” a game about gold and a guardian chicken, created in Unity 3D. Players must move around the room and carefully collect eggs, so as not to awaken the chicken.
    • SoundPound presented “SoundPound,” a human drum that functions as a roadside sobriety test.
    • SpiceDoctor presented “Kinect Virtual Doctor,” a Windows 8 application that uses the Kinect for Windows v2 sensor to monitor your heart rate in real time.
    • Team Fantabulous presented “Power Doge,” a cutting-edge media browser in which novel hand gestures create a unique photo viewing experience.
    • Trail Makers presented “Immersive Trail,” a twenty-first-century approach to the Trail Making Test, a neuropsychological assessment of visual attention and the ability to switch tasks.
    • Tropicana Pure Premium presented “Finger Tracking Keyboard,” which enables you to shift keys by moving your hand.

    Kinect for Windows MVP András Velvárt helps team adabiits.

     

    Hannes Hofman and Chris Schaller from Metrilus brought their finger-tracking library to the event.


    Congratulations to all the participants, whose creativity highlighted the immense possibilities in the Kinect for Windows v2 sensor and SDK. If you haven’t pre-ordered the v2 sensor yet, there is still time, but don’t wait too long.

    Ben Lower, Kinect for Windows developer community manager
    Contact Ben on Twitter at @benlower


  • Kinect for Windows Product Blog

    Keeping watch in the night


    The following blog post was guest authored by Ana Isabel Zorrilla, project manager at EIC BBK-Dravet Syndrome Foundation, a Spanish nonprofit organization dedicated to the treatment and cure of Dravet syndrome and related disorders.

    Imagine what it’s like to go for weeks on end without a decent night’s sleep. For Julian Isla of Madrid, Spain, this scenario requires no imagining. He has lived it, spending night after sleepless night lying awake, listening for the telltale signs that his child is having a seizure. You see, Julian’s son, Sergio, suffers from Dravet syndrome, a severe form of epilepsy. Children with Dravet syndrome experience frequent seizures: from an average of one seizure per week in the mildest cases to one or more a day—or even multiple seizures per hour—in the most severe instances. These attacks occur more frequently while a child sleeps, so parents often struggle to stay awake seven nights a week, prepared to give their child emergency medical treatment in the event of a prolonged seizure.

    When he was born, Sergio was a healthy baby. The seizures began when he was four months old, and he was diagnosed as having Dravet syndrome after experiencing several long-lasting seizures and multiple admissions to the pediatric intensive care unit. After receiving his son’s diagnosis five years ago, Julian connected with other Dravet families, and together they explored the idea of creating an organization to promote research on the condition. They got in touch with the Dravet Syndrome Foundation (DSF) in the United States, and out of that connection, the Spanish Delegation of the DSF was born. Julian serves as the executive chairman of the Spanish Delegation, which has nine employees, including me. Our organization is involved in multiple research projects, including a search for new drugs that can treat Dravet syndrome.

    Technology has been pivotal for our group; this year, for example, we conducted the world’s first genetic tests for Dravet syndrome, using state-of-the-art technologies. Today, children with Dravet syndrome are being diagnosed by a new generation of genetic tests running on Microsoft Azure that the Spanish Delegation pioneered. Our group’s technological bent is hardly surprising given Julian’s background: with a degree in computer science and some 20 years of professional experience in the IT sector, he currently works for Microsoft in Madrid, where he manages a team of software consultants.

    As soon as Julian heard about Kinect for Windows, he began exploring the use of the technology to help Dravet families. He was aware of the technology’s potential for medical applications, thanks to interactions with colleagues in the Microsoft Madrid offices. About the same time, the Spanish Delegation created EIC BBK, a development center focused on e-health applications. Julian proposed that the center investigate the use of Kinect for Windows to monitor Dravet children while they slept. With the Kinect sensor serving as a sentinel, Julian thought the beleaguered parents might get some much needed sleep themselves.

    The use of monitors to detect seizures is not groundbreaking: there have been multiple studies and projects on seizure monitor systems. What is new and exciting is the Kinect sensor’s exquisite sensitivity, the result of its multisensory inputs. “With a color camera, an infrared detector, and an array of microphones, the Kinect sensor can detect physical movement and acoustical changes with tremendous accuracy,” says Julian. He adds, “The affordability of the Kinect sensor is another huge advantage.”

    By the end of 2013, EIC BBK had started the “Night Seizure Monitor” project, a research initiative that uses Kinect for Windows. The project’s aim is to track the child’s movements while sleeping. When the Kinect sensor detects movements that follow a seizure pattern, an alarm warns parents that their child might be having a seizure. This solution provides dual benefits: when a seizure is detected, the monitor system ensures that the child gets medication right away to reduce the length and intensity of the episode. And when no seizures occur, the monitor enhances the family’s quality of life, because parents are able to enjoy a restful sleep.

    At the outset, developers programmed the Kinect sensor to be able to detect the movements of a child even if he or she was in a darkened room and lying under a blanket or comforter, above. Then, they added the ability to spot seizures that begin with abrupt movements or loud vocalizations, below.

    Using data collected by its color camera and depth sensor, Kinect for Windows detects seizures by comparing changes in the child’s body position between two sequential frames. If these changes are frequent, the seizure alarm sounds. In addition, the sensor’s microphone plays a role in recognizing the seizures, as the system has been programmed to respond to the shouts that typically accompany the onset of a Dravet seizure. The Kinect-based solution can process this sound and calculate the child’s location in the room.
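    EIC BBK’s implementation isn’t public, but the frame-differencing idea described above is easy to illustrate. The sketch below assumes depth frames arrive as NumPy arrays and uses made-up thresholds; it flags a possible seizure when a large fraction of pixels keeps changing across most of a sliding window of recent frames.

```python
# A minimal sketch of the frame-differencing idea, not EIC BBK's code.
# Assumes each depth frame arrives as a 2D NumPy array of millimetre distances.
from collections import deque
import numpy as np

MOTION_THRESHOLD = 30         # mm change per pixel that counts as movement (assumed)
MOVING_PIXEL_FRACTION = 0.05  # fraction of pixels that must move for a "motion event"
WINDOW_FRAMES = 150           # about 5 seconds at 30 fps
EVENTS_FOR_ALARM = 90         # alarm if motion persists through most of the window

recent_events = deque(maxlen=WINDOW_FRAMES)

def process_frame(previous: np.ndarray, current: np.ndarray) -> bool:
    """Return True if the seizure alarm should sound after this frame."""
    changed = np.abs(current.astype(np.int32) - previous.astype(np.int32)) > MOTION_THRESHOLD
    recent_events.append(changed.mean() > MOVING_PIXEL_FRACTION)
    return sum(recent_events) >= EVENTS_FOR_ALARM

# Demo with synthetic frames standing in for the sensor's depth stream.
prev = np.full((424, 512), 2000, dtype=np.uint16)
for _ in range(WINDOW_FRAMES):
    cur = prev + np.random.randint(0, 80, prev.shape).astype(np.uint16)  # jittery "movement"
    if process_frame(prev, cur):
        print("Possible seizure detected: sounding the alarm.")
        break
    prev = cur
```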

    As participants in the Kinect for Windows Developer Preview program, our developers here at EIC BBK have been testing a preview version of the Kinect for Windows v2 sensor and SDK since December 2013, exploring the technology’s potential to improve upon the existing Night Seizure Monitor research. They are especially pleased with the v2 sensor’s infrared capabilities—which provide an even higher quality image—and its wider field of vision and greater depth range. These enhanced features should make the Night Seizure Monitor even more valuable.

    Moreover, the developers are eagerly awaiting the general availability of the Kinect for Windows v2 sensor and SDK, which promise enhanced discrimination of facial expressions. The developers believe the enhanced face tracking capability will help the monitor detect those seizures that do not present limb shaking but rather are manifested by movements of the eyes and mouth.

    The Night Seizure Monitor initiative is a great example of how needs can promote creativity. Julian had a problem at home, and rather than accepting it as unavoidable, he decided to seek out a solution.

    It also shows the power of teamwork: Julian has received enormous support from colleagues at Microsoft; right now a dozen of them are helping our organization as volunteers.

    Finally, it demonstrates how technology can empower people. Julian sums up the experience eloquently, observing that “When you have a child with special needs, everything seems filled with problems. You feel impotent. I’ve had the privilege of using technology for a project that will, we believe, improve the lives of many young patients and provide a sense of control to their families. I feel proud to work for a company whose technology can make such a difference in people’s lives.”

    While the Night Seizure Monitor is still a development project, we hope to have a fully functional prototype available for testing with Dravet Foundation families by the end of 2014. After that, our goal is to make the monitor available to Dravet families around the world. But as wonderful as this development will be, it is but a way station in the Dravet Syndrome Foundation’s ultimate mission: to find a cure for this disorder. Despite the difficulty of this quest, DSF supporters, volunteers, and workers are laboring tirelessly to achieve it. We call it our “moonshot,” taking inspiration from US President John F. Kennedy’s audacious goal of sending a manned mission to the moon. Our moonshot represents the dream of parents who will never give up.

    Ana Isabel Zorrilla, project manager,
    EIC BBK-Dravet Syndrome Foundation


  • Kinect for Windows Product Blog

    Pre-order your Kinect for Windows v2 sensor starting today


    At BUILD in April, we told the world that the Kinect for Windows v2 sensor and SDK would be coming this summer, and with them, the ability for developers to start creating Windows Store apps with Kinect for the first time. Well, here in Redmond, Washington, it’s not summer yet. But today we are pleased to announce that developers can pre-order the Kinect for Windows v2 sensor. Developers who take advantage of this pre-order option will be able to start building solutions ahead of the general public.

    Sensors purchased during the pre-order phase will be shipped in July, at which time we will also release a public beta of our software development kit (SDK). All of this will happen a few months ahead of general availability of sensors and the SDK, giving pre-order customers a head start on using the v2 sensor’s new and improved features, including increased depth-sensing capabilities, full 1080p video, improved skeletal tracking, and enhanced infrared technology.

    Kinect for Windows v2 sensor

    Thousands of developers wanted to take part in our Developer Preview program but were unable to do so—in fact, we’re still receiving requests from all around the world. So for these and other developers who are eager to start using the Kinect for Windows v2, the pre-order option offers access to the new sensor ahead of general availability. Bear in mind, however, that we have limited quantities of pre-order sensors, so order while supplies last.

    The v2 sensors will also be shipped in July to those who participated in the Developer Preview program. For these early adopters, it’s been an amazing six months: we’ve seen more stunning designs, promising prototypes, and early apps than we can count—from finger tracking to touch-free controls for assembly line workers to tools for monitoring the environment. At BUILD, we showed you what Reflexion Health and Freak’n Genius were able to achieve with the v2 sensor in just a matter of weeks. And in July, when the sensor and SDK are more broadly available, we can only imagine what’s next.

    Kinect for Windows will continue to feature more innovative uses of the v2 technology on this blog in the coming months. As Microsoft Corporate Vice President and Chief Evangelist Steven Guggenheimer notes, “I love what the Kinect sensor and SDK can do. Getting the v2 sensor into the hands of more developers and getting the SDK more widely available is the next step.”

    We are committed to a future where humans and technology can interact more seamlessly—in the living room, on their PCs, and beyond.

    —The Kinect for Windows Team



  • Kinect for Windows Product Blog

    Kinect-powered stroke rehab system gets FDA clearance


    Back in January, we featured a story about Jintronix and its innovative rehabilitation system that uses Kinect for Windows to help stroke victims recover physical functions. The origins of that story date back to April 2012, when Jintronix, then based out of Montreal, became one of 11 startups selected from more than 500 applicants worldwide to take part in the Kinect Accelerator program. That program was powered by Techstars—one of the world’s foremost technology accelerators—in close collaboration with Microsoft.

    Now, as Dr. Bill Crounse, senior director of Worldwide Health at Microsoft reports in his blog, Jintronix has received 510(k) clearance from the US Food and Drug Administration (FDA) for its rehab system. This marks an important milestone for Jintronix. “We’re very excited about receiving FDA clearance, which paves the way for Jintronix to help in the rehabilitation of countless stroke victims,” said CEO Shawn Errunza.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Real-time 3D scanning stuns the gnome world


    Garden gnomes: they decorate our yards, take bizarre trips, and now can be scanned in 3D in real time by using readily available computer hardware, as can be seen in this video from ReconstructMe. The developers employed the preview version of the Kinect for Windows v2 sensor and SDK, taking advantage of the sensor’s enhanced color and depth streams. Instead of directly linking the input of the Kinect with ReconstructMe, they streamed the data over a network, which allowed them to decouple the reconstruction from the data acquisition.

    Real-time 3D scan of garden gnome created by using Kinect for Windows v2
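    The networking trick mentioned above, streaming the sensor data so that reconstruction can run independently of acquisition, can be sketched in a few lines. This is not ReconstructMe’s implementation: the frames are simulated with NumPy, and the length-prefixed TCP protocol below is just one plausible way to move depth frames from the capture process to the reconstruction process.

```python
# A rough sketch of decoupling capture from processing by streaming frames over TCP.
# Not ReconstructMe's implementation; frame capture is simulated with NumPy.
import socket
import struct
import threading
import numpy as np

HOST, PORT = "127.0.0.1", 9099

def capture_and_send(n_frames: int = 5) -> None:
    """Acquisition side: grab frames and push them onto the wire immediately."""
    with socket.create_connection((HOST, PORT)) as conn:
        for _ in range(n_frames):
            frame = np.random.randint(0, 4500, (424, 512), dtype=np.uint16)  # simulated depth frame
            payload = frame.tobytes()
            conn.sendall(struct.pack("!I", len(payload)) + payload)

def receive_and_reconstruct(server: socket.socket) -> None:
    """Reconstruction side: read frames at its own pace and process them."""
    conn, _ = server.accept()
    with conn:
        while True:
            header = conn.recv(4)
            if not header:
                break
            size = struct.unpack("!I", header)[0]
            data = b""
            while len(data) < size:
                data += conn.recv(size - len(data))
            frame = np.frombuffer(data, dtype=np.uint16).reshape(424, 512)
            print("reconstructing from frame, mean depth:", frame.mean())

# Bind the listener first so the sender cannot race ahead of it.
listener = socket.create_server((HOST, PORT))
worker = threading.Thread(target=receive_and_reconstruct, args=(listener,))
worker.start()
capture_and_send()
worker.join()
listener.close()
```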

    Developer Christoph Heindl (he’s the one holding the gnome in the video) notes that the ReconstructMe team plans to update this 3D scanning technology when the Kinect for Windows v2 is officially released this summer, saying, “We’re eager to make this technology widely available upon the release of Kinect for Windows v2.”

    Heindl adds that this real-time process has potential applications in 3D scanning, 3D modelling through gestures, and animation. Not to mention the ability to document gnomic travels in 3D!

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Windows Store app development is coming to Kinect for Windows


    Today at Microsoft BUILD 2014, Microsoft made it official: the Kinect for Windows v2 sensor and SDK are coming this summer (northern hemisphere). With them, developers will be able to start creating Windows Store apps with Kinect for the first time. The ability to build such apps has been a frequent request from the developer community. We are delighted that it’s now on the immediate horizon—with the ability for developers to start developing this summer and to commercially deploy their solutions and make their apps available to Windows Store customers later this summer.

    The ability to create Windows Store apps with Kinect for Windows not only fulfills a dream of our developer community, it also marks an important step forward in Microsoft’s vision of providing a unified development platform across Windows devices, from phones to tablets to laptops and beyond. Moreover, access to the Windows Store opens a whole new marketplace for business and consumer experiences created with Kinect for Windows.

    The Kinect for Windows v2 has been re-engineered with major enhancements in color fidelity, video definition, field of view, depth perception, and skeletal tracking. In other words, the v2 sensor offers greater overall precision, improved responsiveness, and intuitive capabilities that will accelerate your development of voice and gesture experiences.

    Specifically, the Kinect for Windows v2 includes 1080p HD video, which allows for crisp, high-quality augmented scenarios; a wider field of view, which means that users can stand closer to the sensor—making it possible to use the sensor in smaller rooms; improved skeletal tracking, which opens up even better scenarios for health and fitness apps and educational solutions; and new active infrared detection, which provides better facial tracking and gesture detection, even in low-light situations.

    The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:

    • Windows Store app development: Being able to integrate the latest human computing technology into Windows apps and publish them to the Windows Store will give our developers the ability to reach more customers and open up access to natural user experiences in the home.
    • Unity Support: We are committed to supporting the broader developer community with a mix of languages, frameworks, and protocols. With support for Unity this summer, more developers will be able to build and publish their apps to the Windows Store by using tools they already know.
    • Improved anatomical accuracy: With the first-generation SDK, developers were able to track up to two people simultaneously; now, their apps can track up to six. The number of joints that can be tracked has increased from 20 to 25 per person, and joint orientation has improved. The result is skeletal tracking that our preview participants are calling “seamless,” making it possible for developers to deliver new and improved applications. (A minimal sketch of the per-frame body-tracking loop appears after this list.)
    • Simultaneous, multi-app support: Multiple Kinect-enabled applications can run simultaneously. Our community has frequently requested this feature and we’re excited to be able to give it to them with the upcoming release.
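    For a feel of what “up to six bodies, 25 joints each” means to application code, here is a minimal, language-neutral sketch of the per-frame loop. The Body and Joint classes are hypothetical stand-ins, not the SDK’s actual types (the SDK itself is consumed from C++ or C#); only the loop structure is the point.

```python
# Illustrative sketch, not the actual Kinect for Windows SDK API. The Body and Joint
# classes are hypothetical stand-ins for the per-frame data an app receives:
# up to 6 body slots, 25 joints per tracked body.
from dataclasses import dataclass

MAX_BODIES = 6
JOINTS_PER_BODY = 25

@dataclass
class Joint:
    name: str
    x: float
    y: float
    z: float        # metres from the sensor
    tracked: bool

@dataclass
class Body:
    is_tracked: bool
    joints: list    # 25 Joint entries when tracked

def handle_body_frame(bodies: list) -> None:
    """Typical per-frame loop: skip untracked slots, then use joint positions."""
    for body in bodies:
        if not body.is_tracked:
            continue
        head = next(j for j in body.joints if j.name == "Head")
        print(f"head at ({head.x:.2f}, {head.y:.2f}, {head.z:.2f}) m")

# One fake frame: a single tracked body standing 2 m from the sensor, five empty slots.
joints = [Joint("Head", 0.0, 0.6, 2.0, True)] + [
    Joint(f"joint_{i}", 0.0, 0.0, 2.0, True) for i in range(1, JOINTS_PER_BODY)
]
frame = [Body(True, joints)] + [Body(False, []) for _ in range(MAX_BODIES - 1)]
handle_body_frame(frame)
```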

    Developers who have been part of the Kinect for Windows v2 Developer Preview program praise the new sensor’s capabilities, which take natural, human computing to the next level. We are awed and humbled by what they’ve already been able to create.

    Technologists from a few participating companies are on hand at BUILD, showing off the apps they have created by using the Kinect for Windows v2. See what two of them, Freak’n Genius and Reflexion Health, have already been able to achieve, and learn more about these companies.

    The v2 sensor and SDK dramatically enhance the world of gesture and voice control that was pioneered in the original Kinect for Windows, opening up new ways for developers to create applications that transform how businesses and consumers interact with computers. If you’re using the original Kinect for Windows to develop natural voice- and gesture-based solutions, you know how intuitive and powerful this interaction paradigm can be. And if you haven’t yet explored the possibilities of building natural applications, what are you waiting for? Join us as we continue to make technology easier to use and more intuitive for everyone.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    BUILDing business with Kinect for Windows v2


    BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.

    Left: A student is choosing a Freak’n Genius character to animate in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health, can track a patient performing physical therapy exercises at home and give her immediate feedback on her execution while also transmitting the results to her therapist.

    Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

    But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head by taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.

    What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.

    Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.

    While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be to the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of their recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling  to perform the exercises correctly at home—or simply skipping them altogether.

    Reflexion Health, based in San Diego, uses Kinect for Windows to augment its physical therapy program and give therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Its application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of the app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allow the software to capture the patient’s exercise moves in far more precise detail. The application provides patients with a model for how to do each exercise correctly and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
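    Reflexion Health hasn’t published how Vera scores a repetition, so the following is a purely illustrative sketch of one way skeletal data could be compared against a prescribed exercise: compute a joint angle from three tracked joint positions and check it against a target range. The squat target and tolerance values are assumptions.

```python
# A minimal sketch of checking exercise form from skeletal data. The knee-angle
# comparison, target, and tolerance are illustrative assumptions, not Reflexion
# Health's actual metrics.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at joint b (degrees) formed by points a-b-c, each a 3D position in metres."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

def check_squat_depth(hip, knee, ankle, target_deg=90.0, tolerance_deg=10.0) -> str:
    """Compare the measured knee angle against the prescribed target."""
    measured = joint_angle(hip, knee, ankle)
    if abs(measured - target_deg) <= tolerance_deg:
        return f"Good rep: knee angle {measured:.0f} degrees"
    return f"Adjust: knee angle {measured:.0f} degrees, target {target_deg:.0f}"

# Example frame: hip, knee, and ankle positions taken from the skeletal stream.
print(check_squat_depth(hip=(0.0, 0.9, 2.0), knee=(0.0, 0.5, 1.9), ankle=(0.0, 0.1, 2.0)))
```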

    Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health. 

    Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.”  As a developer, he finds the improved skeletal tracking the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.”  And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited for the new color camera and depth sensor, which together provide a much better image for the physical therapist to review.  “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.

    But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor.  Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”

    From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Revealing Kinect for Windows v2 hardware


    As we continue the march toward the upcoming launch of Kinect for Windows v2, we’re excited to share the hardware’s final look.

    Sensor

    The sensor closely resembles the Kinect for Xbox One, except that it says “Kinect” on the top panel, and the Xbox Nexus—the stylized green “x”—has been changed to a simple, more understated power indicator:

    Kinect for Windows v2 sensor

    Hub and power supply

    The sensor requires a couple of other components to work: the hub and the power supply. Tying everything together is the hub (top item pictured below), which accepts three connections: the sensor, USB 3.0 output to PC, and power. The power supply (bottom item pictured below) does just what its name implies: it supplies all the power the sensor requires to operate. The power cables will vary by country or region, but the power supply itself supports voltages from 100–240 volts.

    Kinect for Windows v2 hub (top) and power supply (bottom)

    As this first look at the Kinect for Windows v2 hardware indicates, we're getting closer and closer to launch. So stay tuned for more updates on the next generation of Kinect for Windows.

    Kinect for Windows Team



  • Kinect for Windows Product Blog

    Swap your face…really


    Ever wish you looked like someone else? Maybe Brad Pitt or Jennifer Lawrence? Well, just get Brad or Jennifer in the same room with you, turn on the Kinect for Windows v2 sensor, and presto: you can swap your mug for theirs (and vice versa, of course). Don’t believe it? Then take a look at this cool video from Apache, in which two developers happily trade faces.

    Swapping faces in real time—let the good times roll

    According to Adam Vahed, managing director at Apache, the ability of the Kinect for Windows v2 sensor and SDK to track multiple bodies was essential to this project, as the solution needed to track the head position of both users. In fact, Adam rates the ability to perform full-skeletal tracking of multiple bodies as the Kinect for Windows v2 sensor’s most exciting feature, observing that it “opens up so many possibilities for shared experiences and greater levels of game play in the experiences we create.”

    Adam admits that the face swap demo was done mostly for fun. That said, he also notes that “the ability to identify and capture a person’s face in real time could be very useful for entertainment-based experiences—for instance, putting your face onto a 3D character that can be driven by your own movements.”

    Adam also stressed the value of the higher definition color feed in the v2 sensor, noting that Apache’s developers directly manipulated this feed in the face swap demo in order to achieve the desired effect. He finds the new color feed provides the definition necessary for full-screen augmented-reality experiences, something that wasn’t possible with the original Kinect for Windows sensor.
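    Apache’s code isn’t public, but “directly manipulating the color feed” can be illustrated with a toy example: swap two rectangular patches of a color frame centered on the two tracked head positions. The head coordinates and patch size below are made up; a real implementation would map each user’s tracked head joint into color-frame pixel coordinates first.

```python
# A toy illustration of manipulating the color feed directly: swap two square
# patches centered on the two users' head positions. Not Apache's implementation;
# the head coordinates and patch size are made up.
import numpy as np

def swap_face_patches(frame: np.ndarray, head_a, head_b, half: int = 60) -> np.ndarray:
    """Swap square regions of `frame` (H x W x 3) around two head pixel coordinates."""
    out = frame.copy()
    (ya, xa), (yb, xb) = head_a, head_b
    patch_a = frame[ya - half:ya + half, xa - half:xa + half].copy()
    patch_b = frame[yb - half:yb + half, xb - half:xb + half].copy()
    out[ya - half:ya + half, xa - half:xa + half] = patch_b
    out[yb - half:yb + half, xb - half:xb + half] = patch_a
    return out

# Fake 1080p color frame with two stand-in "heads" at known pixel locations.
color = np.zeros((1080, 1920, 3), dtype=np.uint8)
color[200:320, 500:620] = (255, 0, 0)    # person A's head region (red)
color[200:320, 1300:1420] = (0, 255, 0)  # person B's head region (green)
swapped = swap_face_patches(color, head_a=(260, 560), head_b=(260, 1360))
print("pixel at A after swap (now green):", swapped[260, 560].tolist())
```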

    Above all, Adam encourages other developers to dive in with the Kinect for Windows v2 sensor and SDK—to load the samples and play around with the capabilities. He adds that the forums are a great source of inspiration as well as information, and he advises developers “to take a look at what other people are doing and see if you can do something different or better—or both!”

    The Kinect for Windows Team

