• Kinect for Windows Product Blog

    Two hackathons—one city: creativity reigns in Austin, Texas


    Austin, Texas: capital of the Lone Star State, home to the Texas Longhorns, and host of not one but two Kinect for Windows hackathons in the past few weeks. We were blown away—like Texas tumbleweeds, you know—by the ingenuity and talent on display at these Austin events.

    NUI Central Kinect for Windows Hackathon

    Developers, UI/UX designers, and enthusiasts gathered in Austin for 24 hours of coding ingenuity using Kinect for Windows v2 on February 21. Austin Mayor Steve Adler kicked off the event, reminding everyone of Austin’s role as a technology hub and challenging the hackers to create their best innovations. Sponsored by Microsoft, the event was held at WeWork, a shared office space for startups; the venue offered a comfortable lounge and private offices for the hardworking devs, who coded through the night.

    The WeWork offices in Austin’s Historic District provided an inviting space for all-night hacking.

    All of that coding resulted in some truly innovative Kinect for Windows applications (and some bleary-eyed hackers). The output ranged from games to medical applications to productivity enhancers. It was tough to choose the winners, but, steeled in our resolve by some Texas-strength black coffee, our panel of judges selected the top three apps. Each winning team received a cash prize and Kinect for Windows v2 sensors.

    First place went to AR Sandbox, an onscreen, augmented-reality playground based on the infrared data collected by the Kinect sensor. When users manipulated a hand-held infrared reflective cube, the cube’s onscreen image transformed into a rubber duck or puppy. The app also created virtual rainstorms of rubber ducks and puppies. The user was able to interact with the ducks and puppies as onscreen objects.

    Coming in second was the Advanced Coma Patient Monitoring System, which is intended to keep watch on comatose patients, generating alerts and recording events to a video file.

    The third-place winner was I'm Hungry, an app that integrates Kinect and Skype, allowing callers to play a mini-game during a Skype call.

    Inspired by the resourcefulness on display at the NUI Central Kinect for Windows Hackathon, we were eager to get back to Austin for the SXSW Music Hackathon. Luckily, we had fewer than four weeks to wait.

    SXSW Music Hackathon Championship

    Wednesday, March 18, found the Kinect for Windows team back in Austin for the start of the 2015 SXSW Music Hackathon Championship, where world-class hackers, designers, and programmers competed to create innovations for musicians, the music industry, and, of course, the fans. With their programming know-how and a collection of music-tech APIs at their disposal, competing teams had 24 hours to work on their prototypes and compete for the $10,000 grand prize. Among the Microsoft APIs available to the hackers were the Kinect for Windows SDK and the recently released Microsoft Band SDK.

    Developers got a chance to learn about the APIs and meet the sponsors before the hackers pitched their ideas to recruit team members. Once the teams were formed, everyone quickly set to work creating music innovations.

    The Kinect v2 sensor and the Microsoft Band added a unique flair to the hackathon. Teams tested their apps throughout the night by dancing in front of the Kinect sensor—when they weren’t busy doing laps to check their heart rate with the Band. These Microsoft products brought an interactive element that intensified the energy level throughout the night.

    The SXSW Music Hackathon Championship was a beehive of coding activity, as developers raced the clock to create music apps.

    Adding to the excitement of the late-night hackathon was a surprise performance by Boyfriend69, a talented entertainer who drew the developers to the front of the room, where she mingled and danced with them. Her show gave off a high-voltage vibe that kept the devs working through the night in true hackathon spirit.

    Entertainer Boyfriend69’s surprise performance got the hackers up and mingling.

    As dawn broke on March 19, the developers had fewer than eight hours to finish their projects before presenting them to qualify for the finals. While the last minutes of hacking ticked away, the teams feverishly polished their presentations. Here are the apps that emerged from the hackathon’s 24 hours of frenzied creativity:

    Dandelion, a one-man project, used Rdio and last.fm to create a QR code that aggregates listening data for display on an Apple Watch. When a user scans the code from another watch, Dandelion surfaces the song being listened to, using Rdio to play full songs or other services to present 30-second previews.


    MusicMap.io, built by an Austin-based team, is similar to the Meerkat live-streaming app, but for music. MusicMap allows anyone to broadcast geo-tagged video and plot it on a map. With this service, users can discover new music from all over the world. MusicMap uses Stream.me as a live streaming service.


    KYM (an acronym for Know Your Music), presented by Vince Davis, goes through the existing library on a user’s phone and gathers relevant information about the music by using APIs from various sources. Users can also hook up the app to Apple TV or the Apple Watch, so when they’re listening to music at home, the app shows relevant tweets from the artist.


    SetStory aims to solve a problem in festival logistics. Currently, no tool exists that quantitatively evaluates the potential of an event's success based on its artists. By using OpenAura to grab information from various social feeds, SetStory calculates a quantifiable score that gives festival promoters and organizers a reliable gauge of an event's financial viability.


    Groupie helps users find promising new artists in their local city. Users can also look at data from other cities, in case they want to discover the hot new bands from places near and far. Groupie uses the Rdio API to play the music and the Echonest API to look up the band's locale.


    Bandarama is a workout tool that provides video and audio feedback on the user’s exercise performance. If you're running, for example, and your heart rate slows down, the tempo of the music will slow down, too, signaling you to pick up the pace. Team members Boris Polania and Guillermo Zambrano ran in circles around the room to demonstrate that once you start running faster again, the tempo of the music speeds back up and an applause sound effect provides extra motivation.
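    The feedback loop Bandarama demonstrated is easy to sketch. The code below is our own illustration, not the team's; the target heart rate, the clamping bounds, and the function name are all assumptions:

```python
# Illustrative sketch of a heart-rate-driven tempo control (our own code,
# not Bandarama's): scale the music's playback rate by the ratio of the
# runner's current heart rate to a target rate, clamped to a sane range.

def tempo_multiplier(heart_rate_bpm, target_bpm=150, lo=0.5, hi=1.5):
    """Return a playback-rate multiplier for the music engine."""
    ratio = heart_rate_bpm / target_bpm
    return max(lo, min(hi, ratio))

# A runner slowing down hears slower music; speeding up restores the tempo.
print(tempo_multiplier(120))  # below target: music slows to 0.8x
print(tempo_multiplier(150))  # on target: normal speed, 1.0x
print(tempo_multiplier(240))  # clamped at the 1.5x upper bound
```

    In a real app the multiplier would feed a time-stretching audio engine so the pitch stays constant while the tempo changes.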


    Divebomb uses the Kinect for Xbox One sensor to bring users into the music through virtual reality. As songs play, notes fly across the screen and the user can move his or her avatar to hit the notes as they race across the screen.


    Mashr takes two different songs and then mashes them together by using the Gracenote API. It also ties into the Musicnote API, which helps determine if two different songs will work well together.

    (List and descriptions from William Gruger, social/streaming charts manager for Billboard)

    The judges faced a tough job, as only five of these presenters would advance to the finals on Friday. But the intrepid judges were up to the task, selecting Bandarama, Mashr, MusicMap, KYM, and Dandelion to advance.

    On Friday, a celebrity panel of judges, consisting of Ty Roberts (Gracenote), Alex White (Next Big Sound), Jonathan Dworkin (Warner Music Group), Bryan Calhoun (Blueprint), Eric Sheinkop (Music Dealers), Jonathan Hull (Facebook), Todd Hansen (SXSW), and Marc Ruxin (Rdio) reviewed the finalists’ projects and selected the winner.

    Dandelion took top honors, winning the 2015 SXSW Music Hackathon and its $10,000 grand prize. But the big winners are music lovers, who will undoubtedly enjoy some of the great innovations created by the event’s hackers, sponsors, and artists.

    Microsoft unveiled some exciting new APIs at the SXSW Music Hackathon. These included the Neon Hitch API, which enabled artist-in-residence Neon Hitch to close out her stage show with a Kinect v2-enabled creative visual accompaniment to her song “Sparks.” Meanwhile, artist-in-residence Robert DeLong worked with Ableton and Microsoft, two of the hackathon's major sponsors, to turn his body into an instrument, which he then used on stage during his shows, including his set at the YouTube space. Another novel API was DJ Windows 98, an homage to the long-gone Microsoft operating system. It used a vintage CRT monitor controlled by the audience via Kinect for Windows.

    As we left Austin for the second time in less than a month, we carried away memories of the creative energy we witnessed at both the NUI Central Kinect Hackathon and the 2015 SXSW Music Hackathon Championship.

    The Kinect for Windows Team


    Kinect gets artistic


    While we’ve always thought that Kinect for Windows was a work of art, figuratively speaking, we are delighted to see the art world embracing the Kinect sensor as a creative tool. Two highly imaginative artistic uses of Kinect for Windows recently caught our attention, and we want to share them with you.

    The first is a series of photographs by Israeli artist Assaf Evron, displayed at the Andrea Meislin Gallery in New York City from March 7 to April 25, 2015. Titled Visual Pyramid after Alberti, Evron’s striking photos show the interplay of light on everyday objects. The light is actually from the infrared spectrum emitted by the Kinect sensor. Using a separate infrared camera, Evron captures the Kinect-emitted infrared light as it’s reflected off the objects he’s photographing. The resulting images are a bold purple with a dense overlay of points of reflected infrared light.

    This photograph, which captures reflected infrared light emitted by a Kinect sensor, is part of artist Assaf Evron’s Visual Pyramid after Alberti, 2013–2014.

    (Copyright Assaf Evron. Photograph courtesy Andrea Meislin Gallery, New York.) 

    The photographs were inspired by the aesthetic philosophy of Renaissance thinker Leon Battista Alberti, who described a theory of linear perspective in his 1436 treatise Della pittura (On Painting). Alberti provided the mathematical underpinnings of perspective, showing how to render a three-dimensional illusion on a two-dimensional canvas. Evron’s photographs demonstrate Alberti’s theory in dramatic fashion.

    Once you’ve stopped pondering Alberti’s ideas, we have a new brainteaser for you: what do you get when you mix performance art, experimental filmmaking, and an avant-garde music composition? Well, throw in two Kinect v2 sensors, some computers, and the right software, and you get as-phyx-i-a, an otherworldly movie that, in the words of its creators, “…is centered in an eloquent choreography that stresses the desire to be expressive without bounds.”

    The work of co-directors Maria Takeuchi and Frederico Phillips, the three-minute film renders the sinuous dancing of performance artist Shiho Tanaka as a glowing array of light points and spidery connections, all set to a haunting electronic score. The visuals and music are both eerie and beautiful, as the dancer’s image, which seems both digital and human simultaneously, moves gracefully across the screen.

    Phillips was responsible for the visuals, capturing some 30 minutes of Tanaka’s dancing as a mesh of point-cloud data using two Kinect v2 sensors. The data from both sensors was combined and then styled with various 3D tools to create the ethereal images in the final film. Composer Takeuchi used a variety of digital and analog techniques to create the original soundtrack that accompanies the visuals.
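    The blog doesn’t say how the two sensors’ data were merged, but combining point clouds generally means transforming one sensor’s points into the other’s coordinate frame before concatenating. Below is a minimal sketch under that assumption, with the second sensor’s pose simplified to a yaw rotation plus a translation; all names and numbers here are illustrative, not the filmmakers’ pipeline:

```python
import math

# Simplified sketch of merging two Kinect point clouds: bring sensor B's
# points into sensor A's frame, then concatenate. A real rig would
# calibrate a full 6-degree-of-freedom pose, not just a yaw angle.

def transform_point(p, yaw_deg, t):
    """Rotate a point about the vertical (Y) axis, then translate."""
    x, y, z = p
    a = math.radians(yaw_deg)
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    return (xr + t[0], y + t[1], zr + t[2])

def merge_clouds(cloud_a, cloud_b, yaw_deg, t):
    """Return cloud A plus cloud B re-expressed in A's coordinate frame."""
    return cloud_a + [transform_point(p, yaw_deg, t) for p in cloud_b]

# A second sensor placed 90 degrees around the dancer, 2 m along A's Z axis:
merged = merge_clouds([(0.0, 1.0, 2.0)], [(1.0, 1.0, 0.0)], 90.0, (0.0, 0.0, 2.0))
```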

    As Visual Pyramid after Alberti and as-phyx-i-a show, Kinect for Windows can be a potent artistic tool in the right creative hands.

    The Kinect for Windows Team


    Kinect comes to the aid of golfers


    Virtual golfers may be eagerly anticipating the upcoming EA Sports PGA Tour for Xbox One, which is slated to include Kinect for Windows motion controls. But real golfers—the ones who actually hit the links—don’t have to wait to enjoy the golfing benefits of Kinect. They can use the power of the Kinect sensor to capture, analyze, and improve their golf swing, thanks to two golf-swing analysis products from Belgium-based Guru Training Systems: My Swinguru and Swinguru Pro.

    My Swinguru

    Designed for use by serious amateur golfers, My Swinguru detects flaws in the user’s golf swing and offers remedies. The golfer simply takes a swing in front of the Kinect sensor, which captures the entire motion in 3D. There are no wires or markers to interfere with the golfer’s swing—just one Kinect sensor that provides state-of-the-art, three-dimensional motion capture.

    Designed for use by amateur golfers, My Swinguru uses a single Kinect sensor to capture the golfer's
    swing in three dimensions.

    The data collected by the Kinect sensor is crunched by a Windows PC running the Swinguru software, which uses a unique combination of synchronized 2D and 3D captures to measure key elements of the swing at any moment, something not possible with traditional video techniques. The golfer receives immediate feedback that detects flaws and recommends remedial drills. And My Swinguru automatically records the golfer’s swings for comparative replay.

    Swinguru Pro

    Designed for use by professional golf instructors, Swinguru Pro provides simultaneous top, side, and front views on the same screen. Pause, forward, and back controls allow instructors to drill down frame by frame. Each training session, with all its swing data, is automatically saved, so it can be replayed and compared to earlier or later sessions. The Pro version also allows swing motions to be recorded as a series of pictures, which enables a sophisticated “match-your-posture” function. This function freezes the golfer’s setup, 9 o’clock, and top-of-backswing positions for comparison and direct feedback. In addition, Swinguru Pro provides balance tracking, including a view that shows the golfer’s center of mass displacement during the swing. It also includes automated drawing tools, which make it easy for users to compare body position in swing after swing.

    As demonstrated in this video, Swinguru Pro provides additional analyses for use by teaching professionals.
    Like My Swinguru, it uses just one Kinect sensor, which enables wireless motion capture.

    Enhanced with Kinect for Windows v2

    Initially developed for use with the original Kinect for Windows sensor, both versions of Swinguru have now been adapted for use with Kinect for Windows v2. Guru Training Systems CEO Sabastien Wulf is delighted with the improvements enabled by the new sensor. “The Kinect v2 sensor is a revolution for our use in sports motion analysis. Not only does the v2 sensor use a wider angle time-of-flight camera, which allows us to reduce the minimum distance from the sensor for full body tracking, it also increases the image resolution tremendously, which enables a much enhanced user experience. What’s more, its new infrared time-of-flight depth sensor, combined with its new infrared illuminator, makes it so much more resistant to direct sunlight for 3D full body tracking.”

    With help from Kinect for Windows and Swinguru, this golf season could be your best yet.

    The Kinect for Windows Team


    Windows Store provides new market for Kinect apps


    In case you hadn't noticed, the Windows Store added something really special to its line-up not too long ago: its first Kinect applications. The ability to create Windows Store applications had been a longstanding request from the Kinect for Windows developer community, so we were very pleased to deliver this capability through the latest Kinect sensor and the public release of the Kinect for Windows software development kit (SDK) 2.0.

    The ability to sell Kinect solutions through the Windows Store means that developers can reach a broad and heretofore untapped market of businesses and consumers, including those with an existing Kinect for Xbox One sensor and the Kinect Adapter for Windows. Here is a look at three of the first developers to have released Kinect apps to the Windows Store.

    Nayi Disha – getting kids moving and learning

    You wouldn’t think that Nayi Disha needs to broaden its market—the company’s innovative, Kinect-powered early education software is already in dozens of preschools and elementary schools in India and the United States. But Nayi Disha co-founder Kartik Aneja is a man on a mission: to bring Nayi Disha’s educational software to as many young learners as possible. “The Windows Store gives us an opportunity to reach beyond the institutional market and into the home market. What parent doesn’t want to help their child learn?” asks Aneja, somewhat rhetorically. In addition, deployment in the Windows Store could help Nayi Disha reach schools and daycare centers beyond those in the United States and India.

    Parents and teachers who discover Nayi Disha in the Windows Store will be impressed by its creative approach to learning. Based on Howard Gardner’s widely acclaimed theory of multiple intelligences, Nayi Disha appeals to children who learn best through movement, music, and storytelling. Each lesson teaches an important skill, such as learning to count or recognizing common foods.

    Comparisons with Kaju, available in the Windows Store, teaches children about number values. Here, we see "Gator" swimming through the app's main menu.

    These lessons are imparted through stories featuring Kaju, a friendly space-traveling alien, whose adventures and misadventures get the kids up and moving—and learning. For example, in one story Kaju is ejected from his spaceship and lands on an interstellar number line. To return to the spacecraft, he must jump sequentially from digit to digit, until he reaches a specified number. But here’s the rub: Kaju only jumps if the kids jump and call out the correct numbers. This, of course, is where the Kinect sensor comes into play. The sensor sees the children jumping and hears them counting, and Kaju responds accordingly. Watching a roomful of preschoolers joyfully leap and count as they work to get their alien friend back to his space capsule, you can see how Nayi Disha makes learning fun. The youngsters are acquiring important skills, but all they know is they’re having fun. In fact, their identification with Kaju is so strong that one little girl referred to Aneja as “Kaju’s papa.”
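    To make the sensing concrete: detecting a jump from Kinect skeletal data can be as simple as watching a tracked joint’s height cross a threshold above a standing baseline. The sketch below is our own hypothetical code, not Nayi Disha’s; the joint choice, thresholds, and function name are assumptions:

```python
# Hypothetical jump counter: flag a jump when a tracked joint's height
# rises well above a standing baseline, with hysteresis so one jump
# isn't counted twice while the child is still in the air.

def detect_jumps(heights_m, baseline_m, threshold_m=0.12):
    """Count upward crossings of baseline + threshold in a height series."""
    jumps, airborne = 0, False
    for h in heights_m:
        if not airborne and h > baseline_m + threshold_m:
            jumps += 1
            airborne = True
        elif airborne and h < baseline_m + threshold_m / 2:
            airborne = False  # landed; ready for the next jump
    return jumps

# Two jumps in a noisy per-frame height trace (metres above the floor):
trace = [0.90, 0.91, 1.10, 1.12, 0.92, 0.90, 1.08, 0.91]
print(detect_jumps(trace, baseline_m=0.90))  # prints 2
```

    A production app would combine this with speech recognition so Kaju advances only when the jump and the spoken number arrive together.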

    YAKiT: bringing animation to the masses

    It doesn’t take much to get Kyle Kesterson yakking about YAKiT—the co-founder and CEO of the Seattle-based Freak’n Genius is justifiably proud of what his company has accomplished in fewer than three years. “We started with the idea of enabling anybody to create animated cartoons,” he explains. But then reality set in. “We had smart, creative, funny people,” he says, “but we didn’t have the technology that would allow an untrained person to make a fully animated cartoon. We came up with a really neat first product, which let users animate the mouth of a still photo, but it wasn’t the full-blown animation we had set our sights on.”

    Then something wonderful happened. Freak’n Genius was accepted into a startup incubation program funded by Microsoft’s Kinect for Windows group, and the funny, creative people at YAKiT began working with the developer preview version of the Kinect v2 sensor.

    Now, Freak’n Genius is poised to achieve its founders’ original mission: bringing the magic of full animation to just about anyone. Its Kinect-based technology takes what has been highly technical, time consuming, and expensive and makes it instant, free, and fun. The user simply chooses an on-screen character and animates it by standing in front of the Kinect v2 sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character. What’s more, with the ability to create Windows Store apps, Kinect v2 stands to bring Freak’n Genius’s full animation applications to countless new customers.

    YAKiT's Kinect-powered app makes it possible for anyone to create humorous animations in real time.

    “When we tested the Kinect-based product with users, they loved it,” says Kesterson. “We had a couple of teenaged girls create animated foods—like apples and broccoli—for a school report on nutrition. They got so into the animation that soon they were making fruit-attacking zombies, and before we knew it, they were on the floor from laughing so hard. Their mother said to me ‘I’ve got to get this.’ That’s when I knew that we’d have a winner in the Windows Store.”

    3D Builder: commoditizing 3D printing

    As any tech-savvy person knows, 3D printing holds enormous potential—from industry (think small-batch manufacturing) to medicine (imagine “bio-printing” of body parts) to agriculture (consider bio-printed beef). Not to mention its rapid emergence as a source of home entertainment and amusement, as in the printing of 3D toys, gadgets, and gimcracks. It was with these capabilities in mind that, last year, Microsoft introduced the 3D Builder app, which allows users to make 3D prints easily from a Windows 8.1 PC.

    Now, 3D Builder has taken things to the next level with the incorporation of the Kinect v2 sensor. “The v2 sensor generates gorgeous 3D meshes from the world around you,” says Kris Iverson, a principal software engineer in the Windows 3D Printing group. “It not only provides precise depth information, it captures full-color images of people, pets, and even entire rooms. And it scans in real scale, which can then be adjusted for output on a 3D printer.”
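    Scanning “in real scale” and then adjusting for the printer amounts to a uniform rescale of the mesh. The following is a small illustrative sketch of that idea (our own code, not 3D Builder’s); the vertex format and scale factor are assumptions:

```python
# Illustrative rescale of a scanned mesh: shrink every vertex toward the
# mesh centroid so a life-size capture fits a desktop print bed. Faces
# index into the vertex list and therefore need no change.

def scale_mesh(vertices, factor):
    """Uniformly scale (x, y, z) vertices about their centroid."""
    n = len(vertices)
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    return [
        (cx + (x - cx) * factor, cy + (y - cy) * factor, cz + (z - cz) * factor)
        for x, y, z in vertices
    ]

# A 1.8 m tall scan reduced to one-tenth scale for printing:
scan = [(0.0, 0.0, 0.0), (0.0, 1.8, 0.0), (0.2, 0.9, 0.1)]
mini = scale_mesh(scan, 0.1)
```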

    3D Builder uses Kinect v2 to create accurate, three-dimensional models, ready for 3D printing.

    Beyond its scanning fidelity, the Kinect-enabled version of 3D Builder also lets users automatically refine and repair their 3D models prior to printing. In addition, it gives them the power to manipulate the print image, combining or deleting objects or slicing them into pieces. Users can see the reconstruction as it happens and revise it on the fly.

    The Kinect-enabled version of 3D Builder is now available in the Windows Store, opening up the enhanced possibilities of three-dimensional printing to both home users and a wider audience of professionals. For home users, the app enables the creation of 3D portraits and gadgets that are more realistic. Hobbyists can print through their Windows 8.1 computer directly to their own 3D printer, or they can send the data via the cloud to a 3D printing service. For professionals, most of whom will likely use cloud-based printing services, 3D Builder offers the potential to print in a range of materials, including plastics, ceramics, and metals.

    While home enthusiasts seem the most likely first adopters of the new app, the appeal to professionals is clear. For example, Iverson recounts an experience when he was showing 3D Builder at Maker Faire New York last year. An event planner asked him where she might get the app, which she mused would be perfect for creating 3D mementos at weddings and bar mitzvahs. To Iverson, this is just the tip of the iceberg. “The Kinect v2 version of 3D Builder and its availability in the Windows Store really puts the pieces together, making a complex technology super simple for anyone.”

    Nayi Disha, YAKiT, and 3D Builder represent just a thin slice of the potential for Kinect apps in the Windows Store. Whether the apps are educational, entertainment, or tools, as in these three vignettes, or intended for healthcare, manufacturing, retailing, or other purposes, Kinect v2 and the Windows Store offer a new world of opportunity for both developers and users.

    The Kinect for Windows Team


    Using Kinect to monitor Parkinson’s patients


    Swedish neurologists and software developers have teamed up to create a potentially groundbreaking way to monitor patients with Parkinson’s disease. The secret ingredient? Kinect for Windows.

    Softronic, an IT management and consulting company headquartered in Stockholm, joined forces with the renowned Karolinska University Hospital to create an easy-to-use, affordable way to follow up remotely with Parkinson’s patients. Taking advantage of Kinect’s depth sensing, high-definition video, and skeletal tracking, the application can assess five movements based on the Unified Parkinson’s Disease Rating Scale (UPDRS), a widely used tool for assessing the status of Parkinson’s patients. For example, the Kinect application measures finger taps (tapping the thumb with the index finger in rapid succession), leg agility (raising the leg and tapping the heel on the ground in rapid succession), and rapid alternating movements of the hands (simultaneously moving the hands vertically and horizontally). The software analyzes the movement data and presents the results in an interface for the physician’s assessment.
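    As a concrete, hedged illustration of one such measurement: finger taps can be counted from the per-frame distance between thumb and index fingertips, yielding a tap rate that clinicians can compare over time. This is our own sketch, not Softronic’s code; the thresholds, frame rate, and function name are assumptions:

```python
# Hypothetical finger-tap measurement: count thumb-index closures in a
# distance-over-time series, using hysteresis (separate close and open
# thresholds) so jitter near one threshold isn't counted as extra taps.

def tap_rate(distances_m, frame_rate_hz, close_m=0.02, open_m=0.04):
    """Return taps per second from fingertip distances sampled per frame."""
    taps, closed = 0, False
    for d in distances_m:
        if not closed and d < close_m:
            taps += 1
            closed = True
        elif closed and d > open_m:
            closed = False
    return taps / (len(distances_m) / frame_rate_hz)

# One second of frames at 30 fps in which the fingers close twice:
trace = [0.05] * 8 + [0.01] * 4 + [0.05] * 8 + [0.01] * 4 + [0.05] * 6
print(tap_rate(trace, 30.0))  # prints 2.0
```

    A slowing tap rate across sessions is the kind of trend the physician-facing interface could surface for review.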

    The project's interface helps the physician easily assess the patient's status.

    Parkinson’s patients routinely visit their physician just once or twice a year, but the Kinect-based application lets doctors remotely monitor patients on an as-needed basis. It thus enables the physician to focus on those patients whose condition merits additional remote follow-up and perhaps a change in medication—or even a trip to the clinic or hospital.

    The system measures the patient's finger-tap ability, one of the standard measurements used to assess the motor function of Parkinson's patients.

    Another advantage of the system is its affordability, the Kinect sensor being far less costly than standard telemedicine equipment. Moreover, the presence of a Kinect sensor in the household does not indicate that someone is suffering from a disease, a comfort to patients who prefer to keep their condition private.

    The system is currently undergoing clinical trials in Sweden, and the doctors and developers are already exploring additional functionality, particularly gamification, which could make rehabilitation exercises more fun. They are also looking into using the system to help educate physicians about Parkinson’s disease and to provide data for second opinions.

    The Kinect for Windows Team


    Kinect helps bring technology to Chinese classrooms and community centers


    NuiStar, a software development company based in Nanjing, China, has utilized the Kinect sensor and SDK (software development kit) to bring cutting-edge technology to China’s schools and community centers. The centerpiece of the company’s offerings is their natural user interface, or NUI, which is built around the Kinect sensor’s ability to recognize and respond to users’ natural gestures and voice commands. This user-friendly capability truly makes NUI the star of the NuiStar software.

    In NuiStar’s Joystar games, children use intuitive body movements to "play" their way through educational content. Watch the NuiStar video.

    The value of natural human-computer interactions is illustrated dramatically in NuiStar’s educational products, which provide digital content for preschool and elementary school levels. Consider the company’s Joystar Preschool Game Series, a collection of 20 game-like apps that encourage youngsters to use intuitive body movements to “play” their way through the content, thereby engaging the students in an immersive and enjoyable learning process.

    The body-controlled interactions come naturally to the children, thus reducing the time they spend learning how to play the game and providing educators with a robust and easy-to-implement teaching tool. Moreover, the interface is innately engaging and satisfying to young students, who enjoy being active in the classroom. And Joystar’s games support multiple players, a feature that not only boosts participation rates but also builds cooperative and team-based skills.

    In addition, since these Kinect-based, gamified apps get students up and moving, they help to meet China’s requirement that physical activity be integrated into the curriculum. The whole-body engagement inherent in NuiStar’s educational content keeps students physically active even when weather or time constraints preclude traditional outdoor exercises. In initial assessments, a group of 300 preschoolers using NuiStar’s software showed a 33% increase in average exercise time. NuiStar is now working with teachers to implement more robust methodologies to evaluate the efficacy of the games’ educational content.

    The NuiStar team has also developed two Kinect-enabled, gesture-controlled training apps for students: one on fire safety and the other on pedestrian safety. The fire safety app tests the student’s knowledge of how to evacuate a burning school and monitors his or her behaviors during a simulated fire in a school building. In the pedestrian safety app, students must walk through a simulated city street scene, practicing traffic safety rules in this virtual-reality environment. By enabling students to respond naturally by using gestures, these pilot apps help them learn how to cope with dangerous situations through an interactive trial-and-error method that is more vivid and engaging than passive instructional methods.

    While enhancing learning is NuiStar’s current focus, the company is also bringing its NUI technology to the community technology centers that have become commonplace in China’s towns. These centers are designed to provide local residents with access to technology and services that are otherwise out of their reach. But because many rural residents and older citizens are unfamiliar with modern technology, center employees often spend a great deal of time showing them how to use the hardware and software and putting them at ease with technology. NuiStar’s Kinect-based programs address these issues in a novel way, reassuring new users with an innately natural method of interaction. To new users, waving an arm or making a gesture is infinitely more accessible than typing on a keyboard or using a mouse to navigate menu options.

    NuiStar's Kinect-enabled gesture controls help patrons at a community tech center intuitively navigate unfamiliar technology. The patrons here are watching a video of the equestrian event from last summer’s Youth Olympic Games in Nanjing.

    Recently, NuiStar added a content management interface tool that lets teachers and community center employees integrate their current multimedia resources into the Kinect system. This further reduces the learning curve for students and local residents, making technology more accessible and useful.

    From the classroom to the community center, NuiStar is working to make technology intuitive and interactive for children and adults.

    The Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    Kinect-powered experiences bring people together


    The following blog was guest authored by Kyle Banuelos, co-founder of Stublisher Inc., a startup that engineers interactive experiences for cutting-edge campaigns, events, spaces, and connected devices.

    Mini-computers, powerful microcontrollers, and sensors (such as Kinect) are becoming increasingly affordable and customizable. Frameworks and libraries built for these technologies are typically open source and written in approachable, high-level languages. This fundamental democratization of technology has spawned a generation of “creative technologists,” hybrid artists/coders whose role is largely experimental. As budding members of this creative coding community, we at Stublisher develop software that thrives at the interplay of physical environments and human participation. These projects range from unique brand activations to purely artistic endeavors.

    We are fascinated by Kinect’s affordance of participatory experiences, because we believe that interactivity as a medium promotes understanding, meaning, and a lasting impression. We are particularly interested in exploring how to harness the artistic potential of the Kinect sensor to further our mission of connecting people through shared experience. This objective is the basis for everything we create, including our latest Kinect for Windows projects: Branch and Pinscreen. Both of these Kinect-enabled experiences fostered individual and collaborative exploration, interpretation, and expression.


    Stublisher was commissioned to build an interactive art piece for a 2014 Halloween extravaganza in Portland, Oregon. The result was Branch—an evolving, illusionary light sculpture that tracked and outlined viewers’ bodies via a Kinect sensor, using the information to create an interactive, explorative experience. From the outset, we focused our conceptualization on three specific questions:

    1. How can light physically and emotionally transform a space?
    2. How can we trick the eye into perceiving a three-dimensional structure as flat?
    3. How can movement drive an experience?

    Through our initial conversations, we arrived at a central theme that drove the key aesthetic attributes and development of our piece: altered perception.

    Technically, a single Processing sketch that included a rudimentary mapping utility allowed us to position a series of virtual LED strands on a two-dimensional plane. Information from the Kinect data stream was returned in real time, updating the actual LED strands, which were driven by a network of LED-specific microcontrollers. A computer was housed inside a custom-fabricated, matte black enclosure, where the Kinect sensor was discreetly positioned.
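    The core of that mapping can be pictured in a few lines. The sketch below is a minimal Python illustration rather than the Processing sketch the team actually wrote, and every name and coordinate in it is hypothetical: virtual LEDs are placed on a 2D plane, and each one lights when a tracked body-outline point comes within a given radius of its position.

    ```python
    import math

    def build_led_strand(start, end, count):
        """Place `count` virtual LEDs evenly along a 2D line segment."""
        (x0, y0), (x1, y1) = start, end
        return [(x0 + (x1 - x0) * i / (count - 1),
                 y0 + (y1 - y0) * i / (count - 1)) for i in range(count)]

    def light_leds(leds, outline_points, radius):
        """Return an on/off state per LED: on if any tracked body-outline
        point falls within `radius` of the LED's mapped position."""
        return [any(math.hypot(lx - px, ly - py) <= radius
                    for px, py in outline_points)
                for lx, ly in leds]

    # A 5-LED strand from (0, 0) to (4, 0); tracked outline points sit
    # near the left end, so only the nearby LEDs switch on.
    strand = build_led_strand((0, 0), (4, 0), 5)
    state = light_leds(strand, [(0.2, 0.1), (1.1, -0.2)], radius=0.5)
    ```

    In the real piece, the on/off decisions were pushed out to the LED-specific microcontrollers each frame as the Kinect body outline updated.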

    In addition to transforming this otherwise static work into an expressive canvas by tracking viewers’ movements, Kinect for Windows empowered people to find meaning in Branch through connection with their peers around a shared interest in dance expression.


    Recently, Portland organized its inaugural Startup Week, a five-day celebration of our local community. A series of independently organized and managed events took place across the city, and we hosted the closing party at Stublisher’s headquarters. With world-renowned DJ MICK on the decks, we wanted to further encourage community celebration and concert participation within our space.

    This was a great opportunity to exhibit and gather feedback on a smaller, internal experiment that we had hacked together, which was inspired by pinscreen animation—a technique that uses a surface filled with movable pins to create textures and shapes. The system that powered Pinscreen was built in TouchDesigner, using the depth data from a Kinect v2 sensor to drive the motion of hundreds of particles through a 3D scene. Incoming data was smoothed to create fluid, procedural motion that subtly augmented the architecture of our office.
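    The smoothing step mentioned above is commonly something like an exponential moving average over successive depth readings. The following Python sketch is a hedged illustration of that idea, not the team’s actual TouchDesigner network: each incoming frame is blended with the running value, so the particle motion stays fluid instead of jittering with sensor noise.

    ```python
    def smooth_depth(frames, alpha=0.3):
        """Exponentially smooth successive depth readings.

        `alpha` is the weight given to each new frame (0..1); lower
        values produce smoother but laggier motion.
        """
        smoothed = []
        current = None
        for frame in frames:
            if current is None:
                current = list(frame)  # first frame passes through as-is
            else:
                current = [alpha * new + (1 - alpha) * old
                           for new, old in zip(frame, current)]
            smoothed.append(list(current))
        return smoothed

    # Three noisy frames for a 2-pixel depth strip (values in millimetres).
    frames = [[1000.0, 2000.0], [1100.0, 1900.0], [1050.0, 1950.0]]
    result = smooth_depth(frames, alpha=0.5)
    ```

    The smoothed values would then drive the z-offset of each particle in the 3D scene, one particle per sampled depth pixel.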

    Pinscreen resonated strongly with the audience, fostering choreographed, collaborative manipulation of the piece. It taught us that sometimes the simplest applications can not only delight but also achieve our objective: to connect people through shared experience.

    The various experimental and practical applications of the Kinect hardware by the creative coding community serve as continuous inspiration for our own interactive experiments at Stublisher, and we’re eager to further explore Kinect’s potential for cultivating shared experiences.

    Kyle Banuelos, Co-founder, Stublisher

    Key links

  • Kinect for Windows Product Blog

    Helping doctors focus on what matters


    We’ve grown accustomed to seeing the Kinect sensor put to amazing uses in healthcare—from stroke rehabilitation, to fall prevention, to seizure monitoring—but even we were surprised to learn that Kinect for Windows is now being used to watch the doctor! That’s right: researchers at the University of California, San Diego (UCSD) are using the Kinect for Windows sensor as part of an innovative tool to monitor how physicians interact with patients during consultations.

    The project, called Lab-in-a-Box, is the brainchild of UCSD researcher Nadir Weibel and his colleagues at the San Diego Veterans Affairs (VA) Medical Center.  Designed to fit unobtrusively in a doctor’s office, the apparatus detects when physicians are so focused on their computer screen that they fail to establish eye contact and one-to-one rapport with the patient.  The Kinect sensor plays a key role in the process, as its depth camera accurately records the movements of the physician’s head and body. An independent eye-tracker device detects the doctor’s gaze, while a microphone picks up the doctor-patient conversation.  The Lab-in-a-Box software merges all of these data streams and synchronizes them with data on the doctor’s computer usage, combining everything to give a detailed picture of the physician’s interaction with the patient.
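    One simple way to picture that merging step: each instrument emits timestamped events, and the software orders them into a single chronological timeline. The Python sketch below is purely illustrative — the stream names and events are invented, and the real Lab-in-a-Box software is far more involved:

    ```python
    def merge_streams(**streams):
        """Merge several (timestamp, value) event streams into one
        chronologically ordered list of (timestamp, source, value)."""
        tagged = [(ts, name, value)
                  for name, events in streams.items()
                  for ts, value in events]
        return sorted(tagged)  # tuples sort by timestamp first

    # Hypothetical samples from the sensors described above
    # (timestamps in seconds since the start of the consultation).
    timeline = merge_streams(
        gaze=[(0.10, "screen"), (0.50, "patient")],
        skeleton=[(0.20, "head_turn")],
        audio=[(0.30, "doctor_speaks")],
    )
    ```

    Once the events share one timeline, they can be lined up against the computer-usage log to see, for instance, how long the doctor’s gaze stayed on the screen while the patient was speaking.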

    The Kinect sensor captures a video and depth image; the latter is then overlaid with joint and
    gaze estimation based on yaw, roll, and pitch.

    Weibel, who is a research assistant professor in the Department of Computer Science and Engineering, devised the Lab-in-a-Box in part to help physicians deal with the ever-expanding barrage of digital medical records. With so much digital information about the patient, doctors can find themselves staring at their computer or tablet screen, instead of focusing attention on the patient.

    The Lab-in-a-Box, which will be used only with the consent of both the doctor and the patient, is currently being tested at the UCSD Medical Center and the San Diego VA Medical Center as part of a study funded by the Agency for Healthcare Research and Quality and directed by physician and researcher Zia Agha. Its developers hope that it will help medical personnel run their practices more efficiently and with greater doctor-patient rapport.

    The Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    No bones about it: Kinect for Windows v2 skeletal tracking vastly better


    You can read about the improvements that Kinect for Windows v2 offers over its predecessor, but seeing the differences with your own eyes is really, well, eye-opening—which is why we’re so pleased by this YouTube video posted by Microsoft MVP Josh Blake of InfoStrat. In it, Blake not only describes the improvements in skeletal tracking provided by the v2 sensor and the preview SDK 2.0 (full release of SDK 2.0 now available), he actually demonstrates the differences by showing side-by-side comparisons of himself and others being tracked simultaneously with the original sensor and the more robust v2 sensor.

    As Blake shows, the v2 sensor tracks more joints, with greater anatomical precision, than the original sensor. His video also highlights the major improvements in hand tracking that the v2 sensor and SDK 2.0 provide, and, with the help of two colleagues, he demonstrates how Kinect for Windows v2 can track more bodies than was possible with the original sensor and prior releases of the SDK.

    When asked how the improved skeletal-tracking capabilities can be utilized, Blake responded, “It helps improve several different scenarios. The more accurate anatomical precision is particularly useful in health and rehabilitation apps, as well as for controlling virtual avatars more accurately.” He also finds great potential in the enhanced hand-tracking capabilities, noting that “recognizing the two-finger point pose in addition to the hand open and hand closed poses means we have more options for developing interesting deep interactions.”

    Finally, Blake points out that the ability to track the movements of up to six individuals will be valuable in a variety of situations, such as showroom scenarios or workplace applications that involve multiple people. “Before, users had a hard time understanding why the application would respond to two people but not more, or how to get it to switch to a new person,” he says. “The support for six full skeletons also means that we don’t have to compromise in how many people can interact with an application or experience at once.”

    The Kinect for Windows Team

    Key links

  • Kinect for Windows Product Blog

    This nurse makes house calls, thanks to Kinect for Windows


    Telemedicine has become one of the hot trends in healthcare, with more and more patients and doctors using smartphones and tablets to exchange medical information. The convenience of not having to travel to the doctor’s office or clinic is a big part of the appeal—as is the relief of not wasting valuable time thumbing through outdated waiting-room magazines when an appointment runs late. And for patients living in isolated or underserved areas, telemedicine offers care that might otherwise be unattainable. Despite these advantages, telemedicine can be coldly impersonal, lacking the comfort of interacting with another human being.

    Silicon Valley-based Sense.ly is working to bring a human face to telemedicine. The company’s Kinect-powered “nurse avatar” provides personalized patient monitoring and follow-up care—not to mention a friendly, smiling face that converses with patients in an incredibly lifelike manner. The nurse avatar, affectionately nicknamed Molly, has access to a patient’s records and asks appropriate questions related directly to the patient’s past history or present complaints. She has a pleasant, caring demeanor that puts patients at ease. Interacting with her seems surprisingly natural, which, of course, is the goal.

    With the help of Kinect for Windows technology, Sense.ly's nurse avatar, called Molly, can respond
    to patients’ speech and body movements.

    By using Kinect for Windows technology, Sense.ly enables Molly to recognize and respond to her patient’s visual and spoken inputs. The patient stands or sits in front of a Kinect sensor, which captures his or her image and sends it to Molly. Does the patient have knee pain? She can show Molly exactly where it hurts. Is the patient undergoing treatment for bursitis that limits his range of motion? He can raise his affected arm and show Molly whether his therapy is achieving results. In fact, the Kinect sensor’s skeletal tracking capabilities allow Sense.ly to measure the patient’s range of motion and to calculate how it has changed from his last session. What’s more, with Kinect providing a clear view of the patient, Molly can help guide him or her through therapeutic exercises.
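    Range-of-motion measurement from skeletal data typically reduces to computing the angle at a joint from three tracked 3D positions, such as shoulder–elbow–wrist for an arm raise. The Python sketch below illustrates that geometry; the coordinates are invented, and this is not Sense.ly’s actual implementation:

    ```python
    import math

    def joint_angle(a, b, c):
        """Angle at joint `b` (in degrees) formed by 3D joint positions
        a-b-c, e.g. shoulder-elbow-wrist for an arm measurement."""
        v1 = [a[i] - b[i] for i in range(3)]
        v2 = [c[i] - b[i] for i in range(3)]
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1))
        n2 = math.sqrt(sum(x * x for x in v2))
        return math.degrees(math.acos(dot / (n1 * n2)))

    # A fully extended arm: shoulder, elbow, wrist in a straight line.
    straight = joint_angle((0, 0, 0), (0, -1, 0), (0, -2, 0))
    # A right-angle bend at the elbow.
    bent = joint_angle((0, 0, 0), (0, -1, 0), (1, -1, 0))
    ```

    Comparing such an angle across sessions gives a simple, objective progress metric for the kind of follow-up monitoring described above.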

    A growing number of doctors and hospitals are recognizing the value of applications such as Sense.ly. In fact, the San Mateo Medical Center is one of several major hospitals that have recently added Molly to their staff, so to speak. The value of such solutions is particularly striking in handling patients who suffer from long-term conditions that require frequent monitoring, such as high blood pressure or diabetes.

    Solutions like Sense.ly also provide a clear cost benefit for providers and insurers, as treating a patient remotely is less costly and generally more efficient than onsite care. In a recent pilot program, the use of Sense.ly reduced patient calls by 28 percent and freed up nearly a fifth of the workday for the clinicians involved in the program.

    Most importantly, Sense.ly’s Kinect-powered nurse avatar offers the promise of better health outcomes, the result of more frequent medical monitoring and of patients’ increased involvement in their own care. Something to think about the next time you’re stuck in the doctor’s waiting room.

    The Kinect for Windows Team

     Key links
