• Kinect for Windows Product Blog

    Kinect-enabled VR puts users in space with Earthlight

    The following blog was guest-authored by Russell Grain, a development lead at Opaque Multimedia, a digital design studio based in Melbourne, Australia, that specializes in applying video game technologies in novel domains.

    Earthlight is a first-person exploration game in which players step into the shoes of an astronaut on the International Space Station (ISS). There, some 431 kilometers (about 268 miles) above the Earth, they look down on our planet from the comfort of their own spacesuit. Featuring the most realistic depiction yet of the ISS in an interactive virtual reality (VR) setting, Earthlight pushes the limits of what is visually achievable in consumer-oriented VR experiences.

    Opaque Multimedia’s Earthlight game enables players to explore the International Space Station in an interactive VR setting, thanks to the latest Kinect sensor.

    Our team at Opaque Multimedia developed Earthlight as a technical demo for our Kinect 4 Unreal plug-in, which exposes all the functionality of the latest Kinect sensor in Unreal Engine 4. Our goal was to create something visceral that demonstrated the power of Kinect as an input device—to show that Kinect could enable an experience that couldn’t be achieved with anything else.

    Players explore the ISS from a truly first-person perspective, in which the movement of their head translates directly into the viewpoint of a spacesuit-clad astronaut. To complete this experience, players interact with the environment entirely through a Kinect 4 Unreal-powered avateering solution, pushing and pulling themselves along the surface of the ISS as they navigate a network of handles and scaffolds to reach the top of the communications array.
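
    Kinect 4 Unreal surfaces this body data inside Unreal Engine 4, so Opaque Multimedia's actual implementation lives in the engine. Purely as an illustration of the underlying idea, the C++ sketch below uses the Kinect for Windows SDK 2.0 directly: a closed right fist counts as a "grab," and the hand's frame-to-frame movement is applied in reverse to the avatar, which is the essence of pulling yourself along a fixed handle. ApplyAvatarOffset is a hypothetical stand-in for the game's movement code.

        // Minimal sketch (not Opaque Multimedia's code): Kinect for Windows SDK 2.0, C++.
        // Compile against Kinect.h / Kinect20.lib.
        #include <Kinect.h>
        #include <iostream>

        // Hypothetical stand-in for the game-side move logic.
        void ApplyAvatarOffset(float dx, float dy, float dz) {
            std::cout << "move avatar by " << dx << ", " << dy << ", " << dz << "\n";
        }

        int main() {
            IKinectSensor* sensor = nullptr;
            if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;

            IBodyFrameSource* source = nullptr;
            sensor->get_BodyFrameSource(&source);
            IBodyFrameReader* reader = nullptr;
            source->OpenReader(&reader);

            CameraSpacePoint lastHand = {};   // right-hand position on the previous frame
            bool wasGrabbing = false;

            for (;;) {
                IBodyFrame* frame = nullptr;
                if (FAILED(reader->AcquireLatestFrame(&frame))) continue;   // no new frame yet

                IBody* bodies[BODY_COUNT] = {};
                frame->GetAndRefreshBodyData(BODY_COUNT, bodies);
                for (int i = 0; i < BODY_COUNT; ++i) {
                    BOOLEAN tracked = FALSE;
                    if (bodies[i]) bodies[i]->get_IsTracked(&tracked);
                    if (!tracked) continue;

                    Joint joints[JointType_Count];
                    bodies[i]->GetJoints(JointType_Count, joints);
                    HandState hand = HandState_Unknown;
                    bodies[i]->get_HandRightState(&hand);

                    const CameraSpacePoint p = joints[JointType_HandRight].Position;  // meters
                    if (hand == HandState_Closed && wasGrabbing) {
                        // Pulling on a fixed handle moves the astronaut opposite the hand.
                        ApplyAvatarOffset(-(p.X - lastHand.X), -(p.Y - lastHand.Y), -(p.Z - lastHand.Z));
                    }
                    wasGrabbing = (hand == HandState_Closed);
                    lastHand = p;
                    break;   // sketch: track only the first body found
                }
                for (int i = 0; i < BODY_COUNT; ++i) if (bodies[i]) bodies[i]->Release();
                frame->Release();
            }
        }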

    Everyone behaves somewhat differently when presented with the Earth hanging below them. Some race straight to the top of the ISS, wanting to propel themselves to their goal as fast as possible. Others are taken with the details of the ISS’s machinery, and some simply relax and stare at the Earth. On average, players take about four minutes to ascend to the top of the station’s communications array.

    By using Kinect, Earthlight enables players to explore the ISS without disruptions to the immersive VR experience that a keyboard, mouse, or gamepad interface would create.

    As well as being a fantastic tool for building immersion in a virtual game world, Kinect is uniquely positioned to solve user interface challenges specific to VR: you can’t see a keyboard, mouse, or gamepad while wearing any current-generation virtual reality device. By using Kinect, not only can we overcome these issues, we can also increase the depth of the experience.

    The enhanced experience offers a compelling new use case for the fantastic body-tracking capabilities of the Kinect for Windows v2 sensor and SDK 2.0: to provide natural and intuitive input to virtual reality games. The latest sensor’s huge increase in fidelity makes it possible to track the precise movement of the arms. Moreover, the Kinect-enabled interface is so intuitive that, despite the lack of haptic feedback, users still adopt the unique gait and arm movements of a weightless astronaut. They are so immersed in the experience that they seem to forget all about the existence of gravity.

    Earthlight has enjoyed a fantastic reception everywhere it’s been shown—from the initial demonstrations at GDC 2015, to the Microsoft Build conference, the Silicon Valley Virtual Reality meetup, and the recent appearance at We.Speak.Code. At each event, there was barely any reprieve from the constant lines of people waiting to try out the experience.

    We estimate more than 2,000 people have experienced Earthlight, and we’ve been thrilled with their reactions. When asked afterwards what they thought of Earthlight, the almost universal response was “amazing.” We look forward to engendering further amazement as we push VR boundaries with Kinect 4 Unreal.

    Russell Grain, Kinect 4 Unreal Lead, Opaque Multimedia

    "Fortissimo," via Kinect

    On May 1, the Seattle Symphony presented the world premiere of Above, Below, and In Between, a composition by kinetic sculptor, sound artist, and composer-in-residence Trimpin. Unique to this performance, conductor Ludovic Morlot directed not only musicians, but also the latest Kinect sensor, which responded to his gestures and translated them into commands for three kinetic instruments: a 24-reedhorn sculpture, a set of concert chimes, and a robotic grand piano.

    The inner workings of the robotic grand piano piqued the curiosity of concertgoers. (credit: Brandon Patoc Photography)

    The performance took place in the grand lobby of Benaroya Hall, the home of the Seattle Symphony, with the musicians, singers, and the kinetic instruments strategically placed to benefit from the lobby’s architectural and acoustical characteristics. The Kinect sensor was located in front of the conductor’s stand, where it tracked Morlot’s movements and relayed them to the kinetic instruments, allowing the conductor to start and stop the instruments and control their volume with arm and hand gestures.
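
    The post doesn't reveal Trimpin's actual gesture vocabulary, so the following C++ fragment is only a guessed illustration of how Kinect body-frame joints could start, stop, and set the volume of a kinetic instrument: right-hand height above the spine base maps to a 0-127 volume, and a closed left fist cuts the instrument off. SendToInstrument and all thresholds are invented for the example.

        // Hypothetical gesture-to-instrument mapping; not the Seattle Symphony's code.
        #include <Kinect.h>
        #include <algorithm>
        #include <iostream>

        // Hypothetical command channel; a real rig might emit MIDI or serial here.
        void SendToInstrument(bool play, int volume) {
            std::cout << (play ? "play" : "stop") << " volume=" << volume << "\n";
        }

        void ConductFrame(const Joint joints[JointType_Count], HandState leftHand) {
            const float spineY = joints[JointType_SpineBase].Position.Y;
            const float handY  = joints[JointType_HandRight].Position.Y;

            // Map up to 1 m of hand lift above the spine base onto a 0-127 volume scale.
            float lift  = std::clamp(handY - spineY, 0.0f, 1.0f);
            int  volume = static_cast<int>(lift * 127.0f);

            bool play = (leftHand != HandState_Closed);   // closed fist = cutoff
            SendToInstrument(play, play ? volume : 0);
        }

        int main() {
            Joint joints[JointType_Count] = {};
            joints[JointType_SpineBase].Position = {0.0f, 0.0f, 2.0f};
            joints[JointType_HandRight].Position = {0.3f, 0.6f, 1.8f};
            ConductFrame(joints, HandState_Open);   // prints: play volume=76
        }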

    Although this unique performance was a one-time event, the Kinect sensor and the kinetic instruments remain in Benaroya Hall, where the public can see and interact with them.

    The Kinect for Windows Team

    Kinect sensor helps facilitate learning

    We all know that children can learn from videos. But many educators have questioned the effectiveness of such learning when compared to the understanding that young children derive from working with tangible objects. In a recent paper, researchers Nesra Yannier, Ken Koedinger, and Scott Hudson at the Human-Computer Interaction Institute at Carnegie Mellon University tested the instructional effectiveness of what they call mixed-reality games, which combine the virtual and real-world modes of teaching; the Kinect sensor played an important role in their experimental setup.

    In a series of experiments, the researchers compared the effectiveness of purely virtual lessons to those that mixed virtual and real-world interactions. The lessons involved teaching 92 children, ages six to eight, some basic physical principles via a game the researchers called EarthShake. The experiment asked the children to draw conclusions after seeing the effects of a simulated earthquake on structures made of building blocks, which stood on a motorized table that shook when triggered. In addition, the children received real-time, interactive feedback synchronized to the fate of the real-world towers, to help them understand the physics principles behind the physical phenomena.

    The Kinect-enabled mixed-reality game achieved significantly better learning outcomes than the video version.

    All of the children witnessed how the real-world towers fared during the table shaking. They also saw a projected image of the shaken towers, but this is where the conditions diverged. Some of the children viewed a pre-taped video of the shaking effects on the building-block towers, integrated into a screen-based computer game, while a different subset of the children experienced a Kinect-enabled, synchronized image of the real-time fate of the very towers on the table in front of them. This version relied on the Kinect sensor’s depth camera, whose data was processed via a customized algorithm.
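
    The researchers' customized algorithm isn't described in this post, so the sketch below only illustrates the raw material it works from: one frame of the Kinect v2 depth stream, with a crude nearness test over a region of interest standing in for "is the tower still standing?". The ROI and every threshold are assumptions for the example.

        // Minimal depth-stream sketch (not the CMU researchers' algorithm).
        #include <Kinect.h>
        #include <iostream>
        #include <vector>

        int main() {
            IKinectSensor* sensor = nullptr;
            if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open())) return 1;

            IDepthFrameSource* source = nullptr;
            sensor->get_DepthFrameSource(&source);
            IDepthFrameReader* reader = nullptr;
            source->OpenReader(&reader);

            const int W = 512, H = 424;            // Kinect v2 depth resolution
            std::vector<UINT16> depth(W * H);      // depth values in millimeters

            IDepthFrame* frame = nullptr;
            while (FAILED(reader->AcquireLatestFrame(&frame))) { /* busy-wait for a frame */ }
            frame->CopyFrameDataToArray(static_cast<UINT>(depth.size()), depth.data());
            frame->Release();

            // Count ROI pixels nearer than 1.2 m; both the ROI and the cutoff are assumed.
            int nearPixels = 0;
            for (int y = H / 4; y < 3 * H / 4; ++y)
                for (int x = W / 4; x < 3 * W / 4; ++x)
                    if (depth[y * W + x] > 0 && depth[y * W + x] < 1200) ++nearPixels;

            std::cout << (nearPixels > 500 ? "tower standing\n" : "tower down\n");
            return 0;
        }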

    The results were startling: on tests comparing the children’s understanding of structural stability and balance before and after EarthShake, the youngsters who had experienced the Kinect-enabled version showed nearly five times greater improvement in comprehension. This led the researchers to conclude that mixed-reality instruction is more effective than teaching with only videos. They aim to extend their patent-pending method and technology to create a new educational system that bridges the advantages of the physical and virtual worlds via Kinect, with a goal of improving children’s science learning, understanding, and enjoyment.

    The Kinect for Windows Team

    Dancing waters, Kinect style

    Fountains: indoors or out, these decorative displays of water in motion have long mesmerized onlookers. France’s Sun King, Louis XIV, had fountains placed throughout his palace gardens at Versailles; groundskeepers would activate their assigned fountain as the king approached. If only Louis were around today, he might opt to trigger the fountains and control their displays himself, using the latest Kinect sensor.

    That’s because Beijing Trainsfer Technology Development, a Chinese high-tech company, has developed a system that lets onlookers control a fountain’s display by gesturing with their arms and legs. The latest Kinect sensor and the Kinect for Windows SDK are at the heart of the system, working together to capture precise body positions and gestures, which are then translated into instructions for the computer that controls the fountain display. For example, raise your left arm, and water might spray from the left side of the fountain; kick out your right leg, and a series of smaller water jets might erupt. The controlling gestures can be customized for each fountain, and, as the video below demonstrates, the system can be used in both indoor and outdoor settings.
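
    Trainsfer's control software is proprietary, but a minimal sketch of the kind of gesture-to-jet mapping just described might look like the following, where FireJets is a hypothetical interface to the fountain controller and every threshold is an assumption.

        // Hypothetical gesture mapping; not Trainsfer's code.
        #include <Kinect.h>
        #include <cmath>
        #include <iostream>

        // Hypothetical interface to the fountain's control computer.
        void FireJets(const char* bank) { std::cout << "fire " << bank << " jets\n"; }

        void OnBodyFrame(const Joint joints[JointType_Count]) {
            // Left arm raised: left hand higher than left shoulder.
            if (joints[JointType_HandLeft].Position.Y >
                joints[JointType_ShoulderLeft].Position.Y)
                FireJets("left-side");

            // Right leg kicked out: ankle displaced sideways from the hip.
            float dx = joints[JointType_AnkleRight].Position.X -
                       joints[JointType_HipRight].Position.X;
            if (std::fabs(dx) > 0.3f)   // 30 cm, an assumed threshold
                FireJets("small");
        }

        int main() {
            Joint joints[JointType_Count] = {};
            joints[JointType_HandLeft].Position     = {-0.3f, 0.6f, 2.0f};
            joints[JointType_ShoulderLeft].Position = {-0.2f, 0.4f, 2.0f};
            OnBodyFrame(joints);   // left hand above shoulder -> left-side jets
        }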

    Trainsfer began development of Kinect-controlled fountains in February 2013 and 11 months later received a patent from the State Intellectual Property Office of the People’s Republic of China. Around the same time, Trainsfer learned that Microsoft Research was working on Kinect-controlled fountains as well. Trainsfer’s engineers contacted Microsoft Research and received significant support in perfecting their Kinect-based system. “Our product would not exist without Kinect technology,” says Hui Wang, general manager of Trainsfer. “It has enabled us to create a revolutionary product for the eruptive fountain industry.”

    The system became commercially available in April 2015, and Trainsfer is currently negotiating to install it in one of China’s largest theme parks. The company also expects the product to be popular in outdoor parks and plazas, as well as indoors in major office buildings. Trainsfer also anticipates selling advertising space at the fountains, where the allure of controlling the water display should attract and engage passersby.

    So maybe it won’t be too long before you can do something that would make you the envy of Louis XIV—thanks to Kinect.

    The Kinect for Windows Team

    Sick children get a dose of Kinect-enabled empowerment

    A seriously ill child is every family’s nightmare. So it’s no wonder that young patients and their families are filled with anxiety when a child requires frequent hospital visits. And while a great bedside manner can help alleviate the stress, a hospital, even a pediatric one, can still be a frightening place for kids and parents—which is why Boston Children’s Hospital installed a Kinect-enabled interactive media wall in its recently renovated lobby.

    The 20-foot-tall, gently curved wall engages the young patients and their families as it displays any of nine scenes, each filled with special effects controlled by the onlookers’ movements. Youngsters quickly discover that a wave of their hand can rearrange stars in a night sky, or that walking will make leaves flutter in a garden scene. Parents, too, get in on the action, eagerly helping their child become immersed in the interactive wonders.

    The Kinect-enabled interactive media wall at Boston Children's Hospital helps give seriously ill children a sense of empowerment at a time when their lives seem out of control.

    The media wall was created for Boston Children’s Hospital by the combined talents of designers, engineers, mathematicians, and behavioral specialists at the University of Connecticut (UConn). As Tim Hunter, director of UConn’s Digital Media Center and Digital Media & Design program, explains, the experience is intended first and foremost for the youngsters, who are delighted to have a sense of control in a situation where they often feel helpless.

    “Our goal was to create something that would empower physically and emotionally challenged children at a time in their life when most events are beyond their control,” says Hunter. “Doctors and nurses, along with Mom and Dad, dictate most of what’s happening to them—for good reason. We wanted to make something the kids could take control of—something that would be theirs; something they would look forward to when they come to the hospital.”

    “However,” Hunter continues, “we wanted to create something for the family, too. After all, the entire family is going through this, and we wanted to let parents be part of the engagement, to be able to have this group family experience.”

    The wall’s seemingly magical powers are the product of a lot of technology, including 13 Kinect sensors and seven optical cameras, which combine to cover a space of 18 feet by 24 feet (about 5.5 meters by 7.3 meters). The project began in 2012, before the Kinect for Windows v2 sensor was available, but the finished installation uses a combination of five v2 and eight original sensors, all mounted overhead. The five Kinect v2 sensors capture the frontal view of the participants, while the eight original Kinect sensors track the participants from behind. Together with the seven optical cameras, they provide a stream of high-quality data on movements and gestures, which is then used to animate avatars or otherwise manipulate the onscreen display.

    With all those sensors in place, the system can readily track individual people, with one Kinect sensor handing off the tracking to another as a participant moves through each sensor’s range. The multiplicity of sensors also means that participants, or their on-screen avatars, can readily interact, which makes the experience all the more engaging, especially when patients and parents cooperate.
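
    UConn has not published its tracking code, but the handoff idea is straightforward to sketch: if every sensor reports candidate positions in one shared room coordinate frame, a person can be "owned" by whichever sensor sees them nearest the center of its own coverage area, so ownership migrates smoothly as the person walks from one sensor's range into the next. All structures below are assumptions for the illustration.

        // Hypothetical sensor-handoff sketch; not the UConn implementation.
        #include <cmath>
        #include <iostream>
        #include <vector>

        struct Candidate {
            int   sensorId;
            float x, z;               // position in the shared room frame, meters
        };

        struct SensorZone {
            float centerX, centerZ;   // center of this sensor's coverage, room frame
        };

        // Pick the reporting sensor whose coverage center is nearest the person.
        int OwnerSensor(const std::vector<Candidate>& reports,
                        const std::vector<SensorZone>& zones) {
            int best = -1;
            float bestDist = 1e9f;
            for (const Candidate& c : reports) {
                const SensorZone& z = zones[c.sensorId];
                float d = std::hypot(c.x - z.centerX, c.z - z.centerZ);
                if (d < bestDist) { bestDist = d; best = c.sensorId; }
            }
            return best;              // -1 if no sensor currently sees this person
        }

        int main() {
            std::vector<SensorZone> zones   = {{0.0f, 2.5f}, {4.5f, 2.5f}};
            std::vector<Candidate>  reports = {{0, 3.5f, 2.5f}, {1, 3.5f, 2.5f}};
            std::cout << "owner: sensor " << OwnerSensor(reports, zones) << "\n";  // sensor 1
        }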

    Theoretically, the system could capture the movements of as many people as can fit into the area that the 13 sensors monitor. But as Tim Gifford, director of UConn’s Advanced Interactive Technology Center, explains, the team designed the software to limit the number of bodies being tracked. For instance, in the media wall’s musical instrument scene, the software allows up to 12 participants to play simultaneously. Likewise, the “star man” experience, in which participants use their gestures to change the shape of a constellation, can accommodate up to eight participants at a time. Otherwise, says Gifford, “The kids can no longer appreciate what their movements are causing.”

    The interactive media wall opened in late 2014, to rave reviews from patients, families, and hospital staff. And while the technological wizardry is indeed amazing, the real enchantment of the wall is found in the smiles on the faces of seriously ill children, who can take control of their environment and find delight in an immersive experience.

    The Kinect for Windows Team

    Retail signage gets smart

    The following blog was guest-authored by Justin Miller, an account manager for digital media and biometrics at NEC, a global provider of IT services and products.

    When was the last time that you went into a retail store and—without talking with a salesperson—got more information about a product or service than you could have gleaned online? That sort of memorable customer experience is exactly what the latest generation of digital retail signage can provide, and the latest Kinect sensor is one reason why.

    Retailers have been intrigued by digital signage since the price of large displays started falling in the late 2000s. The shift to digital meant that stores and restaurants could change their pricing, menus, or other details in real time, without needing to replace physical placards. Today, there are essentially three types of customer-facing digital signage: passive, interactive, and intelligent.

    Microsoft's Kinect-enabled intelligent signage system drew crowds at the 2015 National Retail Federation trade show.

    Passive signage is the kind you’re likely to encounter at a fast-food restaurant. Typically displayed on one or more screens, the message either doesn’t change at all or changes only at set intervals, as when a digital menu switches from breakfast to lunch at 11:00 A.M. Passive signage can be hosted on a local machine or over the Internet, and it’s generally more cost-effective than running print jobs every time a business wants to change its offerings or prices. Although it’s very practical, it lacks the interactive element that really draws people in.

    Interactive signage provides a level of user interaction by being “triggered” by an event. Think of iBeacon, the indoor positioning system that can send a coupon to a smartphone when a customer enters his favorite clothing store or can display information about a painting when a museum patron holds her phone next to it. In other cases, sensors trigger the interactive signage, as when, for example, lifting a bottle of Bordeaux from the shelf at the wine store trips a light sensor, which prompts the display of a map of the Bordeaux region along with information about the wine. Such signage is indeed interactive, but it offers only one level of engagement between the individual and the display, and the engagement itself is not always intuitive.

    Intelligent signage, by contrast, creates intuitive, compelling in-store experiences. It encourages customers to interact with and order products on-screen. It also feeds interaction data back to the retailer or business owner. Intelligent signage represents the cutting edge of technology in retail and digital advertising.

    Among the most advanced intelligent signage systems is Microsoft’s Kinect-enabled solution. The Kinect sensor detects a customer’s proximity to the display and perceives his or her interaction with products on a shelf. Different distances and interactions trigger different layers of contextual signage, including static or video advertisement screens, product pricing, product technical specs, user reviews from the web, and more. Moreover, the system incorporates NEC’s facial recognition and demographic software, which enables it to determine anonymous demographic information, including the shopper’s gender and approximate age. The system also records the amount of time customers spend looking at various products, data that helps advertisers and business owners understand their audience and the efficacy of their signage.
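
    Neither Microsoft nor NEC has published this solution's source, but the proximity-layering behavior is easy to sketch: the depth (Z) coordinate of a tracked shopper's spine-base joint, in meters from the sensor, selects which layer of content to show. The distance bands below are invented for the illustration.

        // Hypothetical distance-layering sketch; not the Microsoft/NEC system.
        #include <Kinect.h>
        #include <iostream>

        enum class SignLayer { Attract, ProductOverview, DetailAndReviews };

        SignLayer LayerForShopper(const Joint joints[JointType_Count]) {
            float z = joints[JointType_SpineBase].Position.Z;   // meters from the sensor
            if (z > 3.0f) return SignLayer::Attract;            // far: looping video ad
            if (z > 1.5f) return SignLayer::ProductOverview;    // mid: pricing and specs
            return SignLayer::DetailAndReviews;                 // near: reviews and detail
        }

        int main() {
            Joint joints[JointType_Count] = {};
            joints[JointType_SpineBase].Position = {0.0f, 0.0f, 2.2f};
            std::cout << static_cast<int>(LayerForShopper(joints)) << "\n";  // 1 = overview
        }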

    Microsoft's intelligent signage solution includes data on the age and gender of shoppers who show interest in a product.

    Intelligent signage systems use advanced yet inexpensive hardware, including the Kinect sensor and lightweight PCs, such as those embedded in NEC commercial displays. Moreover, they can be combined with sensors in the ceiling (so-called in-store heat mapping) to give the business owner a broad understanding of how people traverse a particular store. In addition, the intelligent signage data can be linked to point-of-sale data, providing the owner with demographic information on the appeal of various products. This is essentially web analytics for the tangible world, and it’s going to change the way that you experience retail.

    The intelligent signage system also provides analytics on store traffic, helping retailers understand how traffic patterns affect sales.

    What’s next? With the digital signage market projected to grow a staggering 65% in 2015 alone, it’s more than likely that intelligent signage and audience measurement systems will be arriving soon at a business near you. It all sounds a bit like something out of the movie Minority Report, but consumers need not worry about losing their anonymity: businesses can collect only anonymous demographic data unless shoppers opt in to a customer loyalty program. Overall, intelligent signage will lead not only to greater efficiencies in the retail sector, but also to far more interesting in-store experiences for shoppers.

    Justin Miller, Account Manager for Digital Media and Biometrics, NEC

    Kinect-based student projects shine at China Imagine Cup

    Microsoft’s Imagine Cup has become a global phenomenon. Since its inception in 2003, this technology competition for students has grown from about 1,000 annual participants to nearly half a million in 2014. Now the 2015 competition is underway, and projects that utilize Kinect for Windows are coming on strong, as can be seen in the results of the competitions in China. Of the 405 Imagine Cup projects that made it to the second round of the China National Competition, 46 (11 percent) used Kinect for Windows technology.

    A member of Team Hydrz demonstrates how the latest Kinect sensor controls the virtual reality world in Defense of Laputa, a project that earned the Grand Prize in the Kinect for Windows special award category.

    Ten of these Kinect-based projects made it through the national semifinals, comprising 20 percent of the 49 projects that moved on to the national finals, where they competed for prizes in the Innovation, World Citizenship, and Games categories, as well as for three prizes in a special Kinect-technology category. Six of the ten Kinect-enabled projects came away with prizes, including two First Prizes in the Innovation category and two Second Prizes in the World Citizenship category (the top prize in all categories was the Grand Prize).  

    Watch a video overview of the national finals of the China Imagine Cup 2015 competition

    The table below provides information about the winning projects (two of which share a similar name—Laputa—which is a reference to a popular Japanese anime film). As you can see, the Pmomo project earned both a First Prize in the Innovation category and an Excellence Prize in the Kinect for Windows special category.

    Kinect projects that earned prizes in the China Imagine Cup National Finals

    • Pmomo (Shanghai Jiao Tong University / Team Arere): Kinect-enabled projection mapping onto movable objects, enabling the creation of virtual exteriors for real objects; could be used for exhibitions in museums, galleries, and schools. Prizes: First Prize in the Innovation category; Excellence Prize in the Kinect for Windows special award category.

    • Emergency Response Robot (East China Normal University / Team Quintessence): Kinect-controlled robot designed to deal with dangerous situations, such as potential explosions or dangerous chemical leaks. Prize: Second Prize in the World Citizenship category.

    • Defense of Laputa (Beijing University of Posts and Telecommunications / Team Hydrz): virtual 3D game based on Kinect and virtual reality equipment. Prize: Grand Prize in the Kinect for Windows special award category.

    • Interactive Virtual Tour (Southwest University of Science and Technology / Team Totoro): Kinect-based experience that lets users become virtual tourists, interactively immersed in the culture of the host country. Prize: Excellence Prize in the Kinect for Windows special award category.

    • CaneFitter (Institute of Computing Technology, Chinese Academy of Science / Team Trilegs): Kinect-enabled system designed to help elderly users choose a suitable cane and then teach them how to use it safely and effectively. Prize: Second Prize in the World Citizenship category.

    • Laputa (Hainan University / Team Stanic Pace): immersive Unity game in which Kinect body tracking controls your movements in a virtual reality world. Prize: First Prize in the Innovation category.

    We salute these outstanding students and their mentors for their creativity and innovation, and for demonstrating the versatility of Kinect for Windows.

    The Kinect for Windows Team

    Australian study looks at the use of technology in rehabilitating brain-injured patients

    A research team with members from seven major Australian hospitals and universities is evaluating the use of technology, including Kinect-enabled games, in helping patients recover from brain injuries. Buoyed by the results of a 60-patient pilot, which found that participants enjoyed using the technology and that it helped improve their balance, the full-blown study involves more than 300 patients, including those who have suffered brain injuries from falls, strokes, and accidents.  

    As regular readers of this blog know, Kinect for Windows technology is already being utilized to good effect in neurological rehabilitation.  As we previously reported, Intel-GE Care Innovations offers a Kinect-powered application that helps elderly patients recover from and avoid future falls.  We have also covered the Kinect-enabled stroke rehabilitation system created by Jintronix.

    These Kinect-based solutions offer hope to countless patients and their therapists.  We look forward to the results of the Australian study and are confident that they will provide further proof of the important role that Kinect applications can play in healthcare.

    The Kinect for Windows Team

    RoomAlive Toolkit unveiled at Build 2015

    As highlighted during the Build 2015 Conference, Microsoft is more committed than ever to delivering innovative software, services, and devices that are changing the way people use technology and opening up new scenarios for developers. Perhaps no software reflects that commitment better than the RoomAlive Toolkit, whose release was announced Thursday, April 30, in a Build 2015 talk. The toolkit is now available for download on GitHub.

    The RoomAlive Toolkit enables developers to network one or more Kinect sensors to one or more projectors and, by so doing, to project interactive experiences across the surfaces of an entire room. The toolkit provides everything needed for interactive projection mapping, which enables an entirely new level of engagement, in which interactive content can come to virtual life on the walls, the floor, and the furniture. Imagine turning a living room into a holodeck or a factory floor: the RoomAlive Toolkit makes such scenarios possible.
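
    The toolkit handles the underlying math for you, but the core idea behind projection mapping is worth sketching: once calibration has recovered a projector's pose and intrinsics relative to the Kinect's depth camera, any 3D point the sensor sees can be mapped to the projector pixel that lands on it. The code below is standard pinhole-camera math, not the toolkit's actual API.

        // Conceptual projection-mapping math; not the RoomAlive Toolkit's API.
        #include <cstdio>

        struct Vec3 { float x, y, z; };

        // Rigid transform from depth-camera space into projector space (from calibration).
        struct Pose { float R[3][3]; Vec3 t; };

        // Projector intrinsics: focal lengths and principal point, in pixels.
        struct Intrinsics { float fx, fy, cx, cy; };

        // Map a 3D point seen by the depth camera to a projector pixel.
        void ProjectToProjector(const Pose& P, const Intrinsics& K, Vec3 p,
                                float* u, float* v) {
            Vec3 q = {
                P.R[0][0] * p.x + P.R[0][1] * p.y + P.R[0][2] * p.z + P.t.x,
                P.R[1][0] * p.x + P.R[1][1] * p.y + P.R[1][2] * p.z + P.t.y,
                P.R[2][0] * p.x + P.R[2][1] * p.y + P.R[2][2] * p.z + P.t.z,
            };
            *u = K.fx * q.x / q.z + K.cx;   // perspective divide, then pixel offset
            *v = K.fy * q.y / q.z + K.cy;
        }

        int main() {
            // Identity pose and made-up intrinsics, purely to exercise the function.
            Pose P = {{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {0, 0, 0}};
            Intrinsics K = {1000, 1000, 640, 400};
            float u, v;
            ProjectToProjector(P, K, {0.2f, 0.1f, 2.0f}, &u, &v);
            std::printf("pixel (%.1f, %.1f)\n", u, v);   // (740.0, 450.0)
        }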

    This video shows the RoomAlive Toolkit calibration process in action.

    The most basic setup for the toolkit requires one projector linked to one of the latest Kinect sensors. But why limit yourself to just one of each? Experiences become larger and more immersive with the addition of more Kinect sensors and projectors, and the RoomAlive Toolkit provides what you need to get everything set up and calibrated.

    While the most obvious use for the RoomAlive Toolkit is the creation of enhanced gaming experiences, its almost magical capabilities could be a game-changer in retail displays, art installations, and educational applications. The toolkit derives from the IllumiRoom and RoomAlive projects developed by Microsoft Research.

    Over the next several weeks, we will be releasing demo videos that show developers how to calibrate the data from multiple Kinect sensors and how to use the software and tools to create their own projection mapping scenarios. In the meantime, you can get a sense of the creative potential of the RoomAlive Toolkit in the video on Wobble, which shows how a room’s objects can be manipulated for special effects, and the video on 3D objects, which shows how virtual objects can be added to the room. Both of these effects are part of the toolkit’s sample app. And please share your feedback, issues, and suggestions over at the project’s home on GitHub.

    The Kinect for Windows Team

    Encounters with Kinect

    Imagine an interactive digital art performance in which audience members become the performers, interacting not just with other audience members but also with lights, sounds, and professional dancers. That’s what visitors to SummerSalt, an outdoor arts festival in Melbourne, Australia, experienced. The self-choreographed event came courtesy of Encounters, an installation created by the Microsoft Research Centre for Social Natural User Interfaces (SocialNUI, for short), a joint research facility sponsored by Microsoft, the University of Melbourne, and the State Government of Victoria. Held in a special exhibition area on the grounds of the university’s Victorian College of the Arts, Encounters featured three Kinect for Windows v2 sensors installed overhead.

    The installation ran over four Saturday evenings and was attended by more than 1,200 people. Participants experienced dance performances from Victorian College of the Arts students, who engaged the crowd to create social interactions captured by the Kinect sensors. The results included spectacular visual and audio effects, as the participants came to recognize that their movements and gestures controlled the music and sound effects as well as the light displays on an enormous outdoor screen.

    Kinect sensors captured the crowd’s gestures and movements, using them to control audiovisual effects at Encounters, an investigation into social interactions facilitated by natural user interfaces.

    The arresting public art installation was powered by the three overhead Kinect v2 sensors, whose depth cameras provided the data critical to the special effects. Each sensor tracked a space approximately 4.5 meters by 5 meters (about 14.75 feet by 16.5 feet), from a height of 5 meters (about 16.5 feet). This enabled the sensors to track the movement of up to 15 people at a time on three axes. The sensors' depth cameras detected when people jumped, which was an important interaction mechanism for the installation. Using the depth camera also overcame the problem of varying lighting conditions.

    Feeding the depth-camera data into custom software revealed a surprising amount of information about people moving through the space: as already mentioned, it tracked the location of up to 15 people on three axes (X, Y, Z); in addition, it provided information on each participant’s area, velocity (speed and direction), and length of time present, whether they were jumping, whether they were part of a group (and, if so, how many people were in that group), and the overall space’s dispersion, crowdedness, and centroid. The technical team achieved this across three separate spaces while maintaining frame rates of approximately 30 frames per second.

    From a high-level perspective, the end-to-end image-processing pipeline involved four steps (a minimal code sketch follows the list):

    • Receipt of the raw depth pixels from the Kinect sensor
    • Preliminary filtering and then construction of an image from the depth data
    • Application of OpenCV to recognize contours (blobs) that represented a first guess at where people were located
    • Calculation via a series of heuristics to derive all the information mentioned in the preceding paragraph
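
    The team's heuristics are far richer than this, but a condensed, hypothetical sketch of steps 2 through 4, using OpenCV as described above, might look as follows (the raw depth buffer in step 1 would come from the sensor; thresholds here are assumptions).

        // Condensed sketch of the depth-to-blobs pipeline; thresholds are assumed.
        #include <opencv2/opencv.hpp>
        #include <cstdint>
        #include <iostream>
        #include <vector>

        int main() {
            const int W = 512, H = 424;            // Kinect v2 depth resolution
            std::vector<uint16_t> raw(W * H, 0);   // step 1: raw depth (mm) from the sensor

            // Step 2: filter to a depth band and build a binary image. With sensors
            // mounted 5 m overhead, people occupy a band of depths; 3000-4600 mm assumed.
            cv::Mat mask(H, W, CV_8UC1);
            for (int i = 0; i < W * H; ++i)
                mask.data[i] = (raw[i] > 3000 && raw[i] < 4600) ? 255 : 0;

            // Step 3: OpenCV contour detection -- a first guess at where people are.
            std::vector<std::vector<cv::Point>> contours;
            cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

            // Step 4: per-blob heuristics; here, just area and centroid.
            for (const auto& c : contours) {
                double area = cv::contourArea(c);
                if (area < 800) continue;          // assumed minimum person size, in pixels
                cv::Moments m = cv::moments(c);
                std::cout << "person at (" << m.m10 / m.m00 << ", " << m.m01 / m.m00
                          << "), area " << area << " px\n";
            }
            return 0;
        }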

    The technical team experimented with the sensor in different configurations, at different heights, in different lighting conditions, with different flooring, with different sizes of people, and using different cameras in order to work this all out.

    “We really enjoyed working with the Kinect sensor,” says John Downs, a research fellow at the Microsoft Research Centre for SocialNUI and the leader of the technical team on Encounters. “The different types of cameras—RGB, infrared, and depth—gave us a lot of flexibility when we designed Encounters. And as we moved through to the development phase, we appreciated the level of control that the SDK provided, especially the flexibility to process the raw camera images in all sorts of interesting ways. We took advantage of this, and to great effect. Additionally, the entire development process was completed in only six weeks, which is a testament to how simple the SDK is to use.”

    The result of all this development creativity was more than just an amazing public art installation—it was also an intriguing social science investigation. Researchers from the SocialNUI Centre conducted qualitative interviews while members of the public interacted with their Kinect-generated effects, probing for insights into the social implications of the experience. As Frank Vetere, director of SocialNUI, explains, “The Centre explores the social aspects of natural user interfaces, so we are interested in the way people form, come together, and explore the public space. And we are interested in the way people might claim and re-orient the public space. This is an important part of starting to take technological developments outside of our lab and reaching out to the public and other groups within the University.”

    This unique, cross-disciplinary collaboration was a wonderful success, delighting not only the NUI developers and researchers, but the public as well.

    The Kinect for Windows Team
