Kinect for Windows Product Blog

    Experience ancient cultures—with help from Kinect


    To movie lovers, the image of the late John Belushi cavorting around the Delta Tau Chi fraternity house in a toga is comedy gold. But while the period costuming was played for laughs in Animal House, the virtual recreation of Roman attire is serious business to researchers Erkan Bostanci, Nadia Kanwal, and Adrian Clark, who are using Kinect for Windows to create augmented reality (AR) experiences that immerse participants in the cultural heritage of the ancient world. Their system not only recreates virtual representations of ancient Roman architecture, but it also virtually dresses participants in togas and Roman helmets, and even equips them with a virtual Roman sword.

    Like previous AR endeavors, theirs combines real-world imagery with computer-generated virtual content. But instead of creating mythical worlds of fantastical beasts, the researchers hope to apply their system (currently in the prototype stage) to recreate archeological treasures at the very sites of the ancient remains, thus allowing visitors to experience the wonders of antiquity with minimal disturbance to the real-world artifacts.

     As shown here in a laboratory demonstration, the AR system recognizes flat surfaces and augments them with the virtual recreations of architectural features.

    Their AR system uses the Kinect sensor’s color and depth cameras to determine the camera’s position and then to find and virtually augment flat surfaces in the real world in real time. So, for example, the system recognizes rectangular shapes as columns and augments them accordingly. This differs from earlier systems that attempt to augment reality by overlaying synthetic objects on camera images, a process that requires an offline phase to create a map of the environment. The Kinect-enabled system thus eliminates the need for programming a graphics processing unit, which makes it attractive for use with lower-power and mobile computers.
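
    The post doesn’t spell out the surface-finding step, so here is a purely illustrative sketch (not the researchers’ algorithm): a basic RANSAC plane fit over camera-space depth points is one common way to find the flat surfaces that an AR system can then augment. The CameraSpacePoint type is from the Kinect for Windows SDK 2.0, and the iteration count and inlier tolerance are arbitrary choices.

        // Illustrative RANSAC plane fit: repeatedly fit a plane to three random
        // depth points and keep the plane that the most points lie close to.
        using System;
        using Microsoft.Kinect;

        class PlaneFinder
        {
            static readonly Random Rng = new Random();

            // Returns plane coefficients (a, b, c, d) for ax + by + cz + d = 0,
            // chosen to maximize the number of points within `tolerance` meters.
            public static double[] FitPlane(CameraSpacePoint[] points,
                                            int iterations = 200, double tolerance = 0.02)
            {
                double[] best = null;
                int bestInliers = -1;

                for (int it = 0; it < iterations; it++)
                {
                    CameraSpacePoint p1 = points[Rng.Next(points.Length)];
                    CameraSpacePoint p2 = points[Rng.Next(points.Length)];
                    CameraSpacePoint p3 = points[Rng.Next(points.Length)];

                    // Plane normal = (p2 - p1) x (p3 - p1).
                    double ux = p2.X - p1.X, uy = p2.Y - p1.Y, uz = p2.Z - p1.Z;
                    double vx = p3.X - p1.X, vy = p3.Y - p1.Y, vz = p3.Z - p1.Z;
                    double a = uy * vz - uz * vy;
                    double b = uz * vx - ux * vz;
                    double c = ux * vy - uy * vx;
                    double norm = Math.Sqrt(a * a + b * b + c * c);
                    if (norm < 1e-6) continue;   // degenerate sample, try again
                    a /= norm; b /= norm; c /= norm;
                    double d = -(a * p1.X + b * p1.Y + c * p1.Z);

                    int inliers = 0;
                    foreach (CameraSpacePoint p in points)
                        if (Math.Abs(a * p.X + b * p.Y + c * p.Z + d) < tolerance)
                            inliers++;

                    if (inliers > bestInliers)
                    {
                        bestInliers = inliers;
                        best = new[] { a, b, c, d };
                    }
                }
                return best;
            }
        }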

     By using the Kinect sensor's body tracking capabilities, the system virtually clothes participants in Roman togas and helmets.

    But what really captured our imagination was the use of the Kinect sensor’s body tracking capabilities to virtually clothe participants in historically correct garb. As the researchers note, by augmenting the appearance of the participants, the system deepens their immersion into the ancient culture. The prototype system uses Kinect tracking data of a participant’s head, torso, and right hand to virtually superimpose a Roman helmet, toga, and a sword. After all, if you’re strolling around an AR recreation of the Roman Forum, your immersion in the virtual world can be ruined by the sight of your fellow explorers in cargo shorts and t-shirts.
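
    As a rough illustration of that anchoring step (a sketch under our own assumptions, not the researchers’ code), the Kinect for Windows SDK 2.0 makes it straightforward to map the tracked head, torso, and right-hand joints into color-image coordinates, where a rendering layer can draw the helmet, toga, and sword sprites:

        using Microsoft.Kinect;

        class CostumeOverlay
        {
            // Returns the color-image anchor points for the helmet, toga, and
            // sword sprites (in that order), or null if the body isn't tracked.
            public static ColorSpacePoint[] OverlayAnchors(Body body, CoordinateMapper mapper)
            {
                if (body == null || !body.IsTracked) return null;

                return new[]
                {
                    mapper.MapCameraPointToColorSpace(body.Joints[JointType.Head].Position),      // helmet
                    mapper.MapCameraPointToColorSpace(body.Joints[JointType.SpineMid].Position),  // toga
                    mapper.MapCameraPointToColorSpace(body.Joints[JointType.HandRight].Position)  // sword
                };
            }
        }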

    The Kinect for Windows Team



    Creating Kinect-enabled apps with Unity 5


    So many blogs, so little time. Ever feel that way when you realize it’s been a long time since you’ve checked in with one of your favorite bloggers? Well, that’s how we’re feeling today, after reading this four-month-old blog post from Microsoft Most Valuable Professional (MVP) James Ashley. It seems that while we were busy watching demos of Microsoft HoloLens and wondering what’s next for Cortana, Ashley was busy sussing out the possibilities for using the Kinect for Xbox One sensor (along with the Kinect Adapter for Windows) with the March 2015 release of Unity 5.

     Kinect MVP James Ashley captures himself in a Kinect-enabled Unity 5 app.

    With his usual attention to detail, Ashley takes the reader through nine steps to build a Kinect-enabled application in Unity 5 by using plug-in support available in Unity’s free Personal Edition. As he explains, this plug-in makes it “very easy to start building otherwise complex experiences like point cloud simulations that would otherwise require a decent knowledge of C++.”
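
    For a feel of the end result, here is a rough sketch (ours, not a step from Ashley’s walkthrough) of a Unity 5 script that reads body frames through the plug-in’s Windows.Kinect namespace and logs the right-hand position each frame:

        using UnityEngine;
        using Windows.Kinect;

        public class KinectBodySource : MonoBehaviour
        {
            private KinectSensor sensor;
            private BodyFrameReader reader;
            private Body[] bodies;

            void Start()
            {
                sensor = KinectSensor.GetDefault();
                if (sensor == null) return;
                reader = sensor.BodyFrameSource.OpenReader();
                bodies = new Body[sensor.BodyFrameSource.BodyCount];
                if (!sensor.IsOpen) sensor.Open();
            }

            void Update()
            {
                if (reader == null) return;
                using (BodyFrame frame = reader.AcquireLatestFrame())
                {
                    if (frame == null) return;
                    frame.GetAndRefreshBodyData(bodies);
                    foreach (Body body in bodies)
                    {
                        if (body == null || !body.IsTracked) continue;
                        // Camera-space position of the right hand, in meters.
                        CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
                        Debug.Log(string.Format("Right hand: {0}, {1}, {2}", hand.X, hand.Y, hand.Z));
                    }
                }
            }

            void OnApplicationQuit()
            {
                if (reader != null) reader.Dispose();
                if (sensor != null && sensor.IsOpen) sensor.Close();
            }
        }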

    Unity has long been the preferred platform for game developers, but until now, plug-in support was only available to those who purchased a costly Unity Pro license. With such support now included with the free Unity Personal license, there’s no cost barrier to creating Kinect-enabled Unity apps.

    Let the games begin!

    The Kinect for Windows Team



    Baseball fans have a virtual fit


    Major League Baseball’s 2015 All-Star Game is in the record books, with the victorious American League earning bragging rights and home-field advantage in this year’s World Series. But the American Leaguers weren’t the only winners at Cincinnati’s Great American Ball Park: plenty of fans from both leagues came away with a winning fit in All-Star apparel, thanks to Virtual Mirrors powered by Zugara’s virtual dressing room software and the latest Kinect sensor.

    Virtual Mirror stations were strategically placed near the entrance to the Major League Baseball store at the All-Star FanFest, which was located at the nearby Duke Energy Convention Center. These stations looked like boxes with dressing room mirrors, but above the mirrored surface was a Kinect sensor that captured the shopper’s image. The setup allowed fans of all sizes and ages (men, women and children) to try on All-Star branded jerseys, T-shirts and other baseball-related apparel—virtually.

    Virtual Mirror at the MLB All-Star Game FanFest

    The latest Kinect sensor’s advanced body tracking capabilities are key to the Virtual Mirror, providing an accurate body map, which the software then uses to automatically size and scale the selected clothing to fit the shopper. What’s more, the Kinect sensor’s ability to accurately follow the movements of 25 skeletal joints enables enhanced motion and gesture tracking, allowing the software to synchronize the apparel to the would-be purchaser’s movements. Add in background subtraction and the insertion of a high-definition virtual background image, and the shopper needn’t imagine how she’d look in that new National League jersey; she can see it right on the screen!
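
    Zugara’s software is proprietary, but the sizing idea can be sketched from the body map alone: measure the tracked shoulder-to-shoulder distance and scale the apparel artwork accordingly. This is an illustrative example, and the reference width is an arbitrary placeholder.

        using System;
        using Microsoft.Kinect;

        class ApparelSizing
        {
            // Scale factor for the jersey overlay, relative to the shoulder width
            // (in meters) the artwork was authored for; 0.40 m is a placeholder.
            public static double ApparelScale(Body body, double referenceShoulderWidthMeters = 0.40)
            {
                CameraSpacePoint left = body.Joints[JointType.ShoulderLeft].Position;
                CameraSpacePoint right = body.Joints[JointType.ShoulderRight].Position;

                double shoulderWidth = Math.Sqrt(
                    Math.Pow(right.X - left.X, 2) +
                    Math.Pow(right.Y - left.Y, 2) +
                    Math.Pow(right.Z - left.Z, 2));

                return shoulderWidth / referenceShoulderWidthMeters;
            }
        }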

    Although the 2015 All-Star Game is history, the Virtual Mirror technology is poised to take off, as more and more retailers use it to engage with customers. So maybe you don’t want an American League warm-up jacket—how about a nice Oxford shirt and matching tie? Step up to the virtual dressing room and see how they look.

    The Kinect for Windows Team



    Kinect Studio lets you code on the go


    Imagine you’re wearing multiple hats: you’re a business owner, a software developer, and a devoted spouse and parent. If you’re like Anup Chathoth, this scenario requires no imagining whatsoever. A husband and the father of two young children, Anup is also the CEO and main software developer at Ubi Interactive, a Seattle-based startup whose products use the Kinect for Windows v2 sensor to project an interactive computer image onto any surface—walls, tabletops, windows, you name it.

    Running a growing startup company is a full-time job, and as any parent knows, so is raising a family. So with two full-time gigs already on his plate, how does Anup find time to develop the next features for Ubi’s software?

    “That’s a real problem,” he acknowledges. “During the day, I’m busy being a CEO, looking at cash flow, scheduling projects, meeting with potential customers. And at night I want to spend as much time as possible with my wife and kids. It doesn’t leave much time for writing code.”

    Using Kinect Studio to develop Ubi software without a sensor plugged in

    Luckily for Anup, Kinect Studio makes coding for Kinect for Windows applications a lot easier to pack into a crowded day. Kinect Studio, which is included in the free Kinect for Windows SDK 2.0, allows a developer to record all the data that’s coming into an application through a Kinect sensor. This means that you can capture the data on a series of body movements or hand gestures and then use that data over and over again to debug or enhance your code. Instead of being moored to a Kinect for Windows setup and having to repeatedly act out the user scenario, you have a faithful record of the color image, the depth data, and the three-dimensional relationships. With this data uploaded to your handy laptop, you can squeeze in a little—or a lot—of Kinect coding whenever time permits.

    Let’s take a quick look at the main features of Kinect Studio. As shown below, it features four windows: a color viewer, a depth viewer, a 3D viewer, and a control window that lets you record and play back the captured data.

     The four windows in Kinect Studio, clockwise from top: control window, color viewer, depth viewer, and 3D viewer

    The color viewer shows exactly what you’d expect: a faithful color image of the user scenario. The depth viewer shows the distance of people and objects in the scene using color: near objects appear red; distant ones are blue; and objects in between show up in various shades of orange, yellow, and green. The 3D viewer gives you a three-dimensional wire-frame model of the scene, which you can rotate to explore from different perspectives.

    The control, or main, window in Kinect Studio is what brings all the magic together. Here’s where you find the controls to record, save, and play back the captured scenario. You can stop and start the recording by moving the cursor along a timeline, and you can select and save sections.

    Once you’ve recorded the user scenario and saved it to your laptop in Kinect Studio, you can play it over and over while you modify the code. The developers at Ubi, for instance, use Kinect Studio to record usability sessions, during which end users act out various scenarios with Ubi software. They can replay, stop, and start the scenarios frame by frame, to make sure their code is behaving exactly as they want. And since the recordings are accessible from a laptop, developers can test and modify their Kinect for Windows application code just about anywhere.
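
    Playback doesn’t even require the Kinect Studio user interface: the SDK also ships a Microsoft.Kinect.Tools library that can drive a recorded clip from code. Here is a minimal sketch, assuming that API, which plays a saved .xef file so the application under development receives the same frames it would get from a live sensor:

        using System.Threading;
        using Microsoft.Kinect.Tools;

        class ClipPlayer
        {
            public static void PlayClip(string filePath)
            {
                using (KStudioClient client = KStudio.CreateClient())
                {
                    client.ConnectToService();
                    using (KStudioPlayback playback = client.CreatePlayback(filePath))
                    {
                        playback.Start();
                        while (playback.State == KStudioPlaybackState.Playing)
                        {
                            Thread.Sleep(500);   // poll until the clip finishes
                        }
                    }
                    client.DisconnectFromService();
                }
            }
        }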

    Using Kinect Studio to perform and analyze user experience studies

    For Anup, it means that he can code during the bus ride home or in bed, after his children have gone to sleep. “Kinect Studio doesn’t actually increase the number of hours in the day,” he says, “but it sure feels like it.”

    The Kinect for Windows Team



    Kinect powers informed shoppers


    You go to a large grocery store to buy some Fuji apples. The signs tell you which are organic and which aren’t, but that (and the price per weight) is about all the information you’ll get. Now imagine a shopping scenario where you could learn much more about those apples: the orchard where they were grown, what chemicals they were treated with, how long ago they were picked, where they were warehoused, and more. And imagine you could learn all this just by pointing your finger at the fruit.

    Visitors to the Future Food District at Milan Expo 2015 needn’t imagine such a shopping trip—they experienced it in the Grocery Store of the Future, a pavilion that functioned as a real grocery store, complete with 1,500 products on display. What the shoppers might not have noticed were the 200 Kinect for Xbox One sensors strategically placed near the product displays. But while the shoppers might not have seen the Kinect sensors, the sensors certainly saw them, capturing their body images and measuring their distance from the product display.

     Two hundred fifty strategically placed Kinect for Xbox One sensors enabled shoppers to obtain detailed product information simply by pointing their finger.

    Then, when a Kinect sensor detected a shopper pointing at an item, an overhead, mirrored display presented an array of detailed product information, extracted from a Microsoft Azure cloud database. The gesture recognition resulted from a custom WPF application created by business solutions providers Accenture and Avanade. The developers chose the Kinect for Xbox One sensor because it provides precise body tracking and has the ability to detect shoppers at medium depth from the product displays. They also valued the high reliability of the sensors, which were mounted in a crowded environment and functioning 24 hours a day.
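
    The post doesn’t describe the pointing logic itself, but a simple version can be sketched from SDK 2.0 joint data: treat an arm as pointing when the hand is extended well in front of the shoulder, then intersect the elbow-to-hand ray with the plane of the product display. Everything below (the helper name, the thresholds, the display-plane convention) is a hypothetical illustration, not the Accenture and Avanade implementation.

        using System;
        using Microsoft.Kinect;

        class PointingDetector
        {
            // Returns true when the shopper's right arm is extended toward the
            // display, and outputs where the elbow-to-hand ray meets the display
            // plane (modeled here as z = displayPlaneZ in camera space).
            public static bool TryGetPointingTarget(Body body, float displayPlaneZ,
                                                    out CameraSpacePoint target)
            {
                target = default(CameraSpacePoint);
                if (body == null || !body.IsTracked) return false;

                CameraSpacePoint hand = body.Joints[JointType.HandRight].Position;
                CameraSpacePoint elbow = body.Joints[JointType.ElbowRight].Position;
                CameraSpacePoint shoulder = body.Joints[JointType.ShoulderRight].Position;

                // Arm counts as pointing only if the hand is at least 0.35 m closer
                // to the sensor (mounted near the product display) than the shoulder.
                if (shoulder.Z - hand.Z < 0.35f) return false;

                float dz = hand.Z - elbow.Z;
                if (Math.Abs(dz) < 1e-3f) return false;
                float t = (displayPlaneZ - elbow.Z) / dz;
                if (t <= 0) return false;

                target.X = elbow.X + t * (hand.X - elbow.X);
                target.Y = elbow.Y + t * (hand.Y - elbow.Y);
                target.Z = displayPlaneZ;
                return true;
            }
        }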

    So the next time you’re in the produce aisle puzzling over which apple to buy, think about those lucky patrons at the Milan Expo. And be comforted by the knowledge that the technology that powered the Grocery Store of the Future is here today. 

    The Kinect for Windows Team



    Kinect helps detect PTSD in combat soldiers


    “War is hell.” So proclaimed U.S. General William Tecumseh Sherman, whose experiences during the American Civil War surely made him an expert on the subject. But for some soldiers and veterans, the hell doesn’t stop when combat ends. It lingers on as post-traumatic stress disorder (PTSD), a condition that can make it all but impossible to lead a normal daily life.

    According to the U.S. Department of Veterans Affairs, PTSD affects 11 to 20 percent of veterans who have served in the most recent conflicts in Afghanistan and Iraq. It’s no wonder, then, that DARPA (the Defense Advanced Research Projects Agency, a part of the U.S. Department of Defense) wants to detect signs of PTSD in soldiers, in order to provide treatment as soon as possible.

    One promising DARPA-funded PTSD project that has garnered substantial attention is SimSensei, a system that can detect the symptoms of PTSD while soldiers speak with a computer-generated “virtual human.” SimSensei is based on the premise that a person’s nonverbal communications—things like facial expressions, posture, gestures and speech patterns (as opposed to speech content)—are as important as what he or she says verbally in revealing signs of anxiety, stress and depression.

    The Kinect sensor plays a prominent role in SimSensei by tracking the soldier’s body and posture. So, when the on-screen virtual human (researchers have named her Ellie, by the way) asks the soldier how he is feeling, the Kinect sensor tracks his overall movement and changes in posture during his reply. These nonverbal signs can reveal stress and anxiety, even if the soldier’s verbal response is “I feel fine.”

    SimSensei interviews take place in a small, private room, with the subject sitting opposite the computer monitor. The Kinect sensor and other tracking devices are carefully arranged to capture all the nonverbal input. Ellie, who has been programmed with a friendly, nonjudgmental persona, asks questions in a quiet, even-tempered voice. The interview begins with fairly routine, nonthreatening queries, such as “Where are you from?” and then proceeds to more existential questions, like “When was the last time you were really happy?” Replies yield a host of verbal and nonverbal data, all of which is processed algorithmically to determine if the subject is showing the anxiety, stress and flat affect that can be signs of PTSD. If the system picks up such signals, Ellie has been programmed to ask follow-up questions that help determine if the subject needs to be seen by a human therapist.

    The algorithms that underlie SimSensei’s diagnostic abilities derive from MultiSense, an ambitious project of the Institute for Creative Technologies (ICT) at the University of Southern California. Created under the leadership of computer science researcher Louis-Philippe Morency, MultiSense aims to capture and model human nonverbal behaviors. Morency and psychologist Albert Rizzo, ICT’s director of medical virtual reality, then led a two-year, multidisciplinary effort that resulted in SimSensei. Rizzo points out that SimSensei’s value as an assessment tool is immensely important to the military, as there are too few trained human evaluators to screen all the troops and veterans who are at risk of PTSD.

    Giota Stratou, one of ICT’s key programmers of SimSensei, provided details on the role of the Kinect sensor. “We used the original Kinect sensor and SDKs 1.6 and 1.7, particularly to track the points and angles of rotation of skeletal joints, from which we constructed skeleton-based features for nonverbal behavior. We included in our analysis features encoded from the skeleton focusing on head movement, hand movement and position, and we studied overall value by integrating in our distress predictor models.”
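
    As a concrete, purely illustrative example of a skeleton-based feature of this kind (not ICT’s implementation), the path length a joint travels over an interview segment is easy to compute from logged per-frame positions; run once for the head and once for a hand, it yields the sort of movement features that feed a distress predictor model.

        using System.Collections.Generic;
        using System.Numerics;

        class MovementFeatures
        {
            // Total path length (in meters) traveled by one joint over a segment,
            // given its logged per-frame 3D positions.
            public static double PathLength(IList<Vector3> jointPositionPerFrame)
            {
                double total = 0;
                for (int i = 1; i < jointPositionPerFrame.Count; i++)
                {
                    total += Vector3.Distance(jointPositionPerFrame[i], jointPositionPerFrame[i - 1]);
                }
                return total;
            }
        }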

    Although the current version of SimSensei uses the original Kinect sensor, Stratou and her colleagues plan to employ the Kinect for Xbox One sensor and the Kinect for Windows SDK 2.0 in the next version, eager to take advantage of the latest sensor’s enhanced body and facial tracking capabilities. Meanwhile, the first version of SimSensei has been tested with a small sample of soldiers in the field under funding from DARPA, and in a treatment study with survivors of sexual trauma in the military, funded by the Department of Defense. The results thus far have been promising, leading us to hope that someday soon we will see SimSensei widely used in both military and civilian settings, where its abilities to decode nonverbal communication could detect untreated psychological distress.

    The Kinect for Windows Team



    Kinect-enabled PC game scores big in 2015 Imagine Cup


    Team TerraBite: the name evokes mixed metaphors of earth (terra), predation (bite), and technology (terabyte). And it’s the perfect moniker for the squad from Abertay University, whose apocalyptically themed, Kinect-enabled PC game, Project Cyber, made it all the way to the World Semifinals of the 2015 Imagine Cup. On the way, they garnered accolades for the game’s look and feel, its clever narrative, and its creative use of the latest Kinect sensor.

    Project Cyber is a Kinect v2 experience built for the PC. The player takes on the role of a “white hat” hacker, who must go deeper and deeper into cyberspace to save the world from destruction. It features an episodic narrative; turn-based, first-person gameplay; futuristic 3D environments; and evolving, dynamic audio. Players progress through various levels, each of which is built like a puzzle that can be solved by observing enemy movements and by using programs that are earned during gameplay. These programs include one for uninstalling an enemy, while another creates new tiles that can bridge gaps in the geometric world of cyberspace. The game also features a companion app, which enables users to create their own levels and offers game-related lore and tips.

    Although the game can be controlled with a mouse and keyboard, it was designed for the latest Kinect sensor, and it’s at its best when played via Kinect gesture controls. One of the team’s primary objectives was to use Kinect to make a game that can be played while relaxing—physically, if not mentally. Thus they avoided overly complicated, extensive gestures or maneuvers, and instead opted for those that can be executed while seated.

    The team also wanted to give players the sense that they are protagonists in a sci-fi film, moving their hands through the air to navigate cyberspace. As a result, they made the most of the hand-tracking ability of the v2 sensor and SDK 2.0. By moving their hands in front of the sensor, players advance through a cyberspace composed of hexagonal tiles. Since the game is laid out in a hexagonal grid, turns are always at a set distance—turning your hand to the left, for example, lets you look at the tile one to the left.

    SDK 2.0 helped the team get a prototype running quickly. The tools and samples in the SDK gave them a good understanding of what the latest Kinect sensor can do, while Kinect Studio and the Visual Gesture Builder utilities enabled them to create the custom gestures and motions that control game play. They also took advantage of Kinect’s confidence ratings for gestures, using them to determine the level of accuracy with which a player would need to execute a gesture in order to produce the desired gameplay effect. In addition, the Kinect Unity wrapper allowed them to see results within the game immediately.
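
    To show how such confidence ratings are typically used (a hedged sketch assuming the SDK’s Microsoft.Kinect.VisualGestureBuilder API, not Team TerraBite’s code), a discrete gesture trained in Visual Gesture Builder can be gated on a per-gesture confidence threshold before it triggers an in-game action:

        using Microsoft.Kinect.VisualGestureBuilder;

        class GestureGate
        {
            // Tunable, illustrative threshold; each gesture can use its own value
            // to demand more or less precise execution from the player.
            const float ConfidenceThreshold = 0.6f;

            // Checks the latest Visual Gesture Builder frame and reports whether
            // the given discrete gesture fired with enough confidence.
            public static bool GestureTriggered(VisualGestureBuilderFrameReader reader, Gesture gesture)
            {
                using (VisualGestureBuilderFrame frame = reader.CalculateAndAcquireLatestFrame())
                {
                    if (frame == null || frame.DiscreteGestureResults == null) return false;

                    DiscreteGestureResult result;
                    if (!frame.DiscreteGestureResults.TryGetValue(gesture, out result) || result == null)
                        return false;

                    return result.Detected && result.Confidence >= ConfidenceThreshold;
                }
            }
        }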

    Team TerraBite’s road to the Imagine Cup Semifinals began with their winning the gaming category in the People’s Choice awards sponsored by Stuff magazine, a UK publication devoted to technology gadgets and gear. That prize put them in the UK National Finals of the Imagine Cup, where they won the Games category—and where they met and networked with many other talented competitors.

    Core members of Team TerraBite, from left to right: Gerda Gutmane, David Hudson, Maximilian Mantz
    and Anthony Devlin. Associate members of the team, not shown, include Andrew Ewen, Dale Smith,
    Lewis Doran, Scott Brown, Tom Marcham and Yana Hristova.

    Then it was off to the Imagine Cup World Semifinals, competing against some of the best student talent from around the globe. Although Team TerraBite’s winning streak ended there, they received valuable feedback on their game and priceless experience in pitching their ideas to a wide audience—skills that will no doubt come in handy should they decide to refine their game and market it. So keep an eye out for the Kinect-enabled Project Cyber—and your chance to save the world, one gesture at a time.

    The Kinect for Windows Team



    Kinect for Windows clicks at Eyeo


    We recently attended the Eyeo Festival in Minneapolis, Minnesota, and what an eye-opening experience it was. For those who aren’t familiar with Eyeo, it’s a creative coding conference that brings together coders, data lovers, artists, designers and game developers. It features one-of-a-kind workshops, theater-style sessions and interaction with open-source activists and pioneers.

    This year marked the fifth anniversary of the Eyeo Festival, and the Kinect for Windows team was pleased to be among the workshop presenters. Our workshop, “Incorporating Kinect v2 into Creative Coding Experiences,” drew 35 participants—none of whom had worked with the latest Kinect sensor. In fact, only four of them had worked with the original Kinect sensor, so the workshop offered a great opportunity to acquaint these developers with the creative potential of Kinect.

    The event provided resources, links and training, preparing the participants to incorporate Kinect into their future endeavors. As part of their workshop fee, each received a Kinect for Xbox One sensor, a Kinect Adapter for Windows and a Windows To Go drive. As the workshop progressed, the participants got deeply engaged in the Kinect technology—experimenting, asking questions and writing playful scripts.

     Participants in the Kinect v2 workshop got down to some serious creative coding.

    Even more Eyeo attendees learned about Kinect for Windows and related technologies from James George, Jesse Kriss and Béatrice Lartigue, three of the festival’s featured speakers. George, one of the most influential artists in the coding community, spoke about the photography of the future and how he has used the Kinect sensor’s depth and color cameras to create images that can be explored in three dimensions. Kriss, who designs and builds tools for artists and scientists, discussed his work on introducing Microsoft HoloLens to the scientists in the Human Interfaces Group at NASA’s Jet Propulsion Laboratory. Lartigue, a new media artist and designer, talked about Les métamorphoses de Mr. Kalia, her Kinect installation at London’s Barbican Centre, describing how she used Kinect for Windows to reduce the space between us and our environment in her interactive works. These compelling speakers left the audience eager to experiment with Kinect for Windows—and anxious for the release of Microsoft HoloLens.

    Eyeo provided a great opportunity to reach the creative coding community, showing them how Kinect for Windows can be a potent tool in their work.

    The Kinect for Windows Team



    Salon gives devs tips and tricks for Kinect for Windows


    On the evening of May 29, 2015, nearly 100 top developers and investors gathered in Beijing for Microsoft China’s Second Kinect for Windows Salon. This five-hour event presented advanced techniques for working with the latest Kinect sensor and the Kinect for Windows SDK 2.0, and helped attendees tackle complex coding problems. The gathering included developers from Samsung and other companies, those who work independently, and students from such prestigious schools as Beihang University (BUAA) and Tsinghua University, along with investors looking for projects or companies to support.

     Nearly 100 devs gathered to learn about using the latest features of Kinect for Windows.

    Attendees were especially excited by a demo of the RoomAlive Toolkit, which had made its public debut only weeks earlier at Build 2015. Developers spoke enthusiastically about the potential to create immersive augmented reality experiences by using this open-source SDK, which will enable them to calibrate a network of multiple Kinect sensors and video projectors.

    A session on 3D Builder also generated intense interest, thanks in large part to the presence of 3D Impression Technology, a 3D printing studio whose representatives answered questions about using 3D Builder, creating 3D scans with the Kinect sensor, and printing out 3D models. Attendees even went home with keychains featuring freshly printed 3D models of the latest Kinect sensor.

     Scans of the latest Kinect sensor (top) were used to print 3D models that were attached to souvenir keychains.

    Perhaps the most enriching aspect of the salon was the opportunity to develop new relationships. With such a broad cross-section of the Kinect for Windows developer community in attendance, the salon provided fertile ground for networking, cooperative learning, and recruiting (much to the delight of the students). It also provided investors with leads on projects and companies worth funding. We look forward to seeing some creative Kinect for Windows applications as a result of these relationships.

    The Kinect for Windows Team



    Kinect helps apply color to complex surfaces


    We’ve talked about making 3D models from data captured by the Kinect sensor—we’ve even pointed you to a free Windows Store app that lets you do it easily. But one drawback of most home 3D printers is that their output is monochromatic. So that nifty 3D model of your dad that you wanted to give him on Father’s Day will have to be hand painted—unless you want your father memorialized as a pasty grey figurine.

    This video demonstrates the computational hydrographic printing process developed by researchers at Zhejiang University and Columbia University.

    Now, some ingenious researchers at Zhejiang University and Columbia University have come up with a relatively inexpensive way to apply color to your home-made 3D models, even when the model includes complex surface textures. And in a nice cost-effective coincidence, a key piece of their system is the very Kinect sensor that you can use to take the original 3D scans.

    Setup includes a "gripper" mechanism (shown here holding a 3D mask), a water basin to hold the hydrographic color film, and a Kinect sensor to enable precise registration of the colors to the model's surface.

    The researchers’ system employs hydrographic printing, a technique for transferring color inks to object surfaces. The setup includes a basin for holding the water and color film that are the heart of the hydrographic printing process, a mechanism for gripping and dipping the 3D model, and a Kinect sensor to measure the object's location and orientation. The researchers use Kinect Fusion to reconstruct a point cloud of the 3D model; they then run this data through an algorithm they’ve devised to provide precise registration of the colors on the object’s surface shapes and textures. The results, as seen in the video above, are nothing short of amazing.

    The Kinect for Windows Team

