Kinect for Windows Product Blog

    No bones about it: Kinect for Windows v2 skeletal tracking vastly better

    You can read about the improvements that Kinect for Windows v2 offers over its predecessor, but seeing the differences with your own eyes is really, well, eye-opening—which is why we’re so pleased by this YouTube video posted by Microsoft MVP Josh Blake of InfoStrat. In it, Blake not only describes the improvements in skeletal tracking provided by the v2 sensor and the preview SDK 2.0 (full release of SDK 2.0 now available), he actually demonstrates the differences by showing side-by-side comparisons of himself and others being tracked simultaneously with the original sensor and the more robust v2 sensor.

    As Blake shows, the v2 sensor tracks more joints, with greater anatomical precision, than the original sensor. His video also highlights the major improvements in hand tracking that the v2 sensor and SDK 2.0 provide, and, with the help of two colleagues, he demonstrates how Kinect for Windows v2 can track more bodies than was possible with the original sensor and prior releases of the SDK.
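
    If you're curious what this looks like in code, here's a minimal sketch against the native C++ API in SDK 2.0 (the managed API is analogous). Error handling and teardown are trimmed for brevity; the loop simply polls for the latest body frame.

```cpp
// Minimal polling loop against the Kinect for Windows SDK 2.0 native API.
// The v2 pipeline tracks up to six full skeletons of 25 joints each, plus
// per-hand states -- including the two-finger "lasso" point pose.
#include <Kinect.h>
#include <cstdio>

int main()
{
    IKinectSensor* sensor = nullptr;
    GetDefaultKinectSensor(&sensor);
    sensor->Open();

    IBodyFrameSource* source = nullptr;
    sensor->get_BodyFrameSource(&source);

    IBodyFrameReader* reader = nullptr;
    source->OpenReader(&reader);

    for (;;)
    {
        IBodyFrame* frame = nullptr;
        if (FAILED(reader->AcquireLatestFrame(&frame)) || !frame)
            continue;                                  // no new frame yet

        IBody* bodies[BODY_COUNT] = {};                // BODY_COUNT == 6
        frame->GetAndRefreshBodyData(BODY_COUNT, bodies);

        for (int i = 0; i < BODY_COUNT; ++i)
        {
            if (!bodies[i]) continue;

            BOOLEAN tracked = FALSE;
            bodies[i]->get_IsTracked(&tracked);
            if (tracked)
            {
                Joint joints[JointType_Count];         // 25 joints, up from 20 on v1
                bodies[i]->GetJoints(JointType_Count, joints);

                HandState right = HandState_Unknown;   // Open, Closed, or Lasso
                bodies[i]->get_HandRightState(&right);
                if (right == HandState_Lasso)
                    printf("body %d: two-finger point\n", i);
            }
            bodies[i]->Release();
        }
        frame->Release();
    }
}
```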

    When asked how the improved skeletal-tracking capabilities can be utilized, Blake responded, “It helps improve several different scenarios. The more accurate anatomical precision is particularly useful in health and rehabilitation apps, as well as for controlling virtual avatars more accurately.” He also finds great potential in the enhanced hand-tracking capabilities, noting that “recognizing the two-finger point pose in addition to the hand open and hand closed poses means we have more options for developing interesting deep interactions.”

    Finally, Blake points out that the ability to track the movements of up to six individuals will be valuable in a variety of situations, such as showroom scenarios or workplace applications that involve multiple people. “Before, users had a hard time understanding why the application would respond to two people but not more, or how to get it to switch to a new person,” he says. “The support for six full skeletons also means that we don’t have to compromise in how many people can interact with an application or experience at once.”

    The Kinect for Windows Team


    This nurse makes house calls, thanks to Kinect for Windows

    Telemedicine has become one of the hot trends in healthcare, with more and more patients and doctors using smartphones and tablets to exchange medical information. The convenience of not having to travel to the doctor’s office or clinic is a big part of the appeal—as is the relief of not wasting valuable time thumbing through outdated waiting-room magazines when an appointment runs late. And for patients living in isolated or underserved areas, telemedicine offers care that might otherwise be unattainable. Despite these advantages, telemedicine can be coldly impersonal, lacking the comfort of interacting with another human being.

    Silicon Valley-based Sense.ly is working to bring a human face to telemedicine. The company’s Kinect-powered “nurse avatar” provides personalized patient monitoring and follow-up care—not to mention a friendly, smiling face that converses with patients in an incredibly lifelike manner. The nurse avatar, affectionately nicknamed Molly, has access to a patient’s records and asks appropriate questions related directly to the patient’s past history or present complaints. She has a pleasant, caring demeanor that puts patients at ease. Interacting with her seems surprisingly natural, which, of course, is the goal.

    With the help of Kinect for Windows technology, Sense.ly's nurse avatar, called Molly, can respond
    to patients’ speech and body movements.

    By using Kinect for Windows technology, Sense.ly enables Molly to recognize and respond to her patient’s visual and spoken inputs. The patient stands or sits in front of a Kinect sensor, which captures his or her image and sends it to Molly. Does the patient have knee pain? She can show Molly exactly where it hurts. Is the patient undergoing treatment for bursitis that limits his range of motion? He can raise his affected arm and show Molly whether his therapy is achieving results. In fact, the Kinect sensor’s skeletal tracking capabilities allow Sense.ly to measure the patient’s range of motion and to calculate how it has changed from his last session. What’s more, with Kinect providing a clear view of the patient, Molly can help guide him or her through therapeutic exercises.
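
    Sense.ly hasn't published its implementation, but the range-of-motion measurement described above can be approximated from the skeleton alone: the angle at a joint is just the angle between the two bone segments that meet there. Here's a sketch of the idea; the helper names are ours, not Sense.ly's.

```cpp
// Hypothetical sketch (not Sense.ly's code): estimating shoulder range of
// motion from Kinect skeleton joints. The angle between the upper-arm and
// torso segments approximates shoulder abduction; comparing this session's
// maximum against a stored baseline shows how mobility has changed.
#include <Kinect.h>
#include <algorithm>
#include <cmath>

// Angle (in degrees) at vertex a between segments a->b and a->c.
static float AngleDeg(const CameraSpacePoint& a,
                      const CameraSpacePoint& b,
                      const CameraSpacePoint& c)
{
    float ux = b.X - a.X, uy = b.Y - a.Y, uz = b.Z - a.Z;
    float vx = c.X - a.X, vy = c.Y - a.Y, vz = c.Z - a.Z;
    float cosA = (ux * vx + uy * vy + uz * vz) /
                 (std::sqrt(ux * ux + uy * uy + uz * uz) *
                  std::sqrt(vx * vx + vy * vy + vz * vz));
    cosA = std::max(-1.0f, std::min(1.0f, cosA));   // guard rounding error
    return std::acos(cosA) * 57.29578f;             // radians -> degrees
}

// Call once per tracked body frame during a therapy session.
void UpdateRangeOfMotion(const Joint joints[JointType_Count], float& sessionMax)
{
    // Angle at the right shoulder between the elbow and the hip: roughly
    // 0 degrees with the arm at the side, ~90 raised level, ~180 overhead.
    float abduction = AngleDeg(joints[JointType_ShoulderRight].Position,
                               joints[JointType_ElbowRight].Position,
                               joints[JointType_HipRight].Position);
    sessionMax = std::max(sessionMax, abduction);   // compare to last session
}
```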

    A growing number of doctors and hospitals are recognizing the value of applications such as Sense.ly. In fact, the San Mateo Medical Center is one of several major hospitals that have recently added Molly to their staff, so to speak. The value of such solutions is particularly striking in handling patients who suffer from long-term conditions that require frequent monitoring, such as high blood pressure or diabetes.

    Solutions like Sense.ly also provide a clear cost benefit for providers and insurers, as treating a patient remotely is less costly and generally more efficient than onsite care. In a recent pilot program, the use of Sense.ly reduced patient calls by 28 percent and freed up nearly a fifth of the workday for the clinicians involved in the program.

    Most importantly, Sense.ly’s Kinect-powered nurse avatar offers the promise of better health outcomes, the result of more frequent medical monitoring and of patients’ increased involvement in their own care. Something to think about the next time you’re stuck in the doctor’s waiting room.

    The Kinect for Windows Team


    Dancing the night away with Kinect v2

    Visitors to Seattle’s 2014 Decibel Festival expected the avant-garde. After all, this four-day event celebrated innovations in electronic music, visual art, and new media. But even the most jaded attendees must have been surprised to encounter the Cube, a 4-foot-square block of transparent acrylic that catapulted them into an interactive dance party.

    The Cube uses four Kinect v2 sensors to detect the dancers and trace their movements,
    integrating them into the visual experience. (Photo: Scott Eklund)

    Powered by five computers and four Kinect v2 sensors working together, the Cube drew in curious onlookers, capturing their images and incorporating them into the installation. As participants stood in front of it, the Cube reacted, pulsating to music and tracing the movements of those around it. The Kinect sensors could detect up to three people on each side of the Cube. And thanks to the transparent nature of the structure, participants could see others through the Cube, so a dancer on one side of the Cube could react to the movements of a partner on another side. In fact, the hands of dancers on opposite sides appeared to be linked by virtual ribbons. Their individual dance moves thus merged into sinuous visual collaborations, enabling the Cube to create a virtually connected dance whose participants were in different physical spaces.

    A key technical challenge in creating the Cube was to link together the four Kinect sensors inside the structure, so that the devices could “talk” to each other. Abram Jackson, a program manager with Microsoft Exchange Server who helped with the technical engineering, described the problem. “We had to take all four of the Kinect sensors, map out a cohesive view of the room to keep track of where the people were, even if they changed to a different sensor, so the images displayed on the Cube would still make sense to that person,” he explains.
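
    The Cube's calibration code isn't public, but the core of what Jackson describes can be sketched: each sensor reports positions in its own camera space, and a per-sensor rigid transform (found once, when the rig is calibrated) carries them into a shared room frame, where sightings from different sensors can be matched by proximity. The structures below are our illustration, not Stimulant's implementation.

```cpp
// Our illustration of the multi-sensor idea, not Stimulant's implementation:
// each of the Cube's four Kinect v2 sensors reports skeletons in its own
// camera space; a rigid transform per sensor (rotation + translation, found
// once during calibration) maps every position into one shared "room" frame,
// so a person who crosses from one sensor's view into another's keeps a
// single, stable identity.
#include <Kinect.h>
#include <cmath>

struct RoomPoint { float x, y, z; };

struct SensorPose            // calibration result for one sensor
{
    float r[3][3];           // rotation: camera space -> room space
    float t[3];              // position of the sensor in the room
};

RoomPoint ToRoom(const SensorPose& s, const CameraSpacePoint& p)
{
    return {
        s.r[0][0] * p.X + s.r[0][1] * p.Y + s.r[0][2] * p.Z + s.t[0],
        s.r[1][0] * p.X + s.r[1][1] * p.Y + s.r[1][2] * p.Z + s.t[1],
        s.r[2][0] * p.X + s.r[2][1] * p.Y + s.r[2][2] * p.Z + s.t[2],
    };
}

// Two sightings from different sensors are the same person if their
// room-space positions (say, of JointType_SpineBase) nearly coincide.
bool SamePerson(const RoomPoint& a, const RoomPoint& b, float radius = 0.5f)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) < radius;
}
```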

    The innovative dance-party coding work was done in conjunction with Stimulant, a Seattle-based digital design firm. The Cube is thus a prime example of the kind of innovation that occurs when the creative development community takes hold of the Kinect v2 hardware and its SDK (software development kit).  As Rick Barraza, senior technical evangelist at Microsoft, observed, “It’s about evolutionary innovation versus revolutionary innovation. We won’t reach that next level until we encourage creativity.” Barraza and his colleagues actively encourage hackers of all stripes—developers, designers, art directors, and hobbyists—to experiment with Microsoft products. He even organized an Ambient Creativity Hackathon this past summer, which inspired an eclectic group of hackers to let their imaginations soar during three days of experimentation with Kinect for Windows v2.

    What’s next in this process of evolutionary innovation?  Well, for the Cube, the next goal is to scale up to an even bigger size, or to link multiple Cubes together so they can communicate with one another.  Whatever happens, we’ll be eager to report on it!

    The Kinect for Windows Team


    Working the angles…with Kinect

    Angles are all around us: the hands of a clock, the blades of a pair of scissors, the corner of a countertop. But while angles are ubiquitous in our environment, acquiring an abstract understanding of angles and their measurement, which is critical to our ability to solve geometric and trigonometric problems, is challenging for many youngsters. Happily, recent research using a Kinect sensor shows promise in helping elementary-school students rise to this challenge.

    Carmen Petrick Smith, assistant professor of mathematics education at the University of Vermont (center),
    works with undergraduate education majors (left to right) Tegan Garon, Sam Scrivani, and Kiersten Barr
    on movements that are used to help elementary school children learn geometry.
    (Credit: Andy Duback; used by permission)

    Professor Carmen Petrick Smith at the University of Vermont and Professor Barbara King at Florida International University exposed 20 third- and fourth-grade students to a Kinect-enabled learning experience designed to help the youngsters discover key properties of angles by moving their own bodies. The students’ understanding of angles was measured by a test administered prior to the Kinect experience. With that baseline data in place, the students positioned themselves in front of the Kinect sensor and made various angles with their arms.

    The Kinect sensor captured image and depth data about the students' arm positions, using the data to create an onscreen graphic representation in which arrows duplicated the angle formed by a student’s arms. In addition, the program turned the monitor screen one of four colors, depending on whether the student’s arms formed an acute, right, obtuse, or straight angle. Next, the experience changed the lengths of the arrows in the graphic representation and added a virtual protractor that measured the angle in the graphic display. These visual cues helped students see that the measure of an angle does not depend on the length of its “arms” (a common misconception among young pupils). Moreover, the addition of the protractor helped the students acquire a sense of how angles are measured in degrees.
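
    The researchers' program isn't public, but its classification step is easy to reconstruct from the skeleton data: treat each arm as a vector out from the upper spine and bucket the angle between them. Here's a sketch using Kinect v2 joint names (the study could equally have used the original sensor's 20-joint skeleton); the tolerance bands are our guesses.

```cpp
// Hypothetical sketch (the study's software is not public): compute the
// angle formed by the student's arms, then bucket it as acute, right,
// obtuse, or straight to choose the screen color. Each arm is treated as
// a vector from the upper spine out to the hand; small tolerance bands
// keep "right" and "straight" achievable in practice.
#include <Kinect.h>
#include <cmath>

enum class AngleKind { Acute, Right, Obtuse, Straight };

AngleKind ClassifyArmAngle(const Joint joints[JointType_Count])
{
    const CameraSpacePoint& o = joints[JointType_SpineShoulder].Position;
    const CameraSpacePoint& l = joints[JointType_HandLeft].Position;
    const CameraSpacePoint& r = joints[JointType_HandRight].Position;

    float ux = l.X - o.X, uy = l.Y - o.Y, uz = l.Z - o.Z;
    float vx = r.X - o.X, vy = r.Y - o.Y, vz = r.Z - o.Z;
    float cosA = (ux * vx + uy * vy + uz * vz) /
                 (std::sqrt(ux * ux + uy * uy + uz * uz) *
                  std::sqrt(vx * vx + vy * vy + vz * vz));
    float deg = std::acos(std::fmax(-1.0f, std::fmin(1.0f, cosA))) * 57.29578f;

    if (deg > 175.0f) return AngleKind::Straight;   // within 5 degrees of 180
    if (deg > 95.0f)  return AngleKind::Obtuse;
    if (deg > 85.0f)  return AngleKind::Right;      // 90 +/- 5 degrees
    return AngleKind::Acute;
}
```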

    Following the Kinect experience, the students were again tested on their understanding of angles. Overall, students showed a statistically significant improvement after the angle activity. The Kinect program logged information about the students’ arm movements at a rate of 30 frames per second, resulting in an average of 6,619.6 recorded frames per student. This copious, detailed data was crucial in analyzing the students’ learning and how their comprehension was (or, in some cases, wasn’t) aided by the body movements.

    The idea that body movements can enhance cognitive understanding has been established in a number of experiments, and this study provides further evidence of the value of incorporating kinesthetic learning in the curriculum. We’re delighted to see yet another example of how the Kinect sensor can be a valuable educational tool.

    The Kinect for Windows Team


    Ubi’s Kinect-powered touchscreens: an affordable solution for the classroom

    Victor Cervantes was searching for a solution. As IT director for COBAEP, a public high school system in the Mexican state of Puebla, he was committed to introducing digital technology into the system’s 37 high schools. Cervantes firmly believed that the use of technology would both improve students’ learning and prepare them for the tech-heavy demands of college and the modern workplace.

    Ubi Interactive uses Kinect for Windows to turn virtually any surface into a touchscreen.

    The problem was finding a technology solution that was pedagogically sound and user friendly—and that wouldn’t bust his budget. He considered interactive whiteboards, but was put off by their high price tag and the steep learning curve for teachers. He was already exploring the potential of Kinect for Windows when he learned about the educational promise of Ubi Interactive, an innovative, Kinect-based system that turns virtually any surface into a touchscreen.

    He contacted Anup Chathoth, co-founder and CEO of Ubi, and arranged for a month-long trial of the product. Cervantes soon realized that Ubi was just what he was seeking. The product would allow teachers to project teaching materials onto their existing classroom whiteboard, turning it into a fully interactive touchscreen. Teachers and students could then page through the content with simple, intuitive touch gestures. Moreover, by using an Ubi Pen, a specialized stylus that works with the Ubi Annotation Tool software, students and teachers could mark up materials right on their giant touchscreen and save their annotations to the digital file.

    Cervantes recognized that the immersive, fun experience of Ubi would engage students and draw them into the learning process. And he liked the simplicity of the product; the fact that it uses intuitive hand gestures and the familiar action of writing with a pen meant that teachers could master the system almost effortlessly. Moreover, he appreciated how broadly Ubi could be applied: it could work with any digital materials, including published educational products, materials created by the teacher, homework submitted by the students, websites, and any Microsoft Office documents.

    Finally, he loved the price: the only hardware requirements are PCs running Windows 8.0 or 8.1 and projectors (both of which can be used for other purposes), and Kinect for Windows sensors, which he found remarkably affordable. When combined with the cost of the Ubi software licenses, the total system costs—hardware and software—were far lower than the price tag for interactive whiteboard systems. Moreover, since the Ubi solution can work with any Windows-compatible or web-based educational materials, it was much more flexible than an interactive whiteboard system.

    So last fall, Cervantes bought an Ubi license and set up a pilot in one classroom, testing the interest of students, teachers, and administrators. Teachers found Ubi easy to use, and they liked the fact that Ubi could handle both new and existing teaching materials. Students were intrigued by the technology and excited by the prospect of having a supersized “tablet” in the classroom. They began urging their teachers to use Ubi more often. 

    Bolstered by the endorsements of both teachers and students, Cervantes received approval for a wider deployment in April 2014. He decided to place Ubi in each of the 231 classrooms and 10 computer labs at 20 of COBAEP’s schools, in time for the beginning of the 2014–2015 school year. Each classroom setup consists of a computer running Windows 8.0, a projector, and a Kinect for Windows sensor, plus, of course, the Ubi software and pens. The entire installation process took only a month and was finished in July 2014.

    The following month, 500 COBAEP teachers attended workshops where they were trained on Ubi, using their own teaching materials. “The teachers found Ubi very intuitive,” says Cervantes, “and the workshops gave them the opportunity to explore how to use the system in their classroom.” With the start of the new school year in September, some 18,000 students began using Ubi. “Ubi Interactive is working smoothly for both teachers and students,” reports Cervantes. “It makes learning truly dynamic,” he adds.

    The Kinect sensor tracks the teacher’s movements as he interacts with Windows 8 on the touchscreen.

    Cervantes’ enthusiasm for the Kinect-powered software comes as no surprise to Ubi’s Chathoth. “Ever since our product was released last year, educators have been some of the most enthusiastic adopters. We already had schools in more than 80 countries using Ubi, so we knew that the product is educationally sound and popular with students and teachers. What’s really exciting to us these days are the enhancements we’ve been able to make to Ubi by using the Kinect for Windows v2 sensor and SDK 2.0.”

    “The Kinect v2 sensor gives us more precise image resolution, which allows us to recognize more gestures and makes Ubi even more responsive,” said Chathoth. “For instance, with the new sensor, Ubi can now recognize a person’s fingertip—something we could not do with the original sensor—and it gives us the ability to better understand depth relationships. With new Active IR images from the Kinect v2 sensor, we are also able to enable seamless switching between pen and hand interaction.”

    Chathoth notes that the Kinect v2 sensor also enabled a new Ubi feature: a simple way to control any Windows application by using gestures. “A user can turn toward the Kinect sensor and control the interactive display by simply waving their hands in the air,” he explains. “If the user hovers over a spot and makes a fist, Ubi will tell the Windows application that the user is touch-activating that interactive part of the onscreen display. This is especially useful for teachers, allowing them to roam more freely while presenting a lesson.”
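
    Ubi's implementation is proprietary, but the hover-and-fist interaction Chathoth describes maps naturally onto the SDK's built-in hand states: an open hand moves a hover cursor, and the transition to a closed hand becomes a touch-down at that spot. Here's a sketch, with the camera-to-display calibration reduced to a placeholder.

```cpp
// Hypothetical sketch of mid-air "fist equals touch": an open hand hovers,
// a closed hand (fist) presses, and reopening the hand releases. The
// camera-to-display mapping below is a placeholder; a real installation
// calibrates it against the projected image.
#include <Kinect.h>
#include <cstdio>

struct ScreenPoint { int x, y; };

static ScreenPoint ProjectToDisplay(const CameraSpacePoint& p)
{
    // Placeholder linear map onto a 1920x1080 display.
    return { static_cast<int>((p.X + 1.0f) * 960.0f),
             static_cast<int>((1.0f - p.Y) * 540.0f) };
}

// Call once per body frame for the tracked presenter.
void OnBodyFrame(IBody* body, HandState& lastState)
{
    Joint joints[JointType_Count];
    body->GetJoints(JointType_Count, joints);

    HandState state = HandState_Unknown;
    body->get_HandRightState(&state);
    ScreenPoint at = ProjectToDisplay(joints[JointType_HandRight].Position);

    if (state == HandState_Closed && lastState != HandState_Closed)
        printf("touch down at (%d, %d)\n", at.x, at.y);   // fist made
    else if (state != HandState_Closed && lastState == HandState_Closed)
        printf("touch up at (%d, %d)\n", at.x, at.y);     // fist released
    else if (state == HandState_Open)
        printf("hover at (%d, %d)\n", at.x, at.y);        // cursor feedback

    lastState = state;
}
```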

    All of which makes Cervantes eager to deploy Ubi in the remaining unequipped classrooms. “We’ve had great success with Kinect for Windows and Ubi software, and we plan to put the v2 version in the classrooms at our other 17 schools over the coming year. This has been a great partnership with Ubi Interactive.”

    So, there you have it: problem solved.

    The Kinect for Windows Team


    Kinect powers cost-effective stroke therapy

    Imagine how your life would change if you couldn’t easily move the arm and hand on one side of your body. For more than 2 million Americans, this scenario requires no imagination: it's their reality, brought on by a stroke, brain injury, multiple sclerosis, cerebral palsy, or some other neurological condition. Physical therapy can help these people, but professional treatments are often time-limited and expensive, and they frequently lack sufficient intensity to create lasting change.

    Recognizing the need for an effective, affordable home therapy, a team at The Ohio State University has developed a therapeutic game called Recovery Rapids—and we’re pleased to note that the Kinect sensor is an important part of this ingenious solution. Recovery Rapids is based on constraint-induced movement therapy (CI therapy), a method shown to produce improvement in patients regardless of their age or how long ago the injury occurred. CI therapy discourages use of the unaffected arm and focuses on using the weakened arm intensively to complete prescribed tasks.

    With Recovery Rapids, the Ohio State team has designed a home-based computer game that provides convenient access to CI therapy at a low cost. The player, who can be seated or standing, controls an onscreen kayaker, using movements of his or her weaker hand and arm to perform such tasks as paddling the kayak, navigating around barriers, and plucking bottles from the river. The Kinect sensor captures body movements of the player and replicates them on the screen, so the player gets immediate feedback. The Kinect-based game requires no joystick or controller, so it’s ideal for people who have no experience with video games. What’s more, Recovery Rapids’ intensive rehabilitation can be customized to the needs and limitations of the individual so that it focuses on specific movements and increases in difficulty as the player's function improves.

    The first prototype of the game was based on the original Kinect for Windows sensor. The original sensor captured the movements of the arm and shoulder, but the game required a special glove fitted with sensors to detect hand movements. The latest version of Recovery Rapids takes advantage of the enhanced body tracking of the Kinect v2 sensor, especially the new sensor’s ability to track finger movements, which eliminates the need for the special glove and thus lowers the cost of the equipment substantially.

    "Detecting hand motion is very important to our product,” says David Maung, one of the Ohio State researchers. “With the original Kinect for Windows sensor, we had to use external hardware to capture finger motion and wrist rotation. The Kinect v2 sensor allowed us to eliminate this hardware, opening the door for an electronically downloadable product."

    A clinical trial on 11 participants with long-lasting arm weakness due to a stroke has shown the Recovery Rapids game to be as effective as traditional CI therapy in improving motor speed. Participants improved the performance of such tasks as picking up a pencil or drinking from a cup, increasing by an average of five completions per minute after two weeks of game play. Many participants also showed significant improvements in range of motion and arm use. In addition, 9 of the 11 participants rated Recovery Rapids as more enjoyable than other forms of rehab, and 10 of 11 said the game was more effective than other rehabilitation therapy they’d received.

    The game’s tasks provide targeted motion therapy. For example, collecting bottles from the river
    strengthens the arm through elbow flexion movements.

    One participant, a stroke survivor, said this of Recovery Rapids: “It has the potential to develop self-motivation and self-determination better than any therapist or coach. The Recovery Rapids gaming system allows the participants to track their own progress as they compete against themselves. They can adjust the game to make it even more challenging as they reach new plateaus in their recovery.”

    The Ohio State team is eager to move Recovery Rapids out of the research setting and into the hands of rehab patients, and to that end, they have established Games That Move You, a public benefit corporation. Their immediate goals are to develop an interface that will allow patients more flexibility in customizing their therapy and to build a system that tracks the user’s performance over time, thus providing feedback to both the patient and the physical therapist. They also hope to create more content and obstacles, which will add greater variety to the game play.  

    We’re cheering them on, delighted to see yet another example of how Kinect technology can help improve the quality of life for people around the world.

    The Kinect for Windows Team


    Kinect-powered kiosk enlisted in fight against Ebola

    Kinect’s gesture-enabled interactions can be entertaining (think games), profitable (think marketing), or educational (think kinesthetic learning). And sometimes, they can be potentially lifesaving!

    Such was the case when the British Army asked CDS, a communications-solution company based in the UK, to create a touch-free interactive kiosk for use by possible Ebola victims. Part of a government-sponsored mission named Operation Gritrock, the kiosk will use video to provide critical information to people possibly infected by the deadly virus.

    Recognizing that the Ebola outbreak in West Africa is already a global threat to public health, the UK is sending 800 army personnel to help combat its spread. The Kinect-enabled video kiosk will be an important part of this mission, as it will allow likely Ebola victims to learn about symptoms and care options without contaminating the kiosk equipment with their sweat or blood—body fluids that can harbor the Ebola virus.

    British army medics train for deployment to West Africa, where they will fight the Ebola outbreak.
    A Kinect-enabled patient kiosk will help them screen potential Ebola victims.

    Having investigated the technological options, CDS selected the Microsoft Kinect for Windows v2 sensor to deliver a sensor-based, gesture-driven solution. This involved developing a customized Windows 8.1 application using the Kinect application programming interface (API). The Microsoft UK Developer Experience team fully supported CDS on the development of the kiosk solution, which was slated to be field tested in January 2015.

    Mike Collier, CDS technical director, said, “It is always gratifying to work on solutions that make a difference, and this project has certainly been a project to be proud of for all associated with it.”

    The Kinect for Windows Team


    Connecting with beer lovers, Kinect-style

    A lost puppy and his Clydesdales stablemates may have commanded the advertising spotlight during Super Bowl XLIX, but for our money, the real marketing magic from brewer Anheuser-Busch was on display at the “House of Whatever,” a gigantic tent set up for three days outside the stadium. Inside was a huge bar, behind which hung a Kinect v2 sensor oriented toward the crowd of thirsty football (and beer) fans, with a large video screen above it.

    As patrons walked into view of the sensor, the screen served up signage asking, “Can we interest you in a drink?” As the fan stepped a little closer, the screen presented options among the freshly poured, free glasses of Anheuser-Busch beers sitting on the bar. As the thirsty patron happily picked up a beverage, the screen displayed the choice, along with anonymous age and gender analytics of all visitors that day and a pie chart showing which beers had been the most popular.

    The Kinect v2 sensor recognizes patrons as they approach the bar, prompting the application to
    offer them a beer.

    The patron was then offered a chance to raise the glass and say “cheers,” at which point the Kinect sensor captured the image and displayed it onscreen. The fan could then retrieve that photo using a QR code and Instagram it with hashtag #getkinected, in order to be in the running to win a new Microsoft Surface Pro. 

    The Kinect-enabled system was developed by Microsoft and incorporates world-leading biometric technology from NEC, which uses face recognition and measures the age, gender, and total headcount of patrons. NEC and Microsoft have been working together closely on this new breed of interactive retail systems, which offers a compelling shopping experience for customers and invaluable backend demographic and engagement data for the retailer. A version of the system was displayed earlier in January at the National Retail Federation’s (NRF) annual event in New York City.

    The system can even recognize previous customers—provided the retailer has obtained express permission to store the customer’s facial image, as, say, part of a loyalty program. This feature allows the retailer to serve up ads and offers that tie directly to that patron’s past purchases. (It also recognizes store employees, allowing the system to ignore their presence.)

    Puppies and horses may engender warm feelings, but the customer analytics enabled by the Kinect-NEC system can generate on-the-spot sales and one-to-one customer interactions.

    The Kinect for Windows Team


    Graffiti experience pulls in shoppers

    The following blog was guest authored by Dr. Neil Roodyn, a Microsoft MVP and the founder and director of nsquared, a company dedicated to using software to solve business problems.

    What does a maker of high-end consumer electronics do to reinforce its image as a pioneer in tech innovation? That was the question facing our client, the Australian subsidiary of one of the world’s foremost makers of TVs, smartphones, and appliances. The client was preparing to launch a branded store in Melbourne Central Mall, and executives wanted this, their flagship Australian retail shop, to show customers what's possible with the company’s technology. They decided on a large-scale, interactive experience that would support multiple users, hosted on a multiscreen display in front of the store, and they turned to our team at nsquared to provide the technology solution.

    We worked with the company to develop a new activity called Graffiti, powered by the Kinect for Windows v2 sensor. As shoppers walk past the display, their shadows appear on the interactive wall. This engages potential customers, who soon discover that they can grab a virtual aerosol can and "paint" the wall as the Kinect sensor tracks their spraying movements. Their virtual spray-painting slowly reveals the underlying graffiti image. Once they've sprayed most of the image, they are encouraged to check out the artist information on the left side of the display, which leads toward the store. The idea, of course, is to promote the client’s technological prowess while cleverly funneling shoppers into the store’s entrance.

    Twelve screens on the display’s front make up the interactive wall, which measures 5 meters wide by 2.1 meters tall (about 16 feet by 7 feet), making the graffiti experience the first application that I know of to use the Kinect v2 sensor in such a massive, wide-screen array in a public space. I believe that it’s also the first repeating, multiuser, interactive experience in a permanent public space in Australia. While it runs no more frequently than the store’s other digital signage, the graffiti experience increases the level of customer engagement by using Kinect for Windows v2 and Windows 8.1.

    The massive, interactive graffiti wall engages potential customers and helps cement the sponsor's
    image as a tech innovator.

    A single Kinect v2 sensor mounted at the top center of the display can recognize up to six users at once, with all of them spraying on the display. Sound effects of the spraying emanate from three speakers mounted in the ceiling above the screen. If the players shake a can, they hear the clicking sounds that a real can of spray paint makes when agitated. Users tell us that these kinds of little touches are what make the experience so lifelike and engaging.
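
    nsquared hasn't released the Graffiti source, but a shake detector of the kind that triggers those clicking sounds can be sketched in a few lines: rapid agitation shows up as the hand's vertical velocity flipping sign several times within a short window. The thresholds here are illustrative guesses.

```cpp
// Hypothetical shake detector (thresholds are illustrative guesses):
// shaking a virtual spray can shows up as the hand's vertical velocity
// flipping sign several times within a short window. Feed the spraying
// hand's position once per 30 Hz body frame.
#include <Kinect.h>
#include <cmath>

class ShakeDetector
{
public:
    // Returns true while the hand is being agitated like a spray can.
    bool Update(const CameraSpacePoint& hand)
    {
        float vy = hand.Y - lastY_;                        // vertical motion
        if (vy * lastVy_ < 0.0f && std::fabs(vy) > 0.02f)  // direction reversal
            ++flips_;
        lastY_ = hand.Y;
        if (std::fabs(vy) > 0.001f)
            lastVy_ = vy;

        if (++frames_ == 15)             // evaluate every half second at 30 fps
        {
            shaking_ = flips_ >= 3;      // 3+ reversals -> play clicking sound
            frames_ = 0;
            flips_  = 0;
        }
        return shaking_;
    }

private:
    float lastY_ = 0.0f, lastVy_ = 0.0f;
    int   frames_ = 0, flips_ = 0;
    bool  shaking_ = false;
};
```

    Running one detector per tracked body, fed with that body's spraying-hand position, would be enough to give all six users their own can.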

    Since its deployment in October 2014, the Graffiti application has attracted a steady parade of curious users. By providing a fun and interactive activity in front of their store, our client has done more than just capture the attention of passersby—it has actively involved them in a new and memorable experience.

    Neil Roodyn, Director and Founder, nsquared


    Kinect-enabled solutions offer insights on retail customers

    If you wanted to see the latest, most innovative technology for retailers, the 2015 National Retail Federation (NRF) Expo was the place to be. This annual event, popularly known as the “Big Show,” took place January 11–13 in New York City. For the third consecutive year, the Kinect for Windows team was pleased to participate in the Microsoft booth, along with our colleagues from the Microsoft Retail Industry team.

    At the Microsoft booth, Justin Miller from NEC demoed their digital signage solution, which combines
    Kinect v2 depth tracking with NEC's demographic and facial recognition software.

    It has been exciting to see the changes that have taken place with Kinect in retail, particularly in the area of shopper analytics. At the 2013 and 2014 NRF events, interest in Kinect focused on the sensor’s unique capability to enable touchless interactions with large-screen displays. So we saw Kinect used to create innovative, interactive digital signs and kiosks, and even virtual fitting rooms. While those scenarios continue to offer compelling enhancements to the shopping experience, more and more developers are recognizing the potential of Kinect in collecting and analyzing shopper behavior, as demonstrated by the Kinect-enabled solutions from AVA retail and NEC that were on display in the Microsoft booth.

    AVA retail demoed two fascinating products: Path Tracker and SmartShelf.

    • Path Tracker was powered by six Kinect v2 sensors mounted some 30 feet (about 9 meters) above our booth, providing complete coverage of the exhibit floor. The sensors offered real-time detection and analysis of the path of every visitor to the Microsoft booth, demonstrating how a retailer could determine shoppers’ paths in a store. By showing the areas visited and how much time visitors spent there, the system showed how it could provide retailers with shopper counts and a heat map of where their customers spend the most time.
    • Unlike the panoramic views employed by Path Tracker, SmartShelf focused on a particular retail shelf. By using the high-fidelity depth-sensing capabilities of a downward-facing Kinect sensor to monitor every customer interaction with products on that shelf (the basic depth-zone idea is sketched just after this list), it provided real-time data on which products customers picked up and how long they were held. In addition, SmartShelf triggered digital signage that provided details and marketing materials for the products selected, and it sent mobile alerts to the tablet PCs of the sales staff (in this case, the booth personnel), letting them know, in real time, which products the customers were viewing.
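
    AVA Retail's products are proprietary, but the depth-zone principle behind SmartShelf can be sketched: a downward-facing sensor stores a baseline depth image of the undisturbed shelf, and pixels in a product's zone that later read markedly closer than baseline indicate a hand reaching in. The zone layout and thresholds below are our assumptions.

```cpp
// Hypothetical sketch of the depth-zone idea (not AVA Retail's code). A
// downward-facing Kinect v2 stores a baseline depth image of the untouched
// shelf; pixels in a product's zone that later read markedly closer than
// baseline indicate a hand reaching in. Depth frames are 512 x 424 readings
// in millimeters, obtained via IDepthFrame::AccessUnderlyingBuffer.
#include <Kinect.h>
#include <cstdint>

struct Zone { int left, top, right, bottom; };   // pixel rectangle per product

bool ZoneTouched(const uint16_t* depth,          // current depth frame
                 const uint16_t* baseline,       // captured with shelf at rest
                 const Zone& z)
{
    int hits = 0;
    for (int y = z.top; y < z.bottom; ++y)
        for (int x = z.left; x < z.right; ++x)
        {
            int i = y * 512 + x;                 // 512 pixels per depth row
            // 0 means "no reading"; 80+ mm closer than baseline means
            // something has come between the sensor and the shelf.
            if (depth[i] != 0 && baseline[i] != 0 && depth[i] + 80 < baseline[i])
                ++hits;
        }
    return hits > 25;                            // ignore single-pixel noise
}
```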

    NEC Enterprise Biometrics also had an exciting demo in our booth: a unique digital signage solution that combined Kinect v2 depth tracking with NEC’s demographic and facial recognition software. It works like this: the Kinect sensor detects a shopper’s proximity to and engagement with digital signage in a product endcap or kiosk, launching an attract screen if the shopper appears to be walking away from the display. When a shopper stops and approaches the screen, a loop of product marketing messages begins to play. At this point, the Kinect sensor forwards data on the shopper’s face to the NEC biometric software, which determines the gender and approximate age of the customer, using this information to serve up age- or gender-appropriate product recommendations. While these targeted product recommendations are important for driving sales, the NEC solution is equally—if not more—valuable for the data analytics it provides, as it tracks the estimated age and gender of all the shoppers who walk by the kiosk, not just those who stop to engage. (It also recognizes store employees and does not include their data.) It also tracks the effectiveness of the messaging throughout the attraction, engagement, and purchase stages, to determine if the display is connecting with the target demographic for that particular product.
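
    NEC's software is proprietary, but the attract-and-engage flow just described can be sketched as a small state machine driven by the nearest tracked body's distance from the sensor; the radii are illustrative.

```cpp
// Hypothetical sketch (NEC's software is proprietary): the distance of the
// nearest tracked body drives a small state machine -- nobody close means
// the attract loop plays; a shopper inside the engagement radius starts the
// product messaging; walking away returns the display to attract mode.
#include <Kinect.h>

enum class SignState { Attract, Engaged };

SignState UpdateSignage(IBody* const bodies[BODY_COUNT], SignState current)
{
    float nearest = 1e9f;                            // meters from the sensor
    for (int i = 0; i < BODY_COUNT; ++i)
    {
        BOOLEAN tracked = FALSE;
        if (!bodies[i] || FAILED(bodies[i]->get_IsTracked(&tracked)) || !tracked)
            continue;

        Joint joints[JointType_Count];
        bodies[i]->GetJoints(JointType_Count, joints);
        float z = joints[JointType_SpineBase].Position.Z;
        if (z < nearest) nearest = z;
    }

    // Hysteresis: engage inside 1.5 m, but don't fall back to the attract
    // loop until the shopper has clearly moved away (beyond 2.5 m).
    if (current == SignState::Attract && nearest < 1.5f) return SignState::Engaged;
    if (current == SignState::Engaged && nearest > 2.5f) return SignState::Attract;
    return current;
}
```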

    As these innovative demos show, Kinect-based solutions can offer brick-and-mortar retailers the kinds of customer analytics and insights that their online counterparts take for granted. It’s going to be an exciting year!

    Michael Fry, Business Development and Partner, Kinect for Windows
