Kinect’s gesture-enabled interactions can be entertaining (think games), profitable (think marketing), or educational (think kinesthetic learning). And sometimes, they can be potentially lifesaving!
Such was the case when the British Army asked CDS, a communications-solution company based in the UK, to create a touch-free interactive kiosk for use by possible Ebola victims. Part of a government-sponsored mission named Operation Gritrock, the kiosk will use video to provide critical information to people possibly infected by the deadly virus.
Recognizing that the Ebola outbreak in West Africa is already a global threat to public health, the UK is sending 800 army personnel to help combat its spread. The Kinect-enabled video kiosk will be an important part of this mission, as it will allow likely Ebola victims to learn about symptoms and care options without contaminating the kiosk equipment with their sweat or blood—body fluids that can harbor the Ebola virus.
British army medics train for deployment to West Africa, where they will fight the Ebola outbreak. A Kinect-enabled patient kiosk will help them screen potential Ebola victims.
Having investigated the technological options, CDS selected the Microsoft Kinect for Windows v2 sensor to deliver a touch-free, gesture-driven solution. This involved developing a customized Windows 8.1 application using the Kinect application programming interface (API). The Microsoft UK Developer Experience team fully supported CDS on the development of the kiosk solution, which was slated to be field tested in January 2015.
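The post doesn't describe CDS's implementation in detail, but touch-free kiosks commonly select options by "dwell": the user holds a tracked hand over an on-screen choice for a moment to trigger it, so no one ever touches the hardware. Below is a minimal, hardware-free sketch of that selection logic; the class name, timing threshold, and region names are illustrative, not CDS's code.

```python
import time

class DwellSelector:
    """Selects an on-screen option when a hand hovers over it long enough.
    Illustrative logic only; a real kiosk would feed in the region under
    the hand from the Kinect body-tracking stream each frame."""

    def __init__(self, dwell_seconds=1.5):
        self.dwell_seconds = dwell_seconds
        self._current = None      # region the hand is currently over
        self._entered_at = None   # when the hand entered that region

    def update(self, region, now=None):
        """Feed the region under the hand each frame (None = no region).
        Returns the region name once the dwell time elapses, else None."""
        now = time.monotonic() if now is None else now
        if region != self._current:
            self._current = region
            self._entered_at = now if region is not None else None
            return None
        if region is not None and now - self._entered_at >= self.dwell_seconds:
            self._entered_at = now  # reset so the selection fires once per dwell
            return region
        return None
```

For example, feeding the selector a hypothetical "symptoms" region for under 1.5 seconds returns nothing, while holding it longer fires the selection once.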
Mike Collier, CDS technical director, said, “It is always gratifying to work on solutions that make a difference, and this project has certainly been a project to be proud of for all associated with it.”
The Kinect for Windows Team
A lost puppy and his Clydesdales stablemates may have commanded the advertising spotlight during Super Bowl XLIX, but for our money, the real marketing magic from brewer Anheuser-Busch was on display at the “House of Whatever,” a gigantic tent set up for three days outside the stadium. Inside was a huge bar, behind which hung a Kinect v2 sensor oriented toward the crowd of thirsty football (and beer) fans, with a large video screen above it.
As patrons walked into view of the sensor, the screen served up signage asking, “Can we interest you in a drink?” Stepping up a little closer, the fan was presented with onscreen options: the freshly poured, free glasses of Anheuser-Busch beers sitting on the bar. As the thirsty patron happily picked up a beverage, the screen displayed the choice, along with anonymous age and gender analytics of all visitors that day and a pie chart showing which beers had been the most popular.
The Kinect v2 sensor recognizes patrons as they approach the bar, prompting the application to offer them a beer.
The patron was then offered a chance to raise the glass and say “cheers,” at which point the Kinect sensor captured the image and displayed it onscreen. The fan could then retrieve that photo using a QR code and Instagram it with hashtag #getkinected, in order to be in the running to win a new Microsoft Surface Pro.
The Kinect-enabled system was developed by Microsoft and incorporates world-leading biometric technology from NEC, which uses face recognition and measures the age, gender, and total headcount of patrons. NEC and Microsoft have been working together closely on this new breed of interactive retail systems, which offers a compelling shopping experience for customers and invaluable backend demographic and engagement data for the retailer. A version of the system was displayed earlier in January at the National Retail Federation’s (NRF) annual event in New York City.
The system can even recognize previous customers—provided the retailer has obtained express permission to store the customer’s facial image, as, say, part of a loyalty program. This feature allows the retailer to serve up ads and offers that tie directly to that patron’s past purchases. (It also recognizes store employees, allowing the system to ignore their presence.)
Puppies and horses may engender warm feelings, but the customer analytics enabled by the Kinect-NEC system can generate on-the-spot sales and one-to-one customer interactions.
The following blog was guest authored by Dr. Neil Roodyn, a Microsoft MVP and the founder and director of nsquared, a company dedicated to using software to solve business problems.
What does a maker of high-end consumer electronics do to reinforce its image as a pioneer in tech innovation? That was the question facing our client, the Australian subsidiary of one of the world’s foremost makers of TVs, smartphones, and appliances. The client was preparing to launch a branded store in Melbourne Central Mall, and executives wanted this, their flagship Australian retail shop, to show customers what's possible with the company’s technology. They decided on a large-scale, interactive experience that would support multiple users, hosted on a multiscreen display in front of the store, and they turned to our team at nsquared to provide the technology solution.
We worked with the company to develop a new activity called Graffiti, powered by the Kinect for Windows v2 sensor. As shoppers walk past the display, their shadows appear on the interactive wall. This engages potential customers, who soon discover that they can grab a virtual aerosol can and "paint" the wall as the Kinect sensor tracks their spraying movements. Their virtual spray-painting slowly reveals the underlying graffiti image. Once they've sprayed most of the image, they are encouraged to check out the artist information on the left side of the display, which leads toward the store. The idea, of course, is to promote the client’s technological prowess while cleverly funneling shoppers into the store’s entrance.
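The reveal mechanic described above boils down to maintaining a mask over the underlying graffiti image and clearing it wherever a tracked hand "sprays." This is a simplified Python sketch of that bookkeeping, with a coarse grid standing in for the real pixel mask; the class and threshold are illustrative, not nsquared's code.

```python
class GraffitiWall:
    """Tracks how much of an underlying image a 'spray' has revealed.
    A simplified sketch of the reveal logic; the real installation maps
    Kinect hand positions onto a 12-screen wall at full resolution."""

    def __init__(self, cols, rows):
        self.cols, self.rows = cols, rows
        self.revealed = set()

    def spray(self, x, y, radius=1):
        """Reveal all grid cells within `radius` of cell (x, y)."""
        for cx in range(max(0, x - radius), min(self.cols, x + radius + 1)):
            for cy in range(max(0, y - radius), min(self.rows, y + radius + 1)):
                self.revealed.add((cx, cy))

    def fraction_revealed(self):
        return len(self.revealed) / (self.cols * self.rows)
```

Once `fraction_revealed()` passes some threshold ("most of the image"), the application can switch to showing the artist information that leads shoppers toward the store.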
Twelve screens on the display’s front make up the interactive wall, which measures 5 meters wide by 2.1 meters tall (about 16 feet by 7 feet), making the graffiti experience the first application that I know of to use the Kinect v2 sensor in such a massive, wide-screen array in a public space. I believe that it’s also the first repeating, multiuser, interactive experience in a permanent public space in Australia. While it runs no more frequently than the store’s other digital signage, the graffiti experience increases the level of customer engagement by using Kinect for Windows v2 and Windows 8.1.
The massive, interactive graffiti wall engages potential customers and helps cement the sponsor's image as a tech innovator.
A single Kinect v2 sensor mounted at the top center of the display can recognize up to six users at once, with all of them spraying on the display. Sound effects of the spraying emanate from three speakers mounted in the ceiling above the screen. If the players shake a can, they hear the clicking sounds that a real can of spray paint makes when agitated. Users tell us that these kinds of little touches are what make the experience so lifelike and engaging.
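With one sensor mounted at the top center serving up to six users, a practical piece of such an installation is mapping each tracked body's horizontal camera-space position (in meters, zero at the sensor) onto the very wide pixel canvas. A rough sketch follows; the wall's actual pixel dimensions aren't given in the post, so the default resolution here is an assumption.

```python
def body_to_wall_x(camera_x_m, wall_width_m=5.0, wall_width_px=5760):
    """Map a body's horizontal camera-space position (meters, 0 at the
    sensor's center) to a pixel column on the wall. Assumes the sensor
    is centered on the wall; wall_width_px is a hypothetical total
    resolution for the 12-screen array."""
    # Shift so 0 m is the wall's left edge, then scale and clamp.
    norm = (camera_x_m + wall_width_m / 2) / wall_width_m
    return max(0, min(wall_width_px - 1, int(norm * wall_width_px)))
```

A user standing directly in front of the sensor maps to the center column, and users near either edge clamp to the wall's bounds rather than spraying off-screen.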
Since its deployment in October 2014, the Graffiti application has attracted a steady parade of curious users. By providing a fun and interactive activity in front of their store, our client has done more than just capture the attention of passersby—it has actively involved them in a new and memorable experience.
Neil Roodyn, Director and Founder, nsquared
If you wanted to see the latest, most innovative technology for retailers, the 2015 National Retail Federation (NRF) Expo was the place to be. This annual event, popularly known as the “Big Show,” took place January 11–13 in New York City. For the third consecutive year, the Kinect for Windows team was pleased to participate in the Microsoft booth, along with our colleagues from the Microsoft Retail Industry team.
At the Microsoft booth, Justin Miller from NEC demoed their digital signage solution, which combines Kinect v2 depth tracking with NEC's demographic and facial recognition software.
It has been exciting to see the changes that have taken place with Kinect in retail, particularly in the area of shopper analytics. At the 2013 and 2014 NRF events, interest in Kinect focused on the sensor’s unique capability to enable touchless interactions with large-screen displays. So we saw Kinect used to create innovative, interactive digital signs and kiosks, and even virtual fitting rooms. While those scenarios continue to offer compelling enhancements to the shopping experience, more and more developers are recognizing the potential of Kinect in collecting and analyzing shopper behavior, as demonstrated by the Kinect-enabled solutions from AVA Retail and NEC that were on display in the Microsoft booth.
AVA Retail demoed two fascinating products: Path Tracker and SmartShelf.
NEC Enterprise Biometrics also had an exciting demo in our booth: a unique digital signage solution that combined Kinect v2 depth tracking with NEC’s demographic and facial recognition software. It works like this: the Kinect sensor detects a shopper’s proximity to and engagement with digital signage in a product endcap or kiosk, launching an attract screen if the shopper appears to be walking away from the display. When a shopper stops and approaches the screen, a loop of product marketing messages begins to play. At this point, the Kinect sensor forwards data on the shopper’s face to the NEC biometric software, which determines the gender and approximate age of the customer, using this information to serve up age- or gender-appropriate product recommendations. While these targeted product recommendations are important for driving sales, the NEC solution is equally—if not more—valuable for the data analytics it provides, as it tracks the estimated age and gender of all the shoppers who walk by the kiosk, not just those who stop to engage. (It also recognizes store employees and does not include their data.) It also tracks the effectiveness of the messaging throughout the attraction, engagement, and purchase stages, to determine if the display is connecting with the target demographic for that particular product.
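The staged behavior described above, attract screen, marketing loop, then targeted recommendations, amounts to a small state machine keyed off the shopper's distance and direction of travel. A hedged sketch of that decision logic follows; the distance thresholds and state names are invented for illustration and are not NEC's.

```python
def signage_state(distance_m, walking_away):
    """Pick which content to show based on the tracked shopper's depth
    (meters from the display) and whether they appear to be leaving.
    Thresholds are illustrative, not NEC's actual tuning."""
    if walking_away or distance_m > 3.0:
        return "attract"                    # eye-catching loop to draw people in
    if distance_m > 1.0:
        return "marketing_loop"             # shopper has stopped and is approaching
    return "targeted_recommendations"       # close enough for demographic targeting
```

In the real system, entering the closest state is also the point at which face data would be handed to the biometric software for age and gender estimation.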
As these innovative demos show, Kinect-based solutions can offer brick-and-mortar retailers the kinds of customer analytics and insights that their online counterparts take for granted. It’s going to be an exciting year!
Michael Fry, Business Development and Partner, Kinect for Windows
Tension mounts as the place kicker trots onto the field. With only seconds left on the clock and a score of 12 to 14, this three-point field goal attempt will decide the game’s outcome. And who is the intrepid kicker, on whose kicking prowess so much rests? It’s any of a host of fans, each eager to boot the game-winning goal, thanks to some Kinect for Windows magic. The virtual field goal experience was a huge hit with football fans—watch the video below and see it in action.
Yes, thousands of fans lined up for a chance to kick virtual field goals this past season at FedEx Field during home games of Washington D.C.’s National Football League (NFL) franchise and at the annual Army-Navy grudge match at Baltimore’s M&T Bank Stadium. Part of an interactive Social Media Lounge, the virtual field goal setup consisted of a gaming wall (an enormous, high-definition screen composed of nine monitors) and a Kinect for Windows v2 sensor. The Kinect sensor tracked the kicker’s motion as he or she approached the virtual ball, gave it a boot, and then followed through with, one hopes, the classic high leg swing. Using this data, the Kinect-powered app computed the velocity, distance, and accuracy of the attempt and determined whether the kicker was the hero of the play. Unlike the NFL players, the fans got a little help: each kicker was allowed five attempts to send the ball through the uprights.
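MVP Interactive's scoring model isn't published, but the physics behind "velocity, distance, and accuracy" is straightforward projectile motion: from the foot's speed and launch angle at contact, check whether the ball would still be above the crossbar at the goal line, and whether its lateral drift stays inside the uprights. A sketch under those assumptions (standard goalpost dimensions, no air resistance):

```python
import math

G = 9.81                      # gravity, m/s^2
CROSSBAR_HEIGHT_M = 3.05      # crossbar is 10 ft high
UPRIGHT_HALF_SPAN_M = 2.82    # uprights are about 18.5 ft apart

def is_good_kick(speed_mps, launch_deg, lateral_offset_m, distance_m):
    """Projectile check: does a kick clear the crossbar inside the
    uprights? A physics sketch of what such an app could compute, not
    MVP Interactive's actual model. Ignores air resistance and spin."""
    theta = math.radians(launch_deg)
    vx = speed_mps * math.cos(theta)
    if vx <= 0:
        return False
    t = distance_m / vx                                   # time to reach the goal line
    height = speed_mps * math.sin(theta) * t - 0.5 * G * t * t
    return height >= CROSSBAR_HEIGHT_M and abs(lateral_offset_m) <= UPRIGHT_HALF_SPAN_M
```

A solid 25 m/s kick at 35 degrees sails through from 30 meters out, while the same kick pushed 4 meters wide, or a weak 10 m/s attempt, misses.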
The Social Media Lounge was the brainchild of high-tech marketing firm MVP Interactive. In addition to the gaming wall, the Lounge enabled fans to create virtual bobblehead figurines in their own likeness and to send email images of their kick attempts and bobbleheads or post the images to Facebook or Twitter. The Army-Navy game event alone generated more than 67,000 social impressions (emails, Facebook posts and clicks, and Twitter clicks).
It’s “thumbs up” from this fan as he prepares to attempt a virtual field goal for the Washington NFL team.
The events were sponsored by multinational brewing company Anheuser-Busch at the Washington games and insurer USAA at the Army-Navy game. Anheuser-Busch used the experience to promote their Bud Light brand among digital-savvy millennials, a much-sought-after demographic for the brewer, while USAA, which offers financial products to military families, took advantage of the event to make a positive connection with potential customers.
Anthony DiPrizio, chief technology officer at MVP Interactive, praises the capabilities of the Kinect for Windows v2 sensor. “At MVP Interactive, we work with and develop many cutting-edge technologies. We've found that the Kinect v2 sensor rivals and outperforms most commercial-grade sensors, which are 10 times the cost of the Kinect v2. We're very pleased with what Microsoft and the Kinect community have done to allow us to create interactions like the virtual field goal kick. The relentless effort of their development team to push out new SDK versions on a regular basis is truly game changing. We look forward to continuing to work with the device and push the boundaries of what's possible.”
In October, we shipped the public release of the Kinect for Windows v2 sensor and its software development kit (SDK 2.0). The availability of the v2 sensor and SDK 2.0 means that we will be phasing out the sale of the original Kinect for Windows sensor in 2015.
The move to v2 marks the next stage in our journey toward more natural human computing. The new sensor provides a host of new and improved features, including enhanced body tracking, greater depth fidelity, full 1080p high-definition video, new active infrared capabilities, and an expanded field of view. Likewise, SDK 2.0 offers scores of updates and enhancements, not the least of which is the ability to create and publish Kinect-enabled apps in the Windows Store. At the same time that we publicly released the v2 sensor and its SDK, we also announced the availability of the Kinect Adapter for Windows, which lets developers create Kinect for Windows applications by using a Kinect for Xbox One sensor. The response of the developer community to Kinect v2 has been tremendous: every day, we see amazing apps built on the capabilities of the new sensor and SDK, and since we released the public beta of SDK 2.0 in July, the community has been telling us that porting their original solutions over to v2 is smoother and faster than expected.
The original Kinect for Windows sensor was a milestone achievement in the world of natural human computing. It allowed developers to create solutions that broke through the old barriers of mouse and keyboard interactions, opening up entirely new commercial experiences in multiple industries, including retail, education, healthcare, and manufacturing. The original Kinect let preschoolers play educational games by simply moving their arms; it coached patients through physical rehabilitation; it gave shoppers new ways to engage with merchandise and even try on clothes. The list of innovative solutions powered by the original Kinect for Windows goes on and on.
We hope everyone will embrace the latest Kinect technology as soon as possible, but we understand that some business customers have commitments to the original sensor and SDK. If you’re one of them and need a significant number of original Kinect for Windows sensors, please contact us as soon as possible. We will do our best to fill your orders, but no more original sensors will be manufactured after the current stock sells out.
All of us on the Kinect for Windows team are grateful to all of you in the community who jumped on this technology and showed us what it could do. We know that your proven track record doing great things with the original technology will only get better with v2—the improvements in quality from the original Kinect for Windows sensor to the v2 device are truly immense. And so, we’re cheered by the prospect of seeing all the amazing solutions you’ll create with the new and improved Kinect for Windows.
Every December, British shoppers look forward to the creative holiday ad campaign from John Lewis, a major UK department store chain. It’s been a tradition for a number of years, is seen by millions of viewers in the UK annually, and won a coveted IPA Effectiveness Award in 2012. The retailer’s seasonal campaign traditionally emphasizes the joy of giving and the magic of Christmas, and this year’s ads continue that tradition, with a television commercial that depicts the loving relationship between a young boy and his pet penguin, Monty.
But the iconic British retailer has added a unique, high-tech twist to the 2014 campaign: Monty’s Magical Toy Machine, an in-store experience that uses the Kinect for Windows v2 sensor to let kids turn their favorite stuffed toy into an interactive 3D model. The experience deftly plays off the TV ad, whose narrative reveals that Monty is a stuffed toy that comes alive in the boy’s imagination.
Monty’s Magical Toy Machine experience, which is available at the John Lewis flagship store on London’s Oxford Street, plays to every child’s fantasy of seeing a cherished teddy bear or rag doll come to life—a theme that runs through children’s classics from Pinocchio to the many Toy Story movies. The experience has been up and running since November 6, with thousands of customers interacting with it to date. Customers have until December 23 to enjoy the experience before it closes.
The toy machine experience was the brainchild of Microsoft Advertising, which had been approached by John Lewis to come up with an innovative, technology-based experience based on the store’s holiday ad. “We actually submitted several ideas,” explains creative solutions specialist Art Tindsley, “and Monty’s Magical Toy Machine was the one that really excited people. We were especially pleased, because we were eager to use the new capabilities of the Kinect v2 sensor to create something truly unique.”
John Lewis executives loved the idea and gave Microsoft the green light to proceed. "We were genuinely excited when Microsoft presented this idea to us,” says Rachel Swift, head of marketing for the John Lewis brand. “Not only did it exemplify the idea perfectly, it did so in a way that was both truly innovative and charming.”
Working with the John Lewis team and creative agency adam&eveDDB, the Microsoft team came up with the design of the Magical Toy Machine: a large cylinder, surrounded by three 75-inch display screens, one of which is topped by a Kinect for Windows v2 sensor. It is on this screen that the animation takes place.
The enchantment happens here, at Monty's Magical Toy Machine. Two of the enormous display screens can be seen in this photo; the screen on the left has a Kinect for Windows v2 sensor mounted above and speakers positioned below.
The magic begins when the child’s treasured toy is handed over to one of Monty’s helpers. The helper then takes the toy into the cylinder, where, unseen by the kids, it is suspended by wires and photographed by three digital SLR cameras. The cameras rotate around the toy, capturing it from every angle. The resulting photos are then fed into a customized computer running Windows 8.1, which compiles them into a 3D image that is projected onto the huge screen, much to the delight of the toy’s young owner, who is standing in front of the display. This all takes less than two minutes.
Suspended by wires, the toy is photographed by three digital SLR cameras (two of which are visible here) that rotate around the toy and capture its image from every angle.
The Kinect for Windows v2 sensor then takes over, bringing the toy’s image to life by capturing and responding to the youngster’s gestures. When a child waves at the screen, their stuffed friend wakens from its inanimate slumber—magically brought to life and waving back to its wide-eyed owner. Then, when the child waves again, their toy dances in response, amazing and enchanting both kids and parents, many of whom cannot resist dancing too.
The Kinect for Windows SDK 2.0 plays an essential role in animating the toy. Having added a skeletal rig to the toy’s image, the developers used the SDK to identify key sequences of movements, thus enabling the toy to mimic the lifelike poses and dances of a human being. Because the actions map to those of a human figure, Monty’s Magical Toy Machine works best on toys like teddy bears and dolls, which have a bipedal form like that of a person. It also functions best with soft-textured toys, whose surface features are more accurately captured in the photos.
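Mapping human movement onto the toy is a retargeting problem: each pose captured from a person is copied onto the toy's rig, but only for joints the toy actually has, which is why bipedal toys work best. A deliberately tiny sketch of that idea follows; the joint names and the flat rotation values are hypothetical stand-ins for a real rig.

```python
# Hypothetical joint names for a simple bipedal toy rig; a real pipeline
# would use the SDK's tracked joint hierarchy and full 3D rotations.
TOY_JOINTS = {"head", "spine", "left_arm", "right_arm", "left_leg", "right_leg"}

def retarget(human_rotations):
    """Copy per-joint rotations from a tracked human skeleton onto the
    toy rig, keeping only joints the toy has. Joints the toy lacks
    (fingertips, thumbs, and so on) are simply dropped."""
    return {joint: rotation for joint, rotation in human_rotations.items()
            if joint in TOY_JOINTS}
```

The dropped joints are exactly where a non-bipedal toy would fail: with no matching limbs, most of the captured motion has nowhere to go.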
The entire project took two months to build, reports Tindsley. “We began with scanning a real toy with Kinect technology, mapping it to create a surface representation (a mesh), then adding in texture and color. We then brought in a photogrammetry expert who created perfect 3D images for us to work with,” Tindsley recalls.
Then came the moment of truth: bringing the image to life. “In the first trials, it took 12 minutes from taking the 3D scans of the toy to it ‘waking up’ on the screen—too long for any eager child or parent to wait,” said Tindsley. “Ten days later, we had it down to around 100 seconds. We then compiled—read choreographed and performed—a series of dance routines for the toy, using a combination of Kinect technology and motion capture libraries,” he recounts.
None of this behind-the-scenes, high tech matters to the children, who joyfully accept that somehow their favorite stuffed toy has miraculously come to life. Their looks of surprise and wonder are priceless.
And the payoff for John Lewis? Brand loyalty and increased traffic during a critical sales period. As Rachel Swift notes, “The partnership with Microsoft allowed us to deliver a unique and memorable experience at a key time of year. But above all,” she adds, “the reward lies in surprising and delighting our customers, young and old.” Just as Monty receives the perfect Christmas gift in the TV ad, so, too, do the kids whose best friends come to life before their wondering eyes.
“I’ve fallen … and I can’t get up.” That line, from a low-budget 1980s TV commercial hawking a personal medical emergency call button, has been fodder for countless comedians over the years. But falls among the elderly are anything but a laughing matter, especially to Maureen Glynn, the director of behavioral innovation programming at Intel-GE Care Innovations.
“Falls are a major health concern among the elderly,” she says, and the statistics certainly back her up. In fact, the U.S. Centers for Disease Control and Prevention reports that each year one in three Americans over the age of 65 takes a spill, and the results can be devastating: broken bones, permanent disabilities, and complications that can lead to death. Indeed, falls are the leading cause of fatal and nonfatal injuries among older adults, with studies documenting that 20 to 30 percent of the elderly who fall suffer moderate to severe injuries. In 2003, for example, about 13,700 Americans 65 years or older died from falls, and another 1.8 million were treated in emergency departments for nonfatal fall injuries. Treating elderly patients who have fallen costs about $30 billion annually in the United States today, and experts estimate that that amount could more than double by 2020, given the aging population of Baby Boomers.
Under the watchful eye of the Kinect sensor, a patient performs her physical therapy regimen from the comfort and convenience of her own home.
What’s more, once an elderly individual has suffered a fall, he or she is much more likely to fall again without some sort of intervention. “I had a 76-year-old family member who fell five times, enduring repeated broken bones,” Glynn recounts. And while broken bones are no fun for anyone, they pose special problems in the elderly, whose ability to heal is often diminished. Seniors who break a hip—a common injury in falls among the elderly—may end up spending considerable time in the hospital and rehab, and may never attain full functionality again. Such sufferers become physically inactive, which, notes Glynn, “can lead to chronic mental and physical disease.”
Glynn’s employer is determined to change this dismal picture. As its name clearly indicates, Intel-GE Care Innovations is a joint venture of two industry titans. Founded in 2011, the company seeks to transform the way care is delivered by connecting patients in their homes with care teams—thus enabling patients to live independently whenever possible. Augmenting the technological strengths of its parent companies with deep knowledge of the healthcare system, Intel-GE Care Innovations collects, aggregates, and analyzes data to provide insights that connect providers, payers, caregivers, and consumers—and brings the care continuum into the patient’s home. For example, the company has established the Care Innovations Validation Institute to improve standards for measuring and promoting remote care management solutions and services.
One of the company’s latest products, RespondWell from Care Innovations, takes direct aim at the problem of falls among the elderly. As Glynn observes, “As a company dedicated to helping patients receive the healthcare they need while maintaining as much independence as possible, we saw the need for a home-based solution that helps older people recover from and avoid falls.”
The Kinect sensor monitors the patient’s performance, correcting improperly executed movements and awarding points for those done appropriately.
Responding to this need led them to partner with RespondWell, a healthcare IT software company that, as CEO John Grispon explains, “specializes in activating patients and driving efficiencies. We motivate patients to follow through with their physical therapy, by making the activities interactive and engaging.” The company’s antecedents were in the gaming world, having created one of the first fitness games for the original Xbox, back in 2005. From there, RespondWell moved into the physical therapy industry, determined to do for rehab what they had done for fitness: getting people up and moving by making the often onerous rehab exercises interactive and entertaining.
Both Intel-GE Care Innovations and RespondWell saw Kinect as the logical platform for addressing fall prevention and rehabilitation among seniors. Recognizing how difficult it can be for older people to make daily visits to their therapist’s office, the teams at Intel-GE Care Innovations and RespondWell have created an interactive program that lets patients exercise in the comfort of their own home while providing Kinect-based gesture monitoring to ensure that they are performing their exercises correctly. The solution is sold to therapists and other healthcare providers.
It works like this: the therapist evaluates the patient and then designs a program of exercises that are intended to restore functions and, equally important, to prevent future falls. The patient learns the movements under the watchful eye of the therapist—and the unblinking lens of the Kinect sensor, which faithfully tracks the patient’s skeletal positions throughout the exercises.
Using data captured by the Kinect sensor, the physical therapist can track a patient’s progress and adjust the exercise regimen as necessary.
At home, patients call up their personalized exercise program on a Windows tablet or PC, which is connected to a Kinect sensor. They then perform the exercises, again under the view of the Kinect sensor, and the system analyzes their movements and provides instructions to correct any mistakes. The system not only corrects errors, but it rewards good performance with points, adding a competitive element that many patients find highly motivating. Glynn praises this positive reinforcement element of the solution, pointing out that it motivates patients without being overly gamified. She also points out that the solution not only monitors and coaches patients in the comfort of their own home, but that it also sends data about the performance to their therapist, who can adjust the exercises as needed.
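Analyzing movements, correcting mistakes, and awarding points, as described above, can be reduced to comparing each tracked joint angle against the range the therapist prescribed. This is an illustrative sketch of that scoring step, not RespondWell's actual algorithm; the joint names, ranges, and point values are invented.

```python
def score_rep(measured_angles, target_ranges, points_per_joint=10):
    """Score one exercise repetition: full points for each joint whose
    measured angle (degrees, derived from skeletal tracking) lands
    inside its prescribed range, plus a correction message otherwise."""
    score, corrections = 0, []
    for joint, (low, high) in target_ranges.items():
        angle = measured_angles.get(joint)
        if angle is not None and low <= angle <= high:
            score += points_per_joint
        else:
            corrections.append(f"adjust {joint}: aim for {low}-{high} degrees")
    return score, corrections
```

The corrections feed the on-screen coaching, while the accumulated score supplies the competitive element patients find motivating; both can also be forwarded to the therapist.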
RespondWell from Care Innovations was developed on the Kinect v2 sensor, which Grispon enthusiastically endorses, “especially the enhanced field of view, which lets us get a good look at the patient even when he’s very close to the sensor.” He also praises the v2 sensor’s improved picture resolution and enhanced skeletal tracking, both of which boost the solution’s ability to precisely record patient movements. When asked how easy it was to port the code from the original Kinect sensor to the Kinect v2 sensor, Grispon quotes his lead developer, who says that the process was easy and offers this advice to other devs, “Smoothing is your friend—use it.” (Smoothing filters the skeletal joint data to damp frame-to-frame jitter, making the tracked joint positions steadier and easier for developers to work with.)
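To make the "smoothing is your friend" advice concrete, one common, simple filter for noisy joint positions is an exponential moving average: each new reading is blended with the previous smoothed value. This sketch is one possible implementation of that idea, not RespondWell's code; the parameter choice is a trade-off between jitter and lag.

```python
class JointSmoother:
    """Exponential smoothing for a stream of 3D joint positions.
    alpha near 1 follows the raw data closely (more jitter, less lag);
    smaller alpha damps jitter at the cost of a little lag."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self._last = None

    def smooth(self, position):
        """position is an (x, y, z) tuple from the skeletal stream;
        returns the smoothed (x, y, z)."""
        if self._last is None:
            self._last = position
        else:
            self._last = tuple(self.alpha * new + (1 - self.alpha) * old
                               for new, old in zip(position, self._last))
        return self._last
```

In practice, an application would keep one smoother per tracked joint and feed it every body frame before computing angles or scores.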
Currently in pilot testing, RespondWell from Care Innovations is expected to be generally available by January 2015. Patients in the pilot program report that the solution makes physical therapy more enjoyable, while therapists are delighted that patients are more motivated to do their home exercises and that they are performing them more accurately. Both Glynn and Grispon stress that RespondWell from Care Innovations fills a vital need in our healthcare system. A system that not only helps seniors recover from falls but also helps prevent future tumbles could curb medical costs and offer the elderly a much improved quality of life. We’re pleased that Kinect can play a role in this effort.
What science fiction fan doesn’t love the idea of telekinesis? We never tire of the illusion, but there’s nothing illusory about Microsoft tech evangelist Mike Taulty’s ability to move a ball without touching it, using only simple hand gestures.
While Taulty calls his app “hacking for fun,” we see its potential in the real world. Imagine, for instance, how much more engaged a person could be with a digital display in a shopping center or public space if they could manipulate products or objects themselves. Imagine this simple application applied to a museum installation, an advertising display in a retail store, or a gaming arcade. We may not be able to control objects with our minds, but Kinect for Windows gives us the next best thing.
Earlier this month, we traveled north for a developer hackathon at the Burnaby campus of the British Columbia Institute of Technology (BCIT), located in the heart of the Vancouver metropolitan area. The event, which was hosted by BCIT and co-sponsored by our team along with Occipital (makers of the Structure Sensor), drew nearly 100 developers, all eager to spend their weekend hacking with us. We were astonished by their creativity and energy—and their ability to cram so much hardware on each table!
Attendees hunched over keyboards and displays, hard at work on their projects.
Team Bubble Fighter won the top prize ($250, five Kinect for Windows v2 sensors and carrying bags, and a Structure Sensor) for their game Bubble Fighter. The game, reminiscent of Street Fighter, allows two people to play against one another over a network connection. Game play includes special moves triggered by gestures, including jumping to make your avatar leap over projectiles.
Players really had to jump to avoid projectiles in Bubble Fighter.
Team NAE took second place ($100, five Kinect for Windows v2 sensors and carrying bags, and a Structure Sensor) for their Public Speech Trainer, which helps users improve their public speaking, including training to avoid bad posture (think crossed arms) and distracting gestures. The app also provides built-in video recording, enabling users to review all of their prior training sessions.
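Detecting a posture cue like crossed arms from skeletal data can be as simple as checking whether the wrists have swapped sides relative to each other. A minimal sketch of that test, under an assumed coordinate convention (the trainer's actual heuristics aren't described in the post):

```python
def arms_crossed(left_wrist_x, right_wrist_x):
    """Flag the 'crossed arms' posture from two tracked wrist X
    coordinates. Assumes a convention in which the uncrossed left
    wrist has a smaller x value than the right wrist; a real app
    would also check the wrists against the opposite elbows."""
    return left_wrist_x > right_wrist_x
```

Running such checks frame by frame lets the trainer warn the speaker the moment a bad habit appears, rather than only in the recorded playback.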
The Eh-Team grabbed third place (five Kinect for Windows v2 sensors and carrying bags) for The Fitness Clinic, an app that provides real-time feedback on a user’s form during popular gym workouts.
We loved the great turnout, even if it meant that the hackers had just enough room on the table for all their gear.
Other projects presented
Team WelCam demonstrated their fall detection app.
Thanks to our gracious hosts at BCIT, to all the attendees who came to hack and share ideas, and to our co-sponsor Occipital. I hope to see everyone again at a future event.
Ben Lower, Developer Community Manager, Kinect for Windows