Every December, British shoppers look forward to the creative holiday ad campaign from John Lewis, a major UK department store chain. It’s been a tradition for a number of years, is seen by millions of viewers in the UK annually, and won a coveted IPA Effectiveness Award in 2012. The retailer’s seasonal campaign traditionally emphasizes the joy of giving and the magic of Christmas, and this year’s ads continue that tradition, with a television commercial that depicts the loving relationship between a young boy and his pet penguin, Monty.
But the iconic British retailer has added a unique, high-tech twist to the 2014 campaign: Monty’s Magical Toy Machine, an in-store experience that uses the Kinect for Windows v2 sensor to let kids turn their favorite stuffed toy into an interactive 3D model. The experience deftly plays off the TV ad, whose narrative reveals that Monty is a stuffed toy that comes alive in the boy’s imagination.
Monty’s Magical Toy Machine experience, which is available at the John Lewis flagship store on London’s Oxford Street, plays to every child’s fantasy of seeing a cherished teddy bear or rag doll come to life—a theme that runs through children’s classics from Pinocchio to the many Toy Story movies. The experience has been up and running since November 6, with thousands of customers interacting with it to date. Customers have until December 23 to enjoy the experience before it closes.
The toy machine experience was the brainchild of Microsoft Advertising, which had been approached by John Lewis to come up with an innovative, technology-based experience based on the store’s holiday ad. “We actually submitted several ideas,” explains creative solutions specialist Art Tindsley, “and Monty’s Magical Toy Machine was the one that really excited people. We were especially pleased, because we were eager to use the new capabilities of the Kinect v2 sensor to create something truly unique.”
John Lewis executives loved the idea and gave Microsoft the green light to proceed. “We were genuinely excited when Microsoft presented this idea to us,” says Rachel Swift, head of marketing for the John Lewis brand. “Not only did it exemplify the idea perfectly, it did so in a way that was both truly innovative and charming.”
Working with the John Lewis team and creative agency adam&eveDDB, the Microsoft team came up with the design of the Magical Toy Machine: a large cylinder, surrounded by three 75-inch display screens, one of which is topped by a Kinect for Windows v2 sensor. It is on this screen that the animation takes place.
The enchantment happens here, at Monty's Magical Toy Machine. Two of the enormous display screens can be seen in this photo; the screen on the left has a Kinect for Windows v2 sensor mounted above and speakers positioned below.
The magic begins when the child’s treasured toy is handed over to one of Monty’s helpers. The helper then takes the toy into the cylinder, where, unseen by the kids, it is suspended by wires and photographed by three digital SLR cameras. The cameras rotate around the toy, capturing it from every angle. The resulting photos are then fed into a customized computer running Windows 8.1, which compiles them into a 3D image that is projected onto the huge screen, much to the delight of the toy’s young owner, who is standing in front of the display. This all takes less than two minutes.
Suspended by wires, the toy is photographed by three digital SLR cameras (two of which are visible here) that rotate around the toy and capture its image from every angle.
The Kinect for Windows v2 sensor then takes over, bringing the toy’s image to life by capturing and responding to the youngster’s gestures. When a child waves at the screen, their stuffed friend wakens from its inanimate slumber—magically brought to life and waving back to its wide-eyed owner. Then, when the child waves again, their toy dances in response, amazing and enchanting both kids and parents, many of whom cannot resist dancing too.
The Kinect for Windows software development kit (SDK) 2.0 plays an essential role in animating the toy. After rigging the toy’s 3D model with a skeleton, the developers used the SDK to identify key sequences of movements, enabling the toy to mimic the lifelike poses and dances of a human being. Because the actions map to those of a human figure, Monty’s Magical Toy Machine works best on toys like teddy bears and dolls, which share a person’s bipedal form. It also functions best with soft-textured toys, whose surface features are more accurately captured in the photos.
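To give a feel for the kind of gesture recognition involved when a child waves at the screen, here is a simplified, hypothetical sketch. It is not the production code: it assumes the SDK has already supplied per-frame hand positions, and it infers a wave when the raised hand oscillates left and right.

```python
# Hypothetical sketch: inferring a "wave" gesture from tracked hand positions.
# Assumes an upstream skeletal tracker (like the Kinect SDK's body tracking)
# supplies the hand's x-position and a hand-above-elbow flag for each frame.

def detect_wave(hand_x_positions, hand_above_elbow, min_crossings=3):
    """Return True if the hand, held above the elbow throughout, crosses its
    mean x-position at least `min_crossings` times (a left-right oscillation)."""
    if not all(hand_above_elbow):
        return False  # the hand must stay raised for the whole window
    mean_x = sum(hand_x_positions) / len(hand_x_positions)
    crossings = 0
    prev_side = hand_x_positions[0] >= mean_x
    for x in hand_x_positions[1:]:
        side = x >= mean_x
        if side != prev_side:   # the hand swung past the midline
            crossings += 1
            prev_side = side
    return crossings >= min_crossings
```

A window of 30 to 60 frames (one to two seconds at 30 fps) would be a typical buffer for this kind of check.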
The entire project took two months to build, reports Tindsley. “We began with scanning a real toy with Kinect technology, mapping it to create a surface representation (a mesh), then adding in texture and color. We then brought in a photogrammetry expert who created perfect 3D images for us to work with,” Tindsley recalls.
Then came the moment of truth: bringing the image to life. “In the first trials, it took 12 minutes from taking the 3D scans of the toy to it ‘waking up’ on the screen—too long for any eager child or parent to wait,” says Tindsley. “Ten days later, we had it down to around 100 seconds. We then compiled—read choreographed and performed—a series of dance routines for the toy, using a combination of Kinect technology and motion capture libraries,” he recounts.
None of this behind-the-scenes high tech matters to the children, who joyfully accept that somehow their favorite stuffed toy has miraculously come to life. Their looks of surprise and wonder are priceless.
And the payoff for John Lewis? Brand loyalty and increased traffic during a critical sales period. As Rachel Swift notes, “The partnership with Microsoft allowed us to deliver a unique and memorable experience at a key time of year. But above all,” she adds, “the reward lies in surprising and delighting our customers, young and old.” Just as Monty receives the perfect Christmas gift in the TV ad, so, too, do the kids whose best friends come to life before their wondering eyes.
The Kinect for Windows Team
“I’ve fallen … and I can’t get up.” That line, from a low-budget 1980s TV commercial hawking a personal medical emergency call button, has been fodder for countless comedians over the years. But falls among the elderly are anything but a laughing matter, especially to Maureen Glynn, the director of behavioral innovation programming at Intel-GE Care Innovations.
“Falls are a major health concern among the elderly,” she says, and the statistics certainly back her up. The U.S. Centers for Disease Control and Prevention reports that each year one in three Americans over the age of 65 takes a spill, and the results can be devastating: broken bones, permanent disabilities, and complications that can lead to death. In fact, falls are the leading cause of fatal and nonfatal injuries among older adults, with studies documenting that 20 to 30 percent of the elderly who fall suffer moderate to severe injuries. In 2003, for example, about 13,700 Americans 65 years or older died from falls, and another 1.8 million were treated in emergency departments for nonfatal fall injuries. Treating elderly patients who have fallen costs about $30 billion annually in the United States today, and experts estimate that that amount could more than double by 2020, given the aging population of Baby Boomers.
Under the watchful eye of the Kinect sensor, a patient performs her physical therapy regimen from the comfort and convenience of her own home.
What’s more, once an elderly individual has suffered a fall, he or she is much more likely to fall again without some sort of intervention. “I had a 76-year-old family member who fell five times, enduring repeated broken bones,” Glynn recounts. And while broken bones are no fun for anyone, they pose special problems in the elderly, whose ability to heal is often diminished. Seniors who break a hip—a common injury in falls among the elderly—may end up spending considerable time in the hospital and rehab, and may never attain full functionality again. Such sufferers become physically inactive, which, notes Glynn, “can lead to chronic mental and physical disease.”
Glynn’s employer is determined to change this dismal picture. As its name clearly indicates, Intel-GE Care Innovations is a joint venture of two industry titans. Founded in 2011, the company seeks to transform the way care is delivered by connecting patients in their homes with care teams—thus enabling patients to live independently whenever possible. Augmenting the technological strengths of its parent companies with deep knowledge of the healthcare system, Intel-GE Care Innovations collects, aggregates, and analyzes data to provide insights that connect providers, payers, caregivers, and consumers—and brings the care continuum into the patient’s home. For example, the company has established the Care Innovations Validation Institute to improve standards for measuring and promoting remote care management solutions and services.
One of the company’s latest products, RespondWell from Care Innovations, takes direct aim at the problem of falls among the elderly. As Glynn observes, “As a company dedicated to helping patients receive the healthcare they need while maintaining as much independence as possible, we saw the need for a home-based solution that helps older people recover from and avoid falls.”
The Kinect sensor monitors the patient’s performance, correcting improperly executed movements and awarding points for those done appropriately.
Responding to this need led Intel-GE Care Innovations to partner with RespondWell, a healthcare IT software company that, as CEO John Grispon explains, “specializes in activating patients and driving efficiencies. We motivate patients to follow through with their physical therapy, by making the activities interactive and engaging.” The company’s antecedents were in the gaming world, having created one of the first fitness games for the original Xbox, back in 2005. From there, RespondWell moved into the physical therapy industry, determined to do for rehab what it had done for fitness: getting people up and moving by making the often onerous rehab exercises interactive and entertaining.
Both Intel-GE Care Innovations and RespondWell saw Kinect as the logical platform for addressing fall prevention and rehabilitation among seniors. Recognizing how difficult it can be for older people to make daily visits to their therapist’s office, the teams at Intel-GE Care Innovations and RespondWell have created an interactive program that lets patients exercise in the comfort of their own home while providing Kinect-based gesture monitoring to ensure that they are performing their exercises correctly. The solution is sold to therapists and other healthcare providers.
It works like this: the therapist evaluates the patient and then designs a program of exercises that are intended to restore functions and, equally important, to prevent future falls. The patient learns the movements under the watchful eye of the therapist—and the unblinking lens of the Kinect sensor, which faithfully tracks the patient’s skeletal positions throughout the exercises.
Using data captured by the Kinect sensor, the physical therapist can track a patient’s progress and adjust the exercise regimen as necessary.
At home, patients call up their personalized exercise program on a Windows tablet or PC, which is connected to a Kinect sensor. They then perform the exercises, again under the view of the Kinect sensor, and the system analyzes their movements and provides instructions to correct any mistakes. The system not only corrects errors, but it rewards good performance with points, adding a competitive element that many patients find highly motivating. Glynn praises this positive reinforcement element of the solution, pointing out that it motivates patients without being overly gamified. She also points out that the solution not only monitors and coaches patients in the comfort of their own home, but that it also sends data about the performance to their therapist, who can adjust the exercises as needed.
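The movement analysis and scoring described above might work roughly as follows. This is a hedged, hypothetical sketch, not RespondWell's actual algorithm: it assumes the exercise is characterized by a target joint angle (say, at the elbow) and that the sensor supplies the three relevant joint positions.

```python
import math

# Hypothetical sketch of form-checking in a Kinect-based rehab app:
# compute the angle at a joint from three tracked points, compare it to the
# therapist's target, and award points when it falls within tolerance.

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by points a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def score_rep(shoulder, elbow, wrist, target_deg, tolerance_deg=10.0):
    """Return (points, feedback) for one repetition of an arm exercise."""
    angle = joint_angle(shoulder, elbow, wrist)
    if abs(angle - target_deg) <= tolerance_deg:
        return 10, "Good form!"
    return 0, f"Adjust your elbow: {angle:.0f} deg vs target {target_deg:.0f} deg"
```

Per-repetition scores like these could then be summed into the session totals that are sent back to the therapist.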
RespondWell from Care Innovations was developed on the Kinect v2 sensor, which Grispon enthusiastically endorses, “especially the enhanced field of view, which lets us get a good look at the patient even when he’s very close to the sensor.” He also praises the v2 sensor’s improved picture resolution and enhanced skeletal tracking, both of which boost the solution’s ability to precisely record patient movements. When asked how easy it was to port the code from the original Kinect sensor to the Kinect v2 sensor, Grispon quotes his lead developer, who says that the process was easy and offers this advice to other devs: “Smoothing is your friend—use it.” (Smoothing filters the frame-to-frame jitter in the sensor’s joint data, making the tracked joint positions steadier and easier for developers to work with.)
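The smoothing advice can be illustrated with a simple exponential filter, a simplified stand-in for the SDK's joint-smoothing support rather than its actual implementation: each new joint reading is blended with the previous smoothed value, damping frame-to-frame jitter at the cost of a little lag.

```python
# Hypothetical sketch of joint smoothing via an exponential moving average.
# Lower alpha = smoother but laggier; higher alpha = more responsive but jittery.

class JointSmoother:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # weight given to the newest raw reading
        self.state = None    # last smoothed (x, y, z) position

    def update(self, raw):
        """Feed one raw (x, y, z) joint position; return the smoothed position."""
        if self.state is None:
            self.state = raw  # first frame: nothing to blend with yet
        else:
            self.state = tuple(
                self.alpha * r + (1 - self.alpha) * s
                for r, s in zip(raw, self.state)
            )
        return self.state
```

In practice an app would keep one smoother per tracked joint and tune alpha per use case, since rehab scoring can tolerate more lag than a fast-paced game.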
Currently in pilot testing, RespondWell from Care Innovations is expected to be generally available by January 2015. Patients in the pilot program report that the solution makes physical therapy more enjoyable, while therapists are delighted that patients are more motivated to do their home exercises and that they are performing them more accurately. Both Glynn and Grispon stress that RespondWell from Care Innovations fills a vital need in our healthcare system. A system that not only helps seniors recover from falls but also helps prevent future tumbles could curb medical costs and offer the elderly a much improved quality of life. We’re pleased that Kinect can play a role in this effort.
What science fiction fan doesn’t love the idea of telekinesis? We never tire of the illusion, but there’s nothing illusory about Microsoft tech evangelist Mike Taulty’s ability to move a ball without touching it, using only simple hand gestures.
While Taulty calls his app “hacking for fun,” we see its potential in the real world. Imagine, for instance, how much more engaged a person could be with a digital display in a shopping center or public space if they could manipulate products or objects themselves. Imagine this simple application applied to a museum installation, an advertising display in a retail store, or a gaming arcade. We may not be able to control objects with our minds, but Kinect for Windows gives us the next best thing.
Earlier this month, we traveled north for a developer hackathon at the Burnaby campus of the British Columbia Institute of Technology (BCIT), located in the heart of the Vancouver metropolitan area. The event, which was hosted by BCIT and co-sponsored by our team along with Occipital (makers of the Structure Sensor), drew nearly 100 developers, all eager to spend their weekend hacking with us. We were astonished by their creativity and energy—and their ability to cram so much hardware on each table!
Attendees hunched over keyboards and displays, hard at work on their projects.
Team Bubble Fighter won the top prize ($250, five Kinect for Windows v2 sensors and carrying bags, and a Structure Sensor) for their game Bubble Fighter. The game, reminiscent of Street Fighter, allows two people to play against one another over a network connection. Game play includes special moves triggered by gestures, including jumping to make your avatar leap over projectiles.
Players really had to jump to avoid projectiles in Bubble Fighter.
Team NAE took second place ($100, five Kinect for Windows v2 sensors and carrying bags, and a Structure Sensor) for their Public Speech Trainer, which helps users improve their public speaking, including training to avoid bad posture (think crossed arms) and distracting gestures. The app also provides built-in video recording, enabling users to review all their prior training sessions.
The Eh-Team grabbed third place (five Kinect for Windows v2 sensors and carrying bags) for The Fitness Clinic, an app that provides real-time feedback on a user’s form during popular gym workouts.
We loved the great turnout, even if it meant that the hackers had just enough room on the table for all their gear.
Other projects presented
Team WelCam demonstrated their fall detection app.
Thanks to our gracious hosts at BCIT, to all the attendees who came to hack and share ideas, and to our co-sponsor Occipital. I hope to see everyone again at a future event.
Ben Lower, Developer Community Manager, Kinect for Windows
Someday, people might reminisce about the days when ATMs were the state of the art in offsite banking. That day might come sooner than expected, thanks in part to Kinect for Windows technology.

Diebold, Incorporated, a worldwide leader in integrated service solutions, has unveiled a prototype of a standalone banking platform that promises to make self-service banking more convenient, intuitive, and secure for both the customer and the financial institution. Called the Responsive Banking Concept, the prototype is equipped with touch screens and sensing devices that simplify and protect offsite banking transactions. Kinect for Windows is one of the key underlying technologies, providing motion and voice sensing to help recognize customers and provide personalized service for even complex transactions.

The Responsive Banking Concept prototype made its debut on November 2 at the 2014 Money 20/20 conference, a global event for the financial services industry. Designed to bring secure, personalized financial services to places where customers work and play, the full-scale Responsive Banking Concept could be implemented in airports, shopping malls, and other high-traffic areas, while smaller modular versions could be placed in retail shops. So someday soon, a Kinect-enabled installation might be your new personal banker.
The Kinect for Windows Team
Today, we're extremely excited to announce some major news about Kinect:
You can find more details about these developments in Microsoft Technical Fellow Alex Kipman's post on the Official Microsoft Blog. As Alex says, "these updates are all part of our desire to make Kinect accessible and easy to use for every developer."
We recently traveled to the Netherlands’ capital for our latest developer hackathon. The venue, Pakhuis de Zwijger, a former refrigerated warehouse located on one of Amsterdam’s many canals, made for a unique setting. Developers from all over Europe came for the 28-hour event, which was hosted by Dare to Difr. There were some very innovative projects, and we couldn’t have been happier with the energy of the attendees and the quality of their work.
The participants’ energy and creativity resulted in innovative projects during the Kinect for Windows hackathon in Amsterdam (September 5–6, 2014).
Team Hoog+Diep took the top prize (€1,000 and one Kinect for Windows v2 sensor and carrying bag per team member) for their app My First Little Toy Story 3D, which allows users to capture playful adventures with favorite toys and share them as videos with friends. The app tracks the movement of dinosaurs and helicopters while the user plays with them, then it “magically” makes the user disappear from the video before sharing it.
Team Hoog+Diep took first place for their augmented play app, My First Little Toy Story 3D.
Team AK earned second place (€500 and one Kinect for Windows v2 sensor and carrying bag per team member) for Clara, an app that provides real-time analytics for a retail store, showing how many shoppers came through and providing insights on customer behavior and product popularity.
Team motognosis won third place (one Kinect for Windows v2 sensor and carrying bag per team member) for their work on In exTremory, a “catch-the-shape” game for tremor analysis in clinical, rehabilitation, and home scenarios.
Other projects presented
Developers from all over Europe came for the 28-hour event.
Our next hackathon will take place in Vancouver, British Columbia, November 8–9; registration opens in October, so keep an eye on our blog.
Thanks to all the attendees of the Amsterdam event and to our wonderful hosts at Dare to Difr. I look forward to watching the projects progress and to seeing you all again at a future event!
Today we released another update to the Kinect for Windows SDK 2.0 public preview. This release contains important product improvements that add up to a more stable and feature-rich product. This updated SDK lets you get serious about finalizing your applications for commercial deployment and, later this year, for availability in the Windows Store. Please install, enjoy, and let us know what you think.
A member of team Kwartzlab++ demonstrates his team's project VR Builder at the Kinect Hackathon in Kitchener, Ontario.
Last week, we headed north to Canada for the latest stop on our Kinect Hackathon world tour: a three-day event (August 8–10) in Kitchener, Ontario, where developers gathered to build applications* using Kinect for Windows v2. One of the three cities that make up the Regional Municipality of Waterloo, Kitchener has a booming tech community, fueled in part by the renowned computer science program at the University of Waterloo. So it was no surprise that the Kitchener attendees exhibited boundless energy and enormous creativity. Equally impressive was the hospitality of the people in Kitchener, especially Jennifer Janik and Rob Soosaar of Deep Realities, who were awesome hosts.
And the winners* are…
Team CleanSweep took first place.
Hard at work: members of team BearHunterNinja (left) and team Titan (right)
Other projects* presented
Thanks to everyone who came to the event in Kitchener. I hope to see you at another event in the future!
_____________________*The names of the hackathon projects and teams are determined solely by the participants and are not intended to be used commercially.
As Microsoft Most Valuable Professional (MVP) James Ashley points out in a recent blog post, it’s a whole lot easier to create 3D movies with the Kinect for Windows v2 sensor and its preview software development kit (SDK 2.0 public preview). For starters, the v2 sensor captures up to three times more depth information than the original sensor did. That means you have far more depth data from which to construct your 3D images.
The next big improvement is in the ability to map color to the 3D image. The original Kinect sensor used an SD camera for color capture, and the resulting low-resolution images made it difficult to match the color data to the depth data. (RGB+D, a tool created by James George, Jonathan Porter, and Jonathan Minard, overcame this problem.) Knowing that the v2 sensor has a high-definition (1080p) video camera, Ashley reasoned that he could use the camera's color images directly, without a workaround tool. He also planned to map the color data to depth positions in real time, a new capability built into the preview SDK.
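The idea behind mapping color onto depth data can be sketched with a pinhole-camera back-projection. This is a simplified, hypothetical illustration, not the SDK's actual coordinate-mapping code, and it assumes the color and depth frames have already been aligned to the same pixel grid.

```python
# Hypothetical sketch: turning an aligned depth + color frame pair into a
# colored point cloud, using pinhole-camera intrinsics (fx, fy, cx, cy).

def depth_to_point(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project depth pixel (u, v), depth in millimetres, to metres."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def colored_point_cloud(depth_frame, color_frame, fx, fy, cx, cy):
    """Build a list of (x, y, z, (r, g, b)) points from aligned 2-D grids."""
    points = []
    for v, row in enumerate(depth_frame):
        for u, d in enumerate(row):
            if d == 0:   # 0 means "no depth reading" at this pixel
                continue
            points.append(depth_to_point(u, v, d, fx, fy, cx, cy) + (color_frame[v][u],))
    return points
```

At 30 frames per second, each frame's point cloud is rebuilt and re-rendered, which is what makes the real-time mapping in the preview SDK such a significant capability.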
Ashley shot this 3D video of his daughter Sophia by using Kinect for Windows v2 and a standard laptop.
Putting these features together, Ashley wrote an app that enabled him to create 3D videos on a standard laptop (dual core Intel i5, with 4 GB RAM and an integrated Intel HD Graphics 4400). While he has no plans at present to commercialize the application, he opines that it could be a great way to bring real-time 3D to video chats.
Ashley also speculates that since the underlying principle is a point cloud, stills of the volumetric recording could be converted into surface meshes that can be read by CAD software or even turned into models that could be printed on a 3D printer. He also thinks it could be useful for recording biometric information in a physician’s office, or for recording precise 3D information at a crime scene, for later review.
Those who want to learn more from Ashley about developing cool stuff with the v2 sensor should note that his book, Beginning Kinect Programming with Kinect for Windows v2, is due to be published in October.