The Kinect for Windows team recently traveled to New York City for a terrific hackathon. In partnership with NUI Central, we had the pleasure of sponsoring a 27-hour event, June 21–22, focused on creating innovative applications and experiences using the forthcoming Kinect for Windows v2 sensor and SDK 2.0.
The event drew more than 100 participants, from as far away as the Czech Republic and Dubai. You will find a summary of the teams and projects below; we were blown away by what we saw. We were also impressed by the tremendous interest in the experimental near-field firmware that we brought. This firmware turns the Kinect for Windows v2 sensor into a near-field device that can be used at close range (from 10cm to 1m).
Lots of participants told us they wanted to get their hands on this near-field firmware. Others said how pleased they were just to have access to our engineering team. In fact, there was so much interest and energy at this event that we have set up some additional ones this summer. We would love to see you at one of our upcoming in-person events, or to have you join us online for our virtual training event on Tuesday, July 15. Please see the in-person event information below.
Upcoming in-person events
All of these events will be attended by members of the Kinect for Windows engineering team and will feature prizes, food, and, best of all, access to Kinect for Windows v2 sensors (including experimental near-field).
July 18–19, 2014: Dallas Entrepreneur Center (Dallas, TX, United States)
Computer Visionaries is hosting; we are sponsoring. It’s going to be so hot outside that you’ll want to be hunkered down inside at our event.
July 26–27, 2014: Microsoft Headquarters (Redmond, WA, United States)
This event will be here on our main campus. We will have our engineers out in full force to support this event.
August 8–10, 2014: Kitchener Studio Space (Kitchener, ON, Canada)
We are sponsoring Deep Realities and hosting a weekend hackathon in the Waterloo Region.
New York City hackathon summary
New York City hackathon participants
As noted earlier, this event was hosted by our partner, NUI Central, in Manhattan (New York). The top three teams were:
First place: lightspeed. Their application, called K4B, used a Kinect for Windows v2 sensor to scan large ceiling spaces in order to map HVAC and electrical details so that renovators can get accurate, detailed as-built measurements.
First-place winners, team lightspeed
Second place: adabiits. Their application, Thing T. Thing, included a robotic hand that waves when someone walks by and can be controlled by using finger tracking.
Second-place winners, team adabiits
Third place: Body Labs. Their application, ScanAdHoc, combined multiple Kinect for Windows v2 sensors wirelessly over WebSockets, enabling accurate 3D body scans to be used for fitting clothes.
Third-place winners, Body Labs
Other teams that presented projects:
Kinect for Windows MVP András Velvárt helps team adabiits.
Hannes Hofman and Chris Schaller from Metrilus brought their finger-tracking library to the event.
Congratulations to all the participants, whose creativity highlighted the immense possibilities in the Kinect for Windows v2 sensor and SDK. If you haven’t pre-ordered the v2 sensor yet, there is still time, but don’t wait too long.
Ben Lower, Kinect for Windows developer community manager
Contact Ben on Twitter at @benlower
Today, we're extremely excited to announce some major news about Kinect:
You can find more details about these developments in Microsoft Technical Fellow Alex Kipman's post on the Official Microsoft Blog. As Alex says, "these updates are all part of our desire to make Kinect accessible and easy to use for every developer."
The Kinect for Windows Team
As you might imagine, working in a nuclear power plant provides special challenges. One crucial aspect for any project is the need to minimize employee exposure to radiation by applying a standard known as As Low As Reasonably Achievable—ALARA for short.
How this works: Plant ALARA managers work with the maintenance groups to estimate how much time is required to perform a task and, allowing for exposure limits, they determine how many employees may be needed to safely complete it. Today, that work is typically done with pen and paper. But new tools from Siemens PLM Software that incorporate the Kinect for Windows sensor could change this by providing a 3-D virtual interactive modeling environment.
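The crew-sizing arithmetic described above can be sketched in a few lines. This is purely illustrative, with made-up function names, units, and thresholds; it is not the model that Siemens or plant ALARA managers actually use:

```python
from math import ceil

def min_crew_size(dose_rate_msv_per_hr, task_hours, per_worker_limit_msv):
    """Smallest crew such that no individual exceeds the dose limit,
    assuming the task hours can be divided evenly among workers."""
    # Dose one worker would absorb doing the entire task alone
    solo_dose = dose_rate_msv_per_hr * task_hours
    return max(1, ceil(solo_dose / per_worker_limit_msv))

# e.g., a 6-hour task in a 2 mSv/h field with a 4 mSv per-worker limit
# accumulates 12 mSv of person-dose, so at least 3 workers are needed
```

In practice planners also account for travel time to and from the work area, uneven dose-rate fields, and rotation schedules, which is exactly the kind of detail a 3-D simulation can capture better than pen and paper.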
Kinect for Windows is used to capture realistic movement for use in the Siemens Teamcenter solution for ALARA radiation planning.
The solution, piloted at a US nuclear power plant last year, is built on Siemens’ Teamcenter software, using its Tecnomatix Process Simulate product. Siemens PLM Software Tecnomatix provides virtual 3-D human avatars—“Jack” and “Jill”—whose motions can be driven by input from a Kinect for Windows sensor. This solution is helping to usher in a new era of industrial planning applications for employee health and safety in the nuclear industry.
"We're really revolutionizing the industry," said Erica Simmons, global marketing manager for Energy, Oil, and Gas Industries at Siemens PLM Software. "For us, this was a new way to develop a product in tandem with the industry associations. We created a specific use case with off-the-shelf technology and tested and validated it with industry. What we have now is a new visual and interactive way of simulating potential radiation exposure which can lead to better health and safety strategies for the plant."
Traditional pencil-and-paper planning (left) compared to the Siemens PLM Software Process Simulate on Teamcenter solution (right) with “Jack” avatar and Kinect for Windows movement input.
The Siemens Tecnomatix process planning application, integrated with the Kinect for Windows system, will give nuclear plant management the ability to better manage individual worker radiation exposure and optimize steps to reduce overall team exposure. As a bonus, once tasks have been recorded by using “Jack,” the software can be used for training. Employees can learn and practice an optimized task by using Kinect for Windows and Siemens “Jack” and “Jill”—safely outside of the radiation zone—until they have mastered it and are ready to perform the actual work.
"We wanted to add something more for the user of this solution in addition to our 3-D human avatars and the hazard zones created by our visual cartography; this led us to exploring what we could do with the Kinect for Windows SDK for this use case," said Dr. Ulrich Raschke, director of Human Simulation Technologies at Siemens PLM Software. “User feedback has been good so far; the addition of the Kinect for Windows system adds another level of interactivity to our application."
This Siemens solution grew out of a collaborative effort with the Electric Power Research Institute (EPRI) and the Fiatech industry association, which identified the need for more technologically advanced estimation tools for worker radiation dosage. Kinect for Windows was incorporated when the developers were tailoring the avatar system to the solution and exploring ways to make the user experience much more interactive.
"Collaboration with several key stakeholders and industry experts led to this innovative solution," said Phung Tran, senior project manager at EPRI. "We're pleased the industry software providers are using it, and look forward to seeing the industry utilize these new tools."
“In fact,” Tran added, “the tool is not necessarily limited to radiation work planning. It could help improve the management and execution of many operation, maintenance, and project-based tasks.”
Kinect for Windows team
Traditional digital animation techniques can be costly and time-consuming. But KinÊtre—a new Kinect for Windows project developed by a team at Microsoft Research Cambridge—makes the process quick and simple enough that anyone can become an animator, bringing inanimate objects to life.
KinÊtre uses the skeletal tracking technology in the Kinect for Windows software development kit (SDK) for input, scanning an object as the Kinect sensor is slowly panned around it. The KinÊtre team then applied their expertise in cutting-edge 3-D image processing algorithms to turn the object into a flexible mesh that is manipulated to match user movements tracked by the Kinect sensor.
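The general idea of driving a mesh from tracked skeleton movement can be illustrated with a toy linear-blend deformation, where each vertex follows nearby joints in proportion to a weight. This is a deliberately simplified stand-in, not KinÊtre's actual algorithm, and every name here is invented for illustration:

```python
# Toy skeleton-driven mesh deformation: each vertex is displaced by a
# weighted blend of the offsets of the joints it is bound to.

def deform(vertices, weights, joint_offsets):
    """vertices: list of (x, y) points; weights: per-vertex dict
    {joint_name: weight} summing to 1.0; joint_offsets: {joint_name:
    (dx, dy)}, the tracked translation of each skeleton joint."""
    deformed = []
    for vertex, joint_weights in zip(vertices, weights):
        dx = sum(w * joint_offsets[j][0] for j, w in joint_weights.items())
        dy = sum(w * joint_offsets[j][1] for j, w in joint_weights.items())
        deformed.append((vertex[0] + dx, vertex[1] + dy))
    return deformed
```

A vertex bound entirely to the "hand" joint moves exactly with the hand, while a vertex weighted half to "hand" and half to "elbow" moves halfway between the two, which is what makes the scanned object appear to bend naturally with the performer.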
Microsoft has made deep investments in Kinect hardware and software. This enables innovative projects like KinÊtre, which is being presented this week at SIGGRAPH 2012, the International Conference and Exhibition on Computer Graphics and Interactive Techniques. Rather than targeting professional computer graphics (CG) animators, KinÊtre is intended to bring mesh animation to a new audience of novice users.
Shahram Izadi, one of the tool's creators at Microsoft Research Cambridge, told me that the goal of this research project is to make this type of animation much more accessible than it's been—historically requiring a studio full of trained CG animators to build these types of effects. "KinÊtre makes creating animations a more playful activity," he said. "With it, we demonstrate potential uses of our system for interactive storytelling and new forms of physical gaming."
This incredibly cool prototype reinforces the world of possibilities that Kinect for Windows can bring to life and even, perhaps, do a little dance.
Peter Zatloukal, Kinect for Windows Engineering Manager
This year’s Microsoft TechForum provided an opportunity for Craig Mundie, Microsoft Chief Research and Strategy Officer, to discuss the company’s vision for the future of technology as well as showcase two early examples of third-party Kinect for Windows applications in action.
Mundie was joined by Don Mattrick, President of the Microsoft Interactive Entertainment Business, and his Chief of Staff, Aaron Greenberg, who demonstrated both third-party Kinect for Windows applications. The first, the Pathfinder Kinect Experience, enables users to stand in front of a large monitor and use movement, voice, and gestures to walk around the 2013 Nissan Pathfinder Concept: examining the exterior, bending down to inspect the wheels, viewing the front and back, and then stepping inside to experience the upholstery, legroom, dashboard, and other details.
Nissan worked with IdentityMine and Critical Mass to create the Kinect-enabled virtual experience, which was initially shown at the Chicago Auto Show in early February. The application is continuing to be refined, taking advantage of the Kinect natural user interface to enable manufacturers to showcase their vehicles in virtual showrooms.
“Using motion, speech, and gestures, people will be able to get computers to do more for them,” explained Greenberg. “You can imagine this Pathfinder solution being applied in different ways in the future, at trade shows, online, or even at dealerships, where someone might be able to test drive a physical car while also being able to visualize and experience different configurations of the car through its virtual twin, accessorizing it, changing the upholstery, et cetera.”
Also demonstrated at TechForum was a new kind of shopping cart experience, developed by mobile application studio Chaotic Moon. This application mounts a Kinect for Windows sensor on a shopping cart, enabling the cart to follow a shopper, stopping, turning, and moving where and when the shopper does.
Chaotic Moon has tested their solution at Whole Foods in Austin, Texas, but the application is an early experiment and no plans are in place for this application to be introduced in stores anytime soon. Conceivably, Kinect-enabled carts at grocery stores, shopping malls, or airports could make it easier for people to navigate and perform tasks hands free. “Imagine how an elderly shopper or a parent with a stroller might be assisted by something like this,” notes Greenberg.
“The Kinect natural user interface has the potential to revolutionize products and processes in the home, at work, and in public places, like retail stores,” continues Greenberg. “It’s exciting to see what is starting to emerge.”
In March, ten startups will converge on Seattle to start developing commercial and gaming applications that utilize Kinect's innovative natural user interface (NUI). As part of the Microsoft Kinect Accelerator program, they will have three months and a wealth of resources—including access to Microsoft and industry mentors—to develop, and then present their applications to angel investors, venture capitalists, Microsoft executives, media, and influential industry leaders.
Since launching in late November, the Kinect Accelerator has received hundreds of applications from over forty countries, proposing transformative, creative innovations for healthcare, fitness, retail, training/simulation, automotive, scientific research, manufacturing, and much more.
Applications are still being accepted, and the Kinect Accelerator team encourages you to apply. Learn more about the application process.
The Kinect Accelerator program is powered by TechStars, one of the most respected technology accelerator programs in the world. Microsoft is working with TechStars to leverage the absolute best startup accelerator methodologies, mentors, and visibility. If you are considering building a business based on the capabilities of Kinect, this is a great opportunity for you.
Dave Drach, Managing Director, Microsoft Emerging Business Team, explains that the Kinect Accelerator program is looking for creative startups that have a passion for driving the next generation of computing. “Starting in the spring of 2012, they will have three months to bring their ideas to life. What will emerge will be applications and business scenarios that we’ve not seen before,” comments Drach.
Read more about the Kinect Accelerator program.
A unique clinic for treating children with cancer and blood disorders, alex’s place is designed to be a warm, open, communal space. The center—which is located in Miami, Florida—helps put its patients at ease by engaging them with interactive screens that allow them to be transported into different environments—where they become a friendly teddy bear, frog, or robot and control their character’s movements in real time.
"As soon as they walk in, technology is embracing them," said Dr. Julio Barredo, chief of pediatric services at alex's place in The Sylvester Comprehensive Cancer Center, University of Miami Health Systems.
The clinic—which opened its doors in May 2012—was conceived and designed with this in mind, and the Kinect for Windows digital experience was part of the vision from day one. Created by Snibbe Interactive, Character Mirror was designed to fit naturally within this innovative, unconventional treatment environment. The goal is to help reinforce patients' mind-body connection with engaging play and entertainment, as well as to potentially reduce their fear of technology and the treatments they face. As an added benefit, nurses can observe a child's natural range of movement during play and more easily draw out answers to key diagnostic questions.
"I find the gestural interactive experiences we created for alex's place in Miami among the most worthwhile and satisfying in our history," said Scott Snibbe, founder and CEO of Snibbe Interactive. "Kids in hospitals are feeling lonely, scared, and bored, not to mention sick. Partnering with Alex Daly and Dr. Barredo, we created a set of magical experiences that encourage healthy, social, and physical activity among the kids.
"Kids found these experiences so pleasing that they actually didn't want to leave after their treatments were complete," Snibbe added. "We are very excited to roll out these solutions to more hospitals, and transform healthcare through natural user interfaces that promote social play and spontaneous physical therapy."
The following blog post was guest authored by Ana Isabel Zorrilla, project manager at EIC BBK-Dravet Syndrome Foundation, a Spanish nonprofit organization dedicated to the treatment and cure of Dravet syndrome and related disorders.
Imagine what it’s like to go for weeks on end without a decent night’s sleep. For Julian Isla of Madrid, Spain, this scenario requires no imagining. He has lived it, spending night after sleepless night lying awake, listening for the telltale signs that his child is having a seizure. You see, Julian’s son, Sergio, suffers from Dravet syndrome, a severe form of epilepsy. Children with Dravet syndrome experience frequent seizures: from an average of one seizure per week in the mildest cases, to one or more a day—or even multiple seizures per hour—in the most severe instances. These attacks occur more frequently while a child sleeps, so parents often struggle to stay awake seven nights a week, prepared to give their child emergency medical treatment in the event of a prolonged seizure.
When he was born, Sergio was a healthy baby. The seizures began when he was four months old, and he was diagnosed as having Dravet syndrome after experiencing several long-lasting seizures and multiple admissions to the pediatric intensive care unit. After receiving his son’s diagnosis five years ago, Julian connected with other Dravet families, and together they explored the idea of creating an organization to promote research on the condition. They got in touch with the Dravet Syndrome Foundation (DSF) in the United States, and out of that connection, the Spanish Delegation of the DSF was born. Julian serves as the executive chairman of the Spanish Delegation, which has nine employees, including me. Our organization is involved in multiple research projects, including a search for new drugs that can treat Dravet syndrome.
Technology has been pivotal for our group; this year, for example, we conducted the world’s first genetic tests for Dravet syndrome, using state-of-the-art technologies. Today, children with Dravet syndrome are being diagnosed by a new generation of genetic tests running on Microsoft Azure that the Spanish Delegation pioneered. Our group’s technological bent is hardly surprising given Julian’s background: with a degree in computer science and some 20 years of professional experience in the IT sector, he currently works for Microsoft in Madrid, where he manages a team of software consultants.
As soon as Julian heard about Kinect for Windows, he began exploring the use of the technology to help Dravet families. He was aware of the technology’s potential for medical applications, thanks to interactions with colleagues in the Microsoft Madrid offices. About the same time, the Spanish Delegation created EIC BBK, a development center focused on e-health applications. Julian proposed that the center investigate the use of Kinect for Windows to monitor Dravet children while they slept. With the Kinect sensor serving as a sentinel, Julian thought the beleaguered parents might get some much needed sleep themselves.
The use of monitors to detect seizures is not groundbreaking: there have been multiple studies and projects on seizure monitor systems. What is new and exciting is the Kinect sensor’s exquisite sensitivity, the result of its multisensory inputs. “With a color camera, an infrared detector, and an array of microphones, the Kinect sensor can detect physical movement and acoustical changes with tremendous accuracy,” says Julian. He adds, “The affordability of the Kinect sensor is another huge advantage.”
By the end of 2013, EIC BBK had started the “Night Seizure Monitor” project, a research initiative that uses Kinect for Windows. This project’s aim is to track the child’s movements while sleeping. When the Kinect sensor detects movements that follow a seizure pattern, an alarm warns parents that their child might be having a seizure. This solution provides dual benefits: when a seizure is detected, the monitor system ensures that the child gets medication right away to reduce the length and intensity of the episode. And when no seizures occur, the monitor enhances the family’s quality of life, because the parents are able to enjoy a restful sleep.
At the outset, developers programmed the Kinect sensor to detect the movements of a child even in a darkened room and lying under a blanket or comforter (above). Then, they added the ability to spot seizures that begin with abrupt movements or loud vocalizations (below).
Using data collected by its color camera and depth sensor, Kinect for Windows detects seizures by comparing changes in the child’s body position between two sequential frames. If these changes are frequent, the seizure alarm sounds. In addition, the sensor’s microphone plays a role in recognizing the seizures, as the system has been programmed to respond to the shouts that typically accompany the onset of a Dravet seizure. The Kinect-based solution can process this sound and calculate the child’s location in the room.
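The frame-comparison logic can be sketched in a few lines. This is a highly simplified illustration; the thresholds, parameter names, and flat depth-array representation are all invented here, and the actual monitor also folds in the audio cues just described:

```python
def changed_fraction(prev, curr, depth_delta=40):
    """Fraction of depth pixels that moved by more than depth_delta
    (illustrative units: millimeters) between two sequential frames."""
    moved = sum(1 for a, b in zip(prev, curr) if abs(a - b) > depth_delta)
    return moved / len(prev)

def is_seizure(frames, pixel_frac=0.05, run_length=10):
    """Raise the alarm only when a large share of pixels keeps changing
    across many consecutive frame pairs, i.e. sustained rapid movement
    rather than a single roll-over in bed."""
    run = 0
    for prev, curr in zip(frames, frames[1:]):
        if changed_fraction(prev, curr) > pixel_frac:
            run += 1
            if run >= run_length:
                return True
        else:
            run = 0
    return False
```

Requiring a run of changed frames, rather than a single noisy pair, is what distinguishes "frequent" changes (a possible seizure) from ordinary sleep movement.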
As participants in the Kinect for Windows Developer Preview program, our developers here at EIC BBK have been testing a preview version of the Kinect for Windows v2 sensor and SDK since December 2013, exploring the technology’s potential to improve upon the existing Night Seizure Monitor research. They are especially pleased with the v2 sensor’s infrared capabilities—which provide an even higher quality image—and its wider field of vision and greater depth range. These enhanced features should make the Night Seizure Monitor even more valuable.
Moreover, the developers are eagerly awaiting the general availability of the Kinect for Windows v2 sensor and SDK, which promise enhanced discrimination of facial expressions. The developers believe the enhanced face tracking capability will help the monitor detect those seizures that do not present limb shaking but rather are manifested by movements of the eyes and mouth.
The Night Seizure Monitor initiative is a great example of how needs can promote creativity. Julian had a problem at home, and rather than accepting it as unavoidable, he decided to seek out a solution.
It also shows the power of teamwork: Julian has received enormous support from colleagues at Microsoft; right now a dozen of them are helping our organization as volunteers.
Finally, it demonstrates how technology can empower people. Julian sums up the experience eloquently, observing that “When you have a child with special needs, everything seems filled with problems. You feel impotent. I’ve had the privilege of using technology for a project that will, we believe, improve the lives of many young patients and provide a sense of control to their families. I feel proud to work for a company whose technology can make such a difference in people’s lives.”
While the Night Seizure Monitor is still a development project, we hope to have a fully functional prototype available for testing with Dravet Foundation families by the end of 2014. After that, our goal is to make the monitor available to Dravet families around the world. But as wonderful as this development will be, it is but a way station in the Dravet Syndrome Foundation’s ultimate mission: to find a cure for this disorder. Despite the difficulty of this quest, DSF supporters, volunteers, and workers are laboring tirelessly to achieve it. We call it our “moonshot,” taking inspiration from US President John F. Kennedy’s audacious goal of sending a manned mission to the moon. Our moonshot represents the dream of parents who will never give up.
Ana Isabel Zorrilla, project manager, EIC BBK-Dravet Syndrome Foundation
Students, teachers, researchers, and other educators have been quick to embrace Kinect’s natural user interface (NUI), which makes it possible to interact with computers using movement, speech, and gestures. In fact, some of the earliest Kinect for Windows applications to emerge were projects done by students, including several at last year’s Imagine Cup.
One project, from an Imagine Cup team in Italy, created an application for people with severe disabilities that enables them to communicate, learn, and play games on computers using a Kinect sensor instead of a traditional mouse or keyboard. Another innovative Imagine Cup project, done by university students in Russia, used the Kinect natural user interface to fold, rotate, and examine online origami models.
To encourage students, educators, and academic researchers to continue innovating with Kinect for Windows, special academic pricing on Kinect for Windows sensors is now available in the United States. The academic price is $149.99 through Microsoft Stores.
If you are an educator or faculty member at an accredited school, such as a university, community college, vocational school, or K-12 institution, you can purchase a Kinect for Windows sensor at this price.
Find out if you qualify, and then purchase online or visit a Microsoft store in your area.
Since our announcement of Kinect for Windows version 1.5 in “What’s Ahead: A Sneak Peek,” a few questions have come up that I wanted to answer.
Some folks have thought that 1.5 includes new hardware. In fact, version 1.5 is a new software release, coming out in the same timeframe in which we launch the current Kinect for Windows hardware in 19 additional countries. We will upgrade our software at a faster rate than we refresh our hardware.
We have built version 1.5 of our software with 1.0 compatibility top of mind. Applications built using 1.0 will work on the same machine as applications built using 1.5; this is something we plan to do always, ensuring that solutions built on older runtimes can always run side by side with solutions on newer runtimes. Furthermore, we have maintained API compatibility for developers: applications currently being built using the 1.0 SDK can be recompiled using the 1.5 SDK without any changes required. No one has to wait for 1.5 to get a Kinect for Windows sensor or to start coding with the current SDK!
I love the enthusiasm for the 1.5 SDK and runtime, the new speech languages, and for the new countries we’re launching in – we can’t wait to deliver it to you.
Craig Eisler, General Manager, Kinect for Windows