A marathon came to London early this year, but it wasn’t the usual 26 miles of pavement pounding. This was a different kind of endurance event, one involving 36 hours of coding with the Kinect v2 sensor.
The event, which took place March 21–22, was organized by Dan Thomas of Moov2, who had approached my colleagues and me in the Microsoft UK Developer Experience team a few months earlier, wondering if we could help put together a Kinect v2 hackathon. Of course we said yes, and with assistance from quite a few friends, the London Kinect Hack was off and running.
After many weeks of planning and hard work on the part of Dan and his team, the event came together and a site opened to distribute tickets. The site featured a snazzy logo (later emblazoned on T-shirts that were distributed at the event), and the hackathon’s allotted 100 tickets sold out within two days.
The London hackathon featured a snazzy logo that adorned participants' T-shirts.
Clearly there was a lot of interest, but we worried about actual turnout—always a risk with a weekend event, when other diversions compete for participants’ time. Moreover, we hoped that all the registrants appreciated that this was a coding event.
So, did the developers come to Kinect Hack London? Did they code? Did they have fun and deliver some great work? Absolutely—see for yourself in this video:
The hackathon was an unqualified success: more than 80 developers turned up for the weekend, coming from not only the UK but also France, Belgium, Holland, Germany, and even Mexico. In addition, a number of people came through on “spectator” tickets, eager to see what was happening.
Over the course of the next two days, teams were formed, Kinect sensors were loaned, laptops were borrowed, bugs were squashed, and sleep was (mostly) ignored. Twitter got a serious workout as teams tweeted their progress, while burgers, curry, and pasta disappeared, along with much coffee and a little beer.
Thirty-six hours later, the indefatigable hackers had produced a long list of projects to pitch during the show-and-tell that closed the event. This was a very relaxed, fun couple of hours, with participants getting to see and try out what the other teams had made. Here’s the full roll call of projects (many of which are featured in the video above):
Kinect Pong (Dave)
A variation of the classic game controlled by doing exercises—squats or press-ups (push-ups, to you Yanks). You can see my colleague Andrew Spooner demonstrating this in the video above.
Sphero Slalom (Victoria, Matthew, Hannah, and Phaninder)
Kinect-captured gestures steered a Sphero (a remote-controlled ball) through a challenging course.
Flight of Light (James)
A four-player Unity game was adapted to support Kinect input, with players spreading their wings and leaning to the left or right to control their avatar.
Vase Maker (a different James)
Not happy with regular home accessories, this one-man team used the Kinect sensor’s camera and Open Frameworks to create a host of weird and wonderful psychedelic, 3D vase visualisations from such props as a shopping bag.
Functional Movement Screen (Chris, Mustafa, Glenn, and Matthew)
Like a watchful gym teacher, this app used Kinect body tracking to analyze the quality with which participants performed a set of exercises.
Skelxplore (James and Leigh)
This app used Kinect to explore a user’s skeletal system and musculature.
Do It For Walt (Joe and Sam)
Developers from Disney prototyped an interactive theme park guide, which featured augmented reality that let users take on the appearance of their favorite movie characters.
Kinect + Oculus (Tom)
Impressive visuals and sensations ensued when the Kinect sensor brought the body into a view presented by the Oculus Rift, as real limbs combined with augmented challenges.
Flappy Box (Chi and Bryan)
A Flappy Bird-like game, this app enabled up to six players to control a flying bird by jumping and crouching in front of the Kinect sensor.
Music Machine (Jon)
In this multi-person experience, users’ bodies controlled the mix of a set of parts from a music track.
Skynect (Rick, Elizabeth, Tatiana, and Sankha)
This app brought Kinect into the world of Skype calls.
Kinect Talks (Fernando)
Intended as a tool to assist a five-year-old suffering from cerebral palsy, this app utilized simple body movements to create voice outputs.
Hole in the Wall (James, Alex, Scott, and Andrew)
In this Unity game, players used gestures to push shaped blocks into an advancing 3D wall.
Box Sizer (Alex, Michael, Tim, and Navid)
Designed for use by shipping companies, Box Sizer uses the Kinect sensor’s camera and depth detection to measure the volume of cardboard boxes.
Multi-Kinect Server (Julien)
This app combined output from multiple Kinect sensors over a network, creating a multi-sensor view of all the tracked bodies on a single monitor screen.
Bubblecatch (Sam, David, and Mark)
In this WPF-powered multiplayer game, players had to catch bubbles and avoid explosives.
Kinect Juggling (Phil and Joe)
A tool to teach juggling, this app used Kinect data to track the path of a juggled ball and analyze the juggler’s accuracy.
Kinect Kombat (Gareth, Yohann, and Rene)
A prototype first-person game, this app let combatants hurl virtual fireballs.
Helicar & Lewis (Joel, James, and Thomas)
Intended to help children visualise their imagined environments, this app placed 3D characters in a modelled 3D world.
Kinect Shooter (Kunal and Shabari)
This app provided a gun-wielding shoot-‘em-up experience.
3D Fuser (Claudio and Maruisz)
Need to map Kinect sensor data onto 3D models? This app did it.
The event was not organized as a competition, but Dan put together a small judging panel, and three teams received special prizes at the end of the hackathon.
But really, everyone was a winner. With help from the US Kinect team, Dan had many additional prizes to give away during impromptu games. More than two dozen developers went home with a Kinect sensor of their own; others received Raspberry Pi 2 devices and starter kits or Spheros, the latter donated by—you guessed it—Sphero.
This event demonstrated that the Kinect v2 sensor is an inspirational piece of hardware for hackers, and Dan’s team did a wonderful job of creating a “by community, for community” event. Everyone had a great time, as witnessed by the incredibly positive feedback at the event and on Twitter (search #KinectHackLondon).
Here are a few of the participants’ write-ups; some even include the code they produced:
Huge thanks to Dan Thomas and the team at Moov2 for putting this hackathon together. It was a great piece of work and a lot of fun to be involved in. Thanks also to the UK Microsoft colleagues who helped out, especially Paul Lo and Andrew Spooner.
Above all, many thanks to all the participants who made this weekend so outstanding.
Mike Taulty, Tech Evangelist, Microsoft UK Developer Experience Team
As we discussed in a recent blog, the Kinect v2 sensor and SDK 2.0 enable developers to create Kinect-powered Windows Store apps, opening up an entirely new market for your Kinect for Windows applications. Now on GitHub you can find the Kinect 2 Hands on Labs, a tutorial series that teaches you, step by step, how to build a Windows Store 8.1 app that uses almost every feature of the new sensor.
The lab is a complete introduction, covering everything from setting up the Kinect sensor to utilizing its major features. Included are hands-on lessons about using the infrared, color, and depth data; creating a body mask; displaying body data; removing backgrounds; using the face library; creating hand cursor interactions; employing Kinect Studio; building gestures; adding speech recognition; and tracking multiple users.
The Kinect 2 Hands on Labs include lessons on every major feature of the latest sensor, complete with illustrations such as this one from the lesson on using depth data.
Each lesson includes code to help you build samples, providing a true hands-on learning experience. For example, here is part of the code included in the lesson on how to assemble a body mask:
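The lab’s actual listings are C# against the Kinect SDK 2.0 and are not reproduced here, but the core of a body mask can be sketched in a language-neutral way. In the SDK, the body-index stream labels each depth pixel with a value of 0–5 for a tracked body or 255 for “no body”; masking then simply keeps the color pixels that belong to a body. The sketch below uses Python/NumPy, and all array names and values are invented for illustration:

```python
import numpy as np

# Simulated body-index frame: the Kinect v2 body-index stream gives one byte
# per depth pixel, where 0-5 identifies a tracked body and 255 means "no
# body at this pixel". (These tiny arrays are made up for illustration.)
body_index = np.array([
    [255, 255,   0,   0, 255],
    [255,   0,   0,   0, 255],
    [255, 255,   0, 255, 255],
], dtype=np.uint8)

# Simulated grayscale "color" frame, assumed already aligned to the depth frame.
color = np.full(body_index.shape, 200, dtype=np.uint8)

# Build the mask: keep color pixels that belong to any tracked body,
# black out everything else.
mask = body_index != 255
masked = np.where(mask, color, 0)

print(masked)
```

In the real lab, the same per-pixel test runs over a 512×424 body-index array copied out of a `BodyIndexFrame`, mapped onto the higher-resolution color frame.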
If you’re thinking about tapping into the Windows Store market with your own Kinect app, this tutorial series is a great place to start.
The Kinect for Windows Team
Watching Cole throw his arms and shoulders into playing a video game, you might never guess that he suffers from a severe muscular disease. But he does. Cole has Duchenne muscular dystrophy (DMD), a genetic disorder that results in progressive muscle degeneration. DMD patients, almost all of whom are boys, seldom live beyond early adulthood, and most are wheelchair bound by their early teens.
Dedicated medical researchers are testing a host of experimental treatments that might slow or even halt the disease’s otherwise relentless progress. Currently, most clinical trials limit admission to patients who can walk unassisted for six straight minutes. The distance a boy can walk in six minutes serves as a baseline; if that distance increases over the course of treatment, it indicates that the experimental therapy is having a positive effect.
Unfortunately, the six-minute-walk requirement rules out a lot of boys who still have considerable upper-body strength but cannot walk the requisite six minutes. Physical therapists Linda Lowes and Lindsay Alfano at Nationwide Children’s Hospital are working to get more boys accepted into clinical trials by developing a simple, reliable measure of upper body abilities that could be used as an alternative to the walk test. And Kinect for Windows v2 is playing a critical role in their efforts.
ACTIVE-seated uses Kinect for Windows to measure upper-body muscle strength in boys with Duchenne muscular dystrophy.
Lowes, Alfano, and their colleagues have devised a Kinect-enabled video game in which seated DMD patients control the action by vigorous arm and shoulder movements. Called ACTIVE-seated (the acronym stands for Ability Captured Through Interactive Video Evaluation), the game not only measures upper-extremity abilities but does so while motivating the patient to perform his best.
ACTIVE-seated uses Kinect for Windows’ capabilities to record accurate data on the patient’s upper-extremity reach and range. The gamer—that is, the patient—is seated at a special table, some distance from a video monitor that displays the game. Taking advantage of the body tracking options in the Kinect software development kit (SDK), the researchers use the Kinect sensor’s infrared camera to track the position of the patient’s head, trunk, and arms as he plays the game. By identifying points on the head and sternum, both shoulders, and each arm, the researchers can measure the patient’s maximal upper-extremity movement in three planes: horizontal (left and right), vertical (table top to overhead), and depth (forward toward the camera).
Players can choose between two different games based on their interests. Both games were developed with input from the boys, who obviously know what pre-teen males enjoy. They overwhelmingly agreed that something “gross” would be best. Based on this recommendation, one game involves a spider invasion, in which the boys squish the spiders, which crunch realistically and ooze green innards. The second game, designed for the more squeamish, involves digging for jewels in a cave.
“You should see the faces of new patients light up when they hear that they’re going to be playing a video game instead of undergoing another boring set of tests,” says Lowes. The allure of a video game increases the patients’ motivation, which, in turn, improves the reliability of the results. When patients are asked to perform uninspiring tests day after day, boredom sets in, and the desultory results don’t measure true functional ability. But when it comes to playing a video game, boredom isn’t a problem.
ACTIVE-seated is currently in testing, and a recent study of 61 DMD patients found that scores in the game correlated highly with parent reports of daily activities and mobility. Lowes and her colleagues are hopeful that these results will help convince the U.S. Food and Drug Administration to use the game as an alternative test for admission to DMD clinical trials.
“Jane” had a problem: a so-called frozen shoulder, which made it painful to use her left arm. The pain, which had begun mysteriously eight months earlier, affected nearly every aspect of Jane’s life, making it difficult for her to perform routine tasks at her office job and at home.
She had tried a number of traditional and alternative treatments, from massage therapy and stretching to acupuncture-like intramuscular stimulation and a soft-tissue treatment called myofascial release. None of these treatments provided meaningful relief, and Jane abandoned each out of disappointment. Emotionally exhausted by the seemingly incurable pain, Jane was prescribed antidepressants by her physician.
Then, as what he called a “last resort,” Jane’s physician referred her to chiropractor Ryan Comeau, one of the founders of Kinetisense, a Canadian company that has pioneered the use of Kinect for Windows v2 to record and track progress during physiotherapy for joint and range-of-motion problems.
Kinetisense’s software takes advantage of the Kinect v2 sensor’s ability to accurately record the exact position of body joints during therapeutic sessions. Unlike traditional methods of measuring joint angles, the Kinetisense system measures true joint values—based on the actual position of the bones—rather than approximating the angles formed by the external body parts.
Kinetisense uses the Kinect v2 sensor to record the exact position of the body joints during therapy, providing an unparalleled level of accuracy.
Kinetisense algorithms obtain the positions of the joints and calculate the exact angle of any given joint at any time. They do this in less than half a second, without resorting to imprecise hand tools, such as inclinometers and goniometers, or expensive wearable equipment. The patient simply stands or sits in front of the v2 sensor, and the Kinetisense software performs all of the necessary calculations with remarkable accuracy and speed. And because the sensor measures the true positions of the joints, Kinetisense provides accurate joint analysis even when patients unintentionally try to extend their range of motion by leaning, rather than relying solely on joint movement.
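Kinetisense’s own algorithms aren’t public, but the underlying geometry is standard: a joint angle is the angle between the two bone vectors that meet at the joint, computed from the 3-D positions the sensor reports. A minimal sketch, with invented joint coordinates:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by 3-D points a-b-c
    (e.g., shoulder-elbow-wrist gives the elbow angle)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point values just outside [-1, 1].
    cos_angle = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_angle))

# A straight arm held along the x-axis: the elbow angle should be 180 degrees.
shoulder, elbow, wrist = (0, 0, 0), (0.3, 0, 0), (0.6, 0, 0)
print(round(joint_angle(shoulder, elbow, wrist)))  # → 180
```

Because the inputs are true joint positions rather than surface landmarks, the same computation works regardless of how the patient is oriented toward the sensor.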
The objective accuracy of the Kinetisense measurements allows the practitioner to adjust the treatment and reach a more realistic prognosis. What’s more, Kinetisense helps with patient compliance, which is a well-documented problem in physiotherapy. And while there are several reasons for noncompliance, Comeau notes that, like Jane, “many patients with range-of-motion problems drop out of therapy when they fail to discern a meaningful lessening of their pain. Because pain is a very subjective matter, many people perceive it as an ‘all or nothing’ proposition: either the pain is gone or it has not lessened at all. People may, in fact, be experiencing real benefits from their therapy, but fail to realize it because they are not yet completely pain-free. Kinetisense helps by providing both the patient and the practitioner with graphs that demonstrate real progress in range of motion, even when the patient has yet to sense the improvement in terms of pain reduction. The realization that therapy is working is incredibly reinforcing to patients, who are then much more likely to continue their treatment.”
The precise measurements of the joint angles enable Kinetisense to chart improvements in the patient's range of motion. The quantifiable therapeutic results allow patients to see irrefutable evidence of improvement.
Kinetisense meets the longstanding need for objectivity and evidence-based rehabilitation care—a boon to both the patient and the practitioner. And as for Jane, she’s continuing her treatment and shows ongoing improvement. She’s been able to reduce her antidepressant dosage by half, and has referred several friends and family members to Comeau’s practice.
At Microsoft, we are committed to providing more personal computing experiences. To support this, we recently extended Kinect’s value and announced the Kinect Adapter for Windows, enabling anyone with a Kinect for Xbox One to use it with their PCs and tablets. In an effort to simplify and create consistency for developers, we are focusing on that experience and, starting today, we will no longer be producing Kinect for Windows v2 sensors.
Kinect for Xbox One sensor
Over the past several months, we have seen unprecedented demand from the developer community for Kinect sensors and have experienced difficulty keeping up with requests in some markets. At the same time, we have seen the developer community respond positively to being able to use the Kinect for Xbox One sensor for Kinect for Windows app development, and we are happy to report that Kinect for Xbox One sensors and Kinect Adapter for Windows units are now readily available in most markets. You can purchase the Kinect for Xbox One sensor and Kinect Adapter for Windows in the Microsoft Store.
Kinect Adapter for Windows
The Kinect Adapter enables you to connect a Kinect for Xbox One sensor to Windows 8 and 8.1 PCs and tablets in the same way as you would a Kinect for Windows v2 sensor. And because the Kinect for Xbox One and Kinect for Windows v2 sensors are functionally identical, our Kinect for Windows SDK 2.0 works exactly the same with either.
Microsoft remains committed to Kinect as a development platform on both Xbox and Windows. So while we are no longer producing the Kinect for Windows v2 sensor, we want to assure developers who are currently using it that our support for the Kinect for Windows v2 sensor remains unchanged and that they can continue to use their sensor.
We are excited to continue working with the developer community to create and deploy applications that allow users to interact naturally with computers through gestures and speech, and continue to see the Kinect sensor inspire vibrant and innovative commercial experiences in multiple industries, including retail, education, healthcare, and manufacturing. To see the latest ways that developers are using Kinect, we encourage you to explore other stories in the Kinect for Windows blog.
Michael Fry, Senior Technology Evangelist for Kinect for Windows, Microsoft
What do you do after you’ve built a great app? You make it even better. That’s exactly what Carl Franklin, a Microsoft Most Valuable Professional (MVP), did with GesturePak. Actually, GesturePak is both a WPF app that lets you create your own gestures (movements) and store them as XML files, and a .NET API that can recognize when a user has performed one or more of your predefined gestures. It enables you to create gesture-controlled applications, which are perfect for situations where the user is not physically seated at the computer keyboard.
GesturePak v2 simplifies the creation of gesture-controlled apps. This image shows the app in edit mode.
Franklin’s first version of GesturePak was developed with the original Kinect for Windows sensor. For GesturePak v2, he utilized the Kinect for Windows v2 sensor and its related SDK 2.0 public preview, and as he did, he rethought and greatly simplified the whole process of creating and editing gestures. To create a gesture in the original GesturePak, you had to break the movement down into a series of poses, then hold each pose and say the word “snapshot,” at which point a frame of skeleton data was recorded. This process continued until you had captured each pose in the gesture, which could then be tested and used in your own apps.
GesturePak v2 works very differently. You merely tell the app to start recording (with speech recognition), then you perform the gesture, and then tell it to stop recording. All of the frames are recorded. This gives you a way to play an animation of the gesture for your users.
GesturePak v2 still uses the same matching technology as version 1, relying on key frames (called poses in v1) that the user matches in series. But with the new version, once you've recorded the entire gesture, you can use the mouse wheel to "scrub" through the movement and pick out key frames. You also can select which joints to match simply by clicking on them. It's a much easier and faster way to create a gesture than the interface of GesturePak v1, which required you to select poses by using voice and manual commands.
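GesturePak’s source isn’t reproduced here, but the key-frame matching it describes can be sketched as a tiny state machine: each live frame of body data is compared against the current key frame, joint by joint, and the gesture completes when the last key frame is matched in sequence. The joint names, coordinates, and tolerance below are all invented for this sketch:

```python
def frame_matches(live, key, joints, tolerance=0.1):
    """True if every selected joint in `live` is within `tolerance`
    (per axis, meters) of its recorded position in key frame `key`."""
    return all(
        abs(live[j][axis] - key[j][axis]) <= tolerance
        for j in joints
        for axis in range(3)
    )

def advance(gesture, state, live, joints):
    """Step through the gesture's key frames as each is matched in order.
    Returns (new_state, completed)."""
    if frame_matches(live, gesture[state], joints):
        state += 1
    return (0, True) if state == len(gesture) else (state, False)

# Two-key-frame "raise right hand" gesture (hypothetical coordinates).
gesture = [
    {"hand_right": (0.2, -0.3, 1.5)},   # key frame 0: hand down
    {"hand_right": (0.2, 0.5, 1.5)},    # key frame 1: hand up
]
state, done = 0, False
for live in [{"hand_right": (0.22, -0.28, 1.52)},   # matches key frame 0
             {"hand_right": (0.2, 0.1, 1.5)},       # matches neither
             {"hand_right": (0.18, 0.48, 1.5)}]:    # matches key frame 1
    state, done = advance(gesture, state, live, ["hand_right"])
print(done)  # → True
```

Selecting which joints to match, as GesturePak v2 lets you do by clicking on them, corresponds here to changing the `joints` list passed to the matcher.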
Carl Franklin offered these words of technical advice for devs who are writing WPF apps:
If you want to capture video, use SharpAVI (http://sharpavi.codeplex.com/)
If you want to convert the AVI to other formats, use FFmpeg (http://ffmpeg.org/)
When building an app with multiple windows/pages/user controls that use the Kinect sensor, only instantiate one instance of a sensor and reader, then bind to the different windows
Initialize the Kinect sensor object and all readers in the Form Loaded event handler of a WPF window, not the constructor
Another big change is the code itself. GesturePak v1 is written in VB.NET; GesturePak v2 was rewritten in C#. (Speaking of coding, see the list above for Franklin’s advice to devs who are writing WPF apps.)
Franklin was surprised by how easy it was to adapt GesturePak to Kinect for Windows v2. He acknowledges there were some changes to deal with—for instance, “Skeleton” is now “Body” and there are new JointType additions—but he expected that level of change. “Change is the price we pay for innovation, and I don't mind modifying my code in order to embrace the future,” Franklin says.
He finds the Kinect for Windows v2 sensor improved in all categories. “The fidelity is amazing. It can track your skeleton in complete darkness. It can track your skeleton from 50 feet away (or more), and with a much wider field of vision. It can tell whether your hands are open, closed, or pointing,” Franklin states, adding, “I took full advantage of the new hand states in GesturePak. You can now make a gesture in which you hold out your open hand, close it, move it across your chest, and open it again.” In fact, Franklin credits the improvements in fidelity with convincing customers who had been on the fence. “They’re now beating down my door, asking me to build them a next-generation Kinect-based app.”
Austin, Texas: capital of the Lone Star State, home to the Texas Longhorns, and host of not one but two Kinect for Windows hackathons in the past few weeks. We were blown away—like Texas tumbleweeds, you know—by the ingenuity and talent on display at these Austin events.
NUI Central Kinect for Windows Hackathon
Developers, UI/UX designers, and enthusiasts gathered in Austin on February 21 for 24 hours of coding ingenuity with Kinect for Windows v2. Austin Mayor Steve Adler kicked off the event, reminding everyone of Austin’s role as a technology hub and challenging the hackers to create their best innovations. Sponsored by Microsoft, the event was held at WeWork, a shared office space for startups; the venue offered a comfortable lounge and private offices for the hardworking devs, who coded through the night.
The WeWork offices in Austin’s Historic District provided an inviting space for all-night hacking.
All of that coding resulted in some truly innovative Kinect for Windows applications (and some bleary-eyed hackers). The output ranged from games to medical applications to productivity enhancers. It was tough to choose the winners, but, steeled in our resolve by some Texas-strength black coffee, our panel of judges selected the top three apps. Each winning team received a cash prize and Kinect for Windows v2 sensors.
First place went to AR Sandbox, an onscreen, augmented-reality playground based on the infrared data collected by the Kinect sensor. When users manipulated a hand-held infrared reflective cube, the cube’s onscreen image transformed into a rubber duck or puppy. The app also created virtual rainstorms of rubber ducks and puppies. The user was able to interact with the ducks and puppies as onscreen objects.
Coming in second was the Advanced Coma Patient Monitoring System, which is intended to keep watch on comatose patients, generating alerts and recording events to a video file.
The third-place winner was I'm Hungry, an app that integrates Kinect and Skype, allowing callers to play a mini-game during a Skype call.
Inspired by the resourcefulness on display at the NUI Central Kinect for Windows Hackathon, we were eager to get back to Austin for the SXSW Music Hackathon. Luckily, we had fewer than four weeks to wait.
SXSW Music Hackathon Championship
Wednesday, March 18, found the Kinect for Windows team back in Austin for the start of the 2015 SXSW Music Hackathon Championship, where world-class hackers, designers, and programmers competed to create innovations for musicians, the music industry, and, of course, the fans. Armed with their programming know-how and a collection of music-tech APIs, competing teams had 24 hours to work on their prototypes and compete for the $10,000 grand prize. Among the Microsoft APIs available to the hackers were the Kinect for Windows SDK and the recently released Microsoft Band SDK.
Developers got a chance to learn about the APIs and meet the sponsors before the hackers pitched their ideas to recruit team members. Once the teams were formed, everyone quickly set to work creating music innovations.
The Kinect v2 sensor and the Microsoft Band added a unique flair to the hackathon. Teams tested their apps throughout the night by dancing in front of the Kinect sensor—when they weren’t busy doing laps to check their heart rate with the Band. These Microsoft products brought an interactive element that intensified the energy level throughout the night.
The SXSW Music Hackathon Championship was a beehive of coding activity, as developers raced the clock to create music apps.
Adding to the excitement of the late-night hackathon was a surprise performance by Boyfriend69, a talented entertainer who drew the developers to the front of the room, where she mingled and danced with them. Her show gave off a high-voltage vibe that kept the devs working through the night in true hackathon spirit.
Entertainer Boyfriend69’s surprise performance got the hackers up and mingling.
As dawn broke on March 19, the developers had fewer than eight hours to finish their projects before presenting them to qualify for the finals. While the last minutes of hacking ticked away, the teams feverishly polished their presentations. Here are the apps that emerged from the hackathon’s 24 hours of frenzied creativity:
Dandelion, a one-man team’s creation, used Rdio and last.fm to generate a QR code that aggregates listening data for display on an Apple Watch. When a user scans the code from another watch, Dandelion surfaces the song being listened to, using Rdio to play full songs or using other services to present 30-second previews.
MusicMap.io, an Austin-based team, built something similar to the Meerkat app, but for music. MusicMap allows anyone to broadcast geo-tagged video and plot it on a map. With this service, users can discover new music from all over the world. MusicMap uses Stream.me as a live streaming service.
KYM (an acronym for Know Your Music), presented by Vince Davis, goes through the existing library on a user’s phone and gathers relevant information about the music by using APIs from various sources. Users can also hook up the app to Apple TV or the Apple Watch, so when they’re listening to music at home, the app shows relevant tweets from the artist.
SetStory aims to solve a problem in festival logistics. Currently, no tool exists that quantitatively evaluates the potential of an event's success based on its artists. By using OpenAura to grab information from various social feeds, SetStory calculates a quantifiable score that gives festival promoters and organizers a reliable gauge of an event's financial viability.
Groupie helps users find promising new artists in their local city. Users can also look at data from other cities, in case they want to discover the hot new bands from places near and far. Groupie uses the Rdio API to play the music and the Echonest API to look up the band's locale.
Bandarama is a workout tool that provides video and audio feedback on the user’s exercise performance. If you're running, for example, and your heart rate slows down, the tempo of the music will slow down, too, signaling you to pick up the pace. Team members Boris Polania and Guillermo Zambrano ran in circles around the room to demonstrate that once you start running faster again, the tempo of the music speeds back up and an applause sound effect provides extra motivation.
Divebomb uses the Kinect for Xbox One sensor to bring users into the music through virtual reality. As songs play, notes fly across the screen, and the user moves his or her avatar to hit them as they race past.
Mashr takes two different songs and then mashes them together by using the Gracenote API. It also ties into the Musicnote API, which helps determine if two different songs will work well together.
(List and descriptions from William Gruger, social/streaming charts manager for Billboard)
The judges faced a tough job, as only five of these presenters would advance to the finals on Friday. But the intrepid judges were up to the task, selecting Bandarama, Mashr, MusicMap, KYM, and Dandelion to advance.
On Friday, a celebrity panel of judges, consisting of Ty Roberts (Gracenote), Alex White (Next Big Sound), Jonathan Dworkin (Warner Music Group), Bryan Calhoun (Blueprint), Eric Sheinkop (Music Dealers), Jonathan Hull (Facebook), Todd Hansen (SXSW), and Marc Ruxin (Rdio) reviewed the finalists’ projects and selected the winner.
Dandelion took top honors, winning the 2015 SXSW Music Hackathon and its $10,000 grand prize. But the big winners are music lovers, who will undoubtedly enjoy some of the great innovations created by the event’s hackers, sponsors, and artists.
Microsoft unveiled some exciting new APIs at the SXSW Music Hackathon. These included the Neon Hitch API, which enabled artist-in-residence Neon Hitch to close out her stage show with a Kinect v2-enabled creative visual accompaniment to her song “Sparks.” Meanwhile, artist-in-residence Robert DeLong worked with Ableton and Microsoft, two of the hackathon's major sponsors, to turn his body into an instrument, which he then used on stage during his shows, including his set at the YouTube space. Another novel creation was DJ Windows 98, an homage to the long-gone Microsoft operating system; it used a vintage CRT monitor controlled by the audience via Kinect for Windows.
As we left Austin for the second time in less than a month, we carried away memories of the creative energy we witnessed at both the NUI Central Kinect Hackathon and the 2015 SXSW Music Hackathon Championship.
While we’ve always thought that Kinect for Windows was a work of art, figuratively speaking, we are delighted to see the art world embracing the Kinect sensor as a creative tool. Two highly imaginative artistic uses of Kinect for Windows recently caught our attention, and we want to share them with you.
The first is a series of photographs by Israeli artist Assaf Evron, displayed at the Andrea Meislin Gallery in New York City from March 7 to April 25, 2015. Titled Visual Pyramid after Alberti, Evron’s striking photos show the interplay of light on everyday objects. The light is actually from the infrared spectrum emitted by the Kinect sensor. Using a separate infrared camera, Evron captures the Kinect-emitted infrared light as it’s reflected off the objects he’s photographing. The resulting images are a bold purple with a dense overlay of points of reflected infrared light.
This photograph, which captures reflected infrared light emitted by a Kinect sensor, is part of artist Assaf Evron’s Visual Pyramid after Alberti, 2013–2014.
(Copyright Assaf Evron. Photograph courtesy Andrea Meislin Gallery, New York.)
The photographs were inspired by the aesthetic philosophy of Renaissance thinker Leon Battista Alberti, who described a theory of linear perspective in his 1436 treatise Della pittura (On Painting). Alberti provided the mathematical underpinnings of perspective, showing how to render a three-dimensional illusion on a two-dimensional canvas. Evron’s photographs demonstrate Alberti’s theory in dramatic fashion.
Once you’ve stopped pondering Alberti’s ideas, we have a new brainteaser for you: what do you get when you mix performance art, experimental filmmaking, and an avant-garde music composition? Well, throw in two Kinect v2 sensors, some computers, and the right software, and you get as-phyx-i-a, an otherworldly movie that, in the words of its creators, “…is centered in an eloquent choreography that stresses the desire to be expressive without bounds.”
The work of co-directors Maria Takeuchi and Frederico Phillips, the three-minute film renders the sinuous dancing of performance artist Shiho Tanaka as a glowing array of light points and spidery connections, all set to a haunting electronic score. The visuals and music are both eerie and beautiful, as the dancer’s image, which seems both digital and human simultaneously, moves gracefully across the screen.
Phillips was responsible for the visuals, capturing some 30 minutes of Tanaka’s dancing as a mesh of point-cloud data using two Kinect v2 sensors. The data from both sensors was combined and then styled with various 3D tools to create the ethereal images in the final film. Composer Takeuchi used a variety of digital and analogue techniques to create the original soundtrack that accompanies the visuals.
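Combining point clouds from two sensors, as Phillips describes, requires aligning them in a common coordinate frame. Here is a minimal sketch of that merge step in Python with NumPy; the rotation and translation values are hypothetical stand-ins for what a real sensor calibration would produce, and none of this reflects the team's actual toolchain.

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, rotation, translation):
    """Merge two point clouds by mapping cloud_b into cloud_a's
    coordinate frame with a rigid transform (rotation + translation)."""
    aligned_b = cloud_b @ rotation.T + translation
    return np.vstack([cloud_a, aligned_b])

# Hypothetical calibration: second sensor faces the first, offset 2 m
# along x, i.e. rotated 180 degrees about the y (vertical) axis.
theta = np.pi
rot_y = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0,           1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
cloud_a = np.array([[0.0, 1.0, 2.0]])   # one point seen by sensor A
cloud_b = np.array([[0.0, 1.0, 0.5]])   # one point seen by sensor B
merged = merge_point_clouds(cloud_a, cloud_b, rot_y,
                            np.array([2.0, 0.0, 0.0]))
```

In a real pipeline the rigid transform comes from calibrating the two sensors against a shared target; once both clouds live in one frame, the combined points can be meshed and styled with ordinary 3D tools.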
As Visual Pyramid after Alberti and as-phyx-i-a show, Kinect for Windows can be a potent artistic tool in the right creative hands.
Virtual golfers may be eagerly anticipating the upcoming EA Sports PGA Tour for Xbox One, which is slated to include Kinect for Windows motion controls. But real golfers—the ones who actually hit the links—don’t have to wait to enjoy the golfing benefits of Kinect. They can use the power of the Kinect sensor to capture, analyze, and improve their golf swing, thanks to two golf-swing analysis products from Belgium-based Guru Training Systems: My Swinguru and Swinguru Pro.
Designed for use by serious amateur golfers, My Swinguru detects flaws in the user’s golf swing and offers remedies. The golfer simply takes a swing in front of the Kinect sensor, which captures the entire motion in 3D. There are no wires or markers to interfere with the golfer’s swing—just one Kinect sensor that provides state-of-the-art, three-dimensional motion capture.
Designed for use by amateur golfers, My Swinguru uses a single Kinect sensor to capture the golfer's swing in three dimensions.
The data collected by the Kinect sensor is crunched by a Windows PC running the Swinguru software, which uses a unique combination of synchronized 2D and 3D captures to measure key elements of the swing at any moment, something not possible with traditional video techniques. The golfer receives immediate feedback that identifies flaws and recommends remedial drills. And My Swinguru automatically records the golfer’s swings for comparative replay.
Designed for use by professional golf instructors, Swinguru Pro provides simultaneous top, side, and front views on the same screen. Pause, forward, and back controls allow instructors to drill down frame by frame. Each training session, with all its swing data, is automatically saved, so it can be replayed and compared to earlier or later sessions. The Pro version also allows swing motions to be recorded as a series of pictures, which enables a sophisticated “match-your-posture” function. This function freezes the golfer’s setup, 9 o’clock, and top-of-backswing positions for comparison and direct feedback. In addition, Swinguru Pro provides balance tracking, including a view that shows the golfer’s center-of-mass displacement during the swing. It also includes automated drawing tools, which make it easy for users to compare body position in swing after swing.
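Center-of-mass tracking of the kind Swinguru Pro offers can be approximated from tracked joint positions. The sketch below is an illustration only, not Swinguru's actual model; the joint names, coordinates, and mass weights are all hypothetical.

```python
def center_of_mass(joints, weights):
    """Approximate the body's center of mass as the mass-weighted
    average of 3D joint positions."""
    total = sum(weights.values())
    return tuple(
        sum(weights[j] * joints[j][axis] for j in joints) / total
        for axis in range(3)
    )

# Hypothetical joint positions (metres) and relative mass weights.
joints = {"head": (0.0, 1.7, 0.0),
          "torso": (0.0, 1.1, 0.0),
          "hips": (0.1, 0.9, 0.0)}
weights = {"head": 0.08, "torso": 0.5, "hips": 0.42}

address = center_of_mass(joints, weights)
# Comparing this point across frames of the swing shows how the
# golfer's weight shifts from address to follow-through.
```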
As demonstrated in this video, Swinguru Pro provides additional analyses for use by teaching professionals. Like My Swinguru, it uses just one Kinect sensor, which enables wireless motion capture.
Enhanced with Kinect for Windows v2
Initially developed for use with the original Kinect for Windows sensor, both versions of Swinguru have now been adapted for use with Kinect for Windows v2. Guru Training Systems CEO Sabastien Wulf is delighted with the improvements enabled by the new sensor. “The Kinect v2 sensor is a revolution for our use in sports motion analysis. Not only does the v2 sensor use a wider-angle time-of-flight camera, which allows us to reduce the minimum distance from the sensor for full body tracking, it also increases the image resolution tremendously, which enables a much enhanced user experience. What’s more, its new infrared time-of-flight depth sensor, combined with its new infrared illuminator, makes it so much more resistant to direct sunlight for 3D full body tracking.”
With help from Kinect for Windows and Swinguru, this golf season could be your best yet.
In case you hadn't noticed, the Windows Store added something really special to its line-up not too long ago: its first Kinect applications. The ability to create Windows Store applications had been a longstanding request from the Kinect for Windows developer community, so we were very pleased to deliver this capability through the latest Kinect sensor and the public release of the Kinect for Windows software development kit (SDK) 2.0.
The ability to sell Kinect solutions through the Windows Store means that developers can reach a broad and heretofore untapped market of businesses and consumers, including those with an existing Kinect for Xbox One sensor and the Kinect Adapter for Windows. Here is a look at three of the first developers to have released Kinect apps to the Windows Store.
Nayi Disha – getting kids moving and learning
You wouldn’t think that Nayi Disha needs to broaden its market—the company’s innovative, Kinect-powered early education software is already in dozens of preschools and elementary schools in India and the United States. But Nayi Disha co-founder Kartik Aneja is a man on a mission: to bring Nayi Disha’s educational software to as many young learners as possible. “The Windows Store gives us an opportunity to reach beyond the institutional market and into the home market. What parent doesn’t want to help their child learn?” asks Aneja, somewhat rhetorically. In addition, deployment in the Windows Store could help Nayi Disha reach schools and daycare centers beyond those in the United States and India.
Parents and teachers who discover Nayi Disha in the Windows Store will be impressed by its creative approach to learning. Based on Howard Gardner’s widely acclaimed theory of multiple intelligences, Nayi Disha appeals to children who learn best through movement, music, and storytelling. Each lesson teaches an important skill, such as learning to count or recognizing common foods.
Comparisons with Kaju, available in the Windows Store, teaches children about number values. Here, we see "Gator" swimming through the app's main menu.
These lessons are imparted through stories featuring Kaju, a friendly space-traveling alien, whose adventures and misadventures get the kids up and moving—and learning. For example, in one story Kaju is ejected from his spaceship and lands on an interstellar number line. To return to the spacecraft, he must jump sequentially from digit to digit, until he reaches a specified number. But here’s the rub: Kaju only jumps if the kids jump and call out the correct numbers. This, of course, is where the Kinect sensor comes into play. The sensor sees the children jumping and hears them counting, and Kaju responds accordingly. Watching a roomful of preschoolers joyfully leap and count as they work to get their alien friend back to his space capsule, you can see how Nayi Disha makes learning fun. The youngsters are acquiring important skills, but all they know is they’re having fun. In fact, their identification with Kaju is so strong that one little girl referred to Aneja as “Kaju’s papa.”
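Detecting a jump from skeletal data, as in the Kaju number-line lesson, can be as simple as watching a tracked joint's height against a standing baseline. A rough sketch, with hypothetical hip-height readings rather than real Kinect SDK calls:

```python
def detect_jump(hip_heights, baseline, threshold=0.15):
    """Return the index of the first frame where the tracked hip joint
    rises more than `threshold` metres above the standing baseline,
    or None if no jump occurs."""
    for frame, height in enumerate(hip_heights):
        if height - baseline > threshold:
            return frame
    return None

# Hypothetical stream of hip heights (metres) across successive frames.
heights = [0.95, 0.96, 0.95, 1.05, 1.18, 1.22, 1.05, 0.95]
frame = detect_jump(heights, baseline=0.95)
# `frame` is the first frame more than 15 cm above the baseline;
# in a game loop this is the moment Kaju would jump too.
```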
YAKiT: bringing animation to the masses
It doesn’t take much to get Kyle Kesterson yakking about YAKiT—the co-founder and CEO of the Seattle-based Freak’n Genius is justifiably proud of what his company has accomplished in fewer than three years. “We started with the idea of enabling anybody to create animated cartoons,” he explains. But then reality set in. “We had smart, creative, funny people,” he says, “but we didn’t have the technology that would allow an untrained person to make a fully animated cartoon. We came up with a really neat first product, which let users animate the mouth of a still photo, but it wasn’t the full-blown animation we had set our sights on.”
Then something wonderful happened. Freak’n Genius was accepted into a startup incubation program funded by Microsoft’s Kinect for Windows group, and the funny, creative people at YAKiT began working with the developer preview version of the Kinect v2 sensor.
Now, Freak’n Genius is poised to achieve its founders’ original mission: bringing the magic of full animation to just about anyone. Its Kinect-based technology takes what has been highly technical, time consuming, and expensive and makes it instant, free, and fun. The user simply chooses an on-screen character and animates it by standing in front of the Kinect v2 sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character. What’s more, with the ability to create Windows Store apps, Kinect v2 stands to bring Freak’n Genius’s full animation applications to countless new customers.
YAKiT's Kinect-powered app makes it possible for anyone to create humorous animations in real time.
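The kind of puppeteering YAKiT performs, mapping tracked joint positions onto an on-screen character, can be sketched in miniature as follows; the joint names, coordinates, and screen mapping are hypothetical, not YAKiT's actual pipeline.

```python
def drive_puppet(tracked_joints, anchor, scale=100.0):
    """Map tracked joint positions (metres, sensor space) onto a 2D
    cartoon puppet anchored at `anchor` (pixels on screen)."""
    ax, ay = anchor
    screen = {}
    for name, (x, y, z) in tracked_joints.items():
        # Horizontal motion maps to screen x; height maps to screen y
        # (screen y grows downward, so invert it).
        screen[name] = (ax + x * scale, ay - y * scale)
    return screen

# Hypothetical frame: right hand raised above the head.
frame = {"head": (0.0, 1.6, 2.0), "hand_right": (0.3, 1.9, 2.0)}
pose = drive_puppet(frame, anchor=(400, 300))
```

Run every frame against the sensor's body stream, a mapping like this makes the character mirror the performer in real time, which is the effect the two would-be zombie animators were laughing at.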
“When we tested the Kinect-based product with users, they loved it,” says Kesterson. “We had a couple of teenaged girls create animated foods—like apples and broccoli—for a school report on nutrition. They got so into the animation that soon they were making fruit-attacking zombies, and before we knew it, they were on the floor from laughing so hard. Their mother said to me ‘I’ve got to get this.’ That’s when I knew that we’d have a winner in the Windows Store.”
3D Builder: commoditizing 3D printing
As any tech-savvy person knows, 3D printing holds enormous potential—from industry (think small-batch manufacturing) to medicine (imagine “bio-printing” of body parts) to agriculture (consider bio-printed beef). Not to mention its rapid emergence as a source of home entertainment and amusement, as in the printing of 3D toys, gadgets, and gimcracks. It was with these capabilities in mind that, last year, Microsoft introduced the 3D Builder app, which allows users to make 3D prints easily from a Windows 8.1 PC.
Now, 3D Builder has taken things to the next level with the incorporation of the Kinect v2 sensor. “The v2 sensor generates gorgeous 3D meshes from the world around you,” says Kris Iverson, a principal software engineer in the Windows 3D Printing group. “It not only provides precise depth information, it captures full-color images of people, pets, and even entire rooms. And it scans in real scale, which can then be adjusted for output on a 3D printer.”
3D Builder uses Kinect v2 to create accurate, three-dimensional models, ready for 3D printing.
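Scanning in real scale and then adjusting for output, as Iverson describes, boils down to a uniform scale of the mesh vertices. Here is a sketch under assumed units (sensor space in metres, printer space in millimetres), not 3D Builder's actual code:

```python
def scale_mesh(vertices, target_height_mm):
    """Uniformly scale mesh vertices (given in metres) so the model's
    height matches target_height_mm, returning millimetre coordinates."""
    ys = [v[1] for v in vertices]
    height_m = max(ys) - min(ys)
    # Convert metres to millimetres, then shrink/enlarge to fit.
    factor = target_height_mm / (height_m * 1000.0)
    return [(x * 1000.0 * factor, y * 1000.0 * factor, z * 1000.0 * factor)
            for x, y, z in vertices]

# Hypothetical example: a 1.8 m full-body scan reduced to a 100 mm figurine.
scan = [(0.0, 0.0, 0.0), (0.2, 1.8, 0.1)]
figurine = scale_mesh(scan, target_height_mm=100.0)
```

Because the scan is captured in true scale, a single factor like this preserves the subject's proportions at any print size.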
Beyond its scanning fidelity, the Kinect-enabled version of 3D Builder also lets users automatically refine and repair their 3D models prior to printing. In addition, it gives them the power to manipulate the print image, combining or deleting objects or slicing them into pieces. Users can see the reconstruction as it happens and revise it on the fly.
The Kinect-enabled version of 3D Builder is now available in the Windows Store, opening up the enhanced possibilities of three-dimensional printing to both home users and a wider audience of professionals. For home users, the app enables the creation of more realistic 3D portraits and gadgets. Hobbyists can print through their Windows 8.1 computer directly to their own 3D printer, or they can send the data via the cloud to a 3D printing service. For professionals, most of whom will likely use cloud-based printing services, 3D Builder offers the potential to print in a range of materials, including plastics, ceramics, and metals.
While home enthusiasts seem the most likely first adopters of the new app, the appeal to professionals is clear. For example, Iverson recounts an experience when he was showing 3D Builder at Maker Faire New York last year. An event planner asked him where she might get the app, which she mused would be perfect for creating 3D mementos at weddings and bar mitzvahs. To Iverson, this is just the tip of the iceberg. “The Kinect v2 version of 3D Builder and its availability in the Windows Store really puts the pieces together, making a complex technology super simple for anyone.”
Nayi Disha, YAKiT, and 3D Builder represent just a thin slice of the potential for Kinect apps in the Windows Store. Whether the apps serve education, entertainment, or practical tasks, as in these three vignettes, or are intended for healthcare, manufacturing, retailing, or other purposes, Kinect v2 and the Windows Store offer a new world of opportunity for both developers and users.