Blurry images: they’re a nuisance when they wreck your holiday photographs or ruin that video of your sister’s wedding. But they’re much more than a nuisance in the world of medical imaging. Blurry medical images can result in costly repeat scans or, worse yet, can lead to misdiagnoses.
As anyone who’s ever been x-rayed or scanned knows, it’s hard to hold still while a medical image is made. It’s harder yet for patients who suffer from dementia, many of whom cannot control their head movements during PET (positron emission tomography) scans of their brain. Such scans can be invaluable in determining brain function and the effectiveness of treatments, especially since the recent development of PET tracers that can image the characteristic protein plaques and inflammatory processes found in the brains of Alzheimer’s patients.
PET brain scans reveal beta amyloid and tau protein structures characteristic of Alzheimer’s disease, but head movements can render the scans unusable. Researchers hope to use Kinect for Windows and computer vision software to remedy this problem.
Now Imanova, a UK-based international imaging center, is looking to solve the problem of blurred PET scans—and the latest Kinect for Windows technology is a big part of their proposed solution. Britain’s Medical Research Council recently awarded Imanova and Imperial College London a grant to integrate the Kinect for Windows v2 sensor into PET scanners, in order to detect patient movement in real time during scans.
The Kinect sensor will be mounted above the patient's head in the PET scanner.
The latest Kinect sensor’s state-of-the-art 3D camera does not require special lighting or direct contact with the patient, but it can effectively capture even slight movements, the effects of which can then be removed by applying computer vision algorithms during the reconstruction of the diagnostic image. Coupled with the latest high-resolution medical scanners, the Kinect-enabled system should produce images uncontaminated by movement. Medical researchers will compile clinical data to demonstrate the accuracy and usability of the technology in a real-world imaging environment.
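The motion-correction idea can be illustrated with a toy example. The sketch below is not Imanova’s actual pipeline; it simply assumes an external tracker has reported a per-frame (dx, dy) translation of the head, and undoes each translation before the frames are accumulated into the final image.

```python
def shift_frame(frame, dx, dy):
    """Translate a 2D frame (list of rows) by (dx, dy); exposed pixels become 0."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def motion_corrected_sum(frames, motions):
    """Undo each frame's recorded (dx, dy) motion, then accumulate all frames."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for frame, (dx, dy) in zip(frames, motions):
        aligned = shift_frame(frame, -dx, -dy)  # apply the inverse of the measured motion
        for y in range(h):
            for x in range(w):
                acc[y][x] += aligned[y][x]
    return acc
```

Real scanners correct full 3D rigid-body motion (rotations as well as translations) during image reconstruction, but the principle of inverting the measured motion is the same.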
This project demonstrates how Kinect technology can serve as a low-cost but highly sophisticated tool for medical use. Not only does the Imanova system promise significant benefit to dementia patients, it also has the potential to improve the understanding of a range of neurological conditions.
The Kinect for Windows Team
Declines in cognitive function are among the most debilitating problems that affect senior citizens. Various studies have reported that physical training designed to prevent falls can have a positive effect on cognition in older adults. With that in mind, Japanese researchers at Kyoto University’s medical school employed Kinect for Windows to create Dual-Task Tai Chi, a unique game concept in which elderly participants use various body movements and gestures—captured by the Kinect sensor and relayed to a stick figure projected on a screen—to select numbers and placements to complete an on-screen Sudoku puzzle. The game thus involves two tasks: the coordination task of controlling the stick figure, and the cognitive task of figuring out the puzzle pattern.
The patient uses movements and gestures—which are detected by the Kinect sensor—to complete an on-screen Sudoku puzzle.
Forty-one elderly participants were divided into a test group (26 individuals) and a control group (15 individuals). After 12 weeks, the test group showed improved cognitive function compared to the control group, as measured by the trail-making test—which measures visual attention and task switching abilities—and a verbal fluency test (see note). While the researchers caution that the results are preliminary, they demonstrate yet another way that Kinect for Windows can help improve health and well-being.
Kinect for Windows Team
Note: As the investigators' research paper states, “Significant differences were observed between the two groups with significant group × time interactions for the executive cognitive functions measure, the delta-trail-making test (part B—part A; F1,36 = 4.94, P = .03; TG: pre mean 48.8 [SD 43.9], post mean 42.2 [SD 29.0]; CG: pre mean 49.5 [SD 51.8], post mean 64.9 [SD 54.7]).” Read the research paper for more details.
A marathon came to London early this year, but it wasn’t the usual 26 miles of pavement pounding. This was a different kind of endurance event, one involving 36 hours of coding with the Kinect v2 sensor.
The event, which took place March 21–22, was organized by Dan Thomas of Moov2, who had approached my colleagues and me in the Microsoft UK Developer Experience team a few months earlier, wondering if we could help put together a Kinect v2 hackathon. Of course we said yes, and with assistance from quite a few friends, the London Kinect Hack was off and running.
After many weeks of planning and hard work on the part of Dan and his team, the event came together and a site opened to distribute tickets. The site featured a snazzy logo (later emblazoned on T-shirts that were distributed at the event), and the hackathon’s allotted 100 tickets sold out within two days.
The London hackathon featured a snazzy logo that adorned participants' T-shirts.
Clearly there was a lot of interest, but we worried about actual turnout—always a risk with a weekend event, when other diversions compete for participants’ time. Moreover, we hoped that all the registrants appreciated that this was a coding event.
So, did the developers come to Kinect Hack London? Did they code? Did they have fun and deliver some great work? Absolutely—see for yourself in this video:
The hackathon was an unqualified success: more than 80 developers turned up for the weekend, coming from not only the UK but also France, Belgium, Holland, Germany, and even Mexico. In addition, a number of people came through on “spectator” tickets, eager to see what was happening.
Over the course of the next two days, teams were formed, Kinect sensors were loaned, laptops were borrowed, bugs were squashed, and sleep was (mostly) ignored. Twitter got a serious workout as teams tweeted their progress, while burgers, curry, and pasta disappeared, along with much coffee and a little beer.
Thirty-six hours later, the indefatigable hackers had produced a long list of projects to pitch during the show-and-tell that closed the event. This was a very relaxed, fun couple of hours, with participants getting to see and try out what the other teams had made. Here’s the full roll call of projects (many of which are featured in the video above):
Kinect Pong (Dave)
A variation of the classic game controlled by doing exercises—squats or press-ups (push-ups, to you Yanks). You can see my colleague Andrew Spooner demonstrating this in the video above.
Sphero Slalom (Victoria, Matthew, Hannah, and Phaninder)
Kinect-captured gestures steered a Sphero (a remote-controlled ball) through a challenging course.
Flight of Light (James)
A four-player Unity game was adapted to support Kinect input, with players spreading their wings and leaning to the left or right to control their avatar.
Vase Maker (a different James)
Not happy with regular home accessories, this one-man team used the Kinect sensor’s camera and Open Frameworks to create a host of weird and wonderful psychedelic, 3D vase visualisations from such props as a shopping bag.
Functional Movement Screen (Chris, Mustafa, Glenn, and Matthew)
Like a watchful gym teacher, this app used Kinect body tracking to analyze the quality with which participants performed a set of exercises.
Skelxplore (James and Leigh)
This app used Kinect to explore a user’s skeletal system and musculature.
Do It For Walt (Joe and Sam)
Developers from Disney prototyped an interactive theme park guide, which featured augmented reality that let users appear as their favorite movie characters.
Kinect + Oculus (Tom)
Impressive visuals and sensations ensued when the Kinect sensor brought the body into a view presented by the Oculus Rift, as real limbs combined with augmented challenges.
Flappy Box (Chi and Bryan)
A Flappy Bird-like game, this app enabled up to six players to control a flying bird by jumping and crouching in front of the Kinect sensor.
Music Machine (Jon)
In this multi-person experience, users’ bodies controlled the mix of a set of parts from a music track.
Skynect (Rick, Elizabeth, Tatiana, and Sankha)
This app brought Kinect into the world of Skype calls.
Kinect Talks (Fernando)
Intended as a tool to assist a five-year-old suffering from cerebral palsy, this app utilized simple body movements to create voice outputs.
Hole in the Wall (James, Alex, Scott, and Andrew)
In this Unity game, players used gestures to push shaped blocks into an advancing 3D wall.
Box Sizer (Alex, Michael, Tim, and Navid)
Designed for use by shipping companies, Box Sizer uses the Kinect sensor’s camera and depth detection to measure the volume of cardboard boxes.
Multi-Kinect Server (Julien)
This app combined output from multiple Kinect sensors over a network, creating a multi-sensor view of all the tracked bodies on a single monitor screen.
Bubblecatch (Sam, David, and Mark)
In this WPF-powered multiplayer game, players had to catch bubbles and avoid explosives.
Kinect Juggling (Phil and Joe)
A tool to teach juggling, this app used Kinect data to track the path of a juggled ball and analyze the juggler’s accuracy.
Kinect Kombat (Gareth, Yohann, and Rene)
A prototype first-person game, this app let combatants hurl virtual fireballs.
Helicar & Lewis (Joel, James, and Thomas)
Intended to help children visualise their imagined environments, this app placed 3D characters in a modelled 3D world.
Kinect Shooter (Kunal and Shabari)
This app provided a gun-wielding shoot-‘em-up experience.
3D Fuser (Claudio and Maruisz)
Need to map Kinect sensor data onto 3D models? This app did it.
The event was not organized as a competition, but Dan put together a small judging panel and three teams received special prizes at the end of the hackathon:
But really, everyone was a winner. With help from the US Kinect team, Dan had many additional prizes to give away during impromptu games. More than two dozen developers went home with a Kinect sensor of their own; others received Raspberry Pi 2 devices and starter kits or Spheros, the latter donated by—you guessed it—Sphero.
This event demonstrated that the Kinect v2 sensor is an inspirational piece of hardware for hackers, and Dan’s team did a wonderful job of creating a “by community, for community” event. Everyone had a great time, as witnessed by the incredibly positive feedback at the event and on Twitter (search #KinectHackLondon).
Here are a few of the participants’ write-ups; some even include the code they produced:
Huge thanks to Dan Thomas and the team at Moov2 for putting this hackathon together. It was a great piece of work and a lot of fun to be involved in. Thanks also to the UK Microsoft colleagues who helped out, especially Paul Lo and Andrew Spooner.
Above all, many thanks to all the participants who made this weekend so outstanding.
Mike Taulty, Tech Evangelist, Microsoft UK Developer Experience Team
As we discussed in a recent blog, the Kinect v2 sensor and SDK 2.0 enable developers to create Kinect-powered Windows Store apps, opening up an entirely new market for your Kinect for Windows applications. Now on GitHub you can find the Kinect 2 Hands on Labs, a tutorial series that teaches you, step by step, how to build a Windows Store 8.1 app that uses almost every feature of the new sensor.
The lab is a complete introduction, covering everything from setting up the Kinect sensor to utilizing its major features. Included are hands-on lessons about using the infrared, color, and depth data; creating a body mask; displaying body data; removing backgrounds; using the face library; creating hand cursor interactions; employing Kinect Studio; building gestures; adding speech recognition; and tracking multiple users.
The Kinect 2 Hands on Labs include lessons on every major feature of the latest sensor, complete with illustrations such as this one from the lesson on using depth data.
Each lesson includes code to help you build samples, providing a true hands-on learning experience. For example, the lesson on assembling a body mask walks you step by step through the necessary code.
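The Kinect SDK’s body index frame labels each depth pixel with the number of the tracked body it belongs to (0 through 5), or 255 for background. The lab’s own sample code is written in C#; the Python sketch below is only a simplified illustration of the masking idea, not the lesson’s actual code.

```python
BACKGROUND = 255  # body index value the sensor reports for "no body here"

def apply_body_mask(color_pixels, body_index_pixels):
    """Keep the color pixels that fall on a tracked body; blank out the rest.

    color_pixels: flat list of (r, g, b) tuples.
    body_index_pixels: flat list of body index values, one per pixel,
    assumed already mapped into the color frame's coordinate space.
    """
    return [color if index != BACKGROUND else (0, 0, 0)
            for color, index in zip(color_pixels, body_index_pixels)]
```

In the real lab, an extra step uses the SDK’s coordinate mapper to align the lower-resolution depth and body-index data with the color frame before this per-pixel test.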
If you’re thinking about tapping into the Windows Store market with your own Kinect app, this tutorial series is a great place to start.
Watching Cole throw his arms and shoulders into playing a video game, you might never guess that he suffers from a severe muscular disease. But he does. Cole has Duchenne muscular dystrophy (DMD), a genetic disorder that results in progressive muscle degeneration. DMD patients, almost all of whom are boys, seldom live beyond early adulthood, and most are wheelchair bound by their early teens.
Dedicated medical researchers are testing a host of experimental treatments that might slow or even halt the disease’s otherwise relentless progress. Currently, most clinical trials limit admission to patients who can walk unassisted for six straight minutes. The distance a patient can walk in six minutes is used as a baseline; if that distance increases during the course of treatment, it indicates that the experimental therapy is having a positive effect.
Unfortunately, the six-minute-walk requirement rules out a lot of boys who still have considerable upper-body strength but cannot walk the requisite six minutes. Physical therapists Linda Lowes and Lindsay Alfano at Nationwide Children’s Hospital are working to get more boys accepted into clinical trials by developing a simple, reliable measure of upper body abilities that could be used as an alternative to the walk test. And Kinect for Windows v2 is playing a critical role in their efforts.
ACTIVE-seated uses Kinect for Windows to measure upper-body muscle strength in boys withDuchenne muscular dystrophy.
Lowes, Alfano, and their colleagues have devised a Kinect-enabled video game in which seated DMD patients control the action by vigorous arm and shoulder movements. Called ACTIVE-seated (the acronym stands for Ability Captured Through Interactive Video Evaluation), the game not only measures upper-extremity abilities but does so while motivating the patient to perform his best.
ACTIVE-seated uses Kinect for Windows’ capabilities to record accurate data on the patient’s upper-extremity reach and range. The gamer—that is, the patient—is seated at a special table, some distance from a video monitor that displays the game. Taking advantage of the body tracking options in the Kinect software development kit (SDK), the researchers use the Kinect sensor’s infrared camera to track the position of the patient’s head, trunk, and arms as he plays the game. By identifying points on the head and sternum, both shoulders, and each arm, the researchers can measure the patient’s maximal upper-extremity movement in three planes: horizontal (left and right), vertical (table top to overhead), and depth (forward toward the camera).
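The measurement itself reduces to simple geometry. As a hypothetical illustration (not the researchers’ code), given a session’s worth of tracked 3D positions for, say, a hand joint, the maximal movement in each plane is just the range of that coordinate:

```python
def max_excursion(samples):
    """samples: list of (x, y, z) joint positions in meters.

    Returns the range of motion along each axis: horizontal (x),
    vertical (y), and depth (z, toward the camera)."""
    xs, ys, zs = zip(*samples)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
```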
Players can choose between two different games based on their interests. Both games were developed with input from the boys, who obviously know what pre-teen males enjoy. They overwhelmingly agreed that something “gross” would be best. Based on this recommendation, one game involves a spider invasion, in which the boys squish the spiders, which crunch realistically and ooze green innards. The second game, designed for the more squeamish, involves digging for jewels in a cave.
“You should see the faces of new patients light up when they hear that they’re going to be playing a video game instead of undergoing another boring set of tests,” says Lowes. The allure of a video game increases the patients’ motivation, which, in turn, improves the reliability of the results. When asked to perform uninspiring tests day after day, boredom sets in, and the desultory results don’t measure true functional ability. But when it comes to playing a video game, boredom isn’t a problem.
ACTIVE-seated is currently in testing, and a recent study of 61 DMD patients found that scores in the game correlated highly with parent reports of daily activities and mobility. Lowes and her colleagues are hopeful that these results will help convince the U.S. Food and Drug Administration to use the game as an alternative test for admission to DMD clinical trials.
“Jane” had a problem: a so-called frozen shoulder, which made it painful to use her left arm. The pain, which had begun mysteriously eight months earlier, affected nearly every aspect of Jane’s life, making it difficult for her to perform routine tasks at her office job and at home.
She had tried a number of traditional and alternative treatments, from massage therapy and stretching to acupuncture-like intramuscular stimulation and a soft-tissue treatment called myofascial release. None of these treatments provided meaningful relief, and Jane abandoned each out of disappointment. Emotionally exhausted by the seemingly incurable pain, Jane was prescribed antidepressants by her physician.
Then, as what he called a “last resort,” Jane’s physician referred her to chiropractor Ryan Comeau, one of the founders of Kinetisense, a Canadian company that has pioneered the use of Kinect for Windows v2 to record and track progress during physiotherapy for joint and range-of-motion problems.
Kinetisense’s software takes advantage of the Kinect v2 sensor’s ability to accurately record the exact position of body joints during therapeutic sessions. Unlike traditional methods of measuring joint angles, the Kinetisense system measures true joint values—based on the actual position of the bones—rather than approximating the angles formed by the external body parts.
Kinetisense uses the Kinect v2 sensor to record the exact position of the body joints during therapy, providing an unparalleled level of accuracy.
Kinetisense algorithms obtain the positions of the joints and calculate the exact angle of any given joint at any time. And they do this in less than half a second, without resorting to imprecise hand tools, such as inclinometers and goniometers, or expensive wearable equipment. The patient simply stands or sits in front of the v2 sensor and the Kinetisense software performs all of the necessary calculations with remarkable accuracy and speed. And because the sensor is measuring the true positions of the joints, Kinetisense provides accurate joint analysis even when patients unintentionally try to extend their range of motion by leaning, rather than relying solely on joint movement.
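The core calculation can be sketched with basic vector math. The code below illustrates the general approach, not Kinetisense’s actual algorithm: given the 3D positions of three tracked joints (for example shoulder, elbow, and wrist), the angle at the middle joint follows from the dot product of the two bone vectors.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by the segments b->a and b->c.

    a, b, c: (x, y, z) joint positions, e.g. shoulder, elbow, wrist."""
    ba = tuple(p - q for p, q in zip(a, b))
    bc = tuple(p - q for p, q in zip(c, b))
    dot = sum(p * q for p, q in zip(ba, bc))
    norms = math.dist(a, b) * math.dist(c, b)
    return math.degrees(math.acos(dot / norms))
```

An elbow bent at a right angle yields 90 degrees, for instance, while a fully extended arm yields 180.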
The objective accuracy of the Kinetisense measurements allows the practitioner to adjust the treatment and reach a more realistic prognosis. What’s more, Kinetisense helps with patient compliance, which is a well-documented problem in physiotherapy. And while there are several reasons for noncompliance, Comeau notes that, like Jane, “Many patients with range of motion problems drop out of therapy when they fail to discern a meaningful lessening of their pain. But because pain is a very subjective matter, many people perceive it as an ‘all or nothing’ proposition—either the pain is gone or it has not lessened at all. People may, in fact, be experiencing real benefits from their therapy, but fail to realize it because they are not yet completely pain-free. Kinetisense helps by providing both the patient and the practitioner with graphs that demonstrate real progress in range of motion, even when the patient has yet to sense the improvement in terms of pain reduction. The realization that therapy is working is incredibly reinforcing to patients, who are then much more likely to continue their treatment.”
The precise measurements of the joint angles enable Kinetisense to chart improvements in the patient's range of motion. The quantifiable therapeutic results allow patients to see irrefutable evidence of improvement.
Kinetisense meets the longstanding need for objectivity and evidence-based rehabilitation care—a boon to both the patient and the practitioner. And as for Jane, she’s continuing her treatment and shows ongoing improvement. She’s been able to reduce her antidepressant dosage by half, and has referred several friends and family members to Comeau’s practice.
At Microsoft, we are committed to providing more personal computing experiences. To support this, we recently extended Kinect’s value and announced the Kinect Adapter for Windows, enabling anyone with a Kinect for Xbox One to use it with their PCs and tablets. In an effort to simplify and create consistency for developers, we are focusing on that experience and, starting today, we will no longer be producing Kinect for Windows v2 sensors.
Kinect for Xbox One sensor
Over the past several months, we have seen unprecedented demand from the developer community for Kinect sensors and have experienced difficulty keeping up with requests in some markets. At the same time, we have seen the developer community respond positively to being able to use the Kinect for Xbox One sensor for Kinect for Windows app development, and we are happy to report that Kinect for Xbox One sensors and Kinect Adapter for Windows units are now readily available in most markets. You can purchase the Kinect for Xbox One sensor and Kinect Adapter for Windows in the Microsoft Store.
Kinect Adapter for Windows
The Kinect Adapter enables you to connect a Kinect for Xbox One sensor to Windows 8.0 and 8.1 PCs and tablets in the same way as you would a Kinect for Windows v2 sensor. And because both Kinect for Xbox One and Kinect for Windows v2 sensors are functionally identical, our Kinect for Windows SDK 2.0 works exactly the same with either.
Microsoft remains committed to Kinect as a development platform on both Xbox and Windows. So while we are no longer producing the Kinect for Windows v2 sensor, we want to assure developers who are currently using it that our support for the Kinect for Windows v2 sensor remains unchanged and that they can continue to use their sensor.
We are excited to continue working with the developer community to create and deploy applications that allow users to interact naturally with computers through gestures and speech, and continue to see the Kinect sensor inspire vibrant and innovative commercial experiences in multiple industries, including retail, education, healthcare, and manufacturing. To see the latest ways that developers are using Kinect, we encourage you to explore other stories in the Kinect for Windows blog.
Michael Fry, Senior Technology Evangelist for Kinect for Windows, Microsoft
What do you do after you’ve built a great app? You make it even better. That’s exactly what Carl Franklin, a Microsoft Most Valuable Professional (MVP), did with GesturePak. Actually, GesturePak is both a WPF app that lets you create your own gestures (movements) and store them as XML files, and a .NET API that can recognize when a user has performed one or more of your predefined gestures. It enables you to create gesture-controlled applications, which are perfect for situations where the user is not physically seated at the computer keyboard.
GesturePak v2 simplifies the creation of gesture-controlled apps. This image shows the app in edit mode.
Franklin’s first version of GesturePak was developed with the original Kinect for Windows sensor. For GesturePak v2, he utilized the Kinect for Windows v2 sensor and its related SDK 2.0 public preview, and as he did, he rethought and greatly simplified the whole process of creating and editing gestures. To create a gesture in the original GesturePak, you had to break the movement down into a series of poses, then hold each pose and say the word “snapshot,” at which point a frame of skeleton data was recorded. This process continued until you captured each pose in the gesture, which could then be tested and used in your own apps.
GesturePak v2 works very differently. You merely tell the app to start recording (with speech recognition), then you perform the gesture, and then tell it to stop recording. All of the frames are recorded. This gives you a way to play an animation of the gesture for your users.
GesturePak v2 still uses the same matching technology as version 1, relying on key frames (called poses in v1) that the user matches in series. But with the new version, once you've recorded the entire gesture, you can use the mouse wheel to "scrub" through the movement and pick out key frames. You also can select which joints to match simply by clicking on them. It's a much easier and faster way to create a gesture than the interface of GesturePak v1, which required you to select poses by using voice and manual commands.
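The key-frame approach can be sketched as follows. This is a hypothetical illustration of the matching idea, not GesturePak’s actual code: each key frame stores positions for the joints the author chose to match, and a gesture completes when the live body data hits every key frame in order, within a tolerance.

```python
def frames_match(live, key, tracked_joints, tolerance=0.1):
    """True if every tracked joint in the live frame is within
    `tolerance` (meters, per axis) of its position in the key frame."""
    return all(
        all(abs(l - k) <= tolerance for l, k in zip(live[j], key[j]))
        for j in tracked_joints
    )

class GestureMatcher:
    """Advances through a gesture's key frames as the user hits each one."""

    def __init__(self, key_frames, tracked_joints):
        self.key_frames = key_frames        # list of {joint: (x, y, z)} dicts
        self.tracked_joints = tracked_joints
        self.next_frame = 0

    def update(self, live_frame):
        """Feed one live body frame; returns True when the gesture completes."""
        if frames_match(live_frame, self.key_frames[self.next_frame],
                        self.tracked_joints):
            self.next_frame += 1
            if self.next_frame == len(self.key_frames):
                self.next_frame = 0  # reset for the next occurrence
                return True
        return False
```

A production recognizer would also handle timeouts and partial matches, but this captures the ordered key-frame matching the post describes.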
Carl Franklin offered these words of technical advice for devs who are writing WPF apps:
If you want to capture video, use SharpAVI (http://sharpavi.codeplex.com/)
If you want to convert the AVI to other formats, use FFmpeg (http://ffmpeg.org/)
When building an app with multiple windows/pages/user controls that use the Kinect sensor, only instantiate one instance of a sensor and reader, then bind to the different windows
Initialize the Kinect sensor object and all readers in the Form Loaded event handler of a WPF window, not the constructor
Another big change is the code itself. GesturePak v1 is written in VB.NET. GesturePak v2 was re-written in C#. (Speaking of coding, see the green box above for Franklin’s advice to devs who are writing WPF apps.)
Franklin was surprised by how easy it was to adapt GesturePak to Kinect for Windows v2. He acknowledges there were some changes to deal with—for instance, “Skeleton” is now “Body” and there are new JointType additions—but he expected that level of change. “Change is the price we pay for innovation, and I don't mind modifying my code in order to embrace the future,” Franklin says.
He finds the Kinect for Windows v2 sensor improved in all categories. “The fidelity is amazing. It can track your skeleton in complete darkness. It can track your skeleton from 50 feet away (or more), and with a much wider field of vision. It can tell whether your hands are open, closed, or pointing,” Franklin states, adding, “I took full advantage of the new hand states in GesturePak. You can now make a gesture in which you hold out your open hand, close it, move it across your chest, and open it again.” In fact, Franklin credits the improvements in fidelity with convincing customers who had been on the fence. “They’re now beating down my door, asking me to build them a next-generation Kinect-based app.”
Austin, Texas: capital of the Lone Star State, home to the Texas Longhorns, and host of not one but two Kinect for Windows hackathons in the past few weeks. We were blown away—like Texas tumbleweeds, you know—by the ingenuity and talent on display at these Austin events.
NUI Central Kinect for Windows Hackathon
Developers, UI/UX designers, and enthusiasts gathered in Austin for 24 hours of coding ingenuity using Kinect for Windows v2 on February 21. Austin Mayor Steve Adler kicked off the event, reminding everyone of Austin’s role as a technology hub and challenging the hackers to create their best innovations. Sponsored by Microsoft, the event was held at WeWork, a shared office space for startups; the venue offered a comfortable lounge and private offices for the hardworking devs, who coded through the night.
The WeWork offices in Austin’s Historic District provided an inviting space for all-night hacking.
All of that coding resulted in some truly innovative Kinect for Windows applications (and some bleary-eyed hackers). The output ranged from games to medical applications to productivity enhancers. It was tough to choose the winners, but, steeled in our resolve by some Texas-strength black coffee, our panel of judges selected the top three apps. Each winning team received a cash prize and Kinect for Windows v2 sensors.
First place went to AR Sandbox, an onscreen, augmented-reality playground based on the infrared data collected by the Kinect sensor. When users manipulated a hand-held infrared reflective cube, the cube’s onscreen image transformed into a rubber duck or puppy. The app also created virtual rainstorms of rubber ducks and puppies. The user was able to interact with the ducks and puppies as onscreen objects.
Coming in second was the Advanced Coma Patient Monitoring System, which is intended to keep watch on comatose patients, generating alerts and recording events to a video file.
The third-place winner was I'm Hungry, an app that integrates Kinect and Skype, allowing callers to play a mini-game during a Skype call.
Inspired by the resourcefulness on display at the NUI Central Kinect for Windows Hackathon, we were eager to get back to Austin for the SXSW Music Hackathon. Luckily, we had fewer than four weeks to wait.
SXSW Music Hackathon Championship
Wednesday, March 18, found the Kinect for Windows team back in Austin for the start of the 2015 SXSW Music Hackathon Championship, where world-class hackers, designers, and programmers competed to create innovations for musicians, the music industry, and, of course, the fans. With their programming know-how and a collection of music-tech APIs they could use, competing teams had 24 hours to work on their prototypes and compete for the $10,000 Grand Prize. Among the Microsoft APIs available to the hackers were the Kinect for Windows SDK and the recently released Microsoft Band SDK.
Developers got a chance to learn about the APIs and meet the sponsors before the hackers pitched their ideas to recruit team members. Once the teams were formed, everyone quickly set to work creating music innovations.
The Kinect v2 sensor and the Microsoft Band added a unique flair to the hackathon. Teams tested their apps throughout the night by dancing in front of the Kinect sensor—when they weren’t busy doing laps to check their heart rate with the Band. These Microsoft products brought an interactive element that intensified the energy level throughout the night.
The SXSW Music Hackathon Championship was a beehive of coding activity, as developers raced the clock to create music apps.
Adding to the excitement of the late-night hackathon was a surprise performance by Boyfriend69, a talented entertainer who drew the developers to the front of the room, where she mingled and danced with them. Her show gave off a high-voltage vibe that kept the devs working through the night in true hackathon spirit.
Entertainer Boyfriend69’s surprise performance got the hackers up and mingling.
As dawn broke on March 19, the developers had fewer than eight hours to finish their projects before presenting them to qualify for the finals. While the last minutes of hacking ticked away, the teams feverishly polished their presentations. Here are the apps that emerged from the hackathon’s 24 hours of frenzied creativity:
Dandelion, built by a one-man team, used Rdio and last.fm to create a QR code that aggregates listening data for display on an Apple Watch. When a user scans the code from another watch, Dandelion surfaces the song being listened to, using Rdio to play full songs or using other services to present 30-second previews.
MusicMap.io, an Austin-based team, is similar to Apple’s Meerkat app, but for music. MusicMap allows anyone to broadcast geo-tagged video and plot it on a map. With this service, users can discover new music from all over the world. MusicMap uses Stream.me as a live streaming service.
KYM (an acronym for Know Your Music), presented by Vince Davis, goes through the existing library on a user’s phone and gathers relevant information about the music by using APIs from various sources. Users can also hook up the app to Apple TV or the Apple Watch, so when they’re listening to music at home, the app shows relevant tweets from the artist.
SetStory aims to solve a problem in festival logistics. Currently, no tool exists that quantitatively evaluates the potential of an event's success based on its artists. By using OpenAura to grab information from various social feeds, SetStory calculates a quantifiable score that gives festival promoters and organizers a reliable gauge of an event's financial viability.
Groupie helps users find promising new artists in their local city. Users can also look at data from other cities, in case they want to discover the hot new bands from places near and far. Groupie uses the Rdio API to play the music and the Echonest API to look up the band's locale.
Bandarama is a workout tool that provides video and audio feedback on the user’s exercise performance. If you're running, for example, and your heart rate slows down, the tempo of the music will slow down, too, signaling you to pick up the pace. Team members Boris Polania and Guillermo Zambrano ran in circles around the room to demonstrate that once you start running faster again, the tempo of the music speeds back up and an applause sound effect provides extra motivation.
Divebomb uses the Kinect for Xbox One sensor to bring users into the music through virtual reality. As songs play, notes fly across the screen and the user can move his or her avatar to hit the notes as they race across the screen.
Mashr takes two different songs and then mashes them together by using the Gracenote API. It also ties into the Musicnote API, which helps determine if two different songs will work well together.
(List and descriptions from William Gruger, social/streaming charts manager for Billboard)
The judges faced a tough job, as only five of these presenters would advance to the finals on Friday. But the intrepid judges were up to the task, selecting Bandarama, Mashr, MusicMap, KYM, and Dandelion to advance.
On Friday, a celebrity panel of judges, consisting of Ty Roberts (Gracenote), Alex White (Next Big Sound), Jonathan Dworkin (Warner Music Group), Bryan Calhoun (Blueprint), Eric Sheinkop (Music Dealers), Jonathan Hull (Facebook), Todd Hansen (SXSW), and Marc Ruxin (Rdio) reviewed the finalists’ projects and selected the winner.
Dandelion took top honors, winning the 2015 SXSW Music Hackathon and its $10,000 grand prize. But the big winners are music lovers, who will undoubtedly enjoy some of the great innovations created by the event’s hackers, sponsors, and artists.
Microsoft unveiled some exciting new APIs at the SXSW Music Hackathon. These included the Neon Hitch API, which enabled artist-in-residence Neon Hitch to close out her stage show with a Kinect v2-enabled creative visual accompaniment to her song "Sparks." Meanwhile, artist-in-residence Robert DeLong worked with Ableton and Microsoft, two of the hackathon's major sponsors, to turn his body into an instrument, which he then used on stage during his shows, including his set at the YouTube space. Another novel creation was DJ Windows 98, an homage to the long-gone Microsoft operating system. It used a vintage CRT monitor controlled by the audience via Kinect for Windows.
As we left Austin for the second time in less than a month, we carried away memories of the creative energy we witnessed at both the NUI Central Kinect Hackathon and the 2015 SXSW Music Hackathon Championship.
While we’ve always thought that Kinect for Windows was a work of art, figuratively speaking, we are delighted to see the art world embracing the Kinect sensor as a creative tool. Two highly imaginative artistic uses of Kinect for Windows recently caught our attention, and we want to share them with you.
The first is a series of photographs by Israeli artist Assaf Evron, displayed at the Andrea Meislin Gallery in New York City from March 7 to April 25, 2015. Titled Visual Pyramid after Alberti, Evron’s striking photos show the interplay of light on everyday objects. The light is actually from the infrared spectrum emitted by the Kinect sensor. Using a separate infrared camera, Evron captures the Kinect-emitted infrared light as it’s reflected off the objects he’s photographing. The resulting images are a bold purple with a dense overlay of points of reflected infrared light.
This photograph, which captures reflected infrared light emitted by a Kinect sensor, is part of artist Assaf Evron's Visual Pyramid after Alberti, 2013–2014.
(Copyright Assaf Evron. Photograph courtesy Andrea Meislin Gallery, New York.)
The photographs were inspired by the aesthetic philosophy of Renaissance thinker Leon Battista Alberti, who described a theory of linear perspective in his 1436 treatise Della pittura (On Painting). Alberti provided the mathematical underpinnings of perspective, showing how to render a three-dimensional illusion on a two-dimensional canvas. Evron's photographs demonstrate Alberti's theory in dramatic fashion.
Once you’ve stopped pondering Alberti’s ideas, we have a new brainteaser for you: what do you get when you mix performance art, experimental filmmaking, and an avant-garde music composition? Well, throw in two Kinect v2 sensors, some computers, and the right software, and you get as-phyx-i-a, an otherworldly movie that, in the words of its creators, “…is centered in an eloquent choreography that stresses the desire to be expressive without bounds.”
The work of co-directors Maria Takeuchi and Frederico Phillips, the three-minute film renders the sinuous dancing of performance artist Shiho Tanaka as a glowing array of light points and spidery connections, all set to a haunting electronic score. The visuals and music are both eerie and beautiful, as the dancer’s image, at once digital and human, moves gracefully across the screen.
Phillips was responsible for the visuals, capturing some 30 minutes of Tanaka’s dancing as a mesh of point-cloud data with two Kinect v2 sensors. The data from both sensors was combined and then styled with various 3D tools to create the ethereal images in the final film. Composer Takeuchi used a variety of digital and analogue techniques to create the original sound track that accompanies the visuals.
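For the curious, combining point clouds from two depth sensors generally means mapping one sensor's points into the other's coordinate frame before merging. The sketch below is a minimal illustration of that idea, not Phillips's actual pipeline; the function name and the toy calibration values are hypothetical, and it assumes a rigid transform (rotation plus translation) has already been obtained by calibrating the two sensors against each other.

```python
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, rotation, translation):
    """Merge two (N, 3) point clouds from separately mounted depth sensors.

    cloud_b is first mapped into cloud_a's coordinate frame using a rigid
    transform (3x3 rotation matrix + 3-vector translation) from a one-time
    extrinsic calibration, then the two clouds are stacked into one array.
    """
    aligned_b = cloud_b @ rotation.T + translation
    return np.vstack([cloud_a, aligned_b])

# Toy example: the second sensor sits 1 m along x from the first,
# with no relative rotation. Each cloud holds a single point 2 m away.
a = np.array([[0.0, 0.0, 2.0]])
b = np.array([[0.0, 0.0, 2.0]])
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
merged = merge_point_clouds(a, b, R, t)
```

Once the clouds share one coordinate frame, the merged array can be fed to whatever 3D tooling handles the styling and rendering.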
As Visual Pyramid after Alberti and as-phyx-i-a show, Kinect for Windows can be a potent artistic tool in the right creative hands.