• Kinect for Windows Product Blog

    Hacking away in Canada


    A member of team Kwartzlab++ demonstrates his team's project VR Builder at the Kinect Hackathon in Kitchener, Ontario.

    Last week, we headed north to Canada for the latest stop on our Kinect Hackathon world tour: a three-day event (August 8–10) in Kitchener, Ontario, where developers gathered to build applications* using Kinect for Windows v2. One of the three cities that make up the Regional Municipality of Waterloo, Kitchener has a booming tech community, fueled in part by the renowned computer science program at the University of Waterloo. So it was no surprise that the Kitchener attendees exhibited boundless energy and enormous creativity. Equally impressive was the hospitality of the people in Kitchener, especially Jennifer Janik and Rob Soosaar of Deep Realities, who were awesome hosts.

    And the winners* are…

    Team CleanSweep took first place.

    • Team CleanSweep took first place (US$500 and three Kinect for Windows v2 sensors) for their app Turtle Curling, an augmented reality version of one of Canada’s favorite games: curling. And no, it doesn’t send real turtles sliding down the ice. It uses two Kinect v2 sensors and a TurtleBot to create an incredibly fun version of this unique Olympic sport.
    • Team Christie Digitalia took second place (US$250 and a Kinect for Windows v2 sensor) for their app Projection Cosplay, which turns anyone into a virtual superhero by using projection mapping. Imagine yourself as a supernaturally endowed crime fighter, the nemesis of virtual bad guys everywhere. 
    • Team Command Your Space took third place (US$100 and a Kinect for Windows v2 sensor) with Command Your Space, an app that enables online shoppers to see how furniture and accessories will fit into real-world environments, as seen by the Kinect sensor. It can also be used with 3D scans of your own furniture, allowing you to do a little virtual rearranging.

    Hard at work: members of team BearHunterNinja (left) and team Titan (right)

    Other projects* presented

    • Angry Asteroid (team Pass/Fail), a game in the style of Angry Birds that uses Kinect motion controls
    • Art Jam (team REAPsters), a kinetic, interactive, multimedia experience in which users simultaneously interact with visual art and music using the Kinect sensor’s ability to detect motion
    • BearHunterNinja (team BearHunterNinja), an app that uses Kinect’s hand-state detection to enable the classic game of “rock, paper, scissors”; the team also implemented a variation of the game using custom machine-learning gestures
    • BOHAH (team BOHAH), a therapeutic video game for children with disabilities
    • Bricktastic (team Bricktastic), an adaptation of the team’s 3D brick-breaker mobile game to work with Kinect and Oculus Rift
    • ConnectConnect (team ConnectConnect), which networks together multiple Kinect sensors to allow sharing and combining of all the data in the same application, enabling more than six users and remote connections
    • Florb (team Titan), an app that lets you virtually fly, using your arms as thrusters
    • GIORP 5000 (team GIORPers), a proof of concept for an interactive retail clothing shopping experience
    • Half-Life 2 Mod for Kinect (team Barney’s Crabs), a Half-Life 2 mod with Kinect for Windows that enables movement and perspective changes
    • InteractionDemo (team Connecteraction), an app that powers experiments with Kinect data from the body, gestures, depth, and color
    • Speechy (team Speechy), a public speaking “training” program that uses Kinect to give you feedback on your posture, voice projection, and use of repeated words during presentations
    • Swish (team Focus on Fun), a marketing app that virtually dresses passersby in a store’s best outfit
    • Voice in Motion? (team Ace of Base?), an app that uses Kinect for Windows to interactively teach people American Sign Language (ASL)
    • VR Builder (team Kwartzlab++), an app that lets users build accurate 3D shapes that can then be placed in the user’s immediate area

    Upcoming events

    • Amsterdam, Netherlands (September 5–6): register at http://aka.ms/k4whackams
    • Vancouver, British Columbia (November 8): registration will open soon (keep an eye on our blog)

    Thanks to everyone who came to the event in Kitchener. I hope to see you at another event in the future!

    Ben Lower, Developer Community Manager, Kinect for Windows


    _____________________
    *The names of the hackathon projects and teams are determined solely by the participants and are not intended to be used commercially.


    V2 meets 3D


    As Microsoft Most Valuable Professional (MVP) James Ashley points out in a recent blog post, it’s a whole lot easier to create 3D movies with the Kinect for Windows v2 sensor and its preview software development kit (SDK 2.0 public preview). For starters, the v2 sensor captures up to three times more depth information than the original sensor did. That means you have far more depth data from which to construct your 3D images.

    The next big improvement is in the ability to map color to the 3D image. The original Kinect sensor used an SD camera for color capture, and the resulting low-resolution images made it difficult to match the color data to the depth data. (RGB+D, a tool created by James George, Jonathan Porter, and Jonathan Minard, overcame this problem.) Knowing that the v2 sensor has a high-definition (1080p) video camera, Ashley reasoned that he could use the camera's color images directly, without a workaround tool. He also planned to map the color data to depth positions in real time, a new capability built into the preview SDK.
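
    To make that capability concrete, here is a minimal native C++ sketch of the frame-level mapping the preview SDK exposes; the MapDepthToColor function name and the omitted error handling are our simplifications for illustration, not Ashley's actual code.

        // Map each pixel of a 512 x 424 depth frame to its matching pixel in
        // the 1920 x 1080 color frame (Kinect for Windows SDK 2.0 public preview).
        #include <windows.h>
        #include <Kinect.h>
        #include <vector>

        void MapDepthToColor(ICoordinateMapper* mapper, const UINT16* depthBuffer)
        {
            const UINT depthPointCount = 512 * 424;   // v2 depth resolution
            std::vector<ColorSpacePoint> colorPoints(depthPointCount);

            // One call maps the whole frame; colorPoints[i] holds the (X, Y)
            // coordinates in the 1080p color image for depth pixel i.
            mapper->MapDepthFrameToColorSpace(depthPointCount, depthBuffer,
                                              depthPointCount, colorPoints.data());

            // Pixels the mapper cannot resolve come back as negative infinity,
            // so callers should range-check before sampling the color frame.
        }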

    Ashley shot this 3D video of his daughter Sophia by using Kinect for Windows v2 and a standard laptop.

    Putting these features together, Ashley wrote an app that enabled him to create 3D videos on a standard laptop (a dual-core Intel i5 with 4 GB of RAM and integrated Intel HD Graphics 4400). While he has no plans at present to commercialize the application, he opines that it could be a great way to bring real-time 3D to video chats.

    Ashley also speculates that since the underlying principle is a point cloud, stills of the volumetric recording could be converted into surface meshes that can be read by CAD software or even turned into models that could be printed on a 3D printer. He also thinks it could be useful for recording biometric information in a physician’s office, or for recording precise 3D information at a crime scene, for later review.
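
    As a rough sketch of the point-cloud idea (again our illustration, not Ashley's app), the same coordinate mapper can convert a depth frame into camera-space points, which a few lines of C++ can write out as Wavefront .obj vertices for CAD or meshing tools to pick up; reconstructing an actual surface mesh from the points would be a further step.

        // Convert one depth frame into a point cloud and save it as .obj
        // vertex lines ("v x y z"); surface reconstruction is not done here.
        #include <windows.h>
        #include <Kinect.h>
        #include <fstream>
        #include <vector>

        void SavePointCloud(ICoordinateMapper* mapper, const UINT16* depthBuffer,
                            const char* path)
        {
            const UINT count = 512 * 424;             // v2 depth resolution
            std::vector<CameraSpacePoint> cloud(count);
            mapper->MapDepthFrameToCameraSpace(count, depthBuffer,
                                               count, cloud.data());

            std::ofstream obj(path);
            for (const CameraSpacePoint& p : cloud)
                if (p.Z > 0)                          // skip unresolvable pixels
                    obj << "v " << p.X << " " << p.Y << " " << p.Z << "\n";
        }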

    Those who want to learn more from Ashley about developing cool stuff with the v2 sensor should note that his book, Beginning Kinect Programming with Kinect for Windows v2, is due to be published in October.

    The Kinect for Windows Team



    Updated preview SDK now available


    Today, we are releasing an updated version of the Kinect for Windows SDK 2.0 public preview. This new SDK includes more than 200 improvements to the core SDK. Most notably, this release delivers the much-sought-after Kinect Fusion toolkit, which provides higher-resolution camera tracking and better performance. The updated SDK also includes substantial improvements in the tooling, specifically around Visual Gesture Builder (VGB) and Kinect Studio, and it offers 10 new samples (such as Discrete Gestures Basics, Face, and HD Face Basics) to get you coding faster. All of this adds up to a substantially more stable, more feature-rich product that lets you get serious about finalizing your applications for commercial deployment and, later this year, for availability in the Windows Store.

    The SDK is free and there will be no fees for runtime licenses of commercial applications developed with the SDK.

    If you’ve already downloaded the public preview, please be sure to take advantage of today’s updates. And for developers who haven’t used Kinect for Windows v2 yet, there’s no better time to get started!
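
    If you're starting from scratch, the basic native C++ pattern looks roughly like the sketch below; this is a simplified illustration rather than one of the SDK samples, with error handling and interface cleanup trimmed.

        // Open the default sensor and grab one depth frame
        // (Kinect for Windows SDK 2.0 public preview).
        #include <windows.h>
        #include <Kinect.h>

        int main()
        {
            IKinectSensor* sensor = nullptr;
            GetDefaultKinectSensor(&sensor);
            sensor->Open();

            IDepthFrameSource* source = nullptr;
            sensor->get_DepthFrameSource(&source);

            IDepthFrameReader* reader = nullptr;
            source->OpenReader(&reader);

            // AcquireLatestFrame fails until the sensor delivers its first
            // frame, so real code polls or subscribes to the frame-arrived event.
            IDepthFrame* frame = nullptr;
            while (FAILED(reader->AcquireLatestFrame(&frame))) { /* wait */ }

            UINT capacity = 0;
            UINT16* buffer = nullptr;                 // 512 x 424 depth values, in mm
            frame->AccessUnderlyingBuffer(&capacity, &buffer);

            // ... use the depth data, then release the interfaces ...

            frame->Release();
            sensor->Close();
            return 0;
        }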

    The Kinect for Windows Team



    Get in the game…literally!


    You’re the hero, blasting your way through a hostile battlefield, dispatching villains right and left. You feel the power as you control your well-armed, sculpted character through the game. But there is always the nagging feeling: that avatar doesn’t really look like me. Wouldn’t it be great if you could create a fully animated 3D game character that was a recognizable version of yourself?

    Well, with the Kinect for Windows v2 sensor and Fuse from Mixamo, you can do just that—no prior knowledge of 3D modeling required. In almost no time, you’ll have a fully armed, animated version of you, ready to insert into selected games and game engines.

    The magic begins with the Kinect for Windows v2 sensor. You simply pose in front of it while its 1080p high-definition camera captures six images of you: four of your body in static poses, and two of your face. With its enhanced depth sensing—up to three times greater than the original Kinect for Windows sensor—and its improved facial and body tracking, the v2 sensor captures your body in incredible 3D detail. It tracks 25 joint positions and, with a mesh of 2,000 points, a wealth of facial detail.

    You begin creating your personal 3D avatar by posing in front of the Kinect for Windows v2 sensor.
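
    To give developers a feel for what tracking 25 joint positions looks like in code, here is a minimal native C++ sketch against the SDK 2.0 public preview; the ReadJoints function is our illustration, it assumes a body-frame reader has already been opened (as in the SDK’s Body Basics sample), and error handling is pared down.

        // Read the 25 tracked joints for each body in view.
        #include <windows.h>
        #include <Kinect.h>

        void ReadJoints(IBodyFrameReader* reader)
        {
            IBodyFrame* frame = nullptr;
            if (FAILED(reader->AcquireLatestFrame(&frame))) return;

            IBody* bodies[BODY_COUNT] = { nullptr };  // v2 tracks up to 6 bodies
            frame->GetAndRefreshBodyData(BODY_COUNT, bodies);

            for (IBody* body : bodies)
            {
                BOOLEAN tracked = FALSE;
                if (body && SUCCEEDED(body->get_IsTracked(&tracked)) && tracked)
                {
                    Joint joints[JointType_Count];    // all 25 joints
                    body->GetJoints(JointType_Count, joints);
                    // Each joint carries a CameraSpacePoint position in meters,
                    // e.g., joints[JointType_HandRight].Position.
                }
            }

            for (IBody* body : bodies) { if (body) body->Release(); }
            frame->Release();
        }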

    Once you have captured your image with the Kinect sensor, you simply upload it to Body Snap or a similar scanning software program, which renders it as a 3D model. The model can then be downloaded as an .obj file formatted to meet Fuse’s import requirements (a sample of the format appears below); the download takes place in Body Hub, which, like Body Snap, is a product of Body Labs.

    In Body Hub, your 3D model is prepared for download as an .obj file.
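
    For the curious, .obj is a simple plain-text format; a hypothetical fragment (values invented for illustration) looks like this:

        # "v" lines are vertices; "f" lines are faces that index them.
        v 0.000000 0.000000 0.000000
        v 0.100000 0.000000 0.000000
        v 0.000000 0.100000 0.000000
        f 1 2 3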

    Next, you upload the 3D model to Fuse, where you can take advantage of more than 280 “blendshapes” that you can push and pull, sculpting your 3D avatar as much as you want. You can also change your hairstyle and your coloring, as well as choose from a large assortment of clothing.

    With your model imported to Fuse, you can customize its shape, hair style, and coloring.

    The customization process also gives you an extensive array of wardrobe choices.

    Once you’ve customized your newly scanned image, you export it to Mixamo, where it gets automatically rigged and animated. The process is so simple that it seems almost unreal. Rigging prepares your static 3D model for animation by inserting a 3D skeleton and binding it to the skin of your avatar. Normally, you would need to be a highly skilled technical director to accomplish this, but with Mixamo, any gamer can rig a character. Now you’re ready to save and export your animated self into Garry’s Mod and Team Fortress 2—which are just the first two games that have community-made workflows for Fuse-created characters. Support for exporting directly from Fuse to other popular "modding" games is on the Fuse development roadmap.

    On the left is a customized 3D avatar created from the scans of the gamer on the right.

    The beauty of this system is not only its simplicity, but also its speed and relatively low cost. Within just minutes, you can create a fully rigged and animated 3D character. The Kinect for Windows v2 sensor costs just US$199 in the Microsoft Store, and Body Snap from Body Labs is free to download. Fuse can be purchased through Steam for $99, and includes two free auto-rigs per week.

    In Mixamo, your avatar really comes to life, as auto-rigging makes it fully animated.

    The speed and low cost of this system make it appealing to professional game developers and designers, too, especially since workflows exist for Unity, UDK, Unreal Engine, Source Engine, and Source Filmmaker.

    Rigged and ready for action, your personal 3D avatar can be added to games and game engines, as in this shot from a game being developed with Unity.

    The folks at Mixamo are committed to making character creation as easy and accessible as possible. “Mixamo’s mission is to make 3D content creation accessible for everyone, and this is another step in that direction,” says Stefano Corazza, CEO of Mixamo. “Kinect for Windows v2 and Fuse make it easier than ever for gamers and game developers to put their likeness into a game. In minutes, the 3D version of you can be running around in a 3D scene.”

    And here's the payoff—the gamer plays the 3D avatar of himself. Now that’s putting yourself in the action!

    The expertise and equipment required for 3D modeling have long thwarted players and developers who want to add custom characters to games, but Kinect for Windows v2 plus Fuse is poised to break down this barrier. Soon you’ll be able to thrill to an animated version of yourself fulfilling your gaming desires, be it holding off alien hordes or building a virtual community. It’s just one more example of how Kinect for Windows technology and partnerships are enhancing entertainment and creativity.

    Kinect for Windows Team

