• Kinect for Windows Product Blog

    Real-time 3D scanning stuns the gnome world


    Garden gnomes: they decorate our yards, take bizarre trips, and now can be scanned in 3D in real time by using readily available computer hardware, as can be seen in this video from ReconstructMe. The developers employed the preview version of the Kinect for Windows v2 sensor and SDK, taking advantage of the sensor’s enhanced color and depth streams. Instead of directly linking the input of the Kinect with ReconstructMe, they streamed the data over a network, which allowed them to decouple the reconstruction from the data acquisition.
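
    The decoupling Heindl describes is easy to picture: the machine attached to the sensor only grabs frames and pushes them onto the wire, while a second machine runs the reconstruction. As a rough illustration only (ReconstructMe’s actual transport isn’t described in the post), here is what the acquisition side might look like with the preview Microsoft.Kinect API; the host name, port, and frame header below are hypothetical.

        // Acquisition-side sketch: read depth frames and stream each one, with a small
        // width/height header, to a reconstruction machine. Hypothetical endpoint and
        // framing; API names are from the Kinect for Windows v2 SDK preview and may change.
        using System;
        using System.IO;
        using System.Net.Sockets;
        using Microsoft.Kinect;

        class DepthStreamer
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.GetDefault();
                DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
                FrameDescription desc = sensor.DepthFrameSource.FrameDescription;

                ushort[] pixels = new ushort[desc.LengthInPixels];
                byte[] bytes = new byte[pixels.Length * sizeof(ushort)];

                using (var client = new TcpClient("reconstruction-host", 9000)) // hypothetical endpoint
                using (var net = new BinaryWriter(client.GetStream()))
                {
                    reader.FrameArrived += (s, e) =>
                    {
                        using (DepthFrame frame = e.FrameReference.AcquireFrame())
                        {
                            if (frame == null) return;
                            frame.CopyFrameDataToArray(pixels);
                            Buffer.BlockCopy(pixels, 0, bytes, 0, bytes.Length);

                            // Simple framing: width, height, then the raw 16-bit depth values.
                            net.Write(desc.Width);
                            net.Write(desc.Height);
                            net.Write(bytes);
                            net.Flush();
                        }
                    };

                    sensor.Open();
                    Console.ReadLine();   // stream until Enter is pressed
                    reader.Dispose();     // stop callbacks before the socket closes
                    sensor.Close();
                }
            }
        }

    The reconstruction side would read the same header-plus-payload frames off the socket and feed them to its fusion pipeline, so a slow reconstruction never stalls the sensor.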

    Real-time 3D scan of garden gnome created by using Kinect for Windows v2

    Developer Christoph Heindl (he’s the one holding the gnome in the video) notes that the ReconstructMe team plans to update this 3D scanning technology when the Kinect for Windows v2 is officially released this summer, saying, “We’re eager to make this technology widely available upon the release of Kinect for Windows v2.”

    Heindl adds that this real-time process has potential applications in 3D scanning, 3D modelling through gestures, and animation. Not to mention the ability to document gnomic travels in 3D!

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Windows Store app development is coming to Kinect for Windows


    Today at Microsoft BUILD 2014, Microsoft made it official: the Kinect for Windows v2 sensor and SDK are coming this summer (northern hemisphere). With them, developers will be able to start creating Windows Store apps with Kinect for the first time. The ability to build such apps has been a frequent request from the developer community. We are delighted that it’s now on the immediate horizon: developers can start building this summer, and can commercially deploy their solutions and make their apps available to Windows Store customers later in the season.

    The ability to create Windows Store apps with Kinect for Windows not only fulfills a dream of our developer community, it also marks an important step forward in Microsoft’s vision of providing a unified development platform across Windows devices, from phones to tablets to laptops and beyond. Moreover, access to the Windows Store opens a whole new marketplace for business and consumer experiences created with Kinect for Windows.

    The Kinect for Windows v2 has been re-engineered with major enhancements in color fidelity, video definition, field of view, depth perception, and skeletal tracking. In other words, the v2 sensor offers greater overall precision, improved responsiveness, and intuitive capabilities that will accelerate your development of voice and gesture experiences.

    Specifically, the Kinect for Windows v2 includes 1080p HD video, which allows for crisp, high-quality augmented scenarios; a wider field of view, which means that users can stand closer to the sensor—making it possible to use the sensor in smaller rooms; improved skeletal tracking, which opens up even better scenarios for health and fitness apps and educational solutions; and new active infrared detection, which provides better facial tracking and gesture detection, even in low-light situations.

    The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:

    • Windows Store app development: Being able to integrate the latest human computing technology into Windows apps and publish those to the Windows Store will give our developers the ability to reach more customers and open up access to natural user experiences in the home.
    • Unity support: We are committed to supporting the broader developer community with a mix of languages, frameworks, and protocols. With support for Unity this summer, more developers will be able to build and publish their apps to the Windows Store by using tools they already know.
    • Improved anatomical accuracy: With the first-generation SDK, developers were able to track up to two people simultaneously; now, their apps can track up to six. The number of joints that can be tracked has increased from 20 to 25 joints per person, and joint orientation data is more accurate. The result is skeletal tracking that’s greatly enhanced overall, making it possible for developers to deliver new and improved applications with skeletal tracking, which our preview participants are calling “seamless.” (A minimal body-tracking sketch follows this list.)
    • Simultaneous, multi-app support: Multiple Kinect-enabled applications can run simultaneously. Our community has frequently requested this feature and we’re excited to be able to give it to them with the upcoming release.
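
    For developers wondering what the new body tracking looks like in code, here is a minimal sketch, assuming the Microsoft.Kinect API that ships with the v2 SDK preview (names may still change before the summer release). It opens a body reader, walks the up-to-six tracked bodies, and prints each one’s head position and right-hand state.

        // Minimal body-tracking sketch against the Kinect for Windows v2 SDK preview.
        using System;
        using Microsoft.Kinect;

        class BodyTrackingSketch
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.GetDefault();
                BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();

                // The v2 sensor tracks up to six bodies, each with 25 joints.
                Body[] bodies = new Body[sensor.BodyFrameSource.BodyCount];

                reader.FrameArrived += (s, e) =>
                {
                    using (BodyFrame frame = e.FrameReference.AcquireFrame())
                    {
                        if (frame == null) return;
                        frame.GetAndRefreshBodyData(bodies);

                        foreach (Body body in bodies)
                        {
                            if (body == null || !body.IsTracked) continue;

                            CameraSpacePoint head = body.Joints[JointType.Head].Position;
                            Console.WriteLine(
                                "Body {0}: {1} joints, head at ({2:F2}, {3:F2}, {4:F2}) m, right hand {5}",
                                body.TrackingId, body.Joints.Count,
                                head.X, head.Y, head.Z, body.HandRightState);
                        }
                    }
                };

                sensor.Open();
                Console.ReadLine();   // keep the process alive while frames arrive
                sensor.Close();
            }
        }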

    Developers who have been part of the Kinect for Windows v2 Developer Preview program praise the new sensor’s capabilities, which take natural human computing to the next level. We are awed and humbled by what they’ve already been able to create.

    Technologists from a few participating companies are on hand at BUILD, showing off the apps they have created by using the Kinect for Windows v2. See what two of them, Freak’n Genius and Reflexion Health, have already been able to achieve, and learn more about these companies.

    The v2 sensor and SDK dramatically enhance the world of gesture and voice control pioneered in the original Kinect for Windows, opening up new ways for developers to create applications that transform how businesses and consumers interact with computers. If you’re using the original Kinect for Windows to develop natural voice- and gesture-based solutions, you know how intuitive and powerful this interaction paradigm can be. And if you haven’t yet explored the possibilities of building natural applications, what are you waiting for? Join us as we continue to make technology easier to use and more intuitive for everyone.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    BUILDing business with Kinect for Windows v2


    BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.

    Left: A student is choosing a Freak’n Genius character to animate in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health, can track a patient performing physical therapy exercises at home and give her immediate feedback on her execution while also transmitting the results to her therapist.

    Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

    But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head, by taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.
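
    Freak’n Genius hasn’t published its animation code, but the core idea of performance-based animation is easy to sketch: each frame, turn pairs of tracked joints into bone angles that the on-screen character’s rig can consume. The snippet below, written against the v2 SDK preview’s Microsoft.Kinect API, derives the rotation of the performer’s right forearm; a puppet engine would simply copy that angle onto the matching character bone.

        // Illustrative only -- not Freak'n Genius's implementation.
        using System;
        using Microsoft.Kinect;

        static class PuppetMapping
        {
            // Angle (in degrees) of the performer's right forearm in the camera's X/Y plane:
            // 0 degrees points right, 90 degrees points straight up.
            public static double RightForearmAngle(Body body)
            {
                CameraSpacePoint elbow = body.Joints[JointType.ElbowRight].Position;
                CameraSpacePoint wrist = body.Joints[JointType.WristRight].Position;

                return Math.Atan2(wrist.Y - elbow.Y, wrist.X - elbow.X) * 180.0 / Math.PI;
            }
        }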

    What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.

    Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.

    While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be to the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of their recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling  to perform the exercises correctly at home—or simply skipping them altogether.

    Reflexion Health, based in San Diego, uses Kinect for Windows to augment its physical therapy program and give therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Its application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allow the software to capture the patient’s exercise moves in far more precise detail. It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
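
    Reflexion Health’s algorithms aren’t spelled out in this post, but the kind of real-time check Vera performs can be sketched with simple joint geometry: compute the angle at a joint from the surrounding joint positions and compare it with the prescribed range. The example below, assuming the v2 SDK preview’s Microsoft.Kinect API, measures right-knee flexion; the target range is an illustrative placeholder, not a clinical value.

        // Illustrative only -- not Reflexion Health's code.
        using System;
        using Microsoft.Kinect;

        static class ExerciseCheck
        {
            // Knee angle in degrees, from the knee-to-hip and knee-to-ankle vectors.
            public static double RightKneeAngle(Body body)
            {
                CameraSpacePoint hip = body.Joints[JointType.HipRight].Position;
                CameraSpacePoint knee = body.Joints[JointType.KneeRight].Position;
                CameraSpacePoint ankle = body.Joints[JointType.AnkleRight].Position;

                double ax = hip.X - knee.X, ay = hip.Y - knee.Y, az = hip.Z - knee.Z;
                double bx = ankle.X - knee.X, by = ankle.Y - knee.Y, bz = ankle.Z - knee.Z;

                double dot = ax * bx + ay * by + az * bz;
                double mags = Math.Sqrt(ax * ax + ay * ay + az * az) *
                              Math.Sqrt(bx * bx + by * by + bz * bz);
                double cos = Math.Max(-1.0, Math.Min(1.0, dot / mags));
                return Math.Acos(cos) * 180.0 / Math.PI;
            }

            // Immediate feedback against a prescribed range (placeholder numbers).
            public static string Feedback(double kneeAngleDegrees)
            {
                if (kneeAngleDegrees < 80) return "Bend a little less";
                if (kneeAngleDegrees > 100) return "Bend a little more";
                return "Good rep!";
            }
        }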

    Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health. 

    Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.”  As a developer, he finds the improved skeletal tracking the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.”  And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited for the new color camera and depth sensor, which together provide a much better image for the physical therapist to review.  “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.

    But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor.  Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”

    From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Revealing Kinect for Windows v2 hardware


    As we continue the march toward the upcoming launch of Kinect for Windows v2, we’re excited to share the hardware’s final look.

    Sensor

    The sensor closely resembles the Kinect for Xbox One, except that it says “Kinect” on the top panel, and the Xbox Nexus—the stylized green “x”—has been changed to a simple, more understated power indicator:

    Kinect for Windows v2 sensor

    Hub and power supply

    The sensor requires a couple of other components to work: the hub and the power supply. Tying everything together is the hub (top item pictured below), which accepts three connections: the sensor, USB 3.0 output to PC, and power. The power supply (bottom item pictured below) does just what its name implies: it supplies all the power the sensor requires to operate. The power cables will vary by country or region, but the power supply itself supports input voltages from 100 to 240 volts.

    Kinect for Windows v2 hub (top) and power supply (bottom)

    As this first look at the Kinect for Windows v2 hardware indicates, we're getting closer and closer to launch. So stay tuned for more updates on the next generation of Kinect for Windows.

    Kinect for Windows Team



  • Kinect for Windows Product Blog

    Swap your face…really


    Ever wish you looked like someone else? Maybe Brad Pitt or Jennifer Lawrence? Well, just get Brad or Jennifer in the same room with you, turn on the Kinect for Windows v2 sensor, and presto: you can swap your mug for theirs (and vice versa, of course). Don’t believe it? Then take a look at this cool video from Apache, in which two developers happily trade faces.

    Swapping faces in real time—let the good times roll

    According to Adam Vahed, managing director at Apache, the ability of the Kinect for Windows v2 sensor and SDK to track multiple bodies was essential to this project, as the solution needed to track the head position of both users. In fact, Adam rates the ability to perform full-skeletal tracking of multiple bodies as the Kinect for Windows v2 sensor’s most exciting feature, observing that it “opens up so many possibilities for shared experiences and greater levels of game play in the experiences we create.”

    Adam admits that the face swap demo was done mostly for fun. That said, he also notes that “the ability to identify and capture a person’s face in real time could be very useful for entertainment-based experiences—for instance, putting your face onto a 3D character that can be driven by your own movements.”

    Adam also stressed the value of the higher definition color feed in the v2 sensor, noting that Apache’s developers directly manipulated this feed in the face swap demo in order to achieve the desired effect. He finds the new color feed provides the definition necessary for full-screen augmented-reality experiences, something that wasn’t possible with the original Kinect for Windows sensor.
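
    Apache hasn’t shared its source, but the two ingredients Adam describes, multi-body tracking and direct manipulation of the 1080p color feed, are enough to sketch the core of a face swap: locate each tracked head in the color image, then exchange the two pixel regions. The sketch below assumes the v2 SDK preview’s Microsoft.Kinect API; bounds checking and blending are omitted.

        // Illustrative only -- not Apache's implementation. Bounds checks and edge
        // blending are omitted for brevity.
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.Kinect;

        static class FaceSwapSketch
        {
            // Color-space pixel coordinates of every tracked head.
            public static List<ColorSpacePoint> HeadPixels(KinectSensor sensor, Body[] bodies)
            {
                CoordinateMapper mapper = sensor.CoordinateMapper;
                return bodies
                    .Where(b => b != null && b.IsTracked)
                    .Select(b => mapper.MapCameraPointToColorSpace(b.Joints[JointType.Head].Position))
                    .ToList();
            }

            // Swap two square regions of side 'size' pixels in a BGRA color frame.
            public static void SwapRegions(byte[] bgra, int frameWidth, ColorSpacePoint a, ColorSpacePoint b, int size)
            {
                int ax = (int)a.X - size / 2, ay = (int)a.Y - size / 2;
                int bx = (int)b.X - size / 2, by = (int)b.Y - size / 2;

                for (int row = 0; row < size; row++)
                {
                    for (int col = 0; col < size; col++)
                    {
                        int ia = ((ay + row) * frameWidth + (ax + col)) * 4;   // 4 bytes per BGRA pixel
                        int ib = ((by + row) * frameWidth + (bx + col)) * 4;
                        for (int c = 0; c < 4; c++)
                        {
                            byte tmp = bgra[ia + c];
                            bgra[ia + c] = bgra[ib + c];
                            bgra[ib + c] = tmp;
                        }
                    }
                }
            }
        }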

    Above all, Adam encourages other developers to dive in with the Kinect for Windows v2 sensor and SDK—to load the samples and play around with the capabilities. He adds that the forums are a great source of inspiration as well as information, and he advises developers “to take a look at what other people are doing and see if you can do something different or better—or both!”

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Holiday shoppers got the Midas touch


    Ever wonder what you’d look like drenched in gold? December shoppers in Manhattan were captivated by just such images when they paused before an innovative window display for the new men’s cologne Gold Jay Z at Macy’s flagship store in Herald Square. This engaging display, the creation of advertising agency kbs+ and interactive design firm Future Colossal, employed Kinect for Windows to capture images of window shoppers and flow liquid gold over their silhouettes.

    Window shoppers found it hard to resist creating a gold-clad avatar.

    The experience began when the Kinect for Windows sensor detected that a passer-by had engaged with the display, which showed liquid gold rippling and flowing across a high-resolution screen. The Kinect for Windows sensor then captured a 3D image of the shopper, which artfully emerged from the pool of flowing gold to appear as a silhouette draped in the precious metal. This golden avatar interactively followed the window shopper’s movements, creating a beautiful, sinuous tableau that pulled the passer-by into an immersive experience with the fragrance brand. Kinect for Windows also provided the shopper with a photo of his or her golden doppelganger and a hashtag for sharing it via social media.
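
    The installation ran on the original Kinect for Windows, and its code isn’t public, but the silhouette effect it describes maps naturally onto the sensor’s body segmentation. As a rough sketch with the newer v2 SDK preview (not the agencies’ implementation), the body-index frame labels every depth pixel that belongs to a person, which yields exactly the kind of mask a liquid-gold shader would need.

        // Illustrative sketch: turn a body-index frame (as copied out with
        // BodyIndexFrame.CopyFrameDataToArray) into a gold-tinted BGRA silhouette mask.
        // Not the Macy's installation's code.
        static class SilhouetteSketch
        {
            // Fills a BGRA output buffer: gold where a body is detected, transparent elsewhere.
            public static void RenderGoldMask(byte[] bodyIndexPixels, byte[] bgraOut)
            {
                for (int i = 0; i < bodyIndexPixels.Length; i++)
                {
                    bool isBody = bodyIndexPixels[i] != 255;   // 255 means "no body" at this pixel
                    int o = i * 4;
                    bgraOut[o + 0] = isBody ? (byte)32 : (byte)0;    // B
                    bgraOut[o + 1] = isBody ? (byte)165 : (byte)0;   // G
                    bgraOut[o + 2] = isBody ? (byte)218 : (byte)0;   // R (a gold-ish tint)
                    bgraOut[o + 3] = isBody ? (byte)255 : (byte)0;   // A
                }
            }
        }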

    Kinect for Windows Team


  • Kinect for Windows Product Blog

    An MVP’s look at the Kinect for Windows v2 developer preview


    A few months ago, Microsoft Most Valuable Professional (MVP) James Ashley, a leader in developing with Kinect for Windows, wrote a very perceptive blog about Kinect for Windows v2 entitled Kinect for Windows v2 First Look. James’ blog was so insightful that we wanted to check in with him three months into the Developer Preview program to learn more about his experiences with the preview sensor and his advice to fellow Kinect for Windows developers. Here’s our Q&A with James:

    Microsoft: As a participant in the developer preview program, what cool things have you been doing with the Kinect for Windows v2 sensor and SDK over the past few months? Which features have you used, and what did you do with them?

    James: My advanced technology group at Razorfish has been very interested in developing mixed-media and mixed-technology stories with the Kinect for Windows v2 sensor. We recently did a proof-of-concept digital store with the Windows 8 team for the National Retail Federation (aka “Retail’s BIG Show”) in New York. You've heard of pop-up stores? We took this a step further by pre-loading a shipping container with digital screens, high-lumen projectors, massive arrays of Microsoft Surface tablets, and Perceptive Pixel displays and having a tractor-trailer deposit it in the Javits Center in New York City. When you opened the container, you had an instant retail store. We used the Kinect for Windows v2 sensor and SDK to drive an interactive soccer game built in Unity’s 3D toolset, in which 3D soccer avatars were controlled by the player's full body movements: when you won a game, a signal was sent by using Arduino components to drop a drink from a vending machine.

    Watch the teaser for Razorfish's interactive soccer game

    We also used Kinect for Windows v2 to allow people to take pictures with digital items they designed on the Perceptive Pixel. We then dropped a beach scene they selected into the background of the picture, which was printed out on the spot as well as emailed and pushed to their social networks if they wanted. In creating this experience, the new time-of-flight depth camera in Kinect for Windows v2 proved to be leagues better than anything we were able to do with the original Kinect for Windows sensor; we were thrilled with how well it worked. [Editor’s note: You can learn more about these retail applications in this blog post.]

    Much closer to the hardware, we have also been working with a client on using Kinect for Windows v2 to do precise measurements, to see if the Kinect for Windows v2 sensor can be used in retail to help people get fitted precisely—for instance with clothing and other wearables. Kinect for Windows v2 promises accuracy of 2.5 cm even at 4 meters, so this is totally feasible and could transform how we shop.
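
    Because the v2 SDK reports joint positions in camera space in meters, this kind of fitting measurement reduces to plain geometry. As a rough illustration (not the client project James describes), the distance between two joints can be read off directly:

        // Illustrative only; assumes the Kinect for Windows v2 SDK preview API.
        using System;
        using Microsoft.Kinect;

        static class FittingSketch
        {
            // CameraSpacePoint coordinates are in meters, so distances come out metric.
            public static double DistanceMeters(CameraSpacePoint p, CameraSpacePoint q)
            {
                double dx = p.X - q.X, dy = p.Y - q.Y, dz = p.Z - q.Z;
                return Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }

            // Example: approximate shoulder width of a tracked body, in centimeters.
            public static double ShoulderWidthCm(Body body)
            {
                CameraSpacePoint left = body.Joints[JointType.ShoulderLeft].Position;
                CameraSpacePoint right = body.Joints[JointType.ShoulderRight].Position;
                return DistanceMeters(left, right) * 100.0;
            }
        }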

    Microsoft: Which features do you find the most useful and/or the most exciting, and why?

    James: Right now, I'm most interested in the depth camera. It has a much higher resolution than some standard time-of-flight cameras currently selling for $8,000 or $9,000. Even though the Kinect for Windows v2 final pricing hasn't been announced yet, we can expect it to be much, much less than that. It's stunning that Microsoft was able to pull off this technical feat, providing both improved quality and improved value in one stroke.

    Microsoft: Have you heard from other developers, and if so, what are they saying about your applications and/or their impressions of Kinect for Windows v2?

    James: I'm on both the MVP list and the developer preview program's internal list, so I've had a chance to hear a lot of really great feedback. Basically, we all had to learn a lot of tricks to make things work the way we wanted with the original Kinect for Windows. With v2, it feels like we are finally getting all the hardware performance we've wanted and then some. Of course, the SDK is still under development and we're obviously still early on with the preview program. People need to be patient.

    Microsoft: Any words of advice or encouragement for other developers about using Kinect for Windows v2?

    James: If you are a C# developer and you haven't made the plunge, now is a good time to start learning Visual C++. All of the powerful interaction and visually intensive things you might want to do are taking advantage of C++ libraries like Cinder, openFrameworks, PCL, and OpenCV. It requires being willing to feel stupid again for about six months, but at the end of that time, you'll be glad you made the effort.

    Our thanks to James for taking time to share his insights and experience with us. And as mentioned at the top of this post, you should definitely read James’ Kinect for Windows v2 First Look blog.

    Kinect for Windows Team


  • Kinect for Windows Product Blog

    Course simplifies creation of WPF applications


    KinectInteraction is a set of features, first introduced in Developer Toolkit 1.7, which allows Kinect-enabled applications to incorporate gesture-based interactivity. Developers can use KinectInteraction to create Windows Presentation Foundation (WPF) applications in which the movement of the user’s hand controls an on-screen hand, much like the movement of a mouse controls an on-screen cursor.
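
    The mapping at the heart of that cursor model is simple to sketch. The course covers KinectInteraction’s own controls; the snippet below only illustrates the underlying mapping idea in plain WPF code, assuming a hand position already expressed as a 0-to-1 fraction of the interaction area (in a real app that value would come from the toolkit’s hand pointer).

        // A deliberately toolkit-agnostic sketch of the hand-cursor mapping.
        using System.Windows;
        using System.Windows.Controls;

        static class HandCursorSketch
        {
            // normalizedX/Y are in [0, 1]; returns canvas coordinates for the hand cursor.
            public static Point ToCanvas(double normalizedX, double normalizedY, Canvas canvas)
            {
                return new Point(normalizedX * canvas.ActualWidth,
                                 normalizedY * canvas.ActualHeight);
            }

            // Center a cursor element (for example, an Image of a hand) on that point.
            public static void MoveCursor(FrameworkElement cursor, Point p)
            {
                Canvas.SetLeft(cursor, p.X - cursor.ActualWidth / 2);
                Canvas.SetTop(cursor, p.Y - cursor.ActualHeight / 2);
            }
        }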

    The course teaches how to develop apps that use hand gestures to control an on-screen hand.

    András Velvart, a Kinect for Windows MVP, has created an online video course that provides step-by-step instructions on how to create such WPF applications and teaches students how to customize the look and feel of the controls provided by Microsoft. It even demonstrates how to completely control the interaction model or use KinectInteraction outside of WPF. The course, aptly named “KinectInteraction with WPF and Beyond,” is available through Pluralsight, an online training service for developers and IT professionals.

    Kinect for Windows Team


  • Kinect for Windows Product Blog

    Exploring v2 body imaging capabilities


    In a pair of related blog posts, Zubair Ahmed, a Microsoft Most Valuable Professional nominee and a participant in the Kinect for Windows v2 developer preview program, put his new Kinect for Windows v2 sensor through its paces. In the first post, Zubair demonstrates how to use the body source data captured by the sensor to draw the bones, hands, and joints and overlay them on top of the color frame that comes from the sensor. The post includes the relevant code* and useful tips and tricks.

    Zubair demonstrates the hand color frame received from the Kinect for Windows sensor.

    Zubair’s second post continues his deep dive into the body tracking of the Kinect for Windows v2 sensor. He refines his methods to eliminate a hack he had employed in the original code. In addition, he explains how to merge two body-image color frames and use a single image control to render them. This post not only includes the relevant code* and helpful tips; it also provides a demonstration video.
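
    For readers who want the gist before diving into Zubair’s posts, the overlay step reduces to one mapping call per joint: convert the joint’s camera-space position into color-space pixels, then draw. Here is a compressed sketch, assuming the v2 SDK preview’s Microsoft.Kinect API and a WPF DrawingContext (the bone list is abbreviated; see Zubair’s posts for the full code).

        // Compressed overlay sketch; not Zubair's full listing.
        using System;
        using System.Windows;
        using System.Windows.Media;
        using Microsoft.Kinect;

        static class SkeletonOverlaySketch
        {
            // A few of the bones, as joint pairs (the full v2 skeleton has 25 joints).
            static readonly Tuple<JointType, JointType>[] Bones =
            {
                Tuple.Create(JointType.Head, JointType.Neck),
                Tuple.Create(JointType.Neck, JointType.SpineShoulder),
                Tuple.Create(JointType.SpineShoulder, JointType.SpineMid),
                Tuple.Create(JointType.ShoulderRight, JointType.ElbowRight),
                Tuple.Create(JointType.ElbowRight, JointType.WristRight),
            };

            // Draw the listed bones on top of the color image.
            public static void DrawBones(Body body, CoordinateMapper mapper, DrawingContext dc, Pen pen)
            {
                foreach (var bone in Bones)
                {
                    ColorSpacePoint p0 = mapper.MapCameraPointToColorSpace(body.Joints[bone.Item1].Position);
                    ColorSpacePoint p1 = mapper.MapCameraPointToColorSpace(body.Joints[bone.Item2].Position);
                    dc.DrawLine(pen, new Point(p0.X, p0.Y), new Point(p1.X, p1.Y));
                }
            }
        }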

    Kinect for Windows Team


    ________________________

    *This is preliminary software and/or hardware and APIs are preliminary and subject to change.

  • Kinect for Windows Product Blog

    My, you’ve grown…


    That’s what you might be thinking as you scroll through the posts on this site.  That’s because we’ve merged the Kinect for Windows developer and product blogs. This union creates a “one-stop shop” for news about Kinect for Windows: a single source for learning about cool product applications plus the latest developer information. So, yeah, we’re fatter now—just think of it as more to love!          

    Kinect for Windows Team
