• Kinect for Windows Product Blog

    Inside the Newest Kinect for Windows SDK—Infrared Control

    • 0 Comments

    The Kinect for Windows software development kit (SDK) October release was a pivotal update with a number of key improvements. One important update in this release is how control of infrared (IR) sensing capabilities has been enhanced to create a world of new possibilities for developers.

    IR sensing is a core feature of the Kinect sensor, but until this newest release, developers were somewhat constrained in how they could use it. The front of the Kinect for Windows sensor has three openings, each housing a core piece of technology. On the left is an IR emitter, which projects a factory-calibrated pattern of dots across the room in which the sensor resides. The middle opening is a color camera. The third is the IR camera, which reads the dot pattern and helps the Kinect for Windows system software sense objects and people and compute their skeletal tracking data.

    One key improvement in the SDK is the ability to control the IR emitter with a new API, KinectSensor.ForceInfraredEmitterOff. How is this useful? Previously, the sensor's IR emitter was always active when the sensor was active, which can cause depth detection degradation if multiple sensors are observing the same space. The original focus of the SDK had been on single sensor use, but as soon as innovative multi-sensor solutions began emerging, it became a high priority to enable developers to control the IR emitter. “We have been listening closely to the developer community, and expanded IR functionality has been an important request,” notes Adam Smith, Kinect for Windows principal engineering lead. “This opens up a lot of possibilities for Kinect for Windows solutions, and we plan to continue to build on this for future releases.”
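    To make the multi-sensor scenario concrete, here is a purely illustrative Python sketch (not SDK code; the actual control is the managed API KinectSensor.ForceInfraredEmitterOff) of one way an application might time-multiplex several emitters so that only one sensor projects its dot pattern at any instant:

```python
def emitter_schedule(num_sensors, num_slots):
    """Yield (slot, states), where states[i] is True when sensor i's
    IR emitter should be on during that time slot. Exactly one emitter
    is active per slot, so the calibrated dot patterns never overlap."""
    for slot in range(num_slots):
        active = slot % num_sensors
        yield slot, [i == active for i in range(num_sensors)]

# Example: three sensors sharing a room, six time slots.
schedule = list(emitter_schedule(3, 6))
```

    In a real deployment, each sensor would keep ForceInfraredEmitterOff set to true except during its own slot; depth quality in the shared space then depends on slot timing rather than on overlapping dot patterns.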

    Another useful application is expanded night vision with an external IR lamp (wavelength: 827 nanometers). “You can turn off the IR emitter for pure night vision ("clean IR"),” explains Smith, “or you can leave the emitter on as an illumination source and continue to deliver full skeletal tracking. You could even combine these modes into a dual-mode application, toggling between clean IR and skeletal tracking on demand, depending on the situation. This unlocks a wide range of possibilities—from security and monitoring applications to motion detection, including full gesture control in a dark environment.”

    Finally, developers can use the latest version of the SDK to pair the IR capabilities of the Kinect for Windows sensor with a higher-definition color camera for enhanced green screen capabilities. This enables them to go beyond the default 640x480 color camera resolution without sacrificing frame rate. “To do this, you calibrate your own color camera with the depth sensor by using a tool like OpenCV, and then use the Kinect sensor in concert with additional external cameras or, indeed, additional Kinect sensors,” notes Smith. “The possibilities here are pretty remarkable: you could build a green screen movie studio with full motion tracking and create software that transforms professional actors—or even, say, visitors to a theme park—into nearly anything that you could imagine.”
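    The calibration Smith describes boils down to standard pinhole-camera math. The sketch below (Python with NumPy; the intrinsics K and extrinsics R, t are made-up values standing in for real OpenCV calibration output) shows how a 3-D point from the depth sensor would be projected into the pixel grid of an external HD color camera:

```python
import numpy as np

# Hypothetical calibration results (e.g., from an OpenCV calibration):
# K is the external camera's intrinsic matrix; R, t map points from the
# Kinect depth camera's frame into the external camera's frame.
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                   # cameras assumed parallel in this sketch
t = np.array([0.05, 0.0, 0.0])  # 5 cm baseline along x

def project_depth_point(p_depth):
    """Map a 3-D point (meters, depth-camera frame) to a pixel in the
    external color image using the pinhole model: u ~ K (R p + t)."""
    p_cam = R @ np.asarray(p_depth) + t
    u, v, w = K @ p_cam
    return u / w, v / w
```

    With a mapping like this in hand, each depth pixel (and therefore each player silhouette) can be looked up in the HD color image, which is the heart of a higher-resolution green screen pipeline.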

    Kinect for Windows team


  • Kinect for Windows Product Blog

    Kinect for Windows Helps Girls Everywhere Dress Like Barbie

    • 2 Comments

    I grew up in the UK, and my female cousins all had Barbie dolls. In fact, they had lots of Barbie dolls and a ton of accessories that they were obsessed with. I was more of a BMX kind of kid and thought my days of Barbie education were long behind me, but with a young daughter I’m beginning to realize that I have plenty more Barbie ahead of me, littered around the house like landmines. This time around, though, I’m genuinely interested, thanks to a Kinect-enabled application. The outfits from Barbie The Dream Closet not only scale to fit users but also let them turn sideways to see how they look from various angles.

    This week, Barbie lovers in Sydney, Australia, are being given the chance to do more than fantasize about how they’d look in their favorite Barbie outfit. Thanks to Mattel, Gun Communications, Adapptor, and Kinect for Windows, Barbie The Dream Closet is here.

    The application invites users to take a walk down memory lane and select from 50 years of Barbie fashions. Standing in front of Barbie’s life-sized augmented reality “mirror,” fans can choose from several outfits in her digital wardrobe—virtually trying them on for size.

    The solution, built with the Kinect for Windows SDK and the Kinect for Windows sensor, tracks users’ movements and gestures, enabling them to easily browse through the closet and select outfits that strike their fancy. Once an outfit is selected, Kinect for Windows skeletal tracking determines the position and orientation of the user. The application then rescales Barbie’s clothes and renders them over the user in real time for a custom fit.
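    As a rough illustration of that rescaling step (the joint names, reference width, and units here are assumptions for the example, not the actual application code), the fit can be derived from just two tracked shoulder joints:

```python
# A clothing sprite authored for this shoulder width (pixels) is scaled
# to the shoulder width measured from the skeleton and anchored at the
# mid-shoulder point. Purely illustrative values.
REFERENCE_SHOULDER_PX = 120.0

def fit_outfit(left_shoulder, right_shoulder, sprite_w, sprite_h):
    lx, ly = left_shoulder
    rx, ry = right_shoulder
    shoulder_width = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    scale = shoulder_width / REFERENCE_SHOULDER_PX
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0  # anchor between shoulders
    return {"scale": scale, "w": sprite_w * scale,
            "h": sprite_h * scale, "anchor": (cx, cy)}
```

    Because the scale is recomputed every frame from the live skeleton, the same outfit artwork adapts to a child or an adult without any per-user setup.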

    One of the most interesting aspects of this solution is the technology’s ability to scale: menus, navigation controls, and clothing all adapt dynamically so that everyone from a little girl to a grown woman (and, cough, yes, even a committed father) can enjoy the experience. To make this possible, each outfit was photographed on a Barbie doll, cut into multiple parts, and then rebuilt individually in the application.

    Of course, the experience wouldn’t be complete without the ability to memorialize it. A photo is taken and, with approval/consent from those photographed, is uploaded and displayed in a gallery on the Barbie Australian Facebook page. (Grandparents can join in the fun from afar!)

    I spoke with Sarah Sproule, Director, Gun Communications, about the genesis of the idea. She told me, "We started working on Barbie The Dream Closet six months ago, working with our development partner Adapptor. Everyone has been impressed by the flexibility and innovation Microsoft has poured into Kinect for Windows. Kinect technology has provided Barbie with a rich and exciting initiative that's proving to delight fans of all ages. We're thrilled with the result, as is Mattel - our client."

    Barbie’s Dream Closet opened to the public at Westfield Parramatta in Sydney today and will be there through April 15. On its first day, it drew enthusiastic crowds, with around 100 people experiencing Barbie The Dream Closet, and it's expected to draw even larger crowds over the holidays. It’s set to travel to Melbourne and Brisbane later this year.

    Meanwhile, the Kinect for Windows team is just as excited about it as my daughter:

    “The first time I saw Barbie’s Dream Closet in action, I knew it would strike a chord,” notes Kinect for Windows Communications Manager, Heather Mitchell. “It’s such a playful, creative use of the technology. I remember fantasizing about wearing Barbie’s clothes when I was a little girl. Disco Ken was a huge hit in my household back then…Who didn’t want to match his dance moves with their own life-sized Barbie disco dress? I think tens of thousands of grown girls have been waiting for this experience for years…Feels like a natural.”

    That’s the beauty of Kinect: it enables remarkably natural interactions with technology. Hundreds of companies are out there building amazing things, and we can’t wait to see what they invent next.

    Steve Clayton
    Editor, Next at Microsoft

  • Kinect for Windows Product Blog

    Kinect for Windows at Convergence of Style and Technology for New York Fashion Week

    • 5 Comments

    Kinect for Windows powers a new technology that virtually models the hottest styles in Bloomingdale’s during Fashion Week.

    This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.

    Twenty Bloomingdale's stores across the United States are featuring Swivel this week— including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.

    Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.

    Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences at retail stores: they featured another Kinect for Windows solution last March at their Century City store in Los Angeles, just six weeks after the launch of the technology. That solution, by Bodymetrics, uses shoppers’ body measurements to help them find the best-fitting jeans. The Bodymetrics body mapping technology is currently being used at the Bloomingdale’s store in Palo Alto, California.

    "Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."

    Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is almost instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of yourself dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.
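    A gesture such as the outfit-selecting hand wave can be recognized with very little code. The following Python sketch (the thresholds and frame format are illustrative, not Swivel's actual logic) treats a wave as repeated reversals of the hand's horizontal direction across a short window of skeleton frames:

```python
def is_wave(hand_x_positions, min_reversals=3, min_travel=0.05):
    """Detect a wave from a sequence of hand x-coordinates (meters).
    Small jitters below min_travel are ignored; a wave requires the
    hand to change horizontal direction at least min_reversals times."""
    reversals, last_dir = 0, 0
    for prev, cur in zip(hand_x_positions, hand_x_positions[1:]):
        delta = cur - prev
        if abs(delta) < min_travel:
            continue  # ignore sensor noise and tiny movements
        direction = 1 if delta > 0 else -1
        if last_dir and direction != last_dir:
            reversals += 1
        last_dir = direction
    return reversals >= min_reversals
```

    Production gesture recognizers are more robust (they normalize for body size and filter by hand height relative to the elbow, for example), but the direction-reversal idea is the core of it.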

    The Printing Dress, a remarkable melding of fashion and technology, on display at Bloomingdale's in New York.

    Since Bloomingdale’s piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move to accommodate the camera) and has improved detection of different-sized people, so the application can display more accurately how the garment would look when fitted to the customer.

    Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:

    • One of the participants in the recent Microsoft Accelerator for Kinect program, Styku, LLC, has also developed virtual fitting room software and body scanner technology powered by Kinect for Windows. 
    • Mattel brought to life Barbie: The Dream Closet that makes it possible for anyone to try on clothes from 50 years of Barbie's wardrobe. 
    • Kimetric, another Kinect Accelerator participant, uses Kinect for Windows sensors strategically placed throughout a store to gather useful data, helping a retailer to better understand consumer behavior.

    With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.

    Kinect for Windows Team


  • Kinect for Windows Product Blog

    Build-A-Bear Selects Kinect for Windows for "Store of the Future"

    • 0 Comments

    Build-A-Bear Workshop stores have been delivering custom-tailored experiences to children for 15 years in the form of make-your-own stuffed animals, but the company recently recognized that its target audience was gravitating toward digital devices. So it has begun advancing its in-store experiences to match the preferences of its core customers by incorporating digital screens throughout the stores—from the entrance to the stations where the magic of creating new fluffy friends happens.

    A key part of Build-A-Bear's digital shift is their interactive storefront that's powered by Kinect for Windows. It enables shoppers to play digital games on either a screen adjacent to the store entrance or directly through the storefront window simply by using their bodies and natural gestures to control the game.

    Children pop virtual balloons in a Kinect for Windows-enabled game at this Build-A-Bear store's front window.

    "We're half retail, half theme park," said Build-A-Bear Director of Digital Ventures Brandon Elliott. The Kinect for Windows platform instantly appealed to Build-A-Bear as "a great enabler for personalized interactivity."

    The Kinect for Windows application, launched at six pilot stores, uses skeletal tracking to enable two players (four hands) to pop virtual balloons (up to five balloons simultaneously) by waving their hands or by touching the screen directly. While an increasing number of retail stores use digital signage these days, Elliott noted: "What they're not doing is building a platform for interactive use."
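    The core of such a game is a per-frame hit test between tracked hand positions and balloon positions. This Python sketch is an illustrative stand-in for the store's actual code (coordinates are in screen pixels, and the pop radius is an assumed value; the real app also accepts direct touches on the glass):

```python
def pop_balloons(hands, balloons, pop_radius=40.0):
    """hands: list of (x, y) for up to four tracked hands.
    balloons: list of (x, y) balloon centers, at most five on screen.
    Returns the indices of balloons popped this frame."""
    popped = []
    for i, (bx, by) in enumerate(balloons):
        for hx, hy in hands:
            # Compare squared distances to avoid a square root per pair.
            if (hx - bx) ** 2 + (hy - by) ** 2 <= pop_radius ** 2:
                popped.append(i)
                break  # a balloon pops at most once
    return popped
```

    Running this against the skeleton stream every frame, and respawning popped balloons, is enough for a playable storefront prototype.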

    "We wanted something that we could build on, that's a platform for ever-improving experiences," Elliott said. "With Kinect for Windows, there’s no learning curve. People can interact naturally with technology by simply speaking and gesturing the way they do when communicating with other people. The Kinect for Windows sensor sees and hears them."

    "Right now, we're just using the skeletal tracking, but we could use voice recognition components or transform the kids into on-screen avatars," he added. "The possibilities are endless."
     
    Part of Build-A-Bear's vision is to create Kinect for Windows apps that tie into the seasonal marketing themes that permeate the stores. Elliott said that Build-A-Bear selected the combination of the Microsoft .NET Framework, Kinect for Windows SDK, and Kinect for Windows sensor specifically so that they can take advantage of existing developer platforms to build these new apps quickly.

    “We appreciate that the Kinect for Windows SDK is developing so rapidly. We appreciate the investment Microsoft is making to continue to open up features within the Kinect for Windows sensor to us,” Elliott said. "The combination of Kinect for Windows hardware and software unlocks a world of new UI possibilities for us."

    Microsoft developer and technical architect Todd Van Nurden and others at the Minneapolis-based Microsoft Technology Center helped Build-A-Bear with an early consultation that led to prototyping apps for the project.

    "The main focus of my consult was to look for areas beyond simple screen-tied interactions to create experiences where Kinect for Windows activates the environment. Screen-based interactions are, of course, the easiest but less magical then environmental," Van Nurden said. "We were going for magical."

    The first six Build-A-Bear interactive stores launched in October and November 2012 in St. Louis, Missouri; Pleasanton, California; Annapolis, Maryland; Troy, Michigan; Fairfax, Virginia; and Indianapolis, Indiana (details). Four of the stores have gesture-enhanced interactive signs at the entrance, while two had to be placed behind windows to comply with mall rules. Kinect for Windows can work through glass with the assistance of a capacitive sensor that enables the window to work as a touch screen, and an inductive driver that turns the glass into a speaker.

    So far, Build-A-Bear has been thrilled with what Elliott calls "fantastic" results. "Kids get it," he said. "We have a list of apps we want to build over the next couple of years. We can literally write an app for one computer in the store, and put it anywhere."

    Kinect for Windows team


  • Kinect for Windows Product Blog

    Developing with Kinect for Windows v2 on a Mac

    • 5 Comments

    With the launch of the Kinect for Windows v2 public preview, we want to ensure that developers have access to the SDK so that you can start writing Kinect-based applications. As you may be aware, the Kinect for Windows SDK 2.0 public preview will run only on Windows 8 and Windows 8.1 64-bit systems. If you have a Windows 8 PC that meets the minimum requirements, you’re ready to go.

    For our Macintosh developers, this may be bittersweet news, but we’re here to help. There are two options available for developers who have an Intel-based Mac: (1) install Windows to the Mac’s hard drive, or (2) install Windows to an external USB 3.0 drive. Many Mac users are aware of the first option, but the second is less well known.

    First, you need to ensure that your hardware meets the minimum requirements for Kinect for Windows v2.

    Due to the requirements for full USB 3.0 bandwidth and GPU Shader Model 5 (DirectX 11), virtualization products such as VMware Fusion, Parallels Desktop, and Oracle VirtualBox are not supported. If you’re not sure what hardware you have, you can find out on Apple's support website.


    Installing Windows on the internal hard drive of your Intel-based Macintosh

    We’re going to focus on getting Windows 8.1 installed, since this is typically the stumbling block. (If you need help installing Visual Studio or other applications on Windows, you can find resources online.)

    Apple has provided a great option called Boot Camp. This tool will download the drivers for Windows, set up bootable media for installation, and guide you through the partitioning process. Please refer to Apple’s website for guidance on using this option.


    Alternative to installing Windows on your primary drive

    Boot Camp requires Windows to be installed on your internal hard drive. This might be impractical or impossible for a variety of reasons, including lack of available free space, technical failures during setup, or personal preferences.

    An alternative is to install Windows to an external drive using Windows To Go, a feature of Windows 8 and 8.1 Enterprise. (Learn more about this feature in Windows 8.1 Enterprise.)

    In the “Hardware considerations for Windows To Go” section of Windows To Go: Feature Overview, you can find a list of recommended USB 3.0 drives. These drives have additional security features that you may want to review with your systems administrators to ensure you are in compliance with your company’s security policies.


    Getting started with Windows To Go

    You will need the following to proceed:
    • Existing PC with USB 3.0 that has Windows 8/8.1 Enterprise installed (the “technician computer”)
    • USB 3.0 flash or external hard drive
    • Windows 8/8.1 Enterprise installation media (CD or ISO)
    • Windows 8/8.1 product key

    You will need to log in as an administrator. To start the Windows To Go tool, press Win-Q to open the search pane and enter "Windows To Go":

    press Win-Q to start the search, and enter "Windows To Go"

    Launch the Windows To Go application from the list. From the main application window, you will see a list of the attached drives that you can use with the tool. As shown below, you may be alerted that a USB 3.0 drive is not Windows To Go certified. You can still use such a drive, but understand that it might not work or that performance may suffer. If you are using a non-certified USB 3.0 drive, you will have to do your own testing to ensure it meets your needs. (Note: while not officially supported by Microsoft, we have used the Western Digital My Passport Ultra 500 GB and 1 TB drives at some of our developer hackathons to get people using Macs up and running with our dev tools on Windows.)

    "Choose the drive you want to use" window

    Select the drive you wish to use and click Next. If you have not already done so, insert the Windows 8.1 Enterprise CD at this time. If you have the .ISO, you can double-click the icon or right-click and select Mount to use it as a virtual drive.

    If you do not see an image in the list, click the Add search location button and browse your system to find the DVD drive or mounted CD partition:

    Browse your system to find the DVD drive or mounted CD partition.

    It should now appear in the list, and you can select it and click Next.

    Select your Windows 8.1 image and click "Next."

    If you need or wish to use BitLocker, you can enable it now. We will skip it here.

    "Set a BitLocker password (optional)" screen 

    The confirmation screen summarizes the selections you have made. This is your last chance to ensure that you are using the correct drive: the Windows To Go installation process will reformat the drive, and you will not be able to recover any data currently on it. Once you have confirmed that you are using the correct drive, click Create to continue.

    "Ready to create your Windows To Go workspace" window

    Once the creation step is complete, you are ready to reboot the system. But first, you’ll need to download the drivers necessary for running Windows on Macintosh hardware from the Apple support page, as, by default, Windows setup does not include these drivers.

    I recommend that you create an Extras folder on your drive and copy the files you’ll need. As shown below, I downloaded and extracted the Boot Camp drivers in this folder, since this will be the first thing I’ll need after logging in for the first time.

    Extracting the Boot Camp drivers from the Extras folder I created.

    Disconnect the hard drive from the Windows computer and connect it to your Mac. Be sure that you are using the USB 3.0 connection if you have both USB 2 and USB 3.0 hardware ports. Once the drive is connected, boot or restart your system while holding down the option key. (Learn more about these startup key shortcuts for Intel-based Macs.)

    Connect the hard drive to your Mac and restart your system while holding down the option key.

    During the initial setup, you will be asked to enter your product key, enter some default settings, and create an account. If your system has to reboot at any time, repeat the previous step to ensure that you return to the USB 3.0 workspace. Once you have successfully logged in for the first time, install the Boot Camp driver and any other applications you wish to use. Then you’ll have a fully operational Windows environment you can use for your Kinect for Windows development.

    Carmine Sirignano
    Developer Support Escalation Engineer
    Kinect for Windows


  • Kinect for Windows Product Blog

    BUILDing business with Kinect for Windows v2

    • 7 Comments

    BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.

    Left: A student is choosing a Freak’n Genius character to animate in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health can track a patient performing physical therapy exercises at home and give her immediate feedback on her execution while also transmitting the results to her therapist.

    Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

    But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head, taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.

    What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.

    Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.

    While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be to the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of their recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling  to perform the exercises correctly at home—or simply skipping them altogether.

    Reflexion Health, based in San Diego, uses Kinect for Windows to augment their physical therapy program and give the therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Their application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allows the software to capture the patient’s exercise moves in far more precise detail.  It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way.  The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.
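    The comparison step can be pictured as simple joint-angle math. In this illustrative Python sketch (the joints, target range, and coordinates are invented for the example, not taken from Vera), the angle at a joint is computed from three tracked positions and checked against a prescribed range:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    with joints given as 2-D positions."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def in_prescribed_range(angle, low=80.0, high=100.0):
    """Did the patient hit the prescribed flexion range for this rep?"""
    return low <= angle <= high
```

    Evaluating angles like this per frame, across the richer joint set the v2 sensor reports, is what lets an application flag a shallow squat or an incomplete arm raise the moment it happens.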

    Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having the Kinect for Windows v2 supported in the Windows Store stands to open up home distribution for Reflexion Health. 

    Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.”  As a developer, he finds the improved skeletal tracking the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.”  And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited for the new color camera and depth sensor, which together provide a much better image for the physical therapist to review.  “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.

    But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor.  Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”

    From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    An MVP’s look at the Kinect for Windows v2 developer preview

    • 0 Comments

    A few months ago, Microsoft Most Valuable Professional (MVP) James Ashley, a leader in developing with Kinect for Windows, wrote a very perceptive blog post about Kinect for Windows v2 entitled Kinect for Windows v2 First Look. James’ post was so insightful that we wanted to check in with him after his first three months in the Developer Preview program and learn more about his experiences with the preview sensor and his advice to fellow Kinect for Windows developers. Here’s our Q&A with James:

    Microsoft: As a participant in the developer preview program, what cool things have you been doing with the Kinect for Windows v2 sensor and SDK over the past few months? Which features have you used, and what did you do with them?

    James: My advanced technology group at Razorfish has been very interested in developing mixed-media and mixed-technology stories with the Kinect for Windows v2 sensor. We recently did a proof-of-concept digital store with the Windows 8 team for the National Retail Federation (aka “Retail’s BIG Show”) in New York. You've heard of pop-up stores? We took this a step further by pre-loading a shipping container with digital screens, high-lumen projectors, massive arrays of Microsoft Surface tablets, and Perceptive Pixel displays and having a tractor-trailer deposit it in the Javits Center in New York City. When you opened the container, you had an instant retail store. We used the Kinect for Windows v2 sensor and SDK to drive an interactive soccer game built in Unity’s 3D toolset, in which 3D soccer avatars were controlled by the player's full body movements: when you won a game, a signal was sent by using Arduino components to drop a drink from a vending machine.

    Watch the teaser for Razorfish's interactive soccer game

    We also used Kinect for Windows v2 to allow people to take pictures with digital items they designed on the Perceptive Pixel. We then dropped a beach scene they selected into the background of the picture, which was printed out on the spot as well as emailed and pushed to their social networks if they wanted. In creating this experience, the new time-of-flight depth camera in Kinect for Windows v2 proved to be leagues better than anything we were able to do with the original Kinect for Windows sensor; we were thrilled with how well it worked. [Editor’s note: You can learn more about these retail applications in this blog post.]

    Much closer to the hardware, we have also been working with a client on using Kinect for Windows v2 to do precise measurements, to see if the Kinect for Windows v2 sensor can be used in retail to help people get fitted precisely—for instance with clothing and other wearables. Kinect for Windows v2 promises accuracy of 2.5 cm at even 4 meters, so this is totally feasible and could transform how we shop.
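    The fitting scenario James describes rests on simple geometry: once the camera's calibration is known, each depth pixel can be projected into 3D camera space, and real-world distances can be measured between the resulting points. Here is a minimal sketch of the idea in Python, using a pinhole-camera model with invented intrinsics (actual values come from the sensor's calibration; this is not Kinect SDK code):

```python
import math

# Hypothetical camera intrinsics; real values come from sensor calibration.
FX, FY = 366.0, 366.0   # focal lengths, in pixels
CX, CY = 256.0, 212.0   # principal point, in pixels

def depth_to_camera_space(u, v, depth_m):
    """Project a depth pixel (u, v) with depth in meters into 3D camera space."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def distance(p, q):
    """Euclidean distance between two 3D points, in meters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Example: estimate shoulder width from two depth pixels observed at 2 meters.
left = depth_to_camera_space(200, 212, 2.0)
right = depth_to_camera_space(290, 212, 2.0)
width = distance(left, right)   # about 0.49 m
```

    The quoted 2.5 cm depth accuracy at 4 meters feeds directly into the error bound of such a measurement, which is what makes precise fitting plausible at all.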

    Microsoft: Which features do you find the most useful and/or the most exciting, and why?

    James: Right now, I'm most interested in the depth camera. It has a much higher resolution than some standard time-of-flight cameras currently selling for $8,000 or $9,000. Even though the Kinect for Windows v2 final pricing hasn't been announced yet, we can expect it to be much, much less than that. It's stunning that Microsoft was able to pull off this technical feat, providing both improved quality and improved value in one stroke.

    Microsoft: Have you heard from other developers, and if so, what are they saying about your applications and/or their impressions of Kinect for Windows v2?

    James: I'm on both the MVP list and the developer preview program's internal list, so I've had a chance to hear a lot of really great feedback. Basically, we all had to learn a lot of tricks to make things work the way we wanted with the original Kinect for Windows. With v2, it feels like we are finally getting all the hardware performance we've wanted and then some. Of course, the SDK is still under development and we're obviously still early on with the preview program. People need to be patient.

    Microsoft: Any words of advice or encouragement for other developers about using Kinect for Windows v2?

    James: If you are a C# developer and you haven't made the plunge, now is a good time to start learning Visual C++. All of the powerful interaction and visually intensive things you might want to do are taking advantage of C++ libraries like Cinder, openFrameworks, PCL, and OpenCV. It requires being willing to feel stupid again for about six months, but at the end of that time, you'll be glad you made the effort.

    Our thanks to James for taking time to share his insights and experience with us. And as mentioned at the top of this post, you should definitely read James’ Kinect for Windows v2 First Look blog.

    Kinect for Windows Team

    Key links


    Monsters Come to Life with Kinect for Windows


    A demon dog robot under construction

    It all started with a couple of kids and a remarkable idea, which eventually spawned two terrifying demon dogs and their master. This concept is transforming the haunt industry and could eventually change how theme parks and other entertainment businesses approach animated mechanical electronics (animatronics).
     
    Here's the behind-the-scenes story of how this all came to be:

    The boys, 6-year-old Mark and 10-year-old Jack, fell in love with Travel Channel's Making Monsters, a TV program that chronicles the creation of lifelike animatronic creatures. After seeing their dad's work with Kinect for Windows at the Minneapolis-based Microsoft Technology Center, they connected the dots and dreamed up the concept: wouldn't it be awesome if Dad could use his expertise with the Kinect for Windows motion sensor to make better and scarier monsters?

    So “Dad”—Microsoft developer and technical architect Todd Van Nurden—sent an email to Distortions Unlimited in Greeley, Colorado, offering praise of their work sculpting monsters out of clay and adjustable metal armatures. He also threw in his boys' suggestion on how they might take things to the next level with Kinect for Windows: Imagine how much cooler and more realistic these monsters could be if they had the ability to see you, hear you, anticipate your behavior, and respond to it. Imagine what it means to this industry now that monster makers can take advantage of the Kinect for Windows gesture and voice capabilities.

    Two months passed. Then one day, Todd received a voice mail message from Distortions CEO Ed Edmunds expressing interest. The result: nine months of off-and-on work, culminating with the debut of a Making Monsters episode detailing the project on Travel Channel earlier today, October 21 (check local listings for show times, including repeat airings). The full demonic installation can also be experienced firsthand at The 13th Floor haunted house in Denver, Colorado, now through November 10.

    To get things started, Distortions sent Van Nurden maquettes—scale models about one-quarter of the final size—so he could build prototypes of two demon dogs and their demon master. Van Nurden worked with Parker, a company that specializes in robotics, to develop movement using random path manipulation, which is more fluid than a typical robot’s motion and is reactive and only loosely scripted. The maquettes were wired to Kinect for Windows with skeletal tracking, audio tracking, and voice control functionality as a proof of concept to suggest a menu of possible options.

    Distortions was impressed. "Ed saw everything it could do and said, 'I want all of them. We need to blow this out’," recalled Van Nurden.


    Todd Van Nurden prepares to install the Kinect for Windows sensor in the demon's belt

    The full-sized dogs are four feet high, while the demon master stands nearly 14 feet. A Kinect for Windows sensor connected to a ruggedized Lenovo M92 workstation is embedded in the demon's belt and, after interpreting tracking data, sends commands to control itself and the dogs via wired Ethernet. Custom software, built by using the Kinect for Windows SDK, provides the operators with a drag-and-drop interface for laying out character placement and other configurable settings. It also provides a top-down view for the attraction's operator, displaying where the guests are and how the creatures are tracking them.

    "We used a less common approach to processing the data as we leveraged the Reactive Extensions for .NET to basically set up push-based Linq subscriptions," Van Nurden revealed. "The drag-and-drop features enable the operator to control the place-space configuration, as well as when certain behaviors begin. We used most of the Kinect for Windows SDK managed API with the exception of raw depth data."
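    The push-based subscription pattern Van Nurden mentions can be illustrated without the .NET stack. The toy Python sketch below mimics the shape of an Rx-style observable: skeleton frames are pushed into a stream, and each subscriber receives only the frames that pass its filter. The frame fields and threshold are invented for illustration:

```python
class FrameStream:
    """Toy push-based stream: subscribers get each frame that passes their filter."""
    def __init__(self):
        self._subscriptions = []

    def subscribe(self, callback, predicate=lambda frame: True):
        self._subscriptions.append((predicate, callback))

    def push(self, frame):
        for predicate, callback in self._subscriptions:
            if predicate(frame):
                callback(frame)

# Example: only react to skeletons that come within 2 meters of the sensor.
stream = FrameStream()
near_frames = []
stream.subscribe(near_frames.append, predicate=lambda f: f["distance_m"] < 2.0)

stream.push({"id": 1, "distance_m": 3.5})  # filtered out: too far away
stream.push({"id": 2, "distance_m": 1.4})  # delivered to the subscriber
```

    The real implementation composes Rx.NET operators over Kinect frame events rather than hand-rolling the stream, but the push-based shape of the logic is the same.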

    The dogs are programmed to react very differently if approached by an adult (which might elicit a bark or growl) versus a child (which could prompt a fast pant or soft whimper). Scratching behind a hound's ears provokes a "happy dog" response—assuming you can overcome your fear and get close enough to actually touch one! Each action or mood includes its own set of kinesthetic actions and vocal cues. The sensor quietly tracks groups of people, alternating between a loose tracking algorithm that can calculate relative height quickly when figures are further away and full skeletal tracking when someone approaches a dog or demon, requiring more detailed data to drive the beasts' reactions.
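    That two-stage strategy (loose height estimation at a distance, full skeletal tracking up close, with different reactions for adults and children) comes down to a couple of threshold decisions per tracked person. A hedged sketch in Python; the cutoffs are invented and are not Distortions' actual tuning:

```python
ADULT_HEIGHT_M = 1.4     # hypothetical child/adult cutoff
NEAR_DISTANCE_M = 2.5    # hypothetical distance at which full tracking kicks in

def choose_tracking_mode(distance_m):
    """Far away: cheap height-only tracking. Close: full skeletal tracking."""
    return "skeletal" if distance_m <= NEAR_DISTANCE_M else "height"

def dog_reaction(height_m):
    """Bark or growl at adults; pant or whimper softly at children."""
    return "growl" if height_m >= ADULT_HEIGHT_M else "whimper"
```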

    The end product was so delightfully scary that Van Nurden had to reassure his own sons when they were faced with a life-sized working model of one of the dogs. "I programmed him, he's not going to hurt you," he comforted them.

    Fortunately, it is possible to become the demons' master. If you perform a secret voice and movement sequence, they will actually bow to you.
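    Recognizing a secret voice-and-movement sequence amounts to matching an ordered series of recognized events. A toy Python sketch of such a detector follows; the event names are invented, since the actual sequence is, naturally, not public:

```python
SECRET = ["raise_arms", "say_kneel", "lower_arms"]  # invented, for illustration

class SequenceDetector:
    """Advances through an expected event sequence; resets on any wrong event."""
    def __init__(self, expected):
        self.expected = expected
        self.position = 0

    def observe(self, event):
        """Feed one recognized event; returns True when the full sequence matches."""
        if event == self.expected[self.position]:
            self.position += 1
            if self.position == len(self.expected):
                self.position = 0
                return True   # sequence complete: the demons bow
        else:
            # Restart, allowing this event to count as a fresh first step.
            self.position = 1 if event == self.expected[0] else 0
        return False

detector = SequenceDetector(SECRET)
results = [detector.observe(e)
           for e in ["wave", "raise_arms", "say_kneel", "lower_arms"]]
```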

    Lisa Tanzer, executive producer for Making Monsters, has been following creature creation for two years while shooting the show at Distortions Unlimited. She was impressed by how much more effective the Kinect for Windows interactivity is than the traditional looped audio and fully scripted movements of regular animatronics: "Making the monsters themselves is the same process—you take clay, sculpt it over an armature, mold it, paint it, all the same steps," she said. "The thing that made this project that Distortions did for 13th Floor so incredible and fascinating was the Kinect for Windows technology.”

    "It can be really scary," Tanzer reported. "The dogs and demon creature key into people and actually track them around the room. The dog turns, looks at you and whimpers; you go 'Oh, wow, is this thing going to get me?' It's just like a human actor latching on to somebody in a haunted house but there's no human, only this incredible technology.”

    "Incorporating Kinect for Windows into monster making is very new to the haunt industry," she added. "In terms of the entertainment industry, it's a huge deal. I think it's a really cool illustration of where things are going."

    Kinect for Windows team


    Key Links


    nsquared releases three new Kinect for Windows-based applications


    The following blog post was guest authored by Celeste Humphrey, business development consultant at nsquared, and Dr. Neil Roodyn, director of nsquared.

    A company that is passionate about learning, technology, and creating awesome user experiences, nsquared has developed three new applications that take advantage of Kinect for Windows to provide users with interactive, natural user interface experiences. nsquared is located in Sydney, Australia.


    At nsquared, we believe that vision-based interaction is the future of computing. The excitement we see in the technology industry regarding touch and tablet computing is a harbinger of the changes that are coming as smarter computer vision systems evolve.

    Kinect for Windows has provided us with the tools to create some truly amazing products for education, hospitality, and events.

    Education: nsquared sky spelling

    We are excited to announce nsquared sky spelling, our first Kinect for Windows-based educational game. This new application, aimed at children aged 4 to 12, makes it fun for children to learn to spell in an interactive and collaborative environment. Each child selects a character or vehicle, such as a dragon, a biplane, or a butterfly, and then flies as that character through the sky to capture letters that complete the spelling of various words. The skeleton recognition capabilities of the Kinect for Windows sensor and software development kit (SDK) track the movement of the children as they stretch out their arms as wings to navigate their character through hoops alongside their wingman (another player). The color camera in the Kinect for Windows sensor allows each child to add their photo, thereby personalizing their experience.
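    Steering a character with outstretched "wings" typically reduces to the roll angle implied by the two hand positions that skeletal tracking reports. A small sketch of the idea in Python (coordinates are illustrative; this is not nsquared's implementation):

```python
import math

def wing_roll_degrees(left_hand, right_hand):
    """Roll angle implied by two hand positions given as (x, y) in meters.
    Positive means banking right (the right hand is lower than the left)."""
    dx = right_hand[0] - left_hand[0]
    dy = left_hand[1] - right_hand[1]
    return math.degrees(math.atan2(dy, dx))

# Example: right hand 20 cm lower than the left across a 1 m wingspan.
roll = wing_roll_degrees(left_hand=(-0.5, 1.3), right_hand=(0.5, 1.1))
```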

    nsquared sky spelling

    Hospitality: nsquared hotel kiosk

    The nsquared hotel kiosk augments the concierge function in a hotel by providing guidance to hotel guests through an intuitive, interactive experience. Guests can browse through images and videos of activities, explore locations on a map, and find out what's happening with a live event calendar. It also provides live weather updates and has customizable themes. The nsquared hotel kiosk uses the new gestures supported in the Kinect for Windows SDK 1.7, enabling users to use a “grip” gesture to drag content across the screen and a “push” gesture to select content. With its fun user interface, this informative kiosk provides guests an interactive alternative to the old brochure rack.

    Kinect for Windows technology enables nsquared to provide an interactive kiosk experience for less than half the price of a similar-sized touchscreen (see note).

    nsquared hotel kiosk

    Events: nsquared media viewer

    The new nsquared media viewer application is a great way to explore interactive content in almost any environment. Designed for building lobbies, experience centers, events, and corporate locations, the nsquared media viewer enables you to display images and video by category in a stylish, customizable carousel. Easy to use, anyone can walk up and start browsing in seconds.

    In addition to taking advantage of key features of the Kinect for Windows sensor and SDK, nsquared media viewer utilizes Windows Azure, allowing clients to view reports about the usage of the screen and the content displayed.

    nsquared media viewer

    Kinect for Windows technology has made it possible for nsquared to create applications that allow people to interact with content in amazing new ways, helping us take a step towards our collective future of richer vision-based computing systems.

    Celeste Humphrey, business development consultant, and
    Dr. Neil Roodyn, director, nsquared

    Key links

     

    ____________
    Note: Based on the price of 65-inch touch overlay at approximately US$900 compared to the cost of a Kinect for Windows sensor at approximately US$250. For integrated touch solutions, the price can be far higher.


    Inside the Kinect for Windows SDK Update with Peter Zatloukal and Bob Heddle


    Now that the updated Kinect for Windows SDK is available for download, Engineering Manager Peter Zatloukal and Group Program Manager Bob Heddle sat down to discuss what this significant update means to developers.

    Bob Heddle demonstrates the new infrared functionality in the Kinect for Windows SDK.

    Why should developers care about this update to the Kinect for Windows Software Development Kit (SDK)?

    Bob: Because they can do more stuff and then deploy that stuff on multiple operating systems!

    Peter: In general, developers will like the Kinect for Windows SDK because it gives them what I believe is the best tool out there for building applications with gesture and voice.

    In the SDK update, you can do more things than you could before, there’s more documentation, plus there’s a specific sample called Basic Interactions that’s a follow-on to our Human Interface Guidelines (HIG). Human Interface Guidelines are a big investment of ours, and will continue to be. First we gave businesses and developers the HIG in May, and now we have this first sample, demonstrating an implementation of the HIG. With it, the Physical Interaction Zone (PhIZ) is exposed. The PhIZ is a component that maps a motion range to the screen size, allowing users to comfortably control the cursor on the screen.
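    The PhIZ idea, mapping a comfortable range of physical motion onto the full screen, can be sketched as a normalization plus clamping step. A minimal Python illustration follows; the zone dimensions are assumptions, not the SDK's actual values:

```python
def phiz_to_screen(hand_x, hand_y, zone, screen_w, screen_h):
    """Map a hand position inside a physical interaction zone to screen pixels.

    zone is (left, top, width, height) in meters, in sensor coordinates.
    Positions are clamped so the cursor never leaves the screen.
    """
    left, top, width, height = zone
    nx = min(max((hand_x - left) / width, 0.0), 1.0)
    ny = min(max((hand_y - top) / height, 0.0), 1.0)
    return (round(nx * (screen_w - 1)), round(ny * (screen_h - 1)))

# Example: a 40 cm x 30 cm zone mapped onto a 1920x1080 display.
ZONE = (-0.2, 0.1, 0.4, 0.3)   # hypothetical zone, relative to the shoulder
pos = phiz_to_screen(0.0, 0.25, ZONE, 1920, 1080)   # center of the zone
```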

    This sample is a bit hidden in the toolkit browser, but everyone should check it out. It embodies best practices that we described in the HIG and can be repurposed by developers easily and quickly.

    Bob: First we had the HIG, now we have this first sample. And it’s only going to get better. There will be more to come in the future.

    Why upgrade?

    Bob: There’s no downside to upgrading, so everyone should do it today! There are no breaking changes; it’s fully compatible with previous releases of the SDK, it gives you better operating support reach, there are a lot of new features, and it supports distribution in more countries with localized setup and license agreements. And, of course, China is now part of the equation.

    Peter: There are four basic reasons to use the Kinect for Windows SDK and to upgrade to the most recent version:

    • More sensor data are exposed in this release.
    • It’s easier to use than ever (more samples, more documentation).
    • There’s more operating system and tool support (including Windows 8, virtual machine support, Microsoft Visual Studio 2012, and Microsoft .NET Framework 4.5).
    • It supports distribution in more geographical locations. 

    What are your top three favorite features in the latest release of the SDK and why?

    Peter: If I must limit myself to three, then I’d say the HIG sample (Basic Interactions) is probably my favorite new thing. Secondly, there’s so much more documentation for developers. And last but not least…infrared! I’ve been dying for infrared since the beginning. What do you expect? I’m a developer. Now I can see in the dark!

    Bob: My three would be extended-range depth data, color camera settings, and Windows 8 support. Why wouldn’t you want to have the ability to develop for Windows 8? And by giving access to the depth data, we’re giving developers the ability to see beyond 4 meters. Sure, the data out at that range isn’t always pretty, but we’ve taken the guardrails off—we’re letting you go off-roading. Go for it!
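    Taking the guardrails off means applications decide for themselves how to treat readings past the default range. One possible way to flag them, sketched in Python (the 0.8 to 4.0 meter window matches the sensor's documented default depth range; labeling everything beyond it as low-confidence is an application choice, not SDK behavior):

```python
MIN_RELIABLE_M = 0.8   # default near limit of the depth range
MAX_RELIABLE_M = 4.0   # default far limit; beyond this is "off-roading"

def classify_depth(depth_m):
    """Label one depth reading by how much it should be trusted."""
    if depth_m <= 0:
        return None          # invalid reading (no depth returned)
    if depth_m < MIN_RELIABLE_M:
        return "too_near"    # closer than the default range
    if depth_m <= MAX_RELIABLE_M:
        return "reliable"    # inside the default operating range
    return "extended"        # usable but noisy data beyond 4 meters

readings = [0.0, 0.5, 1.5, 3.9, 6.2]
labels = [classify_depth(d) for d in readings]
```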

    New extended-range depth data now provides details beyond 4 meters. These images show the difference between depth data gathered from previous SDKs (left) versus the updated SDK (right).

    Peter: Oh yeah, and regarding camera settings, in case it isn’t obvious: this is for those people who want to tune their apps specifically to known environments.

    What's it like working together?

    Peter: Bob is one of the most technically capable program managers (PMs) I have had the privilege of working with.

    Bob: We have worked together for so long—over a decade and in three different companies—so there is a natural trust in each other and our abilities. When you are lucky to have that, you don’t have to spend energy and time figuring out how to work together. Instead, you can focus on getting things done. This leaves us more time to really think about the customer rather than the division of labor.

    Peter: My team is organized by area of technical affinity. I have developers focused on:

    • SDK runtime
    • Computer vision/machine learning
    • Drivers and low-level subsystems
    • Audio
    • Samples and tools

    Bob: We have a unique approach to the way we organize our teams: I take a very scenario-driven approach, while Peter takes a technically focused approach. My team is organized into PMs who look holistically across what end users need, versus what commercial customers need, versus what developers need.

    Peter: We organize this way intentionally and we believe it’s a best practice that allows us to iterate quickly and successfully!

    What was the process you and your teams went through to determine what this SDK release would include, and who is this SDK for?

    Bob: This SDK is for every Kinect for Windows developer and anyone who wants to develop with voice and gesture. Seriously, if you’re already using a previous version, there is really no reason not to upgrade. You might have noticed that we gave developers a first version of the SDK in February, then a significant update in May, and now this release. We have designed Kinect for Windows around rapid updates to the SDK; as we roll out new functionality, we test our backwards compatibility very thoroughly, and we ensure no breaking changes.

    We are wholeheartedly dedicated to Kinect for Windows. And we’re invested in continuing to release updated iterations of the SDK rapidly for our business and developer customers. I hope the community recognizes that we’re making the SDK easier and easier to use over time and are really listening to their feedback.

    Peter Zatloukal, Engineering Manager
    Bob Heddle, Group Program Manager
    Kinect for Windows

    Related Links
