Kinect for Windows Product Blog

    Kinect for Windows at Convergence of Style and Technology for New York Fashion Week


    Kinect for Windows powers a new technology that virtually models the hottest styles in Bloomingdale's during Fashion Week.

    This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.

    Twenty Bloomingdale's stores across the United States are featuring Swivel this week—including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.

    Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.

    Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences in its retail stores: it featured another Kinect for Windows solution last March at its Century City store in Los Angeles, just six weeks after the launch of the technology. That solution, by Bodymetrics, uses shoppers' body measurements to help them find the best-fitting jeans. The Bodymetrics body-mapping technology is currently in use at the Bloomingdale's store in Palo Alto, California.

    "Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."

    Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is almost instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of yourself dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.
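
    For developers curious what that pipeline looks like in code, here is a minimal sketch of how an application can consume the same skeleton and depth streams by using the Kinect for Windows SDK in C#. This is our illustration, not FaceCake's implementation, and FitGarmentToSkeleton is a hypothetical placeholder for the sizing and rendering logic.

        // Minimal sketch: reading the skeleton stream that could drive a
        // virtual dressing room. FitGarmentToSkeleton is hypothetical.
        using System.Linq;
        using Microsoft.Kinect;

        public class DressingRoom
        {
            private KinectSensor sensor;

            public void Start()
            {
                this.sensor = KinectSensor.KinectSensors
                    .First(s => s.Status == KinectStatus.Connected);
                this.sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                this.sensor.SkeletonStream.Enable();
                this.sensor.SkeletonFrameReady += this.OnSkeletonFrameReady;
                this.sensor.Start();
            }

            private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {
                using (SkeletonFrame frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;

                    var skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);

                    // Fit the garment to the first fully tracked player.
                    Skeleton player = skeletons.FirstOrDefault(
                        s => s.TrackingState == SkeletonTrackingState.Tracked);
                    if (player != null)
                    {
                        this.FitGarmentToSkeleton(player);
                    }
                }
            }

            private void FitGarmentToSkeleton(Skeleton player)
            {
                // Hypothetical: scale and position garment imagery using joint data.
            }
        }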

    The Printing Dress, a remarkable melding of fashion and technology, on display at Bloomingdale's in New York.

    Since Bloomingdale's piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move for the camera) and has improved detection of people of different sizes, so the display more accurately shows how a garment would look fitted to the customer.

    Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:

    • One of the participants in the recent Microsoft Accelerator for Kinect program, Styku, LLC, has also developed virtual fitting room software and body scanner technology powered by Kinect for Windows. 
    • Mattel brought to life Barbie: The Dream Closet, which makes it possible for anyone to try on clothes from 50 years of Barbie's wardrobe. 
    • Kimetric, another Kinect Accelerator participant, uses Kinect for Windows sensors strategically placed throughout a store to gather useful data, helping a retailer to better understand consumer behavior.

    With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.

    Kinect for Windows Team


    Kinect for Windows Academic Pricing Now Available in the US


    Students, teachers, researchers, and other educators have been quick to embrace Kinect’s natural user interface (NUI), which makes it possible to interact with computers using movement, speech, and gestures. In fact, some of the earliest Kinect for Windows applications to emerge were projects done by students, including several at last year’s Imagine Cup.

    One project, from an Imagine Cup team in Italy, created an application for people with severe disabilities that enables them to communicate, learn, and play games on computers using a Kinect sensor instead of a traditional mouse or keyboard. Another innovative Imagine Cup project, done by university students in Russia, used the Kinect natural user interface to fold, rotate, and examine online origami models.

    To encourage students, educators, and academic researchers to continue innovating with Kinect for Windows, special academic pricing on Kinect for Windows sensors is now available in the United States. The academic price is $149.99 through Microsoft Stores.

    If you are an educator or faculty member at an accredited school, such as a university, community college, vocational school, or K-12 institution, you can purchase a Kinect for Windows sensor at this price.

    Find out if you qualify, and then purchase online or visit a Microsoft store in your area.

    Kinect for Windows team


    The Power of Enthusiasm


    OpenKinect founder Josh Blake at Microsoft's Kinect for Windows Code Camp

    When we launched Kinect for Xbox 360 on November 4, 2010, something amazing happened: talented open-source hackers and enthusiasts around the world took the Kinect and let their imaginations run wild.  We didn't know what we didn't know about Kinect on Windows when we shipped Kinect for Xbox 360, and these early visionaries showed the world what was possible.  What we saw was so compelling that we created the Kinect for Windows commercial program.

    Our commercial program is designed to allow our partners—companies like Toyota, Mattel, American Express, Telefonica, and United Health Group—to deploy solutions to their customers and employees.  It is also designed to allow early adopters and newcomers alike to take their ideas and release them to the world on Windows, with hardware that's supported by Microsoft.  At the same time, we wanted to let our early adopters keep working on the hardware they'd previously purchased. That is why our SDK continues to support the Kinect for Xbox 360 as a development device.

    Kinect developer Halimat Alabi at Microsoft's 24-hour coding marathon, June 2011

    As I reflect on the past eleven months since Microsoft announced we were bringing Kinect to Windows, one thing is clear: the efforts of these talented open-source hackers and enthusiasts helped inspire us to develop Kinect for Windows faster.  And their continued ambition and drive will help the world realize the benefits of Kinect for Windows faster still.  From all of us on the Kinect for Windows team: thank you.

     Craig Eisler
    General Manager, Kinect for Windows


    Making Learning More Interactive and Fun for Young Children


    Although no two people learn in exactly the same way, the process of learning typically involves seeing, listening/speaking, and touching. For most young children, all three senses are engaged in the process of grasping a new concept.

    For example, when a toddler is given a red wooden block, they hear the words “red” and “block,” see the color red, and use their hands to touch and feel the shape of the wooden block.

    Uzma Khan, a graduate student in the Department of Computer Science at the University of Toronto, realized the Kinect natural user interface (NUI) could provide similar experiences. She used the Kinect for Windows SDK to create a prototype of an application that utilizes speech and gestures to simplify complex learning, and make early childhood education more fun and interactive.

    The application asks young children to perform an activity, such as identifying the animals that live on a farm.  Using their hands to point to the animals on a computer screen, along with voice commands, the children complete the activities.  To reinforce their choices, the application praises them when they make a correct selection.
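
    As a rough illustration of the speech half of that pattern (a sketch only, not Khan's actual code; the farm-animal vocabulary and confidence threshold are assumed examples), the Kinect microphone array can feed the Microsoft.Speech recognition engine a small command grammar:

        // Sketch: a small speech grammar for a farm-animal activity, fed by
        // the Kinect microphone array. Vocabulary and handling are assumptions.
        using System;
        using System.Linq;
        using Microsoft.Kinect;
        using Microsoft.Speech.AudioFormat;
        using Microsoft.Speech.Recognition;

        class SpeechActivity
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.KinectSensors
                    .First(s => s.Status == KinectStatus.Connected);
                sensor.Start();

                RecognizerInfo recognizer = SpeechRecognitionEngine.InstalledRecognizers()
                    .First(r => r.Culture.Name == "en-US");
                var engine = new SpeechRecognitionEngine(recognizer.Id);

                // The activity's vocabulary: animals the child can name.
                var animals = new Choices("cow", "horse", "pig", "chicken", "sheep");
                engine.LoadGrammar(
                    new Grammar(new GrammarBuilder(animals) { Culture = recognizer.Culture }));

                engine.SpeechRecognized += (s, e) =>
                {
                    if (e.Result.Confidence > 0.6)
                    {
                        Console.WriteLine("Child said: " + e.Result.Text); // praise correct picks here
                    }
                };

                // Route the Kinect audio stream into the recognizer.
                engine.SetInputToAudioStream(
                    sensor.AudioSource.Start(),
                    new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
                engine.RecognizeAsync(RecognizeMode.Multiple);

                Console.ReadLine(); // keep listening until Enter is pressed
            }
        }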

    Using the speech and gesture recognition capabilities of Kinect not only enables children to learn by seeing, listening, and speaking; it also lets them actively participate by selecting, copying, moving, and manipulating colors, shapes, objects, patterns, letters, numbers, and much more.

    The creation of applications to aid learning for people of all ages is one of the many ways we anticipate Kinect for Windows will be used to enable a future in which computers work more naturally and intelligently to improve our lives.

    Sheridan Jones
    Business and Strategy Director, Kinect for Windows


    Swap your face…really


    Ever wish you looked like someone else? Maybe Brad Pitt or Jennifer Lawrence? Well, just get Brad or Jennifer in the same room with you, turn on the Kinect for Windows v2 sensor, and presto: you can swap your mug for theirs (and vice versa, of course). Don’t believe it? Then take a look at this cool video from Apache, in which two developers happily trade faces.

    Swapping faces in real time—let the good times roll

    According to Adam Vahed, managing director at Apache, the ability of the Kinect for Windows v2 sensor and SDK to track multiple bodies was essential to this project, as the solution needed to track the head position of both users. In fact, Adam rates the ability to perform full-skeletal tracking of multiple bodies as the Kinect for Windows v2 sensor’s most exciting feature, observing that it “opens up so many possibilities for shared experiences and greater levels of game play in the experiences we create.”
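
    To make that concrete, here's a bare-bones sketch of multi-body tracking with the v2 SDK, mapping each tracked head into the 1080p color frame, the natural starting point for cutting out and swapping face regions. This is our simplified illustration, not Apache's code.

        // Sketch: find every tracked head and its location in the HD color frame.
        using System.Linq;
        using Microsoft.Kinect; // Kinect for Windows SDK 2.0 public preview

        public class HeadTracker
        {
            private KinectSensor sensor;
            private BodyFrameReader reader;
            private Body[] bodies;

            public void Start()
            {
                this.sensor = KinectSensor.GetDefault();
                this.bodies = new Body[this.sensor.BodyFrameSource.BodyCount];
                this.reader = this.sensor.BodyFrameSource.OpenReader();
                this.reader.FrameArrived += this.OnFrameArrived;
                this.sensor.Open();
            }

            private void OnFrameArrived(object sender, BodyFrameArrivedEventArgs e)
            {
                using (BodyFrame frame = e.FrameReference.AcquireFrame())
                {
                    if (frame == null) return;
                    frame.GetAndRefreshBodyData(this.bodies);

                    foreach (Body body in this.bodies.Where(b => b.IsTracked))
                    {
                        CameraSpacePoint head = body.Joints[JointType.Head].Position;

                        // Where this head lands in 1080p color pixels: the region
                        // a face-swap effect would copy between users.
                        ColorSpacePoint headInColor =
                            this.sensor.CoordinateMapper.MapCameraPointToColorSpace(head);
                    }
                }
            }
        }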

    Adam admits that the face swap demo was done mostly for fun. That said, he also notes that “the ability to identify and capture a person’s face in real time could be very useful for entertainment-based experiences—for instance, putting your face onto a 3D character that can be driven by your own movements.”

    Adam also stressed the value of the higher-definition color feed in the v2 sensor, noting that Apache's developers directly manipulated this feed in the face swap demo in order to achieve the desired effect. He finds that the new color feed provides the definition necessary for full-screen augmented-reality experiences, something that wasn't possible with the original Kinect for Windows sensor.

    Above all, Adam encourages other developers to dive in with the Kinect for Windows v2 sensor and SDK—to load the samples and play around with the capabilities. He adds that the forums are a great source of inspiration as well as information, and he advises developers “to take a look at what other people are doing and see if you can do something different or better—or both!”

    The Kinect for Windows Team


    Real-time 3D scanning stuns the gnome world


    Garden gnomes: they decorate our yards, take bizarre trips, and now can be scanned in 3D in real time by using readily available computer hardware, as can be seen in this video from ReconstructMe. The developers employed the preview version of the Kinect for Windows v2 sensor and SDK, taking advantage of the sensor’s enhanced color and depth streams. Instead of directly linking the input of the Kinect with ReconstructMe, they streamed the data over a network, which allowed them to decouple the reconstruction from the data acquisition.
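
    The decoupling idea is simple to sketch. The following is our assumed illustration of the approach, not ReconstructMe's code, and the endpoint "reconstruction-host:9000" is a placeholder: read each depth frame on the capture machine and push the raw data over TCP to wherever reconstruction runs.

        // Sketch: stream raw v2 depth frames over TCP so reconstruction can
        // run on another machine. The TCP endpoint below is hypothetical.
        using System;
        using System.Net.Sockets;
        using Microsoft.Kinect;

        class DepthStreamer
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.GetDefault();
                FrameDescription desc = sensor.DepthFrameSource.FrameDescription;
                var depthData = new ushort[desc.LengthInPixels];
                var wire = new byte[depthData.Length * sizeof(ushort)];

                var client = new TcpClient("reconstruction-host", 9000);
                NetworkStream net = client.GetStream();

                DepthFrameReader reader = sensor.DepthFrameSource.OpenReader();
                reader.FrameArrived += (s, e) =>
                {
                    using (DepthFrame frame = e.FrameReference.AcquireFrame())
                    {
                        if (frame == null) return;
                        frame.CopyFrameDataToArray(depthData);

                        // Send one frame's worth of 16-bit depth values.
                        Buffer.BlockCopy(depthData, 0, wire, 0, wire.Length);
                        net.Write(wire, 0, wire.Length);
                    }
                };

                sensor.Open();
                Console.ReadLine(); // stream until Enter is pressed
            }
        }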

    Real-time 3D scan of garden gnome created by using Kinect for Windows v2

    Developer Christoph Heindl (he’s the one holding the gnome in the video) notes that the ReconstructMe team plans to update this 3D scanning technology when the Kinect for Windows v2 is officially released this summer, saying, “We’re eager to make this technology widely available upon the release of Kinect for Windows v2.”

    Heindl adds that this real-time process has potential applications in 3D scanning, 3D modeling through gestures, and animation. Not to mention the ability to document gnomic travels in 3D!

    The Kinect for Windows Team


    Dev Kit Program Update


    Back in June at the BUILD conference, we announced our upcoming developer kit program and started taking applications for it.

    The response and interest we've seen have been tremendous: thousands of developers from 74 different countries applied.

    Mea Culpa

    When we announced the program we said we’d start notifying successful applicants in August. Many people interpreted that to mean that we’d be done with notifications in August. I apologize for not being clearer on this. We never intended to have all the notifications done in August. While we did start in August and have notified many developers of their acceptance to the program, there are still many more applicants to be notified.

    Over the coming weeks we will continue to let applicants know if they are admitted to the program, denied admission, or waitlisted (just like college :-)). Every applicant will hear something from us by the end of September. Anyone who is waitlisted will have a final decision by the end of the year.

    We are still planning to start sending out the developer kits in late November to all program participants.

    Again, apologies for any confusion. Please stay tuned…you will hear something from us soon!

    Ben

    @benlower | kinectninja@microsoft.com | mobile: +1 (206) 659-NINJA (6465)


    V2 meets 3D


    As Microsoft Most Valuable Professional (MVP) James Ashley points out in a recent blog post, it's a whole lot easier to create 3D movies with the Kinect for Windows v2 sensor and its preview software development kit (SDK 2.0 public preview). For starters, the v2 sensor captures up to three times more depth information than the original sensor did. That means you have far more depth data from which to construct your 3D images.

    The next big improvement is in the ability to map color to the 3D image. The original Kinect sensor used an SD camera for color capture, and the resulting low-resolution images made it difficult to match the color data to the depth data. (RGB+D, a tool created by James George, Jonathan Porter, and Jonathan Minard, overcame this problem.) Knowing that the v2 sensor has a high-definition (1080p) video camera, Ashley reasoned that he could use the camera's color images directly, without a workaround tool. He also planned to map the color data to depth positions in real time, a new capability built into the preview SDK.
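
    In SDK terms, that mapping boils down to a couple of CoordinateMapper calls. Here is a bare-bones sketch of the idea (our simplified illustration, not Ashley's code):

        // Sketch: turn one depth frame into a colored point cloud with the
        // v2 CoordinateMapper. Assumes depthData was already filled from a
        // DepthFrame via CopyFrameDataToArray.
        using Microsoft.Kinect;

        public class PointCloudBuilder
        {
            public void Build(KinectSensor sensor, ushort[] depthData)
            {
                var cameraPoints = new CameraSpacePoint[depthData.Length];
                var colorPoints = new ColorSpacePoint[depthData.Length];

                // 3D position of every depth pixel: the point cloud itself.
                sensor.CoordinateMapper.MapDepthFrameToCameraSpace(depthData, cameraPoints);

                // Matching pixel in the 1080p color frame, so each point can be
                // painted directly from the HD camera, with no workaround tool.
                sensor.CoordinateMapper.MapDepthFrameToColorSpace(depthData, colorPoints);
            }
        }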

    Ashley shot this 3D video of his daughter Sophia by using Kinect for Windows v2 and a standard laptop.

    Putting these features together, Ashley wrote an app that enabled him to create 3D videos on a standard laptop (dual-core Intel i5 with 4 GB of RAM and integrated Intel HD Graphics 4400). While he has no plans at present to commercialize the application, he opines that it could be a great way to bring real-time 3D to video chats.

    Ashley also speculates that since the underlying principle is a point cloud, stills of the volumetric recording could be converted into surface meshes that can be read by CAD software or even turned into models that could be printed on a 3D printer. He also thinks it could be useful for recording biometric information in a physician’s office, or for recording precise 3D information at a crime scene, for later review.

    Those who want to learn more from Ashley about developing cool stuff with the v2 sensor should note that his book, Beginning Kinect Programming with Kinect for Windows v2, is due to be published in October.

    The Kinect for Windows Team


    Using Kinect InteractionStream Outside of WPF


    Last month, with the release of version 1.7 of our SDK and toolkit, we introduced something called the InteractionStream.  Included in this release were two new samples, Controls Basics and Interaction Gallery, which, among other things, show how to use the new InteractionStream along with new interactions like Press and Grip.  Both of these samples are written using managed code (C#) and WPF.

    One question I've been hearing from developers is, “I don't want to use WPF, but I still want to use InteractionStream with managed code.  How do I do this?”  In this post I'm going to show how to do exactly that.  I'm going to take it to the extreme by removing the UI layer completely: we'll build a console app in C#.

    The way our application will work is summarized below:


    There are a few things to note here:

    1. Upon starting the program, we initialize our sensor and interactions, and create FrameReady event handlers (see the wiring sketch after this list).
    2. The sensor generates data for every frame.  We use our FrameReady event handlers to respond to and handle depth, skeleton, and interaction frames.
    3. The program implements the IInteractionClient interface, which requires us to implement a method called GetInteractionInfoAtLocation that gives us back information about interactions happening with a particular user at a specified location:
      public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
      {
          var interactionInfo = new InteractionInfo
          {
              IsPressTarget = false,
              IsGripTarget = false
          };

          // Map coordinates from [0.0,1.0] coordinates to UI-relative coordinates
          double xUI = x * InteractionRegionWidth;
          double yUI = y * InteractionRegionHeight;

          var uiElement = this.PerformHitTest(xUI, yUI);

          if (uiElement != null)
          {
              interactionInfo.IsPressTarget = true;

              // If UI framework uses strings as button IDs, use string hash code as ID
              interactionInfo.PressTargetControlId = uiElement.Id.GetHashCode();

              // Designate center of button to be the press attraction point
              //// TODO: Create your own logic to assign press attraction points if center
              //// TODO: is not always the desired attraction point.
              interactionInfo.PressAttractionPointX =
                  ((uiElement.Left + uiElement.Right) / 2.0) / InteractionRegionWidth;
              interactionInfo.PressAttractionPointY =
                  ((uiElement.Top + uiElement.Bottom) / 2.0) / InteractionRegionHeight;
          }

          return interactionInfo;
      }
    4. The other noteworthy part of our program is the InteractionFrameReady method.  This is where we process information about our users, route UI events, and handle things like Grip and GripRelease.
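
    Here is roughly what that initialization and frame wiring can look like in a console app. This is a condensed sketch; MyInteractionClient stands in for your own IInteractionClient implementation:

        // Condensed sketch: feed depth and skeleton frames into the
        // InteractionStream and listen for interaction frames.
        using System.Linq;
        using Microsoft.Kinect;
        using Microsoft.Kinect.Toolkit.Interaction;

        class Program
        {
            static void Main()
            {
                KinectSensor sensor = KinectSensor.KinectSensors
                    .First(s => s.Status == KinectStatus.Connected);
                sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                sensor.SkeletonStream.Enable();

                // MyInteractionClient is your IInteractionClient implementation.
                var interactionStream = new InteractionStream(sensor, new MyInteractionClient());

                sensor.DepthFrameReady += (s, e) =>
                {
                    using (var frame = e.OpenDepthImageFrame())
                    {
                        if (frame == null) return;
                        var depthPixels = new DepthImagePixel[frame.PixelDataLength];
                        frame.CopyDepthImagePixelDataTo(depthPixels);
                        interactionStream.ProcessDepth(depthPixels, frame.Timestamp);
                    }
                };

                sensor.SkeletonFrameReady += (s, e) =>
                {
                    using (var frame = e.OpenSkeletonFrame())
                    {
                        if (frame == null) return;
                        var skeletons = new Skeleton[frame.SkeletonArrayLength];
                        frame.CopySkeletonDataTo(skeletons);
                        interactionStream.ProcessSkeleton(
                            skeletons, sensor.AccelerometerGetCurrentReading(), frame.Timestamp);
                    }
                };

                interactionStream.InteractionFrameReady += (s, e) =>
                {
                    // Process user info and route Grip/GripRelease and Press events here.
                };

                sensor.Start();
                System.Console.ReadLine(); // run until Enter is pressed
            }
        }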

     

    I’ve posted some sample code that you may download and use to get started with InteractionStream in your own managed apps.  The code is loaded with tips in the comments that should help you down the path of using our interactions.  Thanks to Eddy Escardo Raffo on my team for writing the sample console app.

    Ben

    @benlower | kinectninja@microsoft.com | mobile: +1 (206) 659-NINJA (6465)


    Join Now, BUILD for Tomorrow


    Today at Microsoft BUILD 2013, we made two important announcements for our Kinect for Windows developer community.

    First, starting today, developers can apply for a place in our upcoming developer kit program. This program will give participants exclusive early access to everything they need to start building applications for the recently announced new-generation Kinect for Windows sensor, including a pre-release version of the new sensor hardware and software development kit (SDK) in November, and a replacement unit of the final sensor hardware and firmware when it is publicly available next year. The cost of the program will be US$399 (or local equivalent). Applications must be received by July 31, and successful applicants will be notified and charged in August. Interested developers are strongly encouraged to apply early, as spots are limited and demand for the new sensor is already high. Review the complete program details and apply for the program.


    The upcoming Kinect for Windows SDK 1.8 will include more realistic color capture with Kinect Fusion.

    Additionally, in September we will again refresh the Kinect for Windows SDK with several exciting updates, including:

    • The ability to extract the user from the background in real time
    • The ability to develop Kinect for Windows desktop applications by using HTML5/JavaScript
    • Enhancements to Kinect Fusion, including capture of color data and improvements to tracking robustness and accuracy

    The feature enhancements will enable even better Kinect for Windows-based applications for businesses and end users, and the convenience of HTML5 will make it easier for developers to build leading-edge touch-free experiences.

    This will be the fourth significant update to the Kinect for Windows SDK since we launched 17 months ago. We are committed to continuing to improve the existing Kinect for Windows platform as we prepare to release the new generation Kinect for Windows sensor and SDK.  If you aren’t already using Kinect for Windows to develop touch-free solutions, now is a great time to start. Join us as we continue to make technology easier to use and more intuitive for everyone.

    Bob Heddle
    Director, Kinect for Windows

