• Kinect for Windows Product Blog

    The New Generation Kinect for Windows Sensor is Coming Next Year

    • 32 Comments

    The all-new active-infrared capabilities allow the new sensor to work in nearly any lighting
    condition. This makes it possible for developers to build apps with enhanced recognition of
    facial features, hand position, and more.

    By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.

    Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.

    Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.

    Some of the key capabilities of the new Kinect sensor include:

    • Higher fidelity
      The new sensor includes a high-definition (HD) color camera as well as a new noise-isolating multi-microphone array that filters ambient sounds to recognize natural speaking voices even in crowded rooms. Also included is Microsoft’s proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision. All of this means that the new sensor recognizes precise motions and details, such as slight wrist rotation, body position, and even the wrinkles in your clothes. The Kinect for Windows community will benefit from the sensor’s enhanced fidelity, which will allow developers to create highly accurate solutions that see a person’s form better than ever, track objects and environments with greater detail, and understand voice commands in noisier settings than before.

    The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.

    • Expanded field of view
      The expanded field of view accommodates a multitude of differently sized rooms, minimizing the need to modify existing room configurations and opening up new solution-development opportunities. The combination of the new sensor’s higher fidelity plus expanded field of view will give businesses the tools they need to create truly untethered, natural computing experiences such as clicker-free presentation scenarios, more dynamic simulation and training solutions, up-close interactions, more fluid gesture recognition for quick interactions on the go, and much more.
          
    • Improved skeletal tracking
      The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This not only yields more accurate skeletal tracking, it opens up a range of new scenarios, including improved “avateering,” the ability to develop enhanced rehabilitation and physical fitness solutions, and the possibility to create new experiences in public spaces—such as retail—where multiple users can participate simultaneously.

    The new sensor tracks more points on the human body than previously and tracks six skeletons at once, opening a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.
      

    • New active infrared (IR)
      The all-new active-IR capabilities allow the new sensor to work in nearly any lighting condition and, in essence, give businesses access to a new fourth sensor: audio, depth, color…and now active IR. This will offer developers better built-in recognition capabilities in different real-world settings—independent of the lighting conditions—including the sensor’s ability to recognize facial features, hand position, and more. 

    I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.

    A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.

    Bob Heddle
    Director, Kinect for Windows

    Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images

     

  • Kinect for Windows Product Blog

    Reflexion Health advancing physical therapy with Kinect for Windows

    • 0 Comments

    Reflexion Health, founded with technology developed at the West Health Institute, realized years ago that assessing physical therapy outcomes is difficult for a variety of reasons, and took on the challenge of designing a solution to help increase the success rates of rehabilitation from physical injury.

    In 2011, the Reflexion team approached the Orthopedic Surgery Department of the Naval Medical Center San Diego to help test their new Rehabilitation Measurement Tool (RMT). This software solution was developed to make physical therapy more engaging, efficient, and successful. By using the Kinect for Windows sensor and software development kit (SDK), the RMT allows clinicians to measure patient progress. Patients often do much of their therapy alone; without immediate feedback from a therapist, it can be difficult for them to be certain that they are performing their exercises in a way that yields optimal benefit. The RMT can indicate whether exercises were performed properly and how frequently, and it gives patients real-time feedback.

    Reflexion Health's Kinect for Windows-based tool helps measure how patients respond to physical therapy.

    “Kinect for Windows helps motivate patients to do physical therapy—and the data set we gather when they use the RMT is becoming valuable to demonstrate what form of therapy is most effective, what types of patients react better to what type of therapy, and how to best deliver that therapy. Those questions have vexed people for a long time,” says Dr. Ravi Komatireddy, co-founder at Reflexion Health.

    The proprietary RMT software engages patients with avatars and educational information, and a Kinect for Windows sensor tracks a patient’s range of motion and other clinical data. This valuable information helps therapists customize and deliver therapy plans to patients.

    “RMT is a breakthrough that can change how physical therapy is delivered,” Spencer Hutchins, co-founder and CEO of Reflexion Health says. “Kinect for Windows helps us build a repository of information so we can answer rigorous questions about patient care in a quantitative way.” Ultimately, Reflexion Health has demonstrated how software could be prescribed—similarly to pharmaceuticals and medical devices—and how it could possibly lower the cost of healthcare.

    More information about RMT and the clinical trials conducted by the Naval Medical Center can be found in the newly released case study.

    Kinect for Windows team


  • Kinect for Windows Product Blog

    Using Kinect InteractionStream Outside of WPF

    • 3 Comments

    Last month, with the release of version 1.7 of our SDK and toolkit, we introduced something called the InteractionStream.  Included in this release were two new samples, Controls Basics and Interaction Gallery, which, among other things, show how to use the new InteractionStream along with new interactions like Press and Grip.  Both of these new samples are written using managed code (C#) and WPF.

    One question I’ve been hearing from developers is, “I don’t want to use WPF but I still want to use InteractionStream with managed code.  How do I do this?”  In this post I’m going to show how to do exactly that.  I’m going to take it to the extreme by removing the UI layer completely:  we’ll use a console app using C#.

    The way our application will work is summarized in the diagram below:

    [Diagram: depth and skeleton FrameReady handlers feed the InteractionStream, which in turn raises interaction frames back to the application.]

     

    There are a few things to note here:

    1. Upon starting the program, we initialize our sensor and interactions, and create FrameReady event handlers.
    2. Our sensor is generating data for every frame.  We use our FrameReady event handlers to respond and handle depth, skeleton, and interaction frames.
    3. The program implements the IInteractionClient interface, which requires us to implement a method called GetInteractionInfoAtLocation. This method gives us back information about interactions happening with a particular user at a specified location:
      public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
      {
          var interactionInfo = new InteractionInfo
          {
              IsPressTarget = false,
              IsGripTarget = false
          };

          // Map coordinates from [0.0,1.0] coordinates to UI-relative coordinates
          double xUI = x * InteractionRegionWidth;
          double yUI = y * InteractionRegionHeight;

          var uiElement = this.PerformHitTest(xUI, yUI);

          if (uiElement != null)
          {
              interactionInfo.IsPressTarget = true;

              // If UI framework uses strings as button IDs, use string hash code as ID
              interactionInfo.PressTargetControlId = uiElement.Id.GetHashCode();

              // Designate center of button to be the press attraction point
              //// TODO: Create your own logic to assign press attraction points if center
              //// TODO: is not always the desired attraction point.
              interactionInfo.PressAttractionPointX = ((uiElement.Left + uiElement.Right) / 2.0) / InteractionRegionWidth;
              interactionInfo.PressAttractionPointY = ((uiElement.Top + uiElement.Bottom) / 2.0) / InteractionRegionHeight;
          }

          return interactionInfo;
      }
    4. The other noteworthy part of our program is in the InteractionFrameReady method.  This is where we process information about our users, route our UI events, handle things like Grip and GripRelease, etc.  A condensed sketch of how all these pieces fit together appears below.
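
    Here is a condensed sketch of that structure as a C# console app. It assumes the Microsoft.Kinect and Microsoft.Kinect.Toolkit.Interaction assemblies from the v1.7 release; the InteractionClient class stands in for the IInteractionClient implementation shown above, and error handling is omitted (the downloadable sample is the complete, authoritative version):

    using System;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Interaction;

    internal static class Program
    {
        private static KinectSensor sensor;
        private static InteractionStream interactionStream;
        private static DepthImagePixel[] depthPixels;
        private static Skeleton[] skeletons;
        private static UserInfo[] userInfos;

        private static void Main()
        {
            // 1. Initialize the sensor, the interaction stream, and the FrameReady handlers.
            sensor = KinectSensor.KinectSensors[0];
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            depthPixels = new DepthImagePixel[sensor.DepthStream.FramePixelDataLength];
            skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
            userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];

            // InteractionClient is our IInteractionClient implementation
            // (the class containing GetInteractionInfoAtLocation above).
            interactionStream = new InteractionStream(sensor, new InteractionClient());
            interactionStream.InteractionFrameReady += InteractionFrameReady;

            sensor.DepthFrameReady += DepthFrameReady;
            sensor.SkeletonFrameReady += SkeletonFrameReady;
            sensor.Start();

            Console.ReadLine();   // keep the console app alive until Enter is pressed
        }

        // 2. Feed every depth frame to the interaction stream...
        private static void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                frame.CopyDepthImagePixelDataTo(depthPixels);
                interactionStream.ProcessDepth(depthPixels, frame.Timestamp);
            }
        }

        // ...and every skeleton frame, along with the sensor's accelerometer reading.
        private static void SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                frame.CopySkeletonDataTo(skeletons);
                interactionStream.ProcessSkeleton(
                    skeletons, sensor.AccelerometerGetCurrentReading(), frame.Timestamp);
            }
        }

        // 4. React to the interaction frames the stream produces (Grip, GripRelease, etc.).
        private static void InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
        {
            using (InteractionFrame frame = e.OpenInteractionFrame())
            {
                if (frame == null) return;
                frame.CopyInteractionDataTo(userInfos);
                foreach (UserInfo user in userInfos)
                {
                    foreach (InteractionHandPointer hand in user.HandPointers)
                    {
                        if (hand.HandEventType == InteractionHandEventType.Grip)
                            Console.WriteLine("User {0} gripped.", user.SkeletonTrackingId);
                    }
                }
            }
        }
    }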

     

    I’ve posted some sample code that you may download and use to get started using InteractionStream in your own managed apps.  The code is loaded with tips in the comments that should get you started down the path of using our interactions in your own apps.  Thanks to Eddy Escardo Raffo on my team for writing the sample console app.

    Ben

    @benlower | kinectninja@microsoft.com | mobile: +1 (206) 659-NINJA (6465)

  • Kinect for Windows Product Blog

    Siemens uses Kinect for Windows to improve nuclear plant employee safety

    • 1 Comment

    As you might imagine, working in a nuclear power plant provides special challenges. One crucial aspect for any project is the need to minimize employee exposure to radiation by applying a standard known as As Low As Reasonably Achievable—ALARA for short.

    How this works: Plant ALARA managers work with the maintenance groups to estimate how much time is required to perform a task and, allowing for exposure limits, they determine how many employees may be needed to safely complete it. Today, that work is typically done with pen and paper. But new tools from Siemens PLM Software that incorporate the Kinect for Windows sensor could change this by providing a 3-D virtual interactive modeling environment.

    Kinect for Windows is used to capture realistic movement for use in the Siemens Teamcenter solution
    for ALARA radiation planning.

    The solution, piloted at a US nuclear power plant last year, is built on Siemens’ Teamcenter software, using its Tecnomatix Process Simulate product. Siemens PLM Software Tecnomatix provides virtual 3-D human avatars—“Jack” and “Jill”—that model motions captured with a Kinect for Windows sensor. This solution is helping to usher in a new era of industrial planning applications for employee health and safety in the nuclear industry.

    "We're really revolutionizing the industry," said Erica Simmons, global marketing manager for Energy, Oil, and Gas Industries at Siemens PLM Software. "For us, this was a new way to develop a product in tandem with the industry associations. We created a specific use case with off-the-shelf technology and tested and validated it with industry. What we have now is a new visual and interactive way of simulating potential radiation exposure which can lead to better health and safety strategies for the plant."

    Traditional pencil-and-paper planning (left) compared to the Siemens PLM Software Process Simulate on
    Teamcenter solution (right) with “Jack” avatar and Kinect for Windows movement input.
     

    The Siemens Tecnomatix process planning application, integrated with the Kinect for Windows system, will give nuclear plant management the ability to better manage individual worker radiation exposure and optimize steps to reduce overall team exposure. As a bonus, once tasks have been recorded by using “Jack,” the software can be used for training. Employees can learn and practice an optimized task by using Kinect for Windows and Siemens “Jack” and “Jill”—safely outside of the radiation zone—until they have mastered it and are ready to perform the actual work.

    "We wanted to add something more for the user of this solution in addition to our 3-D human avatars and the hazard zones created by our visual cartography; this led us to exploring what we could do with the Kinect for Windows SDK for this use case," said Dr. Ulrich Raschke, director of Human Simulation Technologies at Siemens PLM Software. “User feedback has been good so far; the addition of the Kinect for Windows system adds another level of interactivity to our application."

    This Siemens solution grew out of a collaborative effort with the Electric Power Research Institute (EPRI) and the Fiatech industry association, which identified the need for more technologically advanced estimation tools for worker radiation dosage. Kinect for Windows was incorporated when the developers were tailoring the avatar system to the solution and exploring ways to make the user experience much more interactive.

    "Collaboration with several key stakeholders and industry experts led to this innovative solution," said Phung Tran, senior project manager at EPRI. "We're pleased the industry software providers are using it, and look forward to seeing the industry utilize these new tools."

    “In fact,” Tran added, “the tool is not necessarily limited to radiation work planning. It could help improve the management and execution of many operation, maintenance, and project-based tasks.”

    Kinect for Windows team


  • Kinect for Windows Product Blog

    Kinect for Windows SDK v1.7 is Available!

    • 8 Comments

    We are stoked to announce the immediate availability of our latest SDK and Developer Toolkit (v1.7)!  It includes Kinect Interactions (grip and press), Kinect Fusion, new samples for MATLAB and OpenCV, and more.

    Download the new hotness here.

    Our product blog has all the details on what we believe is our biggest release since 1.0.

    Ben

  • Kinect for Windows Product Blog

    The latest Kinect for Windows SDK is here

    • 14 Comments

    Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately. 

    A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.

    Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here.  Now you can untether yourself and move around a conference room naturally.

    In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.

    Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.

    Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.

    Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.

    Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.

    The Kinect for Windows Team


  • Kinect for Windows Product Blog

    Kinect for Windows announces new version of SDK coming March 18

    • 12 Comments

    Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.

    On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:

    Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.

    Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.
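
    For a feel of what this enables in code, here is a minimal, hypothetical sketch using the WPF controls that ship in the v1.7 toolkit (KinectRegion and KinectTileButton from Microsoft.Kinect.Toolkit.Controls). The label and click handler are placeholders, and a real app would typically declare the UI in XAML:

    using System;
    using System.Windows;
    using System.Windows.Data;
    using Microsoft.Kinect.Toolkit;
    using Microsoft.Kinect.Toolkit.Controls;

    public class MainWindow : Window
    {
        private readonly KinectSensorChooser sensorChooser = new KinectSensorChooser();

        public MainWindow()
        {
            // A KinectRegion translates hand tracking into cursor, press, and grip
            // behavior for the Kinect-aware controls it contains.
            var region = new KinectRegion();
            BindingOperations.SetBinding(
                region,
                KinectRegion.KinectSensorProperty,
                new Binding("Kinect") { Source = this.sensorChooser });

            // A KinectTileButton raises Click when the user pushes a hand toward the screen.
            var button = new KinectTileButton { Label = "Start" };
            button.Click += (s, e) => MessageBox.Show("Pressed!");

            region.Content = button;
            this.Content = region;

            this.sensorChooser.Start();   // finds and starts an attached sensor
        }

        [STAThread]
        public static void Main()
        {
            new Application().Run(new MainWindow());
        }
    }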

    Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.

    Kinect Fusion enables developers to create accurate 3-D renderings in real time.
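
    Schematically, the scanning loop looks something like the sketch below, assuming the Microsoft.Kinect.Fusion APIs from the v1.7 toolkit (FusionSketch and its members are illustrative names; depthPixels is a DepthImagePixel[] copied from a depth frame, and error handling is simplified):

    using Microsoft.Kinect;
    using Microsoft.Kinect.Fusion;

    public class FusionSketch
    {
        private Reconstruction volume;
        private FusionFloatImageFrame depthFloatFrame;
        private Matrix4 worldToCamera = Matrix4.Identity;

        public void Initialize()
        {
            // 384^3 voxels at 256 voxels per meter: a cube roughly 1.5 m on each side.
            var parameters = new ReconstructionParameters(256, 384, 384, 384);
            this.volume = Reconstruction.FusionCreateReconstruction(
                parameters, ReconstructionProcessor.Amp, -1, this.worldToCamera);
            this.depthFloatFrame = new FusionFloatImageFrame(640, 480);
        }

        // Call once per depth frame while moving the sensor around the subject.
        public void Integrate(DepthImagePixel[] depthPixels)
        {
            // Convert raw depth values to a float frame measured in meters.
            this.volume.DepthToDepthFloatFrame(depthPixels, this.depthFloatFrame,
                FusionDepthProcessor.DefaultMinimumDepth,
                FusionDepthProcessor.DefaultMaximumDepth,
                false);

            // Align the frame against the volume; if camera tracking succeeds,
            // fuse ("paint") the frame into the running 3-D model.
            if (this.volume.ProcessFrame(this.depthFloatFrame,
                    FusionDepthProcessor.DefaultAlignIterationCount,
                    FusionDepthProcessor.DefaultIntegrationWeight,
                    this.worldToCamera))
            {
                this.worldToCamera = this.volume.GetCurrentWorldToCameraTransform();
            }
        }

        // When scanning is complete, export the reconstruction as a 3-D mesh.
        public Mesh Export()
        {
            return this.volume.CalculateMesh(1);
        }
    }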

    The updated SDK also includes an enhanced developer toolkit and additional developer resources, including:

    • OpenCV and MATLAB Samples to allow developers to build advanced Kinect-enabled applications while using industry standards.
    • Kinect for Windows Code Samples on CodePlex, marking the first time that select Kinect for Windows code samples will be available through an open-source resource, enabling existing and potential partners to explore and develop new scenarios.

    Seeing is believing
    We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!

    As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.

    The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.

    Bob Heddle, Director
    Kinect for Windows


  • Kinect for Windows Product Blog

    What's Up with CodePlex?

    • 2 Comments

    Last week we announced the release of the source code of 22 Kinect for Windows sample applications.  The developer response has been terrific and much larger than we expected.

    Some publications claimed we had open sourced “all of the code for Kinect” or the “core code of Kinect”.  Neither of these is true.  We released source code for most of our sample applications.  It’s important to understand that sample code is not the same as core code.  The purpose of the sample applications is to give developers examples of how to use particular APIs and/or to give a good starting point for a new application.  Samples do use the core APIs and Kinect for Windows platform but we have not changed anything about the licensing of those underlying components.



    The samples we’ve released show how to do things like get raw infrared data from the sensor, build an interactive kiosk that changes content when a person is detected, and track a person’s facial movements.  The samples are one of the many areas in which we are investing to make it easy for new and seasoned developers alike to build applications using Kinect for Windows.
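
    For example, the heart of the raw-infrared sample comes down to enabling the infrared color format (a minimal sketch under the v1.6+ managed API, not the published sample, which adds rendering and sensor-management code):

    using System;
    using Microsoft.Kinect;

    internal static class InfraredSketch
    {
        private static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors[0];

            // Raw infrared is exposed through the color stream as a dedicated format.
            sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);
            sensor.ColorFrameReady += (s, e) =>
            {
                using (ColorImageFrame frame = e.OpenColorImageFrame())
                {
                    if (frame == null) return;
                    var pixels = new byte[frame.PixelDataLength];
                    frame.CopyPixelDataTo(pixels);   // two bytes of IR intensity per pixel
                }
            };

            sensor.Start();
            Console.ReadLine();   // run until Enter is pressed
        }
    }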

    It was not our intention for our announcement to be misinterpreted.  It’s evident in the comments of many posts that readers understood the distinction.  It’s also been great to see the debate & discussions (I’m looking at you, Reddit :-).

    We are following up with some publications to clarify our announcement and to request they update their posts.

     

    Ben

    @benlower | kinectninja@microsoft.com | mobile:  +1 (206) 659-NINJA (6465)

    Kinect for Windows:  @KinectWindows

  • Kinect for Windows Product Blog

    Kinect for Windows Academic Pricing Update

    • 2 Comments

    Shortly after the commercial release of Kinect for Windows in early 2012, Microsoft announced the availability of academic pricing for the Kinect for Windows sensor to higher education faculty and students for $149.99 at the Microsoft Store in the United States. We are now pleased to announce that we have broadened the availability of academic pricing through Microsoft Authorized Educational Resellers (AERs).

    Most of these resellers have the capability to offer academic pricing directly to educational institutions; academic researchers; and students, faculty, and staff of public or private K-12 schools, vocational schools, junior colleges, colleges, universities, and scientific or technical institutions. In the United States, eligible institutions are accredited by associations that are recognized by the US Department of Education and/or the State Board of Education. Academic pricing on the Kinect for Windows sensor is currently available through AERs in the United States, Taiwan, and Hong Kong SAR.

    Within the academic community, the potential of Kinect for Windows in the classroom is generating a lot of excitement. Researchers and academics in higher education collaborate with Microsoft Research on a variety of projects that involve educational uses of Kinect for Windows. The educator-driven community resource, KinectEDucation, encourages developers, teachers, students, enthusiasts, and other education stakeholders to help transform classrooms with accessible technology.
     
    One such development is a new product from Kaplan Early Learning Company, the Inspire-NG Move, bundled with the Kinect for Windows sensor. This bundle includes four educational programs for children ages three and older. The programs are designed to show children that hands-on, kinesthetic play with a purpose makes learning fun. The bundle currently sells for US$499.

    “We’re excited about the new learning models that are enabled by Kinect for Windows,” stated Chris Gerblick, vice president of IT and Professional Services at Kaplan Early Learning Company. “We see the Inspire-NG Move family of products as excellent learning tools for both the classroom and the home.”

    With the availability of academic pricing, we look forward to many developments from the academic community that integrate Kinect for Windows into interactive educational experiences.

    Michael Fry
    Business Development, Strategic Alliances
    Kinect for Windows


  • Kinect for Windows Product Blog

    Easy Access to Kinect for Windows Sample Code

    • 23 Comments

    We are happy to announce we are releasing the Kinect for Windows samples under an open source license.  You can find everything on CodePlex: http://kinectforwindows.codeplex.com/.  We have posted a total of 22 unique samples in C#, C++, and Visual Basic.

    We’re doing this for a few reasons:

    1. Easy Access -> we will continue to release our sample applications as part of our Developer Toolkit.  However, that’s a large download & install that can be cumbersome if you just want to quickly view or access code on the web
    2. Reuse The Code -> we’re releasing all the samples under an Apache 2.0 license so that you can take the code and reuse, remix, etc.  Also, we’re using a Git repository so it’s easy to clone & fork if you want
    3. Get Feedback -> we will use CodePlex's built-in feedback & discussion tools to get community input on the samples.  We want to hear from you to understand what we can do better with the samples
    4. Faster Updates -> we will be able to update samples more quickly on CodePlex (compared to Toolkit releases).  CodePlex also has a “Subscribe” feature that enables you to follow the project and get notified when something changes, a bug gets fixed, someone says something smart in the discussions, etc.  (note:  the subscription feature doesn’t actually track the smartness of a post but one can dream :-))

     

    Browse K4W sample code right in your browser...  


     
    Oh Yeah, This Blog is New

    You probably noticed:  this is our first blog post.  We are committed to this blog becoming a useful resource to the Kinect for Windows development community.  Our existing product blog (at http://blogs.msdn.com/b/kinectforwindows/) will continue to focus on announcements, product news, and highlighting great, real-world uses of Kinect for Windows.  The developer blog (where you are now) will focus on going behind the scenes with the K4W engineering team and will go deeper on the technology and APIs, share tips & tricks, and provide other tidbits of information relevant to those building K4W applications.

    We have ideas in mind for future posts but would love to hear from you to understand what topics would be most useful to you.  Please use the comments below, hit up the team on Twitter @KinectWindows, or get in touch with me directly (contact info below).

     

    Ben

    @benlower | kinectninja@microsoft.com | mobile:  +1 (206) 659-NINJA (6465)
