• Kinect for Windows Product Blog

    nsquared releases three new Kinect for Windows-based applications


    The following blog post was guest authored by Celeste Humphrey, business development consultant at nsquared, and Dr. Neil Roodyn, director of nsquared.

    nsquared, a Sydney, Australia-based company that is passionate about learning, technology, and creating awesome user experiences, has developed three new applications that take advantage of Kinect for Windows to provide users with interactive, natural user interface experiences.


    At nsquared, we believe that vision-based interaction is the future of computing. The excitement we see in the technology industry regarding touch and tablet computing is a harbinger of the changes that are coming as smarter computer vision systems evolve.

    Kinect for Windows has provided us with the tools to create some truly amazing products for education, hospitality, and events.

    Education: nsquared sky spelling

    We are excited to announce nsquared sky spelling, our first Kinect for Windows-based educational game. This new application, aimed at children aged 4 to 12, makes it fun for children to learn to spell in an interactive and collaborative environment. Each child selects a character or vehicle, such as a dragon, a biplane, or a butterfly, and then flies as that character through the sky to capture letters that complete the spelling of various words. The skeleton recognition capabilities of the Kinect for Windows sensor and software development kit (SDK) track the movement of the children as they stretch out their arms as wings to navigate their character through hoops alongside their wingman (another player). The color camera in the Kinect for Windows sensor allows each child to add their photo, thereby personalizing their experience.

    nsquared sky spelling
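
    For readers wondering how flying with outstretched arms might map onto the SDK's skeleton data, here is one hedged illustration; it is our guess at the approach, not nsquared's code, and the WingSteering and BankAngleDegrees names are ours. The roll of the line between the two tracked hand joints can drive the character's banking.

    using System;
    using Microsoft.Kinect;

    public static class WingSteering
    {
        // Illustrative only: derive a banking angle from a player's outstretched
        // arms, using the hand joints of a tracked Kinect skeleton. Raising the
        // left hand above the right yields a positive (rightward) bank.
        public static double BankAngleDegrees(Skeleton skeleton)
        {
            SkeletonPoint left = skeleton.Joints[JointType.HandLeft].Position;
            SkeletonPoint right = skeleton.Joints[JointType.HandRight].Position;

            // Roll of the hand-to-hand line relative to horizontal.
            return Math.Atan2(left.Y - right.Y, right.X - left.X) * 180.0 / Math.PI;
        }
    }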

    Hospitality: nsquared hotel kiosk

    The nsquared hotel kiosk augments the concierge function in a hotel by providing guidance to hotel guests through an intuitive, interactive experience. Guests can browse through images and videos of activities, explore locations on a map, and find out what's happening with a live event calendar. It also provides live weather updates and has customizable themes. The nsquared hotel kiosk uses the new gestures supported in the Kinect for Windows SDK 1.7, enabling guests to use a “grip” gesture to drag content across the screen and a “push” gesture to select content. With its fun user interface, this informative kiosk provides guests with an interactive alternative to the old brochure rack.
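
    For developers curious how those gestures get wired up, here is a minimal sketch in the style of the SDK 1.7 toolkit samples, not nsquared's actual code: a KinectSensorChooser locates a sensor, and a KinectRegion from Microsoft.Kinect.Toolkit.Controls then gives the Kinect-aware controls inside it (such as KinectScrollViewer for grip-to-drag and KinectTileButton for push-to-select) their gesture behavior. The KioskWindow class and the kinectRegion field, assumed to be declared in the window's XAML, are ours.

    using System.Windows;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit;
    using Microsoft.Kinect.Toolkit.Controls;

    public partial class KioskWindow : Window
    {
        private readonly KinectSensorChooser sensorChooser = new KinectSensorChooser();

        public KioskWindow()
        {
            this.InitializeComponent();

            // React to sensors coming and going, then start looking for one.
            this.sensorChooser.KinectChanged += this.OnKinectChanged;
            this.sensorChooser.Start();
        }

        private void OnKinectChanged(object sender, KinectChangedEventArgs e)
        {
            if (e.OldSensor != null)
            {
                e.OldSensor.Stop();
            }

            if (e.NewSensor != null)
            {
                // The interaction controls need depth and skeleton data.
                e.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                e.NewSensor.SkeletonStream.Enable();
                e.NewSensor.Start();

                // Hand the sensor to the region; grip and push now work on its
                // children. (kinectRegion is assumed to wrap the kiosk UI in XAML.)
                this.kinectRegion.KinectSensor = e.NewSensor;
            }
        }
    }

    Because the toolkit controls implement the gesture recognition themselves, application code mostly responds to ordinary Click events on the buttons.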

    Kinect for Windows technology enables nsquared to provide an interactive kiosk experience for less than half the price of a similar-sized touchscreen (see note).

    nsquared hotel kiosk

    Events: nsquared media viewer

    The new nsquared media viewer application is a great way to explore interactive content in almost any environment. Designed for building lobbies, experience centers, events, and corporate locations, the nsquared media viewer enables you to display images and video by category in a stylish, customizable carousel. It is easy to use: anyone can walk up and start browsing in seconds.

    In addition to taking advantage of key features of the Kinect for Windows sensor and SDK, the nsquared media viewer utilizes Windows Azure, allowing clients to view reports about the usage of the screen and the content displayed.

    nsquared media viewer

    Kinect for Windows technology has made it possible for nsquared to create applications that allow people to interact with content in amazing new ways, helping us take a step towards our collective future of richer vision-based computing systems.

    Celeste Humphrey, business development consultant, and
    Dr. Neil Roodyn, director, nsquared


    ____________
    Note: Based on the price of a 65-inch touch overlay at approximately US$900, compared to the cost of a Kinect for Windows sensor at approximately US$250. For integrated touch solutions, the price can be far higher.

  • Kinect for Windows Product Blog

    The New Generation Kinect for Windows Sensor is Coming Next Year


    The all-new active-infrared capabilities allow the new sensor to work in nearly any lighting condition. This makes it possible for developers to build apps with enhanced recognition of facial features, hand position, and more.

    By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.

    Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.

    Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.

    Some of the key capabilities of the new Kinect sensor include:

    • Higher fidelity
      The new sensor includes a high-definition (HD) color camera as well as a new noise-isolating multi-microphone array that filters ambient sounds to recognize natural speaking voices even in crowded rooms. Also included is Microsoft’s proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person; because half the round-trip time multiplied by the speed of light gives the distance to each point, the result is unprecedented accuracy and precision. All of this means that the new sensor recognizes precise motions and details, such as slight wrist rotation, body position, and even the wrinkles in your clothes. The Kinect for Windows community will benefit from the sensor’s enhanced fidelity, which will allow developers to create highly accurate solutions that see a person’s form better than ever, track objects and environments with greater detail, and understand voice commands in noisier settings than before.

    The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.

    • Expanded field of view
      The expanded field of view accommodates a multitude of differently sized rooms, minimizing the need to modify existing room configurations and opening up new solution-development opportunities. The combination of the new sensor’s higher fidelity plus expanded field of view will give businesses the tools they need to create truly untethered, natural computing experiences such as clicker-free presentation scenarios, more dynamic simulation and training solutions, up-close interactions, more fluid gesture recognition for quick interactions on the go, and much more.
          
    • Improved skeletal tracking
      The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This not only yields more accurate skeletal tracking, it opens up a range of new scenarios, including improved “avateering,” the ability to develop enhanced rehabilitation and physical fitness solutions, and the possibility to create new experiences in public spaces—such as retail—where multiple users can participate simultaneously.

    The new sensor tracks more points on the human body than previously and tracks six skeletons at once, opening a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.

    • New active infrared (IR)
      The all-new active-IR capabilities allow the new sensor to work in nearly any lighting condition and, in essence, give businesses access to a new fourth sensor: audio, depth, color…and now active IR. This will offer developers better built-in recognition capabilities in different real-world settings—independent of the lighting conditions—including the sensor’s ability to recognize facial features, hand position, and more. 

    I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.

    A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.

    Bob Heddle
    Director, Kinect for Windows

    Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images

     

  • Kinect for Windows Product Blog

    Reflexion Health advancing physical therapy with Kinect for Windows


    Reflexion Health, founded with technology developed at the West Health Institute, realized years ago that assessing physical therapy outcomes is difficult for a variety of reasons, and took on the challenge of designing a solution to help increase the success rates of rehabilitation from physical injury.

    In 2011, the Reflexion team approached the Orthopedic Surgery Department of the Naval Medical Center San Diego to help test their new Rehabilitation Measurement Tool (RMT), a software solution developed to make physical therapy more engaging, efficient, and successful. By using the Kinect for Windows sensor and software development kit (SDK), the RMT allows clinicians to measure patient progress. Patients often do much of their therapy alone, and because they lack immediate feedback from therapists, it can be difficult for them to be certain that they are performing their exercises in a manner that provides optimal benefit. The RMT can indicate whether exercises were performed properly and how frequently, and it gives patients real-time feedback.

    Reflexion Health's Kinect for Windows-based tool helps measure how patients respond to physical therapy.

    “Kinect for Windows helps motivate patients to do physical therapy—and the data set we gather when they use the RMT is becoming valuable to demonstrate what form of therapy is most effective, what types of patients react better to what type of therapy, and how to best deliver that therapy. Those questions have vexed people for a long time,” says Dr. Ravi Komatireddy, co-founder at Reflexion Health.

    The proprietary RMT software engages patients with avatars and educational information, and a Kinect for Windows sensor tracks a patient’s range of motion and other clinical data. This valuable information helps therapists customize and deliver therapy plans to patients.
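
    Reflexion Health has not published the RMT's source, but the kind of range-of-motion measurement described here can be illustrated with the SDK's skeleton data: a joint angle falls out of the positions of three tracked joints. The sketch below is our illustration, not RMT code, and the JointAngles and AngleAtDegrees names are ours.

    using System;
    using Microsoft.Kinect;

    public static class JointAngles
    {
        // Illustrative only: the angle at a middle joint (for example, the knee),
        // computed from three tracked joints of a Kinect skeleton.
        public static double AngleAtDegrees(Skeleton skeleton, JointType a, JointType vertex, JointType b)
        {
            SkeletonPoint pa = skeleton.Joints[a].Position;
            SkeletonPoint pv = skeleton.Joints[vertex].Position;
            SkeletonPoint pb = skeleton.Joints[b].Position;

            // Vectors from the vertex joint toward each neighboring joint.
            double ux = pa.X - pv.X, uy = pa.Y - pv.Y, uz = pa.Z - pv.Z;
            double vx = pb.X - pv.X, vy = pb.Y - pv.Y, vz = pb.Z - pv.Z;

            double dot = (ux * vx) + (uy * vy) + (uz * vz);
            double magnitudes = Math.Sqrt((ux * ux) + (uy * uy) + (uz * uz))
                              * Math.Sqrt((vx * vx) + (vy * vy) + (vz * vz));

            // Angle between the two limb segments, in degrees.
            return Math.Acos(dot / magnitudes) * 180.0 / Math.PI;
        }
    }

    // Example: right-knee flexion from hip, knee, and ankle joints.
    // double knee = JointAngles.AngleAtDegrees(
    //     skeleton, JointType.HipRight, JointType.KneeRight, JointType.AnkleRight);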

    “RMT is a breakthrough that can change how physical therapy is delivered,” says Spencer Hutchins, co-founder and CEO of Reflexion Health. “Kinect for Windows helps us build a repository of information so we can answer rigorous questions about patient care in a quantitative way.” Ultimately, Reflexion Health has demonstrated how software could be prescribed—similarly to pharmaceuticals and medical devices—and how it could possibly lower the cost of healthcare.

    More information about RMT and the clinical trials conducted by the Naval Medical Center can be found in the newly released case study.

    Kinect for Windows team


  • Kinect for Windows Product Blog

    Using Kinect InteractionStream Outside of WPF


    Last month, with the release of version 1.7 of our SDK and toolkit, we introduced something called the InteractionStream. Included in this release were two new samples, Controls Basics and Interaction Gallery, which, among other things, show how to use the new InteractionStream along with new interactions like Press and Grip. Both of these samples are written using managed code (C#) and WPF.

    One question I’ve been hearing from developers is, “I don’t want to use WPF but I still want to use InteractionStream with managed code. How do I do this?” In this post I’m going to show how to do exactly that. I’m going to take it to the extreme by removing the UI layer completely: we’ll build a console app in C#.

    The way our application will work is summarized in the diagram below:

    [Diagram: depth and skeleton frames from the sensor feed the InteractionStream, which raises interaction frames back to the application]

    There are a few things to note here:

    1. Upon starting the program, we initialize our sensor and interactions, and we create FrameReady event handlers.
    2. Our sensor generates data for every frame. We use our FrameReady event handlers to respond to and handle depth, skeleton, and interaction frames.
    3. The program implements the IInteractionClient interface, which requires us to implement a method called GetInteractionInfoAtLocation that gives us back information about interactions happening with a particular user at a specified location:
      public InteractionInfo GetInteractionInfoAtLocation(int skeletonTrackingId, InteractionHandType handType, double x, double y)
      {
          var interactionInfo = new InteractionInfo
          {
              IsPressTarget = false,
              IsGripTarget = false
          };

          // Map coordinates from [0.0,1.0] coordinates to UI-relative coordinates
          double xUI = x * InteractionRegionWidth;
          double yUI = y * InteractionRegionHeight;

          var uiElement = this.PerformHitTest(xUI, yUI);

          if (uiElement != null)
          {
              interactionInfo.IsPressTarget = true;

              // If UI framework uses strings as button IDs, use string hash code as ID
              interactionInfo.PressTargetControlId = uiElement.Id.GetHashCode();

              // Designate center of button to be the press attraction point
              //// TODO: Create your own logic to assign press attraction points if center
              //// TODO: is not always the desired attraction point.
              interactionInfo.PressAttractionPointX = ((uiElement.Left + uiElement.Right) / 2.0) / InteractionRegionWidth;
              interactionInfo.PressAttractionPointY = ((uiElement.Top + uiElement.Bottom) / 2.0) / InteractionRegionHeight;
          }

          return interactionInfo;
      }
    4. The other noteworthy part of our program is in the InteractionFrameReady method. This is where we process information about our users, route our UI events, handle things like Grip and GripRelease, and so on. A minimal end-to-end sketch covering steps 1, 2, and 4 follows this list.
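
    To make the flow concrete, here is a minimal sketch of such a console app, assuming the Microsoft.Kinect and Microsoft.Kinect.Toolkit.Interaction assemblies from SDK 1.7. It is not the downloadable sample itself; the DummyInteractionClient and Program classes are ours. It wires up steps 1, 2, and 4 with a trivial IInteractionClient that reports no press or grip targets, so you can watch raw hand-pointer events arrive in the console.

    using System;
    using System.Linq;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Interaction;

    // Trivial IInteractionClient: reports no press or grip targets anywhere.
    // A real app would do hit testing here, as in GetInteractionInfoAtLocation above.
    public class DummyInteractionClient : IInteractionClient
    {
        public InteractionInfo GetInteractionInfoAtLocation(
            int skeletonTrackingId, InteractionHandType handType, double x, double y)
        {
            return new InteractionInfo { IsPressTarget = false, IsGripTarget = false };
        }
    }

    public static class Program
    {
        private static InteractionStream interactionStream;
        private static Skeleton[] skeletons;
        private static UserInfo[] userInfos;

        public static void Main()
        {
            KinectSensor sensor = KinectSensor.KinectSensors
                .FirstOrDefault(s => s.Status == KinectStatus.Connected);
            if (sensor == null)
            {
                Console.WriteLine("No Kinect sensor detected.");
                return;
            }

            // Step 1: initialize the sensor, the interaction stream, and handlers.
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();
            skeletons = new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];
            userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];
            interactionStream = new InteractionStream(sensor, new DummyInteractionClient());
            interactionStream.InteractionFrameReady += OnInteractionFrameReady;

            // Step 2: feed every depth and skeleton frame into the interaction stream.
            sensor.DepthFrameReady += (s, e) =>
            {
                using (DepthImageFrame frame = e.OpenDepthImageFrame())
                {
                    if (frame == null) return;
                    var pixels = new DepthImagePixel[frame.PixelDataLength];
                    frame.CopyDepthImagePixelDataTo(pixels);
                    interactionStream.ProcessDepth(pixels, frame.Timestamp);
                }
            };
            sensor.SkeletonFrameReady += (s, e) =>
            {
                using (SkeletonFrame frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;
                    frame.CopySkeletonDataTo(skeletons);
                    interactionStream.ProcessSkeleton(
                        skeletons, sensor.AccelerometerGetCurrentReading(), frame.Timestamp);
                }
            };

            sensor.Start();
            Console.WriteLine("Tracking. Press Enter to quit.");
            Console.ReadLine();
            sensor.Stop();
        }

        // Step 4: respond to interaction frames (Grip, GripRelease, and so on).
        private static void OnInteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
        {
            using (InteractionFrame frame = e.OpenInteractionFrame())
            {
                if (frame == null) return;
                frame.CopyInteractionDataTo(userInfos);
            }

            foreach (UserInfo user in userInfos.Where(u => u != null && u.SkeletonTrackingId != 0))
            {
                foreach (InteractionHandPointer hand in user.HandPointers)
                {
                    if (hand.HandEventType != InteractionHandEventType.None)
                    {
                        Console.WriteLine("User {0}: {1} hand {2}",
                            user.SkeletonTrackingId, hand.HandType, hand.HandEventType);
                    }
                }
            }
        }
    }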

     

    I’ve posted some sample code that you can download and use to get started with InteractionStream in your own managed apps. The code is loaded with tips in the comments to guide you in using our interactions. Thanks to Eddy Escardo Raffo on my team for writing the sample console app.

    Ben

    @benlower | kinectninja@microsoft.com | mobile: +1 (206) 659-NINJA (6465)
