• Kinect for Windows Product Blog

    Windows Store app development is coming to Kinect for Windows

    Today at Microsoft BUILD 2014, Microsoft made it official: the Kinect for Windows v2 sensor and SDK are coming this summer (northern hemisphere). With them, developers will be able to create Windows Store apps with Kinect for the first time. The ability to build such apps has been a frequent request from the developer community, and we are delighted that it’s now on the immediate horizon: developers can start building this summer and commercially deploy their solutions to Windows Store customers later in the season.

    The ability to create Windows Store apps with Kinect for Windows not only fulfills a dream of our developer community, it also marks an important step forward in Microsoft’s vision of providing a unified development platform across Windows devices, from phones to tablets to laptops and beyond. Moreover, access to the Windows Store opens a whole new marketplace for business and consumer experiences created with Kinect for Windows.

    The Kinect for Windows v2 has been re-engineered with major enhancements in color fidelity, video definition, field of view, depth perception, and skeletal tracking. In other words, the v2 sensor offers greater overall precision, improved responsiveness, and intuitive capabilities that will accelerate your development of voice and gesture experiences.

    Specifically, the Kinect for Windows v2 includes 1080p HD video, which allows for crisp, high-quality augmented scenarios; a wider field of view, which means that users can stand closer to the sensor—making it possible to use the sensor in smaller rooms; improved skeletal tracking, which opens up even better scenarios for health and fitness apps and educational solutions; and new active infrared detection, which provides better facial tracking and gesture detection, even in low-light situations.

    The Kinect for Windows v2 SDK brings the sensor’s new capabilities to life:

    • Windows Store app development: Being able to integrate the latest human-computing technology into Windows apps and publish those to the Windows Store will give our developers the ability to reach more customers and open up access to natural user experiences in the home.
    • Unity Support: We are committed to supporting the broader developer community with a mix of languages, frameworks, and protocols. With support for Unity this summer, more developers will be able to build and publish their apps to the Windows Store by using tools they already know.
    • Improved anatomical accuracy: With the first-generation SDK, developers could track up to two people simultaneously; now their apps can track up to six. The number of joints that can be tracked has increased from 20 to 25 per person, and joint orientation data is more accurate. The result is greatly enhanced skeletal tracking overall, making it possible for developers to deliver new and improved skeletal-tracking applications that our preview participants are calling “seamless.”
    • Simultaneous, multi-app support: Multiple Kinect-enabled applications can run simultaneously. Our community has frequently requested this feature and we’re excited to be able to give it to them with the upcoming release.

    Developers who have been part of the Kinect for Windows v2 Developer Preview program praise the new sensor’s capabilities, which take natural, human computing to the next level. We are awed and humbled by what they’ve already been able to create.

    Technologists from a few participating companies are on hand at BUILD, showing off the apps they have created by using the Kinect for Windows v2. See what two of them, Freak’n Genius and Reflexion Health, have already been able to achieve, and learn more about these companies.

    The v2 sensor and SDK dramatically enhance the world of gesture and voice control that was pioneered in the original Kinect for Windows, opening up new ways for developers to create applications that transform how businesses and consumers interact with computers. If you’re using the original Kinect for Windows to develop natural voice- and gesture-based solutions, you know how intuitive and powerful this interaction paradigm can be. And if you haven’t yet explored the possibilities of building natural applications, what are you waiting for? Join us as we continue to make technology easier to use and more intuitive for everyone.

    The Kinect for Windows Team

  • Kinect for Windows Product Blog

    Kinect for Windows SDK v1.7 is Available!

    We are stoked to announce the immediate availability of our latest SDK and Developer Toolkit (v1.7)! It includes Kinect Interactions (grip and press), Kinect Fusion, new samples for MATLAB and OpenCV, and more.

    Download the new hotness here.

    Our product blog has all the details on what we believe is our biggest release since 1.0.

    Ben

  • Kinect for Windows Product Blog

    Kinect for Windows Solution Leads to the Perfect Fitting Jeans

    The Bodymetrics Pod is small enough to be used in retail locations, capturing customers’ unique body shapes so they can virtually select and purchase garments.

    The writer Mark Twain once said, “We are alike, on the inside.” On the outside, however, few people are the same. While two people might be the same height and wear the same size, the way their clothing fits their bodies can vary dramatically. As a result, up to 40% of clothing purchased both online and in person is returned because of poor fit.

    Finding the perfect fit so clothing conforms to a person’s unique body shape is at the heart of the Bodymetrics Pod. Developed by Bodymetrics, a London-based pioneer in 3D body-mapping, the Bodymetrics Pod was introduced to American shoppers for the first time today during Women’s Denim Days at Bloomingdale’s in Century City, Los Angeles. This is the first time Kinect for Windows has been used commercially in the United States for body mapping in a retail clothing environment.

    Bloomingdale’s, a leader in retail innovation, has one of the largest offerings in premium denim from fashion-forward brands like J Brand, 7 For All Mankind, Citizens of Humanity, AG, and Paige. The Bodymetrics service allows customers to get their bodies mapped and find jeans that fit and flatter their unique shape from the hundreds of different jeans styles that Bloomingdale’s stocks.

    During Bloomingdale’s Denim Days, March 15 – 18, customers will be able to get their body mapped, and also become a Bodymetrics member. This free service enables customers to access an online account and order jeans based on their body shape.

    The Bodymetrics scanning technology maps hundreds of measurements and contours, which can be used for choosing clothing that perfectly fits a person’s body.

    “We’re very excited about bringing Bodymetrics to US shoppers,” explains Suran Goonatilake, CEO of Bodymetrics. “Once we 3D map a customer’s body, we classify their shape into three categories - emerald, sapphire and ruby. A Bodymetrics Stylist will then find jeans that exactly match the body shape of the customer from jean styles that Bloomingdale’s stocks.”

    The process starts with a customer creating a Bodymetrics account. They are then directed to the Bodymetrics Pod, a secure, private space, where their body is scanned by 8 Kinect for Windows sensors arranged in a circle. Bodymetrics’ proprietary software produces a 3D map of the customer’s body, and then calculates the shape of the person, taking hundreds of measurements and contours into account. The body-mapping process takes less than 5 seconds.

    Helping women shop for best-fitting jeans in department stores is just the start of what Bodymetrics envisions for their body-mapping technologies. The company is working on a solution that can be used at home. Individuals will be able to scan their bodies, and then go online to select, virtually try on, and purchase clothing that matches their body shape.

    Goonatilake explains, “Body-mapping is in its infancy. We’re just starting to explore what’s possible in retail stores and at home. Stores are increasingly looking to provide experiences that entice shoppers into their stores, and then allow a seamless journey from stores to online. And we all want shopping experiences that are personalized to us – our size, shape and style.”

    Even though people may not be identical on the outside, we desire clothing that fits well and complements our body shapes. The Kinect for Windows-enabled Bodymetrics Pod offers a retail-ready solution that makes the perfect fit beautifully simple.

    Kinect for Windows Team

  • Kinect for Windows Product Blog

    BUILDing business with Kinect for Windows v2

    BUILD—Microsoft’s annual developer conference—is the perfect showcase for inventive, innovative solutions created with the latest Microsoft technologies. As we mentioned in our previous blog, some of the technologists who have been part of the Kinect for Windows v2 developer preview program are here at BUILD, demonstrating their amazing apps. In this blog, we’ll take a closer look at how Kinect for Windows v2 has spawned creative leaps forward at two innovative companies: Freak’n Genius and Reflexion Health.

    Left: A student chooses a Freak’n Genius character to animate in real time for a video presentation on nutrition. Right: Vera, by Reflexion Health, tracks a patient performing physical therapy exercises at home, giving her immediate feedback on her execution while also transmitting the results to her therapist.

    Freak’n Genius is a Seattle-based company whose current YAKiT and YAKiT Kids applications, which let users create talking photos on a smartphone, have been used to generate well over a million videos.

    But with Kinect for Windows v2, Freak’n Genius is poised to flip animation on its head, taking what has been highly technical, time-consuming, and expensive and making it instant, free, and fun. It’s performance-based animation without the suits, tracking balls, and room-size setups. Freak’n Genius has developed software that will enable just about anyone to create cartoons with fully animated characters by using a Kinect for Windows v2 sensor. The user simply chooses an on-screen character—the beta features 20 characters, with dozens more in the works—and animates it by standing in front of the Kinect for Windows sensor and moving. With its precise skeletal tracking capabilities, the v2 sensor captures the “animator’s” every twitch, jump, and gesture, translating them into movements of the on-screen character.

    What’s more, with the ability to create Windows Store apps, Kinect for Windows v2 stands to bring Freak’n Genius’s improved animation applications to countless new customers. Dwayne Mercredi, the chief technology officer at Freak’n Genius, says that “Kinect for Windows v2 is awesome. From a technology perspective, it gives us everything we need so that an everyday person can create amazing animations immediately.” He praises how the v2 sensor reacts perfectly to the user’s every movement, making it seem “as if they were in the screen themselves.” He also applauds the v2 sensor’s color camera, which provides full HD at 1080p. “There’s no reason why this shouldn’t fully replace the web cam,” notes Mercredi.

    Mercredi notes that YAKiT is already being used for storytelling, marketing, education reports, enhanced communication, or just having fun. With Kinect for Windows v2, Freak’n Genius envisions that kids of all ages will have an incredibly simple and entertaining way to express their creativity and humor while professional content creators—such as advertising, design, and marketing studios—will be able to bring their content to life either in large productions or on social media channels. There is also a white-label offering, giving media companies the opportunity to use their content in a new way via YAKiT’s powerful animation engine.

    While Freak’n Genius captures the fun and commercial potential of Kinect for Windows v2, Reflexion Health shows just how powerful the new sensor can be in the healthcare field. As anyone who’s ever had a sports injury or accident knows, physical therapy (PT) can be a crucial part of recovery. Physical therapists are rigorously trained and dedicated to devising a tailored regimen of manual treatment and therapeutic exercises that will help their patients mend. But increasingly, patients’ in-person treatment time has shrunk to mere minutes, and, as any physical therapist knows, once patients leave the clinic, many of them lose momentum, often struggling to perform the exercises correctly at home—or simply skipping them altogether.

    Reflexion Health, based in San Diego, uses Kinect for Windows to augment its physical therapy program and give therapists a powerful, data-driven new tool to help ensure that patients get the maximum benefit from their PT. Its application, named Vera, uses Kinect for Windows to track patients’ exercise sessions. The initial version of this app was built on the original Kinect for Windows, but the team eagerly—and easily—adapted the software to the v2 sensor and SDK. The new sensor’s improved depth sensing and enhanced skeletal tracking, which delivers information on more joints, allow the software to capture the patient’s exercise moves in far more precise detail. It provides patients with a model for how to do the exercise correctly, and simultaneously compares the patient’s movements to the prescribed exercise. The Vera system thus offers immediate, real-time feedback—no more wondering if you’re lifting or twisting in the right way. The data on the patient’s movements are also shared with the therapist, so that he or she can track the patient’s progress and adjust the exercise regimen remotely for maximum therapeutic benefit.

    Not only does the Kinect for Windows application provide better results for patients and therapists, it also fills a need in an enormous market. PT is a $30 billion business in the United States alone—and a critical tool in helping to manage the $127 billion burden of musculoskeletal disorders. By extending the expertise and oversight of the best therapists, Reflexion Health hopes to empower and engage patients, helping to improve the speed and quality of recovery while also helping to control the enormous costs that come from extra procedures and re-injury. Moreover, having Kinect for Windows v2 apps supported in the Windows Store stands to open up home distribution for Reflexion Health.

    Mark Barrett, a lead software engineer at Reflexion Health, is struck by the rewards of working on the app. Coming from a background in the games industry, he now enjoys using Kinect technology to “try and tackle such a large and meaningful problem. That’s just a fantastic feeling.” As a developer, he finds the improved skeletal tracking to be the v2 sensor’s most significant change, calling it a real step forward from the original Kinect for Windows. “It’s so much more precise,” he says. “There are more joints, and they’re in more accurate positions.” And while the skeletal tracking has made the greatest improvement in Reflexion Health’s app—giving both patients and clinicians more accurate and actionable data on precise body movements—Barrett is also excited about the new color camera and depth sensor, which together provide a much better image for the physical therapist to review. “You see such a better representation of the patient…It was jaw-dropping the first time I saw it,” he says.

    But like any cautious dev, Barrett acknowledges being apprehensive about porting the application to the Kinect for Windows v2 sensor.  Happily, he discovered that the switch was painless, commenting that “I’ve never had a hardware conversion from one version to the next be so effortless and so easy.” He’s also been pleased to see how easy the application is for patients to use. “It’s so exciting to be working on a solution that has the potential to help so many people and make people’s lives better. To know that my skills as a developer can help make this possible is a great feeling.”

    From creating your own animations to building a better path for physical rehabilitation, the Kinect for Windows v2 sensor is already in the hands of thousands of developers. We can’t wait to make it publicly available this summer and see what the rest of you do with the technology.

    The Kinect for Windows Team

  • Kinect for Windows Product Blog

    Using Kinect Background Removal with Multiple Users

    Introduction: Background Removal in Kinect for Windows

    The 1.8 release of the Kinect for Windows Developer Toolkit includes a component for isolating a user from the background of the scene. The component is called the BackgroundRemovedColorStream. This capability has many possible uses, such as simulating chroma-key or “green-screen” replacement of the background – without needing to use an actual green screen; compositing a person’s image into a virtual environment; or simply blurring out the background, so that video conference participants can’t see how messy your office really is.

    BackgroundRemovalBasics

    To use this feature in an application, you create the BackgroundRemovedColorStream, and then feed it each incoming color, depth, and skeleton frame when they are delivered by your Kinect for Windows sensor. You also specify which user you want to isolate, using their skeleton tracking ID. The BackgroundRemovedColorStream produces a sequence of color frames, in BGRA (blue/green/red/alpha) format. These frames are identical in content to the original color frames from the sensor, except that the alpha channel is used to distinguish foreground pixels from background pixels. Pixels that the background removal algorithm considers part of the background will have an alpha value of 0 (fully transparent), while foreground pixels will have their alpha at 255 (fully opaque). The foreground region is given a smoother edge by using intermediate alpha values (between 0 and 255) for a “feathering” effect. This image format makes it easy to combine the background-removed frames with other images in your application.
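    The alpha semantics above are all an application needs in order to composite a background-removed frame over a backdrop of its own. As a quick illustration (a minimal Python sketch of the per-pixel arithmetic only, independent of the Kinect SDK), here is the standard “over” blend applied to one BGRA foreground pixel:

```python
def composite_pixel(foreground, background):
    """Blend one BGRA foreground pixel over a BGR backdrop pixel.

    `foreground` is (b, g, r, a) as the stream produces it: a == 0 for
    background pixels, a == 255 for foreground pixels, and intermediate
    values along the feathered edge. `background` is (b, g, r).
    """
    b, g, r, a = foreground
    bb, bg, br = background
    # Standard "over" blend: out = fg * alpha + bg * (1 - alpha)
    blend = lambda f, k: (f * a + k * (255 - a)) // 255
    return (blend(b, bb), blend(g, bg), blend(r, br))
```

    A fully transparent pixel (a == 0) leaves the backdrop untouched, a fully opaque one replaces it, and the feathered edge mixes the two.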

    As a developer, you get the choice of which user you want in the foreground. The BackgroundRemovalBasics-WPF sample has some simple logic that selects the user nearest the sensor, and then continues to track the same user until they are no longer visible in the scene.

    private void ChooseSkeleton()
    {
        var isTrackedSkeletonVisible = false;
        var nearestDistance = float.MaxValue;
        var nearestSkeleton = 0;

        foreach (var skel in this.skeletons)
        {
            if (null == skel)
            {
                continue;
            }

            if (skel.TrackingState != SkeletonTrackingState.Tracked)
            {
                continue;
            }

            if (skel.TrackingId == this.currentlyTrackedSkeletonId)
            {
                isTrackedSkeletonVisible = true;
                break;
            }

            if (skel.Position.Z < nearestDistance)
            {
                nearestDistance = skel.Position.Z;
                nearestSkeleton = skel.TrackingId;
            }
        }

        if (!isTrackedSkeletonVisible && nearestSkeleton != 0)
        {
            this.backgroundRemovedColorStream.SetTrackedPlayer(nearestSkeleton);
            this.currentlyTrackedSkeletonId = nearestSkeleton;
        }
    }

    Wait, only one person?

    If you wanted to select more than one person from the scene to appear in the foreground, it would seem that you’re out of luck, because the BackgroundRemovedColorStream’s SetTrackedPlayer method accepts only one tracking ID. But you can work around this limitation by running two separate instances of the stream, and sending each one a different tracking ID. Each of these streams will produce a separate color image, containing one of the users. These images can then be combined into a single image, or used separately, depending on your application’s needs.
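    Because each stream instance delivers its own BGRA frame, combining them is just a per-pixel merge. As a rough sketch (Python, with frames modeled as flat lists of (b, g, r, a) tuples rather than the raw byte arrays the real sample uses), one simple policy is to keep whichever frame is more opaque at each pixel:

```python
def merge_user_frames(frame_a, frame_b):
    """Merge two single-user BGRA frames into one composite frame.

    Each stream isolates a different person, so at most one frame is
    opaque at any given pixel except where the two users overlap;
    keeping the more opaque sample handles both cases reasonably.
    """
    return [pa if pa[3] >= pb[3] else pb for pa, pb in zip(frame_a, frame_b)]
```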

    Wait, only two people?

    In the most straightforward implementation of the multiple stream approach, you’d be limited to tracking just two people, due to an inherent limitation in the skeleton tracking capability of Kinect for Windows. Only two skeletons at a time can be tracked with full joint-level fidelity. The joint positions are required by the background removal implementation in order to perform its job accurately.

    However, there is an additional trick we can apply, to escape the two-skeleton limit. This trick relies on an assumption that the people in the scene will not be moving at extremely high velocities (generally a safe bet). If a particular skeleton is not fully tracked for a frame or two, we can instead reuse the most recent frame in which that skeleton actually was fully tracked. Since the skeleton tracking API lets us choose which two skeletons to track at full fidelity, we can choose a different pair of skeletons each frame, cycling through up to six skeletons we wish to track, over three successive frames.
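    The cycling itself can be sketched in a few lines (a Python sketch with hypothetical user IDs; the real application would feed the chosen pair to the skeleton tracking API each frame):

```python
def choose_tracked_pair(user_ids, frame_number):
    """Pick which skeletons to track at full fidelity this frame.

    Kinect for Windows fully tracks at most two skeletons at a time,
    so with up to six users we rotate through them in pairs, covering
    everyone once every three frames.
    """
    if len(user_ids) <= 2:
        return list(user_ids)
    pairs = [user_ids[i:i + 2] for i in range(0, len(user_ids), 2)]
    return pairs[frame_number % len(pairs)]
```

    Over frames 0, 1, and 2 with six users, this yields the first, second, and third pair in turn, then repeats.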

    Each additional instance of BackgroundRemovedColorStream will place increased demands on CPU and memory. Depending on your application’s needs and your hardware configuration, you may need to dial back the number of simultaneous users you process in this way.

    Wait, only six people?

    Demanding, aren’t we? Sorry, the Kinect for Windows skeleton stream can monitor at most six people simultaneously (two at full fidelity, and four at lower fidelity). This is a hard limit.

    Introducing a multi-user background removal sample

    We’ve created a new sample application, called BackgroundRemovalMultiUser-WPF, to demonstrate how to use the technique described above to perform background removal on up to six people. We started with the code from the BackgroundRemovalBasics-WPF sample, and changed it to support multiple streams, one per user. The output from each stream is then overlaid on the backdrop image.

    BackgroundRemovalMultiUser

    Factoring the code: TrackableUser

    The largest change to the original sample was refactoring the application code that interacts with the BackgroundRemovedColorStream, so that we can have multiple copies of it running simultaneously. This code, in the new sample, resides in a new class named TrackableUser. Let’s take a brief tour of the interesting parts of this class.

    The application can instruct TrackableUser to track a specific user by setting the TrackingId property appropriately.

    public int TrackingId
    {
        get
        {
            return this.trackingId;
        }
     
        set
        {
            if (value != this.trackingId)
            {
                if (null != this.backgroundRemovedColorStream)
                {
                    if (InvalidTrackingId != value)
                    {
                        this.backgroundRemovedColorStream.SetTrackedPlayer(value);
                        this.Timestamp = DateTime.UtcNow;
                    }      
                    else
                    {
                        // Hide the last frame that was received for this user.
                        this.imageControl.Visibility = Visibility.Hidden;      
                        this.Timestamp = DateTime.MinValue;
                    }      
                }
     
                this.trackingId = value;
            }
        }
    }

    The Timestamp property indicates when the TrackingId was most recently set to a valid value. We’ll see later how this property is used by the sample application’s user-selection logic.

    public DateTime Timestamp { get; private set; }

    Whenever the application is notified that the default Kinect sensor has changed (at startup time, or when the hardware is plugged in or unplugged), it passes this information along to each TrackableUser by calling OnKinectSensorChanged. The TrackableUser, in turn, sets up or tears down its BackgroundRemovedColorStream accordingly.

    public void OnKinectSensorChanged(KinectSensor oldSensor, KinectSensor newSensor)
    {
        if (null != oldSensor)
        {
            // Remove sensor frame event handler.
            oldSensor.AllFramesReady -= this.SensorAllFramesReady;
     
            // Tear down the BackgroundRemovedColorStream for this user.
            this.backgroundRemovedColorStream.BackgroundRemovedFrameReady -=
                 this.BackgroundRemovedFrameReadyHandler;
            this.backgroundRemovedColorStream.Dispose();
            this.backgroundRemovedColorStream = null;
            this.TrackingId = InvalidTrackingId;
        }
     
        this.sensor = newSensor;
     
        if (null != newSensor)
        {
            // Setup a new BackgroundRemovedColorStream for this user.
            this.backgroundRemovedColorStream = new BackgroundRemovedColorStream(newSensor);
            this.backgroundRemovedColorStream.BackgroundRemovedFrameReady +=
                this.BackgroundRemovedFrameReadyHandler;
            this.backgroundRemovedColorStream.Enable(
                newSensor.ColorStream.Format,
                newSensor.DepthStream.Format);
     
            // Add an event handler to be called when there is new frame data from the sensor.
            newSensor.AllFramesReady += this.SensorAllFramesReady;
        }
    }

    Each time the Kinect sensor produces a matched set of depth, color, and skeleton frames, we forward each frame’s data along to the BackgroundRemovedColorStream.

    private void SensorAllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        ...
            if (this.IsTracked)
            {
                using (var depthFrame = e.OpenDepthImageFrame())
                {      
                    if (null != depthFrame)
                    {
                         // Process depth data for background removal.
                         this.backgroundRemovedColorStream.ProcessDepth(
                             depthFrame.GetRawPixelData(),
                             depthFrame.Timestamp);
                    }
                }
     
                using (var colorFrame = e.OpenColorImageFrame())      
                {      
                    if (null != colorFrame)      
                    {
                        // Process color data for background removal.
                        this.backgroundRemovedColorStream.ProcessColor(
                            colorFrame.GetRawPixelData(),
                            colorFrame.Timestamp);      
                    }
                }
     
                using (var skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (null != skeletonFrame)
                    {
                        // Save skeleton frame data for subsequent processing.
                        CopyDataFromSkeletonFrame(skeletonFrame);
     
                        // Locate the most recent data in which this user was fully tracked.
                        bool isUserPresent = UpdateTrackedSkeletonsArray();
     
                        // If we have an array in which this user is fully tracked,
                        // process the skeleton data for background removal.
                        if (isUserPresent && null != this.skeletonsTracked)
                        {
                            this.backgroundRemovedColorStream.ProcessSkeleton(
                                this.skeletonsTracked,
                                skeletonFrame.Timestamp);
                         }
                    }
                }
            }
        ...
    }

    The UpdateTrackedSkeletonsArray method implements the logic to reuse skeleton data from an older frame when the newest frame contains the user’s skeleton, but not in a fully-tracked state. It also informs the caller whether the user with the requested tracking ID is still present in the scene.

    private bool UpdateTrackedSkeletonsArray()
    {
        // Determine if this user is still present in the scene.
        bool isUserPresent = false;
        foreach (var skeleton in this.skeletonsNew)
        {
            if (skeleton.TrackingId == this.TrackingId)
            {
                isUserPresent = true;
                if (skeleton.TrackingState == SkeletonTrackingState.Tracked)
                {
                    // User is fully tracked: save the new array of skeletons,
                    // and recycle the old saved array for reuse next time.
                    var temp = this.skeletonsTracked;
                    this.skeletonsTracked = this.skeletonsNew;
                    this.skeletonsNew = temp;
                }

                break;
            }
        }

        if (!isUserPresent)
        {
            // User has disappeared; stop trying to track.
            this.TrackingId = TrackableUser.InvalidTrackingId;
        }

        return isUserPresent;
    }

    Whenever the BackgroundRemovedColorStream produces a frame, we copy its BGRA data to the bitmap that is the underlying Source for an Image element in the MainWindow. This causes the updated frame to appear within the application’s window, overlaid on the background image.

    private void BackgroundRemovedFrameReadyHandler(
        object sender,
        BackgroundRemovedColorFrameReadyEventArgs e)
    {
        using (var backgroundRemovedFrame = e.OpenBackgroundRemovedColorFrame())
        {
            if (null != backgroundRemovedFrame && this.IsTracked)
            {
                int width = backgroundRemovedFrame.Width;
                int height = backgroundRemovedFrame.Height;

                WriteableBitmap foregroundBitmap =
                    this.imageControl.Source as WriteableBitmap;

                // If necessary, allocate a new bitmap. Set it as the source of
                // the Image control.
                if (null == foregroundBitmap ||
                    foregroundBitmap.PixelWidth != width ||
                    foregroundBitmap.PixelHeight != height)
                {
                    foregroundBitmap = new WriteableBitmap(
                        width,
                        height,
                        96.0,
                        96.0,
                        PixelFormats.Bgra32,
                        null);

                    this.imageControl.Source = foregroundBitmap;
                }

                // Write the pixel data into our bitmap.
                foregroundBitmap.WritePixels(
                    new Int32Rect(0, 0, width, height),
                    backgroundRemovedFrame.GetRawPixelData(),
                    width * sizeof(uint),
                    0);

                // A frame has been delivered; ensure that it is visible.
                this.imageControl.Visibility = Visibility.Visible;
            }
        }
    }

    Limiting the number of users to track

    As mentioned earlier, the maximum number of trackable users may have a practical limit, depending on your hardware. To specify the limit, we define a constant in the MainWindow class:

    private const int MaxUsers = 6;

    You can modify this constant to have any value from 2 to 6. (Values larger than 6 are not useful, as Kinect for Windows does not track more than 6 users.)

    Selecting users to track: The User View

    We want to provide a convenient way to choose which users will be tracked for background removal. To do this, we present a view of the detected users in a small inset. By clicking on the users displayed in this inset, we can select which of those users are associated with our TrackableUser objects, causing them to be included in the foreground.

    UserView

    We update the user view each time a depth frame is received by the sample’s main window.

    private void UpdateUserView(DepthImageFrame depthFrame)
    {
        ...
        // Store the depth data.
        depthFrame.CopyDepthImagePixelDataTo(this.depthData);      
        ...
        // Write the per-user colors into the user view bitmap, one pixel at a time.
        this.userViewBitmap.Lock();
       
        unsafe
        {
            uint* userViewBits = (uint*)this.userViewBitmap.BackBuffer;
            fixed (uint* userColors = &this.userColors[0])
            {      
                // Walk through each pixel in the depth data.
                fixed (DepthImagePixel* depthData = &this.depthData[0])      
                {
                    DepthImagePixel* depthPixel = depthData;
                    DepthImagePixel* depthPixelEnd = depthPixel + this.depthData.Length;
                    while (depthPixel < depthPixelEnd)
                    {
                        // Lookup a pixel color based on the player index.
                        // Store the color in the user view bitmap's buffer.
                        *(userViewBits++) = *(userColors + (depthPixel++)->PlayerIndex);
                    }
                }
            }
        }
     
        this.userViewBitmap.AddDirtyRect(new Int32Rect(0, 0, width, height));
        this.userViewBitmap.Unlock();
    }

    This code fills the user view bitmap with solid-colored regions representing each of the detected users, as distinguished by the value of the PlayerIndex field at each pixel in the depth frame.

    The main window responds to a mouse click within the user view by locating the corresponding pixel in the most recent depth frame and using its PlayerIndex to look up the user's TrackingId in the most recent skeleton data. The TrackingId is then passed to the ToggleUserTracking method, which toggles tracking of that user between the tracked and untracked states.

    private void UserViewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
    {
        // Determine which pixel in the depth image was clicked.
        Point p = e.GetPosition(this.UserView);
        int depthX =
            (int)(p.X * this.userViewBitmap.PixelWidth / this.UserView.ActualWidth);
        int depthY =
            (int)(p.Y * this.userViewBitmap.PixelHeight / this.UserView.ActualHeight);
        int pixelIndex = (depthY * this.userViewBitmap.PixelWidth) + depthX;
        if (pixelIndex >= 0 && pixelIndex < this.depthData.Length)
        {
            // Find the player index in the depth image. If non-zero, toggle background
            // removal for the corresponding user.
            short playerIndex = this.depthData[pixelIndex].PlayerIndex;
            if (playerIndex > 0)
            {      
                // playerIndex is 1-based, skeletons array is 0-based, so subtract 1.
                this.ToggleUserTracking(this.skeletons[playerIndex - 1].TrackingId);
            }
        }
    }
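    The coordinate mapping in this handler is plain proportional scaling from the on-screen Image control into the depth image, followed by row-major flattening. A minimal Python sketch of the same arithmetic (function and parameter names are illustrative, not from the sample):

    ```python
    def click_to_pixel_index(click_x, click_y, view_w, view_h, depth_w, depth_h):
        """Scale a click in view (control) coordinates to an index into the
        flattened depth-pixel array, mirroring UserViewMouseLeftButtonDown."""
        depth_x = int(click_x * depth_w / view_w)
        depth_y = int(click_y * depth_h / view_h)
        return depth_y * depth_w + depth_x

    # A click at the center of a 320x240 user view over 640x480 depth data
    # lands at the center of the depth image:
    index = click_to_pixel_index(160, 120, 320, 240, 640, 480)
    # index == 240 * 640 + 320 == 153920
    ```

    As in the C# handler, the resulting index still needs a bounds check before it is used to read the depth data.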

    Picking which users will be tracked

    When MaxUsers is less than 6, we need some logic to handle a click on an untracked user when we are already tracking the maximum number of users. We choose to stop tracking the user who was tracked earliest (based on timestamp) and immediately start tracking the newly chosen user. This logic is implemented in ToggleUserTracking.

    private void ToggleUserTracking(int trackingId)
    {
        if (TrackableUser.InvalidTrackingId != trackingId)
        {
            DateTime minTimestamp = DateTime.MaxValue;
            TrackableUser trackedUser = null;
            TrackableUser staleUser = null;
     
            // Attempt to find a TrackableUser with a matching TrackingId.
            foreach (var user in this.trackableUsers)
            {
                if (user.TrackingId == trackingId)
                {
                    // Yes, this TrackableUser has a matching TrackingId.
                    trackedUser = user;
                }
     
                // Find the "stale" user (the trackable user with the earliest timestamp).
                if (user.Timestamp < minTimestamp)
                {      
                    staleUser = user;
                    minTimestamp = user.Timestamp;
                }
            }
     
            if (null != trackedUser)
            {
                // User is being tracked: toggle to not tracked.
                trackedUser.TrackingId = TrackableUser.InvalidTrackingId;
            }
            else
            {      
                // User is not currently being tracked: start tracking, by reusing
                // the "stale" trackable user.
                staleUser.TrackingId = trackingId;
            }
        }
    }
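    The same toggle-or-replace-stalest policy can be modeled compactly outside of WPF. Here is a Python sketch under assumed, illustrative names (it is not the sample's API, and the timestamp refresh that happens as skeleton frames arrive is elided):

    ```python
    INVALID_TRACKING_ID = 0  # stand-in for TrackableUser.InvalidTrackingId

    def toggle_user_tracking(users, tracking_id):
        """users: list of dicts with 'tracking_id' and 'timestamp' keys.

        If tracking_id is already tracked, stop tracking it; otherwise start
        tracking it by reusing the slot with the earliest timestamp."""
        if tracking_id == INVALID_TRACKING_ID:
            return
        tracked = next((u for u in users if u["tracking_id"] == tracking_id), None)
        if tracked is not None:
            tracked["tracking_id"] = INVALID_TRACKING_ID      # toggle off
        else:
            stale = min(users, key=lambda u: u["timestamp"])  # earliest timestamp
            stale["tracking_id"] = tracking_id                # reuse stalest slot

    # Two slots, both full; tracking a third user evicts the stalest (id 11):
    users = [{"tracking_id": 11, "timestamp": 1},
             {"tracking_id": 22, "timestamp": 5}]
    toggle_user_tracking(users, 33)
    # users[0]["tracking_id"] == 33
    ```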

    Once we’ve determined which users will be tracked by the TrackableUser objects, we need to ensure that those users are being targeted for tracking by the skeleton stream on a regular basis (at least once every three frames). UpdateChosenSkeletons implements this using a round-robin scheme.

    private void UpdateChosenSkeletons()
    {
        KinectSensor sensor = this.sensorChooser.Kinect;
        if (null != sensor)
        {
            // Choose which of the users will be tracked in the next frame.
            int trackedUserCount = 0;
            for (int i = 0; i < MaxUsers && trackedUserCount < this.trackingIds.Length; ++i)
            {
                // Get the trackable user for consideration.
                var trackableUser = this.trackableUsers[this.nextUserIndex];
                if (trackableUser.IsTracked)
                {
                    // If this user is currently being tracked, copy its TrackingId to the
                    // array of chosen users.
                    this.trackingIds[trackedUserCount++] = trackableUser.TrackingId;
                }
     
                // Update the index for the next user to be considered.
                this.nextUserIndex = (this.nextUserIndex + 1) % MaxUsers;
            }      
     
            // Fill any unused slots with InvalidTrackingId.
            for (int i = trackedUserCount; i < this.trackingIds.Length; ++i)
            {
                this.trackingIds[i] = TrackableUser.InvalidTrackingId;
            }
     
            // Pass the chosen tracking IDs to the skeleton stream.
            sensor.SkeletonStream.ChooseSkeletons(this.trackingIds[0], this.trackingIds[1]);
        }
    }
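    The "at least once every three frames" guarantee follows from the arithmetic: ChooseSkeletons accepts two tracking IDs per call, and with up to six trackable users the round-robin index sweeps every slot within three frames. A small Python model of the chooser (names are illustrative, not from the sample) makes this easy to verify:

    ```python
    def round_robin_chooser(tracked, max_users=6, slots=2):
        """Yield, frame after frame, the user slots chosen for skeleton tracking.

        tracked: set of slot indices (0..max_users-1) currently in use.
        Mirrors UpdateChosenSkeletons: advance a persistent index, collecting
        up to `slots` tracked users per frame."""
        next_index = 0
        while True:
            chosen = []
            for _ in range(max_users):
                if len(chosen) == slots:
                    break
                if next_index in tracked:
                    chosen.append(next_index)
                next_index = (next_index + 1) % max_users
            yield chosen

    # With all six slots tracked, every user is re-chosen once every 3 frames:
    gen = round_robin_chooser({0, 1, 2, 3, 4, 5})
    frames = [next(gen) for _ in range(3)]
    # frames == [[0, 1], [2, 3], [4, 5]]
    ```

    With fewer tracked users, the sweep simply re-chooses the same users more often, which is harmless.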

    Combining multiple foreground images

    Now that we can have multiple instances of TrackableUser, each producing a background-removed image of a user, we need to combine those images on-screen. We do this by creating multiple overlapping Image elements (one per trackable user), each parented by the MaskedColorImages element, which itself is a sibling of the Backdrop element. Wherever the background has been removed from each image, the backdrop image will show through.

    As each image is created, we associate it with its own TrackableUser.

    public MainWindow()
    {
        ...
        // Create one Image control per trackable user.
        for (int i = 0; i < MaxUsers; ++i)
        {
            Image image = new Image();
            this.MaskedColorImages.Children.Add(image);
            this.trackableUsers[i] = new TrackableUser(image);
        }
    }

    To capture and save a snapshot of the current composited image, we create two VisualBrush objects, one for the Backdrop and one for MaskedColorImages. We draw a rectangle with each of these brushes into a bitmap, and then write the bitmap to a file.

    private void ButtonScreenshotClick(object sender, RoutedEventArgs e)
    {
        ...
        var dv = new DrawingVisual();
        using (var dc = dv.RenderOpen())
        {
            // Render the backdrop.
            var backdropBrush = new VisualBrush(Backdrop);      
            dc.DrawRectangle(
                backdropBrush,      
                null,
                new Rect(new Point(), new Size(colorWidth, colorHeight)));
     
            // Render the foreground.
            var colorBrush = new VisualBrush(MaskedColorImages);      
            dc.DrawRectangle(
                colorBrush,
                null,
                new Rect(new Point(), new Size(colorWidth, colorHeight)));
        }
     
        renderBitmap.Render(dv);
        ...
    }

    Summary

    While the BackgroundRemovedColorStream is limited to tracking only one user at a time, the new BackgroundRemovalMultiUser-WPF sample demonstrates that you can run multiple stream instances to track up to six users simultaneously. When using this technique, you should consider—and measure—the increased CPU and memory demands that the additional background removal streams will impose, and determine for yourself how many streams your configuration can handle.

    We hope that this sample opens up new possibilities for using background removal in your own applications.

    John Elsbree
    Principal Software Development Engineer
    Kinect for Windows

  • Kinect for Windows Product Blog

    Developing with Kinect for Windows v2 on a Mac

    • 5 Comments

    With the launch of the Kinect for Windows v2 public preview, we want to ensure that developers have access to the SDK so that they can start writing Kinect-based applications. As you may be aware, the Kinect for Windows SDK 2.0 public preview runs only on 64-bit Windows 8 and Windows 8.1 systems. If you have a Windows 8 PC that meets the minimum requirements, you're ready to go.

    For our Macintosh developers, this may be bittersweet news, but we’re here to help. There are two options available for developers who have an Intel-based Mac: (1) install Windows to the Mac’s hard drive, or (2) install Windows to an external USB 3.0 drive. Many Mac users are aware of the first option, but the second is less well known.

    First, you need to ensure that your hardware meets the minimum requirements for Kinect for Windows v2.

    Due to the requirements for full USB 3.0 bandwidth and GPU Shader Model 5 (DirectX 11), virtualization products such as VMware Fusion, Parallels Desktop, and Oracle VirtualBox are not supported. If you're not sure what hardware you have, you can find out on these Apple websites:


    Installing Windows on the internal hard drive of your Intel-based Macintosh

    We’re going to focus on getting Windows 8.1 installed, since this is typically the stumbling block. (If you need help installing Visual Studio or other applications on Windows, you can find resources online.)

    Apple has provided a great option called Boot Camp. This tool will download the drivers for Windows, set up bootable media for installation, and guide you through the partitioning process. Please refer to Apple’s website on using this option:


    Alternative to installing Windows on your primary drive

    Boot Camp requires Windows to be installed on your internal hard drive. This might be impractical or impossible for a variety of reasons, including lack of available free space, technical failures during setup, or personal preferences.

    An alternative is to install Windows to an external drive using Windows To Go, a feature of Windows 8 and 8.1 Enterprise. (Learn more about this feature in Windows 8.1 Enterprise.)

    The "Hardware considerations for Windows To Go" section of the Windows To Go: Feature Overview page includes a list of recommended USB 3.0 drives. These drives have additional security features that you may want to review with your systems administrators, to ensure you are in compliance with your company's security policies.


    Getting started with Windows To Go

    You will need the following to proceed:
    • Existing PC with USB 3.0 that has Windows 8/8.1 Enterprise installed (the “technician computer”)
    • USB 3.0 flash or external hard drive
    • Windows 8/8.1 Enterprise installation media (CD or ISO)
    • Windows 8/8.1 product key

    You will need to log in as an administrator. To start the Windows To Go tool, press Win-Q to open the search and enter Windows To Go:

    press Win-Q to start the search, and enter "Windows To Go"

    Launch the Windows To Go application from the list. The main application window shows a list of the attached drives that you can use with the tool. As shown below, you may be alerted that a USB 3.0 drive is not Windows To Go certified. You can still use such a drive, but understand that it might not work or could have an impact on performance; if you are using a non-certified USB 3.0 drive, you will have to do your own testing to ensure it meets your needs. (Note: while not officially supported by Microsoft, we have used the Western Digital My Passport Ultra 500 GB and 1 TB drives at some of our developer hackathons to get people using Macs up and running with our dev tools on Windows.)

    "Choose the drive you want to use" window

    Select the drive you wish to use and click Next. If you have not already done so, insert the Windows 8.1 Enterprise CD at this time. If you have the .ISO, you can double-click the icon or right-click and select Mount to use it as a virtual drive.


    If you do not see an image in the list, click the Add search location button and browse your system to find the DVD drive or mounted CD partition:

    Browse your system to find the DVD drive or mounted CD partition.

    It should now appear in the list, and you can select it and click Next.

    Select your Windows 8.1 image and click "Next."

    If you need or wish to use BitLocker, you can enable it now; we will skip this step.

    "Set a BitLocker password (optional)" screen 

    The confirmation screen will summarize the selections you have made. This is your last chance to ensure that you are using the correct drive. Please avail yourself of this opportunity, as the Windows To Go installation process will reformat the drive and you will not be able to recover any data that is currently on the drive. Once you have confirmed that you are using the correct drive, click Create to continue.

    "Ready to create your Windows To Go workspace" window

    Once the creation step is complete, you are ready to reboot the system. But first, you’ll need to download the drivers necessary for running Windows on Macintosh hardware from the Apple support page, as, by default, Windows setup does not include these drivers.

    I recommend that you create an Extras folder on your drive and copy the files you’ll need. As shown below, I downloaded and extracted the Boot Camp drivers in this folder, since this will be the first thing I’ll need after logging in for the first time.

    Extracting the Boot Camp drivers from the Extras folder I created.

    Disconnect the hard drive from the Windows computer and connect it to your Mac. Be sure to use a USB 3.0 port if your Mac has both USB 2.0 and USB 3.0 ports. Once the drive is connected, boot or restart your system while holding down the Option key. (Learn more about these startup key shortcuts for Intel-based Macs.)

    Connect the hard drive to your Mac and restart your system while holding down the option key.

    During the initial setup, you will be asked to enter your product key, enter some default settings, and create an account. If your system has to reboot at any time, repeat the previous step to ensure that you boot back into the Windows To Go workspace on the USB drive. Once you have successfully logged in for the first time, install the Boot Camp drivers and any other applications you wish to use. Then you'll have a fully operational Windows environment you can use for your Kinect for Windows development.

    Carmine Sirignano
    Developer Support Escalation Engineer
    Kinect for Windows


  • Kinect for Windows Product Blog

    Kinect for Windows at Convergence of Style and Technology for New York Fashion Week

    • 5 Comments

    Kinect for Windows powers a new technology that virtually models the hottest styles at Bloomingdale's during Fashion Week.

    This year, Kinect for Windows gives Fashion Week in New York a high-tech boost by offering a new way to model the latest styles at retail. Swivel, a virtual dressing room featured at Bloomingdale's, helps you quickly see what clothes look like on you—without the drudgery of trying on multiple garments in the changing room.

    Twenty Bloomingdale's stores across the United States are featuring Swivel this week—including outlets in Atlanta, Chicago, Miami, Los Angeles, and San Francisco. This Kinect for Windows application was developed by FaceCake Marketing Technologies, Inc.

    Also featured at Bloomingdale's during Fashion Week is a virtual version of a Microsoft Research project called The Printing Dress. This remarkable melding of fashion and technology is on display at Bloomingdale's 59th Street location in New York. The Printing Dress enables the wearer of the virtual dress to display messages via a projector inside the dress by typing on keys that are inlaid on the bodice. Normally, you wouldn't be able to try on such a fragile runway garment, but the Kinect-enabled technology makes it possible to see how haute couture looks on you.

    Bloomingdale's has made early and ongoing investments in deploying Kinect for Windows gesture-based experiences at retail stores: they featured another Kinect for Windows solution last March at their Century City store in Los Angeles, just six weeks after the launch of the technology. That solution by Bodymetrics uses shoppers’ body measurements to help them find the best fitting jeans. The Bodymetrics body mapping technology is currently being used at the Bloomingdale’s store in Palo Alto, California.

    "Merging fashion with technology is not just a current trend, but the wave of the future," said Bloomingdale's Senior Vice President of Marketing Frank Berman. "We recognize the melding of the two here at Bloomingdale's, and value our partnership with companies like Microsoft to bring exciting animation to our stores and website to enhance the experience for our shoppers."

    Here's how Swivel works: the Kinect for Windows sensor detects your body and displays an image of you on the screen. Kinect provides both the customer's skeleton frame and 3-D depth data to the Swivel sizing and product display applications. Wave your hand to select a new outfit, and it is nearly instantly fitted to your form. Next, you can turn around and view the clothing from different angles. Finally, you can snap a picture of you dressed in your favorite ensemble and—by using a secure tablet—share it with friends over social networks.

    The Printing Dress, a remarkable melding of fashion and technology, on display at Bloomingdale's in New York.

    Since Bloomingdale's piloted the Swivel application last May, FaceCake has enhanced detection and identification so that the camera tracks the shopper (instead of forcing the shopper to move for the camera), and has improved detection of different-sized people so the application can more accurately display how a garment would look fitted to the customer.

    Swivel and Bodymetrics are only two examples of Kinect for Windows unleashing new experiences in fashion and retail. Others include:

    • One of the participants in the recent Microsoft Accelerator for Kinect program, Styku, LLC, has also developed virtual fitting room software and body scanner technology powered by Kinect for Windows. 
    • Mattel brought to life Barbie: The Dream Closet that makes it possible for anyone to try on clothes from 50 years of Barbie's wardrobe. 
    • Kimetric, another Kinect Accelerator participant, uses Kinect for Windows sensors strategically placed throughout a store to gather useful data, helping a retailer better understand consumer behavior.

    With this recent wave of retail experiences powered by Kinect for Windows, we are starting to get a glimpse into the ways technology innovators and retailers will reimagine and transform the way we shop with new Kinect-enabled tools.

    Kinect for Windows Team


  • Kinect for Windows Product Blog

    Kinect for Windows Academic Pricing Now Available in the US

    • 5 Comments

    Students, teachers, researchers, and other educators have been quick to embrace Kinect’s natural user interface (NUI), which makes it possible to interact with computers using movement, speech, and gestures. In fact, some of the earliest Kinect for Windows applications to emerge were projects done by students, including several at last year’s Imagine Cup.

    One project, from an Imagine Cup team in Italy, created an application for people with severe disabilities that enables them to communicate, learn, and play games on computers using a Kinect sensor instead of a traditional mouse or keyboard. Another innovative Imagine Cup project, done by university students in Russia, used the Kinect natural user interface to fold, rotate, and examine online origami models.

    To encourage students, educators, and academic researchers to continue innovating with Kinect for Windows, special academic pricing on Kinect for Windows sensors is now available in the United States. The academic price is $149.99 through Microsoft Stores.

    If you are an educator or faculty member at an accredited school—such as a university, community college, vocational school, or K-12 institution—you can purchase a Kinect for Windows sensor at this price.

    Find out if you qualify, and then purchase online or visit a Microsoft store in your area.

    Kinect for Windows team

  • Kinect for Windows Product Blog

    The Power of Enthusiasm

    • 4 Comments

    OpenKinect founder Josh Blake at Microsoft's Kinect for Windows Code Camp

    When we launched Kinect for Xbox 360 on November 4, 2010, something amazing happened: talented open-source hackers and enthusiasts around the world took the Kinect and let their imaginations run wild. We didn't know what we didn't know about Kinect on Windows when we shipped Kinect for Xbox 360, and these early visionaries showed the world what was possible. What we saw was so compelling that we created the Kinect for Windows commercial program.

    Our commercial program is designed to allow our partners— companies like Toyota, Mattel, American Express, Telefonica, and United Health Group—to deploy solutions to their customers and employees.  It is also designed to allow early adopters and newcomers alike to take their ideas and release them to the world on Windows, with hardware that’s supported by Microsoft.   At the same time, we wanted to let our early adopters keep working on the hardware they’d previously purchased. That is why our SDK continues to support the Kinect for Xbox 360 as a development device.

    Kinect developer Halimat Alabi at Microsoft's 24-hour coding marathon, June 2011

    As I reflect on the past eleven months since Microsoft announced we were bringing Kinect to Windows, one thing is clear: the efforts of these talented open-source hackers and enthusiasts helped inspire us to develop Kinect for Windows faster. And their continued ambition and drive will help the world realize the benefits of Kinect for Windows faster still. From all of us on the Kinect for Windows team: thank you.

     Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Making Learning More Interactive and Fun for Young Children

    • 4 Comments

    Although no two people learn in exactly the same way, the process of learning typically involves seeing, listening/speaking, and touching. For most young children, all three senses are engaged in the process of grasping a new concept.

    For example, when a red wooden block is given to a toddler, they hear the words “red” and “block,” see the color red, and also use their hands to touch and feel the shape of the wooden block.

    Uzma Khan, a graduate student in the Department of Computer Science at the University of Toronto, realized the Kinect natural user interface (NUI) could provide similar experiences. She used the Kinect for Windows SDK to create a prototype of an application that utilizes speech and gestures to simplify complex learning, and make early childhood education more fun and interactive.

    The application asks young children to perform an activity, such as identify the animals that live on a farm.  Using their hands to point to the animals on a computer screen, along with voice commands, the children complete the activities. To reinforce their choices, the application praises them when they make a correct selection.

    Using the speech and gesture recognition capabilities of Kinect enables children to not only learn by seeing, listening, and speaking; it lets them actively participate by selecting, copying, moving, and manipulating colors, shapes, objects, patterns, letters, numbers, and much more.

    The creation of applications to aid learning for people of all ages is one of the many ways we anticipate Kinect for Windows will be used to enable a future in which computers work more naturally and intelligently to improve our lives.

    Sheridan Jones
    Business and Strategy Director, Kinect for Windows
