• Kinect for Windows Product Blog

    Kinect for Windows Fall Roadmap


    The Kinect for Windows team has been hard at work this summer, and I have some very exciting developments to share with you regarding our roadmap between now and the end of the year.

    On October 8, Kinect for Windows is coming to China. China is a leader in business and technology innovation. We are very excited to make Kinect for Windows available in China so that developers and businesses there can innovate with Kinect for Windows and transform experiences through touch-free solutions.

    Kinect for Windows hardware will be available in seven additional markets later this fall: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico.

    Map: Kinect for Windows sensor availability

    In addition to making Kinect for Windows hardware available in eight new markets this fall, we will be releasing an update to the Kinect for Windows runtime and software development kit (SDK) on October 8. This release has numerous new features that deliver additional power to Kinect for Windows developers and business customers. We will share the full details when it’s released on October 8, but in the meantime here are a few highlights:

    • Enable businesses to do more with Kinect for Windows
      We are committed to opening up even more opportunities for the creation of new end user experiences. We’ll be adding features such as expanded sensor data access—including color camera settings and extended depth data—to continue to inspire innovative uses of the Kinect for Windows technology in new and different places.
    • Improve developer efficiency
      We continue to invest in making our platform easier and more powerful for developers. That’s why we’ll be releasing more tools and samples in October, such as a new sample that demonstrates a “best in class” UI based on the Kinect for Windows Human Interface Guidelines.
    • Extend our Windows tools and operating system support
      We want to make it easy for our customers to be able to build and deploy on a variety of Windows platforms. Our October update will include support for Windows 8 desktop applications, Microsoft .NET 4.5, and Microsoft Visual Studio 2012.

It has been a little more than seven months since we first launched Kinect for Windows in 12 markets. By the end of the year, Kinect for Windows will be available in 38 markets and we will have shipped two significant updates to the SDK and runtime beyond the initial release—and this is just the beginning. Microsoft has had a multi-decade commitment to natural user interface (NUI), and my team and I look forward to continuing to be an important part of that commitment. In the coming years, I believe that we will experience an exciting new era in which computing becomes invisible and all of us interact intuitively and naturally with the computers around us.

    Craig Eisler
    General Manager, Kinect for Windows



    It’s Official: Kinect for Windows is Coming Soon


Kinect for Windows

To commemorate the one-year anniversary of Kinect and the Kinect Effect, I sent an email to my team earlier this week. I’d like to quote for you what I said to them, “It all started with a revolutionary sensor and amazing software that turned voice and movement into magic. With that magical combination, last year the Interactive Entertainment Business at Microsoft showed the world how to re-imagine gaming. This year, we’re showing the world how to re-imagine entertainment. Next year, with Kinect for Windows, we will help the world re-imagine everything else.”

To mark the milestone, the Kinect for Windows team is celebrating with our own milestones: we’re starting up this blog, launching the official Kinect for Windows web site, and releasing beta 2 of the Kinect for Windows SDK. (And, yes, we will celebrate the anniversary more this evening; it’s been an amazing journey these past months!)

    I know many of you are eagerly awaiting the Kinect for Windows commercial program coming in early 2012. My team is working hard to deliver a great product and I’m confident that it will be worth the wait.

    We’ve already seen strong enthusiasm for Kinect among developers who have done amazing things with it in countless different ways, from education to healthcare, gaming to art installations, manufacturing to retail.

    Currently, we have more than 200 companies taking part in our pilot program. They are telling us how Kinect for Windows will help them transform their products, their processes, their brands, and their businesses. Putting the power of Kinect + Windows into the hands of business leaders and technical visionaries will give them the tools they need to develop novel solutions for everything from training employees to visualizing data, from configuring a car to managing an assembly line.

The updated software development kit that we released today includes some great new features that bring us closer to realizing this vision, including faster skeletal tracking, improved accuracy in skeletal tracking and joint recognition, and the ability to plug and unplug your Kinect without losing productivity.

    Every day, I come to work and learn about another amazing application that a partner or other developer is doing with Kinect for Windows. I look forward to next year, when the potential goes exponential and everyone’s ideas, including yours, are part of that equation.

    If you haven’t done so already, download the SDK and re-imagine the world with us.

     

    --Craig Eisler
    General Manager, Kinect for Windows



    Turn any surface into a touch screen with Ubi Interactive and Kinect for Windows


    The following blog post was guest authored by Anup Chathoth, co-founder and CEO of Ubi Interactive.

    Ubi Interactive is a Seattle startup that was one of 11 companies from around the world selected to take part in a three-month Microsoft Kinect Accelerator program in the spring of 2012. Since then, the company has developed the software with more than 100 users and is now accepting orders for the software.

     

Patrick Wirtz, an innovation manager for The Walsh Group, spends most of his time implementing technology that will enhance Walsh’s ability to work with clients. It’s a vital role at The Walsh Group, a general building construction organization founded in 1898 that has invested more than US$450 million in capital equipment and regularly employs more than 5,000 engineers and skilled tradespeople.

"It’s a powerful piece of technology," says Patrick Wirtz, shown here using Ubi in The Walsh Group offices. By setting up interactive 3-D blueprints on the walls, Walsh gives clients the ability to explore, virtually, a future building or facility.

    In the construction industry, building information modeling (BIM) is a critical component of presentations to clients. BIM allows construction companies like The Walsh Group to represent the functional characteristics of a facility digitally. While this is mostly effective, Wirtz wanted something that would really “wow” his clients. He wanted a way for them to not only see the drawings, but to bring the buildings to life by allowing clients to explore the blueprints themselves.

Ubi's interactive display being used during a presentation in a Microsoft conference room

Wirtz found the solution he had been seeking when he stumbled upon an article about Ubi. At Ubi Interactive, we provide the technology to transform any surface into an interactive touch screen. All the user needs is a computer running our software, a projector, and the Kinect for Windows sensor. Immediately, Wirtz knew Ubi was something he wanted to implement at Walsh: “I contacted the guys at Ubi and told them I am very interested in purchasing the product.” Wirtz was excited about the software and flew out to Seattle for a demo.

    After interacting with the software, Wirtz was convinced that this technology could help The Walsh Group. “Ubi is futuristic-like technology,” he noted—but a technology that he and his colleagues are able to use today. Wirtz immediately saw the potential: Walsh’s building information models could now be interactive displays. Instead of merely presenting drawings to clients, Walsh can now set up an interactive 3-D blueprint on the wall. Clients can walk up to the blueprint and discover what the building will look like by touching and interacting with the display. In use at Walsh headquarters since June 2012, Ubi Interactive brings client engagement to an entirely new level.

Similarly, Evan Collins, a recent graduate of California Polytechnic State University, used the Ubi software as part of an architecture show he organized. The exhibition showcased 20 interactive displays that allowed the fifth-year architecture students to present their thesis projects in a way that was captivating to audience members. Collins said the interactive displays “allowed audience members to choose what content they interacted with instead of listening to a static slideshow presentation.”

Twenty Ubi Interactive displays at California Polytechnic State University

    Wirtz’s and Collins’ cases are just two ways that people are currently using Ubi. Because the solution is so affordable, people from a wide range of industries have found useful applications for the Ubi software. Wirtz said, “I didn’t want to spend $10,000. I already had a projector and a computer. All I needed to purchase was the software and a $250 Kinect for Windows sensor. With this small investment, I can now turn any surface into a touch screen. It’s a powerful piece of technology.”

    In addition to small- and mid-sized companies, several Fortune 500 enterprises like Microsoft and Intel are also using the software in their conference rooms. And the use of the technology goes beyond conference rooms:

    • Ubi Interactive makes it possible for teachers to instruct classes in an interactive lecture hall.
    • Shoppers can access product information on a store’s window front, even after hours.
    • Recipes can be projected onto kitchen countertops without having to worry about getting anything dirty.
    • Children can use their entire bedroom wall to play interactive games like Angry Birds.
    • The possibilities are endless.

At Ubi Interactive, it is our goal to make the world a more interactive place. We want human collaboration and information to be just one finger touch away, no matter where you are. By making it possible to turn any surface into a touch screen, we eliminate the need for screen hardware, reducing the cost and extending the possibilities of interactive displays in places where they were not previously feasible, such as on walls in public spaces. Our technology has the potential to revolutionize the way people live, on a global scale. After a private beta evaluation with more than 50 organizations, the Ubi software is now available for ordering at ubi-interactive.com.

    Anup Chathoth
    Co-Founder and CEO, Ubi Interactive



    Kinect for Windows announces new version of SDK coming March 18


    Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.

    On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:

    Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.

Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.
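To make the push-to-press idea concrete, here is an illustrative sketch only, not the SDK’s actual Kinect Interactions API: it models a press as the tracked hand traveling a few centimeters toward the sensor. The threshold, function name, and sample format are all invented for the sketch.

```python
# Illustrative sketch, not the real Kinect Interactions API: model a
# "push-to-press" as the hand moving a few centimeters toward the sensor.

def detect_press(hand_z_samples, travel=0.05):
    """Given successive hand depths in meters (smaller = closer to the
    sensor), return the sample index where a press completes, or None."""
    start_z = hand_z_samples[0]
    for i, z in enumerate(hand_z_samples[1:], start=1):
        if start_z - z >= travel:  # hand has pushed far enough forward
            return i
    return None

# The hand approaches the sensor by 6 cm over four frames: a press.
print(detect_press([0.90, 0.88, 0.86, 0.84]))  # prints 3
```

A production gesture system also has to smooth noise, reject accidental motion, and pick the primary user, which is exactly the engineering the shipped interactions are meant to save developers from.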

    Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.

Kinect Fusion enables developers to create accurate 3-D renderings in real time.
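At its core, fusion averages many noisy measurements of the same surface. The toy Python below is an illustrative sketch only: the real Kinect Fusion tracks the moving camera and integrates depth into a volumetric model, whereas this sketch assumes the depth maps are already aligned and simply shows how averaging suppresses noise and fills holes.

```python
# Toy sketch of depth fusion: average pre-aligned depth maps, skipping
# zeros (pixels where a snapshot had no reading). The real Kinect Fusion
# is volumetric and tracks the sensor; this shows only the averaging idea.

def fuse(snapshots):
    """snapshots: equal-length lists of depth values in millimeters."""
    fused = []
    for pixel in zip(*snapshots):
        valid = [v for v in pixel if v > 0]
        fused.append(sum(valid) / len(valid) if valid else 0)
    return fused

a = [1000, 1990, 0]     # three noisy views of the same, aligned scene
b = [1010, 2010, 3000]  # (0 means "no depth reading at this pixel")
c = [990, 2000, 3000]
print(fuse([a, b, c]))  # prints [1000.0, 2000.0, 3000.0]
```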

    The updated SDK also includes an enhanced developer toolkit and additional developer resources, including:

    • OpenCV and MATLAB Samples to allow developers to build advanced Kinect-enabled applications while using industry standards.
    • Kinect for Windows Code Samples on CodePlex, marking the first time that select Kinect for Windows code samples will be available through an open-source resource, enabling existing and potential partners to explore and develop new scenarios.

    Seeing is believing
    We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!

    As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.

    The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.

    Bob Heddle, Director
    Kinect for Windows



    Pre-order your Kinect for Windows v2 sensor starting today


At BUILD in April, we told the world that the Kinect for Windows v2 sensor and SDK would be coming this summer, and with them, the ability for developers to start creating Windows Store apps with Kinect for the first time. Well, here in Redmond, Washington, it’s not summer yet. But today we are pleased to announce that developers can pre-order the Kinect for Windows v2 sensor. Developers who take advantage of this pre-order option will be able to start building solutions ahead of the general public.

    Sensors purchased during the pre-order phase will be shipped in July, at which time we will also release a public beta of our software development kit (SDK). All of this will happen a few months ahead of general availability of sensors and the SDK, giving pre-order customers a head start on using the v2 sensor’s new and improved features, including increased depth-sensing capabilities, full 1080p video, improved skeletal tracking, and enhanced infrared technology.

    Kinect for Windows v2 sensor

    Thousands of developers wanted to take part in our Developer Preview program but were unable to do so—in fact, we’re still receiving requests from all around the world. So for these and other developers who are eager to start using the Kinect for Windows v2, the pre-order option offers access to the new sensor ahead of general availability. Bear in mind, however, that we have limited quantities of pre-order sensors, so order while supplies last.

    The v2 sensors will also be shipped in July to those who participated in the Developer Preview program. For these early adopters, it’s been an amazing six months: we’ve seen more stunning designs, promising prototypes, and early apps than we can count—from finger tracking to touch-free controls for assembly line workers to tools for monitoring the environment. At BUILD, we showed you what Reflexion Health and Freak’n Genius were able to achieve with the v2 sensor in just a matter of weeks. And in July, when the sensor and SDK are more broadly available, we can only imagine what’s next.

    Kinect for Windows will continue to feature more innovative uses of the v2 technology on this blog in the coming months. As Microsoft Corporate Vice President and Chief Evangelist Steven Guggenheimer notes, “I love what the Kinect sensor and SDK can do. Getting the v2 sensor into the hands of more developers and getting the SDK more widely available is the next step.”

    We are committed to a future where humans and technology can interact more seamlessly—in the living room, on their PCs, and beyond.

    —The Kinect for Windows Team




    The latest Kinect for Windows SDK is here


    Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately. 

A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.

Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here. Now you can untether yourself and move around a conference room naturally.

In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.

Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.

Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.

Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.

Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.

    The Kinect for Windows Team



    The Kinect for Windows v2 sensor and free SDK 2.0 public preview are here


    Today, we began shipping thousands of Kinect for Windows v2 sensors to developers worldwide. And more sensors will leave the warehouse in coming weeks, as we work to fill orders as quickly as possible.

Additionally, Microsoft publicly released a preview version of the Kinect for Windows SDK 2.0 this morning—meaning that developers everywhere can now take advantage of Kinect’s latest enhancements and improved capabilities. The SDK is free of charge, and there are no runtime license fees for commercial applications developed with it.

The new sensor can track as many as six complete skeletons and 25 joints per person.

    We will be releasing a final version of the SDK 2.0 in a few months, but with so many of you eagerly awaiting access, we wanted to make the SDK available as early as possible. For those of you who were unable to take part in our developer preview program, now you can roll up your sleeves and start developing. And for anyone else out there who has been waiting—well, the wait is over!

    The new sensor’s key features include:

• Improved skeletal tracking: The enhanced fidelity of the depth camera, combined with improvements in the software, has led to a number of skeletal tracking improvements. In addition to tracking as many as six complete skeletons (compared to two with the original sensor) and 25 joints per person (compared to 20 with the original sensor), the tracked positions are more anatomically correct and stable, and the range of tracking is broader. This enables and simplifies many scenarios, including more stable avateering, more accurate body position evaluation, crisper interactions, and more bystander involvement in interactive scenarios.
• Higher depth fidelity: With higher depth fidelity and a significantly improved noise floor, the v2 sensor gives you better 3D visualization, an increased ability to see smaller objects (and all objects more clearly), and more stable skeletal tracking.
    • 1080p HD video: The color camera captures full, beautiful 1080p video that can be displayed in the same resolution as the viewing screen, allowing for a broad range of powerful scenarios. In addition to improving video communications and video analytics applications, this provides a great input on which to build high-quality, augmented reality scenarios, digital signage, and more.
    • New active infrared capabilities: In addition to allowing the Kinect for Windows v2 sensor to see in the dark, the new infrared (IR) capabilities produce a lighting-independent view, which makes machine learning or computer-vision–based tasks much easier—because you don’t have to account for or model lighting-based variation. And, you can now use IR and color at the same time. We look forward to the many new and innovative uses that the community will develop to use this fundamentally new capability.
    • Wider/expanded field of view: The expanded field of view enables a larger area of a scene to be captured by the camera. As a result, users can be closer to the camera and still in view, and the camera is effective over a larger total area.

With the ability to track new joints for hand tips and thumbs—as well as improved understanding of the soft connective tissue and body positioning—you get more anatomically correct positions for crisp interactions.
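The numbers above can be made concrete with a small data-structure sketch. This is illustrative Python, not the SDK’s actual .NET types: a v2 body frame carries up to six tracked bodies, each with 25 named joints, including the new hand-tip and thumb joints.

```python
# Illustrative sketch only (the real SDK exposes this via .NET types):
# a v2 body frame tracks up to 6 bodies, each with 25 named joints.

MAX_BODIES = 6

JOINTS = [
    "SpineBase", "SpineMid", "SpineShoulder", "Neck", "Head",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "HandTipLeft", "ThumbLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
    "HandTipRight", "ThumbRight",
    "HipLeft", "KneeLeft", "AnkleLeft", "FootLeft",
    "HipRight", "KneeRight", "AnkleRight", "FootRight",
]

class Body:
    """One tracked person: joint name -> (x, y, z) position, or None."""
    def __init__(self, tracking_id):
        self.tracking_id = tracking_id
        self.joints = dict.fromkeys(JOINTS)  # None until tracked

frame = [Body(i) for i in range(MAX_BODIES)]
print(len(frame), len(JOINTS))  # prints 6 25
```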

    In addition to the new sensor’s key features, the Kinect for Windows SDK 2.0 includes:

• Improved skeletal, hand, and joint orientation: With the ability to track as many as six people and 25 skeletal joints per person (including new joints for hand tips, thumbs, and shoulder center), as well as improved understanding of soft connective tissue and body positioning, you get more anatomically correct positions for crisp interactions and more accurate avateering. These improved capabilities result in more lifelike avatars and open up new and better scenarios in fitness, wellness, education and training, entertainment, gaming, movies, communications, and more.
• Support for new development environments: New Unity support enables faster, more cost-efficient, high-quality cross-platform development, letting developers build their apps for the Windows Store using tools they already know.
• Powerful tooling: Thanks to Kinect Studio’s enhanced recording and playback features, developers can develop on the go, without needing a Kinect sensor with them at all times. And Gesture Builder lets developers use machine learning to build custom gestures that the system can recognize, reducing the recognition code they have to write by hand. These features increase productivity and keep costs down.
    • Advanced face tracking: With significantly increased resolution, applications can capture a face with a 2,000-point mesh that looks more true to life. This means that avatars will look more lifelike.
• Simultaneous multi-app support: New multi-app support enables more than one application to access a single sensor simultaneously. This means you could have a business intelligence app running at the same time as a training, retail, or education experience, gathering analytics in real time.
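Multi-app support can be pictured as a fan-out of frames from one sensor to several independent consumers. The sketch below is purely illustrative, with invented class and method names rather than the SDK’s actual mechanism:

```python
# Illustrative fan-out sketch (not the SDK's actual mechanism): one
# sensor service delivers each frame to every subscribed application.

class SensorService:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish_frame(self, frame):
        for deliver in self.subscribers:  # every app sees the same frame
            deliver(frame)

kiosk_frames, analytics_frames = [], []
service = SensorService()
service.subscribe(kiosk_frames.append)      # e.g., a retail experience
service.subscribe(analytics_frames.append)  # e.g., business intelligence
service.publish_frame({"bodies": 2})
print(kiosk_frames == analytics_frames)  # prints True
```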

    When the final version of the SDK is available, people will be able to start submitting their apps to the Windows Store, and companies will be able to make their v2 solutions available commercially. We look forward to seeing what everyone does with the new NUI.

The new SDK 2.0 public preview includes Unity support for faster, more cost-efficient, high-quality cross-platform development, enabling developers to build their apps for the Windows Store using tools they already know.

We’ve already shown you what several partners are working on, including Reflexion Health and Freak’n Genius. Most recently, Walt Disney Studios Motion Pictures has developed an interactive experience to help promote its upcoming movie, Planes 2: Fire & Rescue. One of seven experience kiosks will debut in London at the end of the week, in time for school holidays. Disney is confident it will receive an enthusiastic reception from users of all ages, creating an engaging experience associated with the Disney brand and, of course, sparking interest in the movie, which opens nationwide on August 8. Read more.

    We will showcase more partner solutions here in coming months, so stay tuned. In the meantime, order your new sensor, download the SDK 2.0 public preview, and start developing your NUI apps. And please join our Microsoft Virtual Academy to learn from our experts and jump start your development.

    The Kinect for Windows Team



    Kinect for Windows releases SDK update and launches in China


    I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.

    Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.

    Kinect for Windows availability: current and coming soon

     

    The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:

    Extended sensor data access

    • Data from the sensor's 3-axis accelerometer is now exposed in the API. This enables detection of the sensor's orientation.
• Extended-range depth data now provides details beyond 4 meters. Extended-range depth data lies beyond the tested and certified ranges and is therefore of lower accuracy, but for developers who want access to it, it’s now available.
    • Color camera settings, such as brightness and exposure, can now be set explicitly by the application, allowing developers to tune a Kinect for Windows sensor’s environment.
    • The infrared stream is now exposed in the API. This means developers can use the infrared stream in many scenarios, such as calibrating other color cameras to the depth sensor or capturing grayscale images in low-light situations.
    • The updated SDK used with the Kinect for Windows sensors allows for faster toggling of IR to support multiple overlapping sensors.

Access to all this data makes new experiences possible. Whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. With the exposed IR stream, solutions that work in low-light settings, such as theaters and light-controlled museums, become a reality. And developers can use the numerous color camera settings to tailor applications to different environments, enhancing an application’s ability to work well for end users.
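The extended-range depth behavior described above can be sketched as follows. This is an illustrative Python model, not the SDK’s API (the real SDK is .NET, and the function and quality labels here are invented): readings past the certified 4-meter range are still returned, just treated as lower accuracy.

```python
# Illustrative sketch, not the SDK's API: keep extended-range depth
# readings but tag them as lower accuracy than certified-range ones.

CERTIFIED_MAX_MM = 4000  # depth beyond ~4 m is "extended range"

def classify_depth(depth_mm):
    """Return (value_mm, quality) for one depth reading."""
    if depth_mm <= 0:
        return (None, "invalid")        # no reading at this pixel
    if depth_mm <= CERTIFIED_MAX_MM:
        return (depth_mm, "certified")  # within tested, certified range
    return (depth_mm, "extended")       # available, but lower accuracy

print([classify_depth(d) for d in (800, 4200, 0)])
```

An application monitoring a long assembly line, for example, could act on certified readings directly while applying extra smoothing or wider tolerances to extended-range ones.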

    One of the new samples, Basic Interactions – WPF, demonstrates a best-in-class UI based on the
    Kinect for Windows Human Interface Guidelines.


    Improved developer tools

    • Kinect Studio has been updated to support all new sensor data features.
    • The SDK ships with a German speech recognition language pack that has been optimized for the sensor's microphone array.
    • Skeletal tracking is now supported on multiple sensors within a single application.
    • New samples show how to use all the new SDK features. Additionally, a fantastic new sample has been released that demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines called the Basic Interactions – WPF sample.

    We are committed to making it ever easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so that the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices described in the HIG and can be easily repurposed.

    Greater support for operating systems

    • Windows 8 compatibility. With the updated Kinect for Windows SDK, you can develop Kinect for Windows solutions as Windows 8 desktop applications.
    • The latest SDK supports development with Visual Studio 2012 and the new Microsoft .NET Framework 4.5.
    • The Kinect for Windows sensor now works on Windows running in a virtual machine (VM) and has been tested with the following VM environments: Microsoft Hyper-V, VMware, and Parallels.

    Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.

    This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.

    As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.

    Craig Eisler
    General Manager, Kinect for Windows


    Kinect for Windows shines at the 2014 NRF Convention

    • 1 Comment

    This week, some 30,000 retailers from around the world descended on New York’s Javits Center for the 2014 edition of the National Retail Federation’s Annual Convention and Expo, better known as “Retail’s BIG Show.” With an exhibit space covering nearly four football fields and featuring more than 500 vendors, an exhibitor could have been overlooked easily—but not when your exhibit displayed retailing innovations that use the power of the Microsoft Kinect for Windows sensor and SDK. Here are some of the Kinect experiences that attracted attention on the exhibit floor.

    Visitors at the Kinect for Windows demo station

    NEC Corporation of America demonstrated a “smart shelf” application that makes the most of valuable retail space by tailoring the messaging on digital signage to fit the shopper. At the heart of this system is Kinect for Windows, which discerns shoppers who are interested in the display and uses analytics to determine such consumer attributes as age, gender, and level of engagement. On the back end, the data captured by Kinect is delivered to a dashboard where it can be further mined for business intelligence. Allen Ganz, a senior account development manager at NEC, praises the Kinect-based solution, noting that it “provides unprecedented actionable insights for retailers and brands at the point-of-purchase decision.”

    Razorfish displayed two different Kinect-based scenarios, both of which highlight an immersive consumer experience that’s integrated across devices. The first scenario engages potential customers by involving them in a Kinect-driven beach soccer game. In this dual-screen experience, one customer plays the striker, using his or her body movements—captured by the Kinect for Windows sensor—to dribble the ball and then kick it toward the goal. The other customer plays the goalie; his or her avatar appears on the second display, its actions controlled by the customer’s movements—again captured via the Kinect for Windows sensor—as he or she tries to block the shot. Customers who succeed accumulate points that can be redeemed for a real (not virtual) beverage from a connected vending machine. Customers can work up a sweat in this game, so the beverage is a much-appreciated reward. But the real reward goes to the retailer, as this compelling, gamified experience creates unique opportunities for sales associates to connect with shoppers.

    The second scenario from Razorfish also featured a beach theme. This sample experience is intended to take place in a surf shop, where customers design their own customized surfboard by using a Microsoft Surface. Then they use a Kinect-enabled digital signage application to capture images of the customized board against the background of one of the world’s top beaches. This image is immediately printed as a postcard, and a second copy is sent to the customer in an email. Here, too, the real goal is to engage customers, pulling them into an immersive experience that is personal, mobile, and social.

    Razorfish demos their customized surfboard scenario

    Above all, the Razorfish experiences help create a bond between the customer and a brand. “Kinect enables consumers to directly interact personally with a brand, resulting in a greater sense of brand loyalty,” notes Corey Schuman, a senior technical architect at Razorfish.

    Yet another compelling Kinect-enabled customer experience was demonstrated by FaceCake, whose Swivel application turns the computer into a virtual dressing room where a shopper can try on clothes and accessories with a simple click. The customer poses in front of a Kinect for Windows sensor, which captures his or her image. Then the shopper selects items from a photo display of clothing and accessories, and the application displays the shopper “wearing” the selected items. So, a curious shopper can try on, say, various dress styles until she finds one she likes. Then she can add a necklace, scarf, or handbag to create an entire ensemble. She can even split the screen to compare her options, showing side-by-side images of the same dress accessorized with different hats. And yes, this app works for male shoppers, too.

    The common theme in all these Kinect-enabled retail applications is customer engagement. Imagine seeing a digital sign respond to you personally, or getting involved in the creation of your own product or ideal ensemble. If you’re a customer, these are the kinds of interactive experiences that draw you in. In a world where every retailer is looking for new ways to attract and connect with customers, Kinect for Windows is engaging customers and helping them learn more about the products. The upshot is a satisfied customer who has made a stronger connection during the shopping experience, and a healthier bottom line for the retailer.

    The Kinect for Windows Team


    Kinect Fusion demonstrated at Microsoft Research TechFest, coming soon to SDK

    • 9 Comments

    Revealed in November as a future addition to the Kinect for Windows SDK, Kinect Fusion made a big impression at the annual TechFest event hosted by Microsoft Research this week in Redmond, Washington.

    Kinect Fusion pulls depth data that is generated by the Kinect for Windows sensor and, from the sequence of frames, constructs a highly detailed 3-D map of objects or environments. The tool averages readings over hundreds or even thousands of frames to create a rich level of detail.
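The frame-averaging idea behind that level of detail can be illustrated with a tiny sketch in plain Python (not the actual Kinect Fusion implementation): each new depth reading for a point is folded into a running weighted average, so sensor noise shrinks as more frames are integrated. The function names and the weight cap are illustrative assumptions.

```python
def integrate(avg, weight, reading, max_weight=100):
    """Fold one new depth reading into a running weighted average,
    the core idea behind averaging readings over many frames: the
    stored value converges toward the true depth as frames
    accumulate. Capping the weight keeps the model able to adapt
    if the scene later changes."""
    new_weight = min(weight + 1, max_weight)
    new_avg = (avg * weight + reading) / (weight + 1)
    return new_avg, new_weight

# Average noisy readings scattered around a true depth of 2.0 meters:
avg, w = 0.0, 0
for reading in [2.1, 1.9, 2.05, 1.95, 2.0]:
    avg, w = integrate(avg, w, reading)
print(round(avg, 3))  # → 2.0
```

Kinect Fusion performs this kind of integration per voxel of a 3-D volume rather than per point, but the noise-reduction principle is the same.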

    Kinect Fusion, shown during TechFest 2013, enables high-quality scanning and reconstruction of
    3-D models using just a handheld Kinect for Windows sensor and the Kinect for Windows SDK.

    "The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly," said Shahram Izadi, senior researcher at Microsoft Research Cambridge. "Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them."

    When scanning smaller objects, you also have the option to simply move the object instead of the sensor.

    The Cambridge researchers and Kinect for Windows team collaborated closely on Kinect Fusion to construct a tool that can enable businesses and developers to devise new types of applications.

    "This has been a wonderful example of collaboration between Microsoft Research and our product group," said Kinect for Windows Senior Program Manager Chris White. "We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting edge research, even beyond what was shown in the original Kinect Fusion paper."

    "This kind of collaboration is one of the unique strengths of Microsoft, where we can bring together world-class researchers and world-class engineers to deliver real innovation," White added. "Kinect Fusion opens up a wide range of development possibilities—everything from gaming and augmented reality to industrial design. We're really excited to be able to include it in a future release of the Kinect for Windows SDK."

    Kinect for Windows team
