• Kinect for Windows Product Blog

    Starting February 1, 2012: Use the Power of Kinect for Windows to Change the World

    We’re thrilled at the sales we’ve seen for Kinect (18 million sold in the past year) and we were honored to receive a Guinness World Record for the fastest-selling consumer electronics device ever. As consumers, we may take devices like Kinect for granted, but in fact electronic devices are the fruit of a great deal of behind-the-scenes ingenuity and experimentation. Kinect is a shining example of this. Instead of mimicking the handheld motion-sensing controllers already on the market, Microsoft shattered the existing controller paradigm by inventing a new natural user interface system that enables advanced human tracking, gesture recognition, voice control, and more. Our answer to the “wand” controller was no controller at all, or as we say, “YOU are the controller.”

    Getting there wasn’t easy. Without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and the deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. And as amazing a piece of hardware as Kinect is, it is much more than that. At the heart of the Kinect experience lies sophisticated software that meaningfully deciphers the images and gestures captured by the 3D sensor, as well as the voice commands captured by the microphone array from someone much farther away than a headset or phone microphone. More importantly, Kinect software can understand what each user means by a particular gesture or command across a wide range of possible shapes, sizes, and actions of real people.

    The incredible amount of innovation on Kinect for Xbox 360 this past year shows the potential for Kinect as a platform for developers and businesses to build new and innovative offerings.  Along with many others, we have only begun to explore the potential of this amazing technology.  This proliferation of creative and imaginative new ideas for Kinect, which we call the Kinect Effect, will expand even further with our commercial release of Kinect for Windows.

    Today, we are announcing that the new Kinect for Windows hardware and accompanying software will be available on February 1st, 2012 in 12 countries (United States, Australia, Canada, France, Germany, Ireland, Italy, Japan, Mexico, New Zealand, Spain, United Kingdom), at a suggested retail price of US $249.  Kinect for Windows hardware will be available, in limited quantities at first, through a variety of resellers and distributors.  The price includes a one-year warranty, access to ongoing software updates for both speech and human tracking, and our continued investment in Kinect for Windows-based software advancements.  Later this year, we will offer special academic pricing (planned at US $149) for Qualified Educational Users.

    We love the innovation we have seen built using Kinect for Xbox 360 – this has been a source of inspiration and delight for us and compelled us to create a team dedicated to serving this opportunity.   We are proud to bring technology priced in the tens of thousands of dollars just a few years ago to the mainstream at extremely low consumer prices. And although Kinect for Windows is still value-priced for the technology, some will ask us why it isn’t the same price as Kinect for Xbox.

    The ability to sell Kinect for Xbox 360 at its current price point is in large part subsidized by consumers buying a number of Kinect games, subscribing to Xbox LIVE, and making other transactions associated with the Xbox 360 ecosystem. In addition, the Kinect for Xbox 360 sensor was built for and tested with the Xbox 360 console only, which is why it is not licensed for general commercial use, supported, or under warranty when used on any other platform.

    With Kinect for Windows, we are investing in creating a platform that is optimized for scenarios beyond the living room, and delivering new software features on an ongoing basis, starting with “near mode” (see my earlier blog post for more about this). In addition to support for Windows 7 and the Windows 8 developer preview (desktop apps only), Kinect for Windows will also support gesture and voice on Windows Embedded-based devices and will enhance how data is captured and accessed within intelligent systems across manufacturing, retail, and many more industries. We are building the Kinect for Windows platform in a way that will allow other companies to integrate Kinect into their offerings, and we have invested in an approach that allows them to develop in ways that are dependable and scalable.

    We have chosen a hardware-only business model for Kinect for Windows, which means that we will not be charging for the SDK or the runtime; these will be available free to developers and end users, respectively. As an independent developer, IT manager, systems integrator, or ISV, you can innovate with confidence, knowing that you will not pay license fees for the Kinect for Windows software or the ongoing software updates, and that the Kinect for Windows hardware you and your customers use is supported by Microsoft.

    Although we encourage all developers to understand and take advantage of the additional features and updates available with the new Kinect for Windows hardware and accompanying software, those developers using our SDK and the Kinect for Xbox 360 hardware may continue to use these in their development activities if they wish.  However, non-commercial deployments using Kinect for Xbox 360 that were allowed using the beta SDK are not permitted with the newly released software. Non-commercial deployments using the new runtime and SDK will require the fully tested and supported Kinect for Windows hardware and software platform, just as commercial deployments do. Existing non-commercial deployments using our beta SDK may continue using the beta and the Kinect for Xbox 360 hardware; to accommodate this, we are extending the beta license for three more years, to June 16, 2016.  

    We expect that as Kinect for Windows hardware becomes readily available, developers will shift their development efforts to Kinect for Windows hardware in conjunction with the latest SDK and runtime.  The combination of Kinect for Windows hardware and software creates a superior development platform for Windows and will yield a higher quality, better performing experience for end users. 

    We are excited for the new possibilities that Kinect will enable on the Windows platform, and to see how businesses and developers reimagine their processes and their products, and the many different ways each Kinect could enrich lives and make using technology more natural for everyone.

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Kinect for Windows – Building the Future

    Since announcing a few weeks ago that the Kinect for Windows commercial program will launch in early 2012, we’ve been asked whether there will also be new Kinect hardware especially for Windows. The answer is yes; building on the existing Kinect for Xbox 360 device, we have optimized certain hardware components and made firmware adjustments which better enable PC-centric scenarios. Coupled with the numerous upgrades and improvements our team is making to the Software Development Kit (SDK) and runtime, the new hardware delivers features and functionality that Windows developers and Microsoft customers have been asking for.

    Simple changes include shortening the USB cable to ensure reliability across a broad range of computers and the inclusion of a small dongle to improve coexistence with other USB peripherals.  Of particular interest to developers will be the new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.  “Near Mode” will enable a whole new class of “close up” applications, beyond the living room scenarios for Kinect for Xbox 360. This is one of the most requested features from the many developers and companies participating in our Kinect for Windows pilot program and folks commenting on our forums, and we’re pleased to deliver this, and more, at launch.

    Another thing we’ve heard from our pilot customers is that companies exploring commercial uses of Kinect want to operate with the assurance of support and future innovation from Microsoft. As part of Microsoft’s deep commitment to NUI, we designed the Kinect for Windows commercial program to give licensed customers access to ongoing updates in both speech and human tracking (where Microsoft has been investing for years), in addition to providing fully supported Kinect hardware for Windows. We’ve been captivated by the countless creative ways companies worldwide envision how their businesses and industries can be revolutionized with Kinect, and are proud to be helping those companies to explore the profound implications NUI has for the future.

    Microsoft also has just launched a new initiative, the Kinect Accelerator incubation project run by Microsoft BizSpark. I will be serving as a mentor for this program, along with a number of other folks from Microsoft. BizSpark helps software startups through access to Microsoft software development tools, connection to key industry players (including investors), and by providing marketing visibility. The Kinect Accelerator will give 10 tech-oriented companies using Kinect (on either Windows or Xbox 360) an investment of $20,000 each, plus a number of other great perks. Applications are being accepted now through January 25th, 2012. At the end of the program, each company will have an opportunity to present at an Investor Demo Day to angel investors, venture capitalists, Microsoft executives (including me), media, and industry influencers. I can’t wait to see what they (and maybe you?) come up with!

     

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    The New Generation Kinect for Windows Sensor is Coming Next Year

    The all-new active-infrared capabilities allow the new sensor to work in nearly any lighting condition. This makes it possible for developers to build apps with enhanced recognition of facial features, hand position, and more.

    By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.

    Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.

    Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.

    Some of the key capabilities of the new Kinect sensor include:

    • Higher fidelity
      The new sensor includes a high-definition (HD) color camera as well as a new noise-isolating multi-microphone array that filters ambient sounds to recognize natural speaking voices even in crowded rooms. Also included is Microsoft’s proprietary Time-of-Flight technology, which measures the time it takes individual photons to rebound off an object or person to create unprecedented accuracy and precision. All of this means that the new sensor recognizes precise motions and details, such as slight wrist rotation, body position, and even the wrinkles in your clothes. The Kinect for Windows community will benefit from the sensor’s enhanced fidelity, which will allow developers to create highly accurate solutions that see a person’s form better than ever, track objects and environments with greater detail, and understand voice commands in noisier settings than before.

    The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.

    • Expanded field of view
      The expanded field of view accommodates a multitude of differently sized rooms, minimizing the need to modify existing room configurations and opening up new solution-development opportunities. The combination of the new sensor’s higher fidelity plus expanded field of view will give businesses the tools they need to create truly untethered, natural computing experiences such as clicker-free presentation scenarios, more dynamic simulation and training solutions, up-close interactions, more fluid gesture recognition for quick interactions on the go, and much more.
          
    • Improved skeletal tracking
      The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This not only yields more accurate skeletal tracking, it opens up a range of new scenarios, including improved “avateering,” the ability to develop enhanced rehabilitation and physical fitness solutions, and the possibility to create new experiences in public spaces—such as retail—where multiple users can participate simultaneously.

    The new sensor tracks more points on the human body than previously and tracks six skeletons at once, opening a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.

    • New active infrared (IR)
      The all-new active-IR capabilities allow the new sensor to work in nearly any lighting condition and, in essence, give businesses access to a new fourth sensor: audio, depth, color…and now active IR. This will offer developers better built-in recognition capabilities in different real-world settings—independent of the lighting conditions—including the sensor’s ability to recognize facial features, hand position, and more. 
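    As a rough illustration of the Time-of-Flight principle described under “Higher fidelity” above, distance is half the photon round-trip time multiplied by the speed of light. The sketch below is ours and purely illustrative; it is not Microsoft’s implementation:

```python
# Time-of-flight sketch: light makes a round trip to the object and back,
# so distance = (round-trip time * speed of light) / 2.
SPEED_OF_LIGHT_MM_PER_NS = 299.792458  # millimeters per nanosecond

def tof_distance_mm(round_trip_ns):
    """Distance to the reflecting surface for a given round-trip time."""
    return round_trip_ns * SPEED_OF_LIGHT_MM_PER_NS / 2.0

# A round trip of roughly 13.3 nanoseconds puts the object about 2 meters away.
print(round(tof_distance_mm(13.34)))
```

    Measuring such tiny per-pixel time differences is what the precision claim rests on: a one-nanosecond timing error corresponds to roughly 15 centimeters of depth.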

    I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.

    A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.

    Bob Heddle
    Director, Kinect for Windows

    Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images

     

  • Kinect for Windows Product Blog

    Kinect for Windows is now Available!

    On January 9th, Steve Ballmer announced at CES that we would be shipping Kinect for Windows on February 1st. I am very pleased to report that today version 1.0 of our SDK and runtime were made available for download, and distribution partners in our twelve launch countries are starting to ship Kinect for Windows hardware, enabling companies to start to deploy their solutions. The suggested retail price is $249, and later this year, we will offer special academic pricing of $149 for Qualified Educational Users.

    In the three months since we released Beta 2, we have made many improvements to our SDK and runtime, including:

    • Support for up to four Kinect sensors plugged into the same computer
    • Significantly improved skeletal tracking, including the ability for developers to control which user is being tracked by the sensor
    • Near Mode for the new Kinect for Windows hardware, which enables the depth camera to see objects as close as 40 centimeters in front of the device
    • Many API updates and enhancements in the managed and unmanaged runtimes
    • The latest Microsoft Speech components (V11) are now included as part of the SDK and runtime installer
    • Improved “far-talk” acoustic model that increases speech recognition accuracy
    • New and updated samples, such as Kinect Explorer, which enables developers to explore the full capabilities of the sensor and SDK, including audio beam and sound source angles, color modes, depth modes, skeletal tracking, and motor controls
    • A commercial-ready installer which can be included in an application’s set-up program, making it easy to install the Kinect for Windows runtime and driver components for end-user deployments.
    • Robustness improvements including driver stability, runtime fixes, and audio fixes
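    To make the “developers control which user is being tracked” improvement concrete, here is a small illustrative sketch; the function and its inputs are hypothetical stand-ins, not the actual SDK API:

```python
# Sketch of an application-side policy for choosing the tracked user:
# given the distance of each detected person from the sensor, pick the
# closest one. The real SDK lets the app supply its own choice; this is
# just one plausible policy.

def choose_tracked_user(candidates):
    """candidates maps a tracking id to that user's distance from the
    sensor in millimeters; return the id of the closest user, or None
    if nobody is detected."""
    if not candidates:
        return None
    return min(candidates, key=candidates.get)

print(choose_tracked_user({1: 2400.0, 2: 1800.0, 3: 3100.0}))  # user 2 is closest
```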

     More details can be found here.

    Kinect for Windows can capture depth data on up to six people

    As I mentioned in an earlier blog post, without many years of intense R&D efforts, including research investments of hundreds of millions of dollars, and deep partnership between our research teams, software teams, hardware teams, manufacturing teams, and games studios, Kinect simply wouldn’t exist. Shipping Kinect for Windows was another cross-Microsoft effort: not only did the hardware and software teams work closely together to create an integrated solution, but our support, manufacturing, supply chain, reverse logistics, and account teams have all been working hard to prepare for today’s launch. In addition, our research, speech, and Xbox NUI teams have contributed to making Kinect for Windows a better product. Microsoft’s ability to make these kinds of deep investments makes Kinect for Windows a product that companies can deploy with confidence, knowing you have our support and our ongoing commitment to make Kinect for Windows the best it can be.

    Looking towards the future, we are planning on releasing updates to our SDK and runtime 2-3 times per year – in fact, the team is already hard at work on the next release.  We are continuing to invest in programs like our Testing and Adoption Program and the Kinect Accelerator, and will work to create new programs in the future to help support our developer and partner ecosystem. We will also continue to listen to our developer community and business customers for the kinds of features and capabilities they need, as they re-imagine the future of computing using the power of Kinect.

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Updated SDK, with HTML5, Kinect Fusion improvements, and more

    I am pleased to announce that we released the Kinect for Windows software development kit (SDK) 1.8 today. This is the fourth update to the SDK since we first released it commercially one and a half years ago. Since then, we’ve seen numerous companies using Kinect for Windows worldwide, and more than 700,000 downloads of our SDK.

    We build each version of the SDK with our customers in mind—listening to what the developer community and business leaders tell us they want and traveling around the globe to see what these dedicated teams do, how they do it, and what they most need out of our software development kit.

    The new background removal API is useful for advertising, augmented reality gaming, training and simulation, and more.

    Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:

    • New background removal. An API removes the background behind the active user so that it can be replaced with an artificial background. This green-screening effect was one of the top requests we’ve heard in recent months. It is especially useful for advertising, augmented reality gaming, training and simulation, and other immersive experiences that place the user in a different virtual environment.
    • Realistic color capture with Kinect Fusion. A new Kinect Fusion API scans the color of the scene along with the depth information so that it can capture the color of the object along with its three-dimensional (3D) model. The API also produces a texture map for the mesh created from the scan. This feature provides a full fidelity 3D model of a scan, including color, which can be used for full color 3D printing or to create accurate 3D assets for games, CAD, and other applications.
    • Improved tracking robustness with Kinect Fusion. This algorithm makes it easier to scan a scene. With this update, Kinect Fusion is better able to maintain its lock on the scene as the camera position moves, yielding more reliable and consistent scanning.
    • HTML interaction sample. This sample demonstrates implementing Kinect-enabled buttons, simple user engagement, and the use of a background removal stream in HTML5. It allows developers to use HTML5 and JavaScript to implement Kinect-enabled user interfaces, which was not possible previously—making it easier for developers to work in whatever programming languages they prefer and integrate Kinect for Windows into their existing solutions.
    • Multiple-sensor Kinect Fusion sample. This sample shows developers how to use two sensors simultaneously to scan a person or object from both sides—making it possible to construct a 3D model without having to move the sensor or the object! It demonstrates the calibration between two Kinect for Windows sensors, and how to use Kinect Fusion APIs with multiple depth snapshots. It is ideal for retail experiences and other public kiosks that do not have an attendant available to scan by hand.
    • Adaptive UI sample. This sample demonstrates how to build an application that adapts itself depending on the distance between the user and the screen—from gesturing at a distance to touching a touchscreen. The algorithm in this sample uses the physical dimensions and positions of the screen and sensor to determine the best ergonomic position on the screen for touch controls as well as ways the UI can adapt as the user approaches the screen or moves further away from it. As a result, the touch interface and visual display adapt to the user’s position and height, which enables users to interact with large touch screen displays comfortably. The display can also be adapted for more than one user.
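    The background removal feature above boils down to compositing with a per-pixel player mask derived from the depth stream. A toy sketch of the idea (illustrative Python only, not the SDK’s API):

```python
# Green-screening sketch: keep a color pixel where the depth stream says
# the active user is, and substitute an artificial background elsewhere.

def remove_background(color, player_mask, background):
    """color and background are rows of pixel values; player_mask is rows
    of booleans that are True where the active user was detected."""
    return [
        [c if is_player else b
         for c, is_player, b in zip(color_row, mask_row, bg_row)]
        for color_row, mask_row, bg_row in zip(color, player_mask, background)
    ]

color      = [["p1", "p2"], ["p3", "p4"]]
mask       = [[True, False], [False, True]]
background = [["b1", "b2"], ["b3", "b4"]]
print(remove_background(color, mask, background))
```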

    We also have updated our Human Interface Guidelines (HIG) with guidance to complement the new Adaptive UI sample, including the following:

    Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.

    Design UI where users can accomplish all tasks for each goal within a single range.

    My team and I believe that communicating naturally with computers means being able to gesture and speak, just like you do when communicating with people. We believe this is important to the evolution of computing, and are committed to helping this future come faster by giving our customers the tools they need to build truly innovative solutions. There are many exciting applications being created with Kinect for Windows, and we hope these new features will make those applications better and easier to build. Keep up the great work, and keep us posted!

    Bob Heddle, Director
    Kinect for Windows

  • Kinect for Windows Product Blog

    Near Mode: What it is (and isn’t)

    There has been a lot of speculation on what near mode is since we announced it.  As I mentioned in the original post, the Kinect for Windows device has new firmware which enables the depth camera to see objects as close as 50 centimeters in front of the device without losing accuracy or precision, with graceful degradation down to 40 centimeters.

    The lenses on the Kinect for Windows sensor are the same as on the Kinect for Xbox 360 sensor, so near mode does not change the field of view, as some people have been speculating. As some have observed, the Kinect for Xbox 360 sensor was already technically capable of seeing down to 50 centimeters – but with the caveat “as long as the light is right”.

    That caveat turned out to be a pretty big caveat.  The Kinect for Windows team spent many months developing a way to overcome this so the sensor would properly detect close up objects in more general lighting conditions.  This resulted not only in the need for new firmware, but changes to the way the devices are tested on the manufacturing line. In addition to allowing the sensor to see objects as close as 40 centimeters, these changes make the sensor less sensitive to more distant objects: when the sensor is in near mode, it has full accuracy and precision for objects 2 meters away, with graceful degradation out to 3 meters. Here is a handy chart one of our engineers made that shows the types of depth values returned by the runtime:

    Kinect for Windows default and near mode

    In Beta 2, the runtime returned the depth value for an object 800 to 4000 millimeters from the sensor, and returned 0 whether the detected depth was unknown, too near, or too far. Our version 1.0 runtime will return depth values if an object is in the above cyan zone. If the object is in the purple, brown, or white zones, the runtime will return a distinct value indicating the appropriate zone.
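    The zones can be summarized in a small sketch. The function and zone names are ours, not the SDK’s; the millimeter boundaries follow the ranges described in this post (800–4000 mm in default mode, 400–3000 mm in near mode):

```python
# Illustrative classification of a detected depth into the runtime's zones.
DEFAULT_RANGE_MM = (800, 4000)  # default mode: zone with returned depth values
NEAR_RANGE_MM    = (400, 3000)  # near mode: zone with returned depth values

def classify_depth(depth_mm, near_mode=False):
    lo, hi = NEAR_RANGE_MM if near_mode else DEFAULT_RANGE_MM
    if depth_mm <= 0:
        return "unknown"
    if depth_mm < lo:
        return "too near"
    if depth_mm > hi:
        return "too far"
    return "valid"

print(classify_depth(500))                  # too near in default mode
print(classify_depth(500, near_mode=True))  # valid in near mode
```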

    Additionally, in version 1.0 of the runtime, near mode will have some skeletal support, although not full 20-joint skeletal tracking (ST).  The below table outlines the differences between the default mode and near mode:

    Table: Kinect for Windows default and near mode

    We believe that near mode, with its operational envelope of 40 centimeters to 3 meters, will enable many new classes of applications. While full 20-joint ST will not be supported in near mode with version 1.0 of the runtime, we will be working hard to support ST in near mode in the future!

    Craig Eisler
    General Manager, Kinect for Windows

  • Kinect for Windows Product Blog

    Kinect for Windows: SDK and Runtime version 1.5 Released

    I am pleased to announce that today we have released version 1.5 of the Kinect for Windows runtime and SDK. Additionally, Kinect for Windows hardware is now available in Hong Kong, South Korea, Singapore, and Taiwan. Starting next month, Kinect for Windows hardware will be available in 15 additional countries: Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, South Africa, Sweden, Switzerland, and the United Arab Emirates. When this wave of expansion is complete, Kinect for Windows will be available in 31 countries around the world. Go to our Kinect for Windows website to find a reseller in your region.

     We have added more capabilities to help developers build amazing applications, including:

    • Kinect Studio, our new tool which allows developers to record and play back Kinect data, dramatically shortening and simplifying the development lifecycle of a Kinect application. Now a developer writing a Kinect for Windows application can record clips of users in the application’s target environment and then replay those clips at a later time for testing and further development.
    • A set of Human Interface Guidelines (HIG) to guide developers on best practices for the creation of Natural User Interfaces using Kinect.
    • The Face Tracking SDK, which provides a real-time 3D mesh of facial features—tracking the head position, location of eyebrows, shape of the mouth, etc.
    • Significant sample code additions and improvements.  There are many new samples in both C++ and C#, plus a “Basics” series of samples with language coverage in C++, C#, and Visual Basic.
    • SDK documentation improvements, including new resources as well as migration of documentation to MSDN for easier discoverability and real-time updates.
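    The record-and-replay idea behind Kinect Studio can be sketched in a few lines. This is purely illustrative of the workflow; Kinect Studio itself is a full tool, not this code:

```python
# Sketch: capture timestamped frames once in the target environment, then
# feed the same frames back to the application during later development runs.

class FrameRecorder:
    def __init__(self):
        self.clips = []

    def record(self, timestamp_ms, frame):
        self.clips.append((timestamp_ms, frame))

    def replay(self):
        """Yield recorded frames in timestamp order, as a live sensor would."""
        for timestamp_ms, frame in sorted(self.clips):
            yield timestamp_ms, frame

rec = FrameRecorder()
rec.record(33, "depth frame A")
rec.record(66, "depth frame B")
print([f for _, f in rec.replay()])
```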

     We have continued to expand and improve our skeletal tracking capabilities in this release:

    • Seated Skeletal Tracking is now available. This tracks a 10-joint head/shoulders/arms skeleton, ignoring the leg and hip joints. It is not restricted to seated positions; it also tracks head/shoulders/arms when a person is standing. This makes it possible to create applications that are optimized for seated scenarios (such as office work with productivity software or interacting with 3D data) or standing scenarios in which the lower body isn’t visible to the sensor (such as interacting with a kiosk or navigating through MRI data in an operating room).
    • Skeletal Tracking is supported in Near Mode, including both Default and Seated tracking modes. This allows businesses and developers to create applications that track skeletal movement at closer proximity, like when the end user is sitting at a desk or needs to stand close to an interactive display.
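    To make seated mode concrete, here is an illustrative sketch (the joint names are ours, not the SDK’s identifiers) of reducing a full skeleton to the 10 upper-body joints that Seated Skeletal Tracking reports:

```python
# The seated pipeline tracks head/shoulders/arms and ignores leg and hip joints.
SEATED_JOINTS = {
    "head", "shoulder_center",
    "shoulder_left", "elbow_left", "wrist_left", "hand_left",
    "shoulder_right", "elbow_right", "wrist_right", "hand_right",
}

def seated_skeleton(full_skeleton):
    """Keep only the joints the seated pipeline tracks."""
    return {name: pos for name, pos in full_skeleton.items()
            if name in SEATED_JOINTS}

full = {"head": (0.0, 1.6, 2.0), "hip_left": (0.1, 0.9, 2.0),
        "hand_right": (0.4, 1.2, 1.8)}
print(sorted(seated_skeleton(full)))  # hip_left is dropped
```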

    We have made performance and data quality enhancements, which improve the experience of all Kinect for Windows applications using the RGB camera or needing RGB and depth data to be mapped together (“green screen” applications are a common example):

    • Performance for the mapping of a depth frame to a color frame has been significantly improved, with an average speed increase of 5x.
    • Depth and color frames will now be kept in sync with each other. The Kinect for Windows runtime continuously monitors the depth and color streams and corrects any drift.
    • RGB image quality has been improved in the RGB 640x480 @30fps and YUV 640x480 @15fps video modes. The image quality is now sharper and more color-accurate in high and low lighting conditions.

    We have also added new capabilities to enable avatar animation scenarios, making it easier for developers to build applications that control a 3D avatar, such as Kinect Sports:

    • The Kinect for Windows runtime provides Joint Orientation information for the skeletons tracked by the Skeletal Tracking pipeline.
    • The Joint Orientation is provided in two forms:  A Hierarchical Rotation based on a bone relationship defined on the Skeletal Tracking joint structure, and an Absolute Orientation in Kinect camera coordinates.
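    The relationship between the two forms can be sketched with standard quaternion math: composing the parent-relative (hierarchical) rotations along a bone chain yields the absolute orientation in camera coordinates. The chain below is hypothetical; only the math is standard:

```python
import math

# Quaternions as (w, x, y, z); Hamilton product.
def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def absolute_orientation(hierarchical_chain):
    """Multiply parent-relative rotations from the root joint outward."""
    q = (1.0, 0.0, 0.0, 0.0)  # identity
    for rotation in hierarchical_chain:
        q = quat_mul(q, rotation)
    return q

# Two successive 90-degree rotations about Z compose to 180 degrees about Z.
half = math.sqrt(0.5)
q90z = (half, 0.0, 0.0, half)
print(absolute_orientation([q90z, q90z]))  # approximately (0, 0, 0, 1)
```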

    Finally, as I mentioned in my Sneak Peek Blog post, we released four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we released new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.

    As we have worked with customers large and small over the past months, we’ve seen the value in having a fully integrated approach: the Kinect software and hardware are designed together; audio, video, and depth are all fully supported and integrated; our sensor, drivers, and software work together to provide world-class echo cancellation; and our approach to human tracking, designed in conjunction with the Kinect sensor, works across a broad range of people of all shapes, sizes, clothing, and hairstyles. And because we design the hardware and software together, we are able to make changes that open up exciting new areas for innovation, like Near Mode.

    Furthermore, because Kinect for Windows is from Microsoft, our support, distribution, and partner network are all at a global scale. For example, the Kinect for Windows hardware and software are tested together and supported as a unit in every country we are in (31 countries by June!), and we will continue to add countries over time. Microsoft’s developer tools are world class, and our SDK is built to fully integrate with Visual Studio. Especially important for our global business customers is Microsoft’s ability to connect them to partners and experts who can help them use Kinect for Windows to re-imagine their brands, their products, and their processes.

    It is exciting for us to have built and shipped such a significantly enhanced version of the Kinect for Windows SDK less than 16 weeks after launch. But we are even more excited about our plans for the future – both in country expansion for the sensor, and in enhanced capabilities of our runtime and SDK.  We believe the best is yet to come, and we can’t wait to see what developers will build with this!

    Craig Eisler
    General Manager, Kinect for Windows

     

  • Kinect for Windows Product Blog

    Revealing Kinect for Windows v2 hardware

    • 53 Comments

    As we continue the march toward the upcoming launch of Kinect for Windows v2, we’re excited to share the hardware’s final look.

    Sensor

    The sensor closely resembles the Kinect for Xbox One, except that it says “Kinect” on the top panel, and the Xbox Nexus—the stylized green “x”—has been changed to a simple, more understated power indicator:

    Kinect for Windows v2 sensor

    Hub and power supply

    The sensor requires a couple of other components to work: the hub and the power supply. Tying everything together is the hub (top item pictured below), which accepts three connections: the sensor, USB 3.0 output to PC, and power. The power supply (bottom item pictured below) does just what its name implies: it supplies all the power the sensor requires to operate. The power cables will vary by country or region, but the power supply itself supports voltages from 100–240 volts.

    Kinect for Windows v2 hub (top) and power supply (bottom)

    As this first look at the Kinect for Windows v2 hardware indicates, we're getting closer and closer to launch. So stay tuned for more updates on the next generation of Kinect for Windows.

    Kinect for Windows Team



  • Kinect for Windows Product Blog

    Kinect Fusion Coming to Kinect for Windows

    • 11 Comments

    Last week, I had the privilege of giving attendees at Microsoft’s BUILD 2012 event a sneak peek at an unreleased Kinect for Windows tool: Kinect Fusion.

    Kinect Fusion was first developed as a research project at the Microsoft Research lab in Cambridge, U.K.  As soon as the Kinect for Windows community saw it, they began asking us to include it in our SDK. Now, I’m happy to report that the Kinect for Windows team is, indeed, working on incorporating it and will have it available in a future release.

    In this Kinect Fusion demonstration, a 3-D model of a home office is being created by capturing multiple views of the room and the objects on and around the desk. This tool has many practical applications, including 3-D printing, digital design, augmented reality, and gaming.

    Kinect Fusion reconstructs a 3-D model of an object or environment by combining a continuous stream of data from the Kinect for Windows sensor. It allows you to capture information about the object or environment being scanned that isn’t viewable from any one perspective. This can be accomplished either by moving the sensor around an object or environment or by moving the object being scanned in front of the sensor.

    Onlookers experience the capabilities of Kinect Fusion as a member of the Kinect for Windows team performs a live demo during BUILD 2012.

    Kinect Fusion takes the incoming depth data from the Kinect for Windows sensor and uses the sequence of frames to build a highly detailed 3-D map of objects or environments. The tool then averages the readings over hundreds or thousands of frames to achieve more detail than would be possible from just one reading. This allows Kinect Fusion to gather and incorporate data not viewable from any single viewpoint. Among other things, it enables 3-D object model reconstruction, 3-D augmented reality, and 3-D measurements. You can imagine the multitude of business scenarios where these would be useful, including 3-D printing, industrial design, body scanning, augmented reality, and gaming.
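
    As a toy illustration of that averaging step (not the actual Kinect Fusion algorithm, which fuses frames into a full volumetric model), the sketch below keeps a running per-pixel mean over many simulated noisy depth frames of a flat surface, showing how hundreds of readings beat down single-frame sensor noise:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_DEPTH = 1.5          # meters to the simulated flat surface
NOISE_STD = 0.01          # assumed per-frame sensor noise (1 cm)
fused = np.zeros((4, 4))  # tiny "depth map" for illustration
count = 0

def integrate(frame):
    """Fold one new depth frame into the running average (incremental mean)."""
    global fused, count
    count += 1
    fused += (frame - fused) / count

# Integrate 500 noisy frames, as Fusion would while the sensor moves.
for _ in range(500):
    integrate(TRUE_DEPTH + rng.normal(0.0, NOISE_STD, size=(4, 4)))

# The fused estimate is far less noisy than any single 1-cm-noise reading.
error = np.abs(fused - TRUE_DEPTH).max()
```

    Averaging 500 readings shrinks the noise by roughly a factor of sqrt(500), which is why the fused surface looks so much smoother than any one depth frame.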

    We look forward to seeing how our developer community and business partners will use the tool.

    Chris White
    Senior Program Manager, Kinect for Windows


  • Kinect for Windows Product Blog

    What’s Ahead: A Sneak Peek

    • 29 Comments

    The momentum continues for Kinect for Windows. I am pleased to announce that we will be launching Kinect for Windows in nineteen more countries in the coming months. We will have availability in Hong Kong, South Korea, and Taiwan in late May. In June, Kinect for Windows will be available in Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, Singapore, South Africa, Sweden, Switzerland, and the United Arab Emirates.

    Current and future availability of Kinect for Windows sensors

    We are also hard at work on our 1.5 release, which will be available at the end of May. Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, play back, and debug clips of users engaging with their applications. Also coming is what we call “seated” or “10-joint” skeletal tracking, which provides the capability to track the head, neck, and arms of either a seated or standing user. What is extra exciting to me about this functionality is that it will work in both default and near mode!
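
    For reference, the 10 joints in question are the head, the shoulder center, and both arms (shoulder, elbow, wrist, and hand on each side). The sketch below filters a full skeleton down to that set; the joint names match the SDK's naming, but the skeleton dictionary itself is made-up sample data, not real tracker output:

```python
# The 10 upper-body joints reported by "seated" skeletal tracking.
SEATED_JOINTS = {
    "Head", "ShoulderCenter",
    "ShoulderLeft", "ElbowLeft", "WristLeft", "HandLeft",
    "ShoulderRight", "ElbowRight", "WristRight", "HandRight",
}

def to_seated(skeleton):
    """Keep only the joints that seated (10-joint) tracking reports."""
    return {name: pos for name, pos in skeleton.items() if name in SEATED_JOINTS}

# Made-up (x, y, z) positions for a few joints of a full skeleton.
full = {
    "Head": (0.0, 0.8, 2.0),
    "HandLeft": (-0.3, 0.1, 1.9),
    "KneeLeft": (-0.1, -0.6, 2.0),   # lower-body joints are dropped...
    "FootRight": (0.2, -0.9, 2.0),   # ...in seated mode
}
seated = to_seated(full)
```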

    Also included in our 1.5 release will be four new languages for speech recognition – French, Spanish, Italian, and Japanese.  In addition, we will be releasing new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain and Spanish/Mexico.

    In a future blog post, I’ll discuss the features and capabilities we are releasing in more detail.  We are excited by the enthusiasm for Kinect for Windows, and will continue to work on bringing Kinect for Windows to more countries, supporting more languages with our speech engine, and continuing to evolve our human tracking capabilities.

    Craig Eisler
    General Manager, Kinect for Windows

     
