In addition to being a great day for Xbox One, today is also a great day for Kinect for Windows. We have started delivering Kinect for Windows v2 Developer Preview kits to program participants. The Developer Preview includes a pre-release Kinect for Windows v2 sensor, access to the new generation Kinect for Windows software development kit (SDK), as well as ongoing updates and access to private program forums. Participants will also receive a final Kinect for Windows v2 sensor when the hardware becomes available next summer (northern hemisphere).
Microsoft is committed to making the Kinect for Windows sensor and SDK available early to qualifying developers and designers so they can prepare to have their new-generation applications ready in time for general availability next summer. We continue to see a groundswell of interest in Kinect for Windows. We received thousands of applications for this program and selected participants based on the applicants’ expertise, passion, and the raw creativity of their ideas. We are impressed by the caliber of the applications we received and look forward to seeing the innovative NUI experiences our Developer Preview customers will create.
The new Kinect for Windows v2 sensor will feature the core capabilities of the new Kinect for Xbox One sensor. With the first version of Kinect for Xbox 360, developers and businesses saw the potential to apply the technology beyond gaming—in many different computing environments. Microsoft believes that the opportunities for revolutionizing computing experiences will be even greater with this new sensor. The benefits will raise the bar and accelerate the development of NUI applications across multiple industries, from retail and manufacturing to healthcare, education, communications, and more:
Real Vision: Kinect Real Vision technology dramatically expands the sensor's field of view for greater line of sight. An all-new active IR camera enables it to see in the dark. And by using advanced three-dimensional geometry, it can even tell if you’re standing off balance.
Real Motion: Kinect Real Motion technology tracks even the slightest gestures, so a simple squeeze of your hand results in precise control over an application, whether you’re standing up or sitting down.
Real Voice: Kinect Real Voice technology focuses on the sounds that matter. Thanks to an all-new multi-microphone array, advanced noise isolation lets the sensor know whom to listen to, even in a crowded space.
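The off-balance claim in Real Vision above comes down to simple vector geometry on 3-D joint positions. Here is a minimal sketch of the idea; the joint names and the center-of-mass shortcut are assumptions for illustration only, not the Kinect for Windows SDK API:

```python
# Illustrative sketch of an off-balance check from 3-D skeletal joints.
# Joint names and the center-of-mass shortcut are assumptions, not the
# Kinect for Windows SDK API.

def is_off_balance(joints, threshold=0.15):
    """joints maps a joint name to an (x, y, z) position in meters.

    Project an approximate center of mass (here, simply the spine
    joint) onto the floor plane and compare it with the midpoint
    between the ankles; a large horizontal offset suggests leaning.
    """
    spine = joints["spine"]
    left, right = joints["ankle_left"], joints["ankle_right"]
    dx = spine[0] - (left[0] + right[0]) / 2
    dz = spine[2] - (left[2] + right[2]) / 2
    return (dx * dx + dz * dz) ** 0.5 > threshold

# Standing upright: spine directly above the ankle midpoint.
upright = {"spine": (0.0, 1.0, 2.0),
           "ankle_left": (-0.1, 0.1, 2.0),
           "ankle_right": (0.1, 0.1, 2.0)}

# Leaning well off to one side.
leaning = {"spine": (0.4, 1.0, 2.0),
           "ankle_left": (-0.1, 0.1, 2.0),
           "ankle_right": (0.1, 0.1, 2.0)}
```

The real sensor works with far richer data, but the principle is the same: precise 3-D geometry makes posture itself observable.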
2014 will be exciting, to say the least. We will keep you updated as the Developer Preview program evolves and we get closer to the Kinect for Windows v2 worldwide launch next summer. Additionally, follow the progress of the early adopter community by keeping an eye on them (#k4wdev) and by following us (@kinectwindows).
The Kinect for Windows Team
It is essential for retailers to find ways to attract and connect with customers, and to stand out from the competition. To help them do so, the industry is grappling with how to build interactive experiences at scale: experiences that engage customers, help them make satisfying purchasing decisions, and use retail space strategically to provide the best possible experience.
To get a deeper understanding of what this means, we did extensive first-hand research with dozens of retailers and big brands. We learned how retailers think about implementing natural user interface (NUI) technology and how they see these experiences helping propel their businesses forward.
What we heard:
We agree. And we believe it’s important for us to bring these findings back into Kinect for Windows by delivering features that facilitate the best retail innovations. To help support this, we recently released an update to our SDK (Kinect for Windows SDK 1.8) that includes new features specifically designed to enable the development of higher-quality digital signage applications. Key features include the ability to remove backgrounds, an adaptive UI sample, and an HTML interaction sample.
To help illustrate what this all means, our team developed the following three videos. They show how Kinect for Windows experiences can help retailers attract new customers and engage customers in deeper ways. They offer examples of ways that digital signs powered by Kinect for Windows can draw customers into the business—making it possible for retailers to share offerings, cross-sell and upsell merchandise, bring the “endless aisle” concept to life, and, ultimately, inspire shoppers to purchase. And all of this is accomplished in a beautiful way that feels natural to the customer.
These videos highlight some of the core benefits retailers tell us Kinect for Windows offers them:
Kinect for Windows does this by optimizing interactions with existing large screens and enhancing the overall retail space—using gesture and voice control, background removal, proximity-based interface, and more.
So many companies have already created exciting retail experiences with Kinect for Windows: Bloomingdales, Build-a-Bear, Coca-Cola, Mattel, Nissan, Pepsi, and others. We are excited to see the new ways that Kinect for Windows is being applied in retail. The dramatic shifts in consumer shopping behaviors, preferences, and expectations in retail today are driving innovation to new levels. The possibilities are endless when we use the latest technology to put the customer at the heart of the business.
Kinect for Windows Team
I am pleased to announce that we released the Kinect for Windows software development kit (SDK) 1.8 today. This is the fourth update to the SDK since we first released it commercially one and a half years ago. Since then, we’ve seen numerous companies using Kinect for Windows worldwide, and more than 700,000 downloads of our SDK.
We build each version of the SDK with our customers in mind—listening to what the developer community and business leaders tell us they want and traveling around the globe to see what these dedicated teams do, how they do it, and what they most need out of our software development kit.
The new background removal API is useful for advertising, augmented reality gaming, training and simulation, and more.
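The essence of background removal is depth keying: keep the color pixels whose depth falls inside a near-player band and blank the rest. The SDK exposes a dedicated background-removal stream for this; the standalone sketch below only illustrates the underlying concept on flat lists:

```python
# A sketch of the depth-keying idea behind background removal. The SDK
# provides a dedicated background-removal stream; this function only
# illustrates the concept and is not the SDK API.

def remove_background(color, depth, near_mm=800, far_mm=2000):
    """color: pixel values; depth: matching per-pixel depths in mm.
    Returns color with out-of-band pixels replaced by None."""
    return [c if near_mm <= d <= far_mm else None
            for c, d in zip(color, depth)]

# Only the two pixels inside the 0.8-2.0 m band survive.
frame = remove_background(["a", "b", "c", "d"],
                          [500, 1200, 1900, 3500])
```

In a digital-signage application, the blanked pixels would be composited over new imagery, placing the shopper in front of whatever scene the sign is showing.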
Kinect for Windows SDK 1.8 includes some key features and samples that the community has been asking for, including:
We also have updated our Human Interface Guidelines (HIG) with guidance to complement the new Adaptive UI sample, including the following:
Design a transition that reveals or hides additional information without obscuring the anchor points in the overall UI.
Design UI where users can accomplish all tasks for each goal within a single range.
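The guidance above rests on the idea that the interface should adapt to how far away the user stands. A minimal sketch of that proximity logic follows; the range names and distance cutoffs are illustrative assumptions, not values taken from the Human Interface Guidelines:

```python
# Sketch of proximity-based UI adaptation. Range names and cutoffs are
# illustrative assumptions, not values from the Human Interface
# Guidelines or the Adaptive UI sample.

def ui_range(distance_m):
    """Map the user's distance from the sensor to an interaction range."""
    if distance_m < 0.9:
        return "tactile"      # close enough to touch the screen
    if distance_m < 2.0:
        return "interactive"  # hand gestures drive the UI
    return "attract"          # show large, glanceable content
```

An application would then size its controls and content so that every task for a given goal can be completed without forcing the user to cross between ranges.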
My team and I believe that communicating naturally with computers means being able to gesture and speak, just like you do when communicating with people. We believe this is important to the evolution of computing, and are committed to helping this future come faster by giving our customers the tools they need to build truly innovative solutions. There are many exciting applications being created with Kinect for Windows, and we hope these new features will make those applications better and easier to build. Keep up the great work, and keep us posted!
Bob Heddle, Director, Kinect for Windows
The following blog post was guest authored by Anup Chathoth, co-founder and CEO of Ubi Interactive.
Ubi Interactive is a Seattle startup that was one of 11 companies from around the world selected to take part in a three-month Microsoft Kinect Accelerator program in the spring of 2012. Since then, the company has developed its software with the help of more than 100 users and is now accepting orders for it.
Patrick Wirtz, an innovation manager for The Walsh Group, spends most of his time implementing technology that will enhance Walsh’s ability to work with clients. It’s a vital role at The Walsh Group, a general building construction organization founded in 1898 that has invested more than US$450 million in capital equipment and regularly employs more than 5,000 engineers and skilled tradespeople.
"It’s a powerful piece of technology," says Patrick Wirtz, shown here using Ubi in The Walsh Group offices. By setting up interactive 3-D blueprints on the walls, Walsh gives clients the ability to explore, virtually, a future building or facility.
In the construction industry, building information modeling (BIM) is a critical component of presentations to clients. BIM allows construction companies like The Walsh Group to represent the functional characteristics of a facility digitally. While BIM is effective, Wirtz wanted something that would really “wow” his clients: a way for them not only to see the drawings, but to bring the buildings to life by exploring the blueprints themselves.
Wirtz found the solution he had been seeking when he stumbled upon an article about Ubi. At Ubi Interactive, we provide the technology to transform any surface into an interactive touch screen. All the user needs is a computer running our software, a projector, and the Kinect for Windows sensor. Immediately, Wirtz knew Ubi was something he wanted to implement at Walsh: “I contacted the guys at Ubi and told them I am very interested in purchasing the product.” Wirtz was excited about the software and flew out to Seattle for a demo.
After interacting with the software, Wirtz was convinced that this technology could help The Walsh Group. “Ubi is futuristic-like technology,” he noted—but a technology that he and his colleagues are able to use today. Wirtz immediately saw the potential: Walsh’s building information models could now be interactive displays. Instead of merely presenting drawings to clients, Walsh can now set up an interactive 3-D blueprint on the wall. Clients can walk up to the blueprint and discover what the building will look like by touching and interacting with the display. In use at Walsh headquarters since June 2012, Ubi Interactive brings client engagement to an entirely new level.
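At the core of turning a projected wall into a touch screen is a depth comparison: if a tracked fingertip's depth matches the known depth of the wall at that point, the system reads it as a touch. The sketch below illustrates only that idea; the threshold is an illustrative assumption, and Ubi's actual pipeline is its own:

```python
# Sketch of depth-based touch detection on a projected surface. The
# tolerance value is an illustrative assumption; Ubi's real pipeline
# is proprietary.

def is_touch(finger_depth_mm, surface_depth_mm, tolerance_mm=15):
    """Touch when the fingertip is within tolerance of the surface."""
    return abs(surface_depth_mm - finger_depth_mm) <= tolerance_mm

# With the wall measured at 2500 mm from the sensor:
on_wall = is_touch(2492, 2500)    # fingertip pressed to the wall
hovering = is_touch(2300, 2500)   # hand 20 cm in front of it
```

A full system also calibrates the projector image to the sensor's view so each touch lands on the right on-screen control.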
Similarly, Evan Collins, a recent graduate of California Polytechnic State University, used the Ubi software as part of an architecture show he organized. The exhibition showcased 20 interactive displays that allowed the fifth-year architecture students to present their thesis projects in a way that was captivating to audience members. Collins said the interactive displays, “…allowed audience members to choose what content they interacted with instead of listening to a static slideshow presentation.”
Twenty Ubi Interactive displays at California Polytechnic State University
Wirtz’s and Collins’ cases are just two ways that people are currently using Ubi. Because the solution is so affordable, people from a wide range of industries have found useful applications for the Ubi software. Wirtz said, “I didn’t want to spend $10,000. I already had a projector and a computer. All I needed to purchase was the software and a $250 Kinect for Windows sensor. With this small investment, I can now turn any surface into a touch screen. It’s a powerful piece of technology.”
In addition to small- and mid-sized companies, several Fortune 500 enterprises like Microsoft and Intel are also using the software in their conference rooms. And the use of the technology goes beyond conference rooms:
At Ubi Interactive, our goal is to make the world a more interactive place. We want human collaboration and information to be just one finger touch away, no matter where you are. By making it possible to turn any surface into a touch screen, we eliminate the need for screen hardware, reducing the cost of interactive displays and making them feasible in places they previously were not, such as on walls in public spaces. We believe our technology can change how people interact with information on a global level. After a private beta evaluation with more than 50 organizations, the Ubi software is now available for ordering at ubi-interactive.com.
Anup Chathoth, Co-Founder and CEO, Ubi Interactive
Today at Microsoft BUILD 2013, we made two important announcements for our Kinect for Windows developer community.
First, starting today, developers can apply for a place in our upcoming developer kit program. This program will give participants exclusive early access to everything they need to start building applications for the recently announced new generation Kinect for Windows sensor, including a pre-release version of the new sensor hardware and software development kit (SDK) in November, and a replacement unit of the final sensor hardware and firmware when it is publicly available next year. The cost for the program will be US$399 (or local equivalent). Applications must be received by July 31, and successful applicants will be notified and charged in August. Interested developers are strongly encouraged to apply early, as spots are limited and demand for the new sensor is already great. Review complete program details and apply for the program.
The upcoming Kinect for Windows SDK 1.8 will include more realistic color capture with Kinect Fusion.
Additionally, in September we will again refresh the Kinect for Windows SDK with several exciting updates including:
The feature enhancements will enable even better Kinect for Windows-based applications for businesses and end users, and the convenience of HTML5 will make it easier for developers to build leading-edge touch-free experiences.
This will be the fourth significant update to the Kinect for Windows SDK since we launched 17 months ago. We are committed to continuing to improve the existing Kinect for Windows platform as we prepare to release the new generation Kinect for Windows sensor and SDK. If you aren’t already using Kinect for Windows to develop touch-free solutions, now is a great time to start. Join us as we continue to make technology easier to use and more intuitive for everyone.
Bob Heddle, Director, Kinect for Windows
The following blog post was guest authored by Celeste Humphrey, business development consultant at nsquared, and Dr. Neil Roodyn, director of nsquared.
A company that is passionate about learning, technology, and creating awesome user experiences, nsquared has developed three new applications that take advantage of Kinect for Windows to provide users with interactive, natural user interface experiences. nsquared is located in Sydney, Australia.
At nsquared, we believe that vision-based interaction is the future of computing. The excitement we see in the technology industry regarding touch and tablet computing is a harbinger of the changes that are coming as smarter computer vision systems evolve.
Kinect for Windows has provided us with the tools to create some truly amazing products for education, hospitality, and events.
Education: nsquared sky spelling
We are excited to announce nsquared sky spelling, our first Kinect for Windows-based educational game. This new application, aimed at children aged 4 to 12, makes it fun for children to learn to spell in an interactive and collaborative environment. Each child selects a character or vehicle, such as a dragon, a biplane, or a butterfly, and then flies as that character through the sky to capture letters that complete the spelling of various words. The skeleton recognition capabilities of the Kinect for Windows sensor and software development kit (SDK) track the movement of the children as they stretch out their arms as wings to navigate their character through hoops alongside their wingman (another player). The color camera in the Kinect for Windows sensor allows each child to add their photo, thereby personalizing their experience.
nsquared sky spelling
Hospitality: nsquared hotel kiosk
The nsquared hotel kiosk augments the concierge function in a hotel by providing guidance to hotel guests through an intuitive, interactive experience. Guests can browse through images and videos of activities, explore locations on a map, and find out what's happening with a live event calendar. It also provides live weather updates and has customizable themes. The nsquared hotel kiosk uses the new gestures supported in the Kinect for Windows SDK 1.7, enabling users to use a “grip” gesture to drag content across the screen and a “push” gesture to select content. With its fun user interface, this informative kiosk provides guests an interactive alternative to the old brochure rack.
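The "grip" and "push" gestures are reported directly by the SDK 1.7 interaction stream, so applications do not implement them by hand. Still, the underlying idea is easy to see in a sketch; the per-frame hand samples below are hypothetical, and the thresholds are illustrative assumptions:

```python
# Illustrative recognizers for the "push" and "grip" interactions.
# SDK 1.7's interaction stream reports these events directly; this
# sketch only shows the underlying idea on hypothetical frame data.

def detect_push(hand_depths_mm, travel_mm=60):
    """A push: the hand moves toward the sensor (depth decreases)
    by at least travel_mm over the sampled frames."""
    return (hand_depths_mm[0] - min(hand_depths_mm)) >= travel_mm

def detect_grip(hand_states):
    """A grip begins the moment an open hand closes."""
    return any(a == "open" and b == "closed"
               for a, b in zip(hand_states, hand_states[1:]))
```

In the kiosk, a grip followed by hand movement drags content across the screen, and a push toward the sensor selects the item under the cursor.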
Kinect for Windows technology enables nsquared to provide an interactive kiosk experience for less than half the price of a similar-sized touchscreen (see note).
nsquared hotel kiosk
Events: nsquared media viewer
The new nsquared media viewer application is a great way to explore interactive content in almost any environment. Designed for building lobbies, experience centers, events, and corporate locations, the nsquared media viewer enables you to display images and video by category in a stylish, customizable carousel. Easy to use, anyone can walk up and start browsing in seconds.
In addition to taking advantage of key features of the Kinect for Windows sensor and SDK, nsquared media viewer utilizes Windows Azure, allowing clients to view reports about the usage of the screen and the content displayed.
nsquared media viewer
Kinect for Windows technology has made it possible for nsquared to create applications that allow people to interact with content in amazing new ways, helping us take a step towards our collective future of richer vision-based computing systems.
Celeste Humphrey, business development consultant, and Dr. Neil Roodyn, director, nsquared
Note: Based on the price of a 65-inch touch overlay at approximately US$900, compared with the cost of a Kinect for Windows sensor at approximately US$250. For integrated touch solutions, the price can be far higher.
By now, most of you likely have heard about the new Kinect sensor that Microsoft will deliver as part of Xbox One later this year.
Today, I am pleased to announce that Microsoft will also deliver a new generation Kinect for Windows sensor next year. We’re continuing our commitment to equipping businesses and organizations with the latest natural technology from Microsoft so that they, in turn, can develop and deploy innovative touch-free applications for their businesses and customers. A new Kinect for Windows sensor and software development kit (SDK) are core to that commitment.
Both the new Kinect sensor and the new Kinect for Windows sensor are being built on a shared set of technologies. Just as the new Kinect sensor will bring opportunities for revolutionizing gaming and entertainment, the new Kinect for Windows sensor will revolutionize computing experiences. The precision and intuitive responsiveness that the new platform provides will accelerate the development of voice and gesture experiences on computers.
Some of the key capabilities of the new Kinect sensor include:
The enhanced fidelity and depth perception of the new Kinect sensor will allow developers to create apps that see a person's form better, track objects with greater detail, and understand voice commands in noisier settings.
The new sensor tracks more points on the human body than previously, including the tip of the hand and thumb, and tracks six skeletons at once. This opens up a range of new scenarios, from improved "avateering" to experiences in which multiple users can participate simultaneously.
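The simplest consequence of multi-skeleton tracking is that an application loops over the bodies the sensor reports and keeps the tracked ones, then drives one avatar or UI cursor per user. The dictionaries below are simplified stand-ins for the SDK's body-frame data, not its actual types:

```python
# Sketch of handling multiple tracked users per frame. The dictionaries
# are simplified stand-ins for the SDK's body-frame data, not its types.

def tracked_users(bodies, max_bodies=6):
    """Return the ids of bodies reported as tracked, up to six."""
    return [b["id"] for b in bodies[:max_bodies] if b["tracked"]]

frame = [{"id": 0, "tracked": True},
         {"id": 1, "tracked": False},
         {"id": 2, "tracked": True}]
```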
I’m sure many of you want to know more. Stay tuned; at BUILD 2013 in June, we’ll share details about how developers and designers can begin to prepare to adopt these new technologies so that their apps and experiences are ready for general availability next year.
A new Kinect for Windows era is coming: an era of unprecedented responsiveness and precision.
Photos in this blog by STEPHEN BRASHEAR/Invision for Microsoft/AP Images
Reflexion Health, founded with technology developed at the West Health Institute, realized years ago that assessing physical therapy outcomes is difficult for a variety of reasons, and took on the challenge of designing a solution to help increase the success rates of rehabilitation from physical injury.
In 2011, the Reflexion team approached the Orthopedic Surgery Department of the Naval Medical Center San Diego to help test its new Rehabilitation Measurement Tool (RMT), a software solution developed to make physical therapy more engaging, efficient, and successful. By using the Kinect for Windows sensor and software development kit (SDK), the RMT allows clinicians to measure patient progress. Patients often do much of their therapy alone, and without immediate feedback from a therapist it can be difficult for them to be certain they are performing the exercises in a way that provides optimal benefit. The RMT can indicate whether exercises were performed properly and how frequently, and it gives patients real-time feedback.
Reflexion Health's Kinect for Windows-based tool helps measure how patients respond to physical therapy.
“Kinect for Windows helps motivate patients to do physical therapy—and the data set we gather when they use the RMT is becoming valuable to demonstrate what form of therapy is most effective, what types of patients react better to what type of therapy, and how to best deliver that therapy. Those questions have vexed people for a long time,” says Dr. Ravi Komatireddy, co-founder at Reflexion Health.
The proprietary RMT software engages patients with avatars and educational information, and a Kinect for Windows sensor tracks a patient’s range of motion and other clinical data. This valuable information helps therapists customize and deliver therapy plans to patients.
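One concrete measurement a tool like the RMT could derive from skeletal data is the angle at a joint, computed from three 3-D points with standard vector geometry. The sketch below shows that calculation; the joint layout is illustrative and is not Reflexion Health's implementation:

```python
import math

# Sketch: joint angle from three 3-D skeletal points. Standard vector
# geometry; illustrative only, not Reflexion Health's implementation.

def joint_angle(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# Shoulder, elbow, wrist in a straight line: a fully extended arm.
extended = joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))
# A right-angle bend at the elbow.
bent = joint_angle((0, 0, 0), (0.3, 0, 0), (0.3, 0.3, 0))
```

Tracking such an angle across repetitions is one way range-of-motion progress could be quantified session over session.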
“RMT is a breakthrough that can change how physical therapy is delivered,” Spencer Hutchins, co-founder and CEO of Reflexion Health says. “Kinect for Windows helps us build a repository of information so we can answer rigorous questions about patient care in a quantitative way.” Ultimately, Reflexion Health has demonstrated how software could be prescribed—similarly to pharmaceuticals and medical devices—and how it could possibly lower the cost of healthcare.
More information about RMT and the clinical trials conducted by the Naval Medical Center can be found in the newly released case study.
Kinect for Windows team
As you might imagine, working in a nuclear power plant provides special challenges. One crucial aspect for any project is the need to minimize employee exposure to radiation by applying a standard known as As Low As Reasonably Achievable—ALARA for short.
How this works: Plant ALARA managers work with the maintenance groups to estimate how much time is required to perform a task and, allowing for exposure limits, they determine how many employees may be needed to safely complete it. Today, that work is typically done with pen and paper. But new tools from Siemens PLM Software that incorporate the Kinect for Windows sensor could change this by providing a 3-D virtual interactive modeling environment.
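The staffing arithmetic the ALARA managers perform is, at its simplest, dose rate times time, divided across each worker's exposure allowance. A back-of-the-envelope sketch, with units and limits that are illustrative assumptions rather than regulatory values:

```python
import math

# Back-of-the-envelope ALARA staffing arithmetic. Units and limits are
# illustrative assumptions, not regulatory values.

def workers_needed(dose_rate_msv_per_h, task_hours, per_worker_limit_msv):
    """Minimum workers so that no individual exceeds their limit."""
    total_dose = dose_rate_msv_per_h * task_hours
    return math.ceil(total_dose / per_worker_limit_msv)

# A 4-hour task in a 1.5 mSv/h field with a 2 mSv per-worker allowance:
crew = workers_needed(1.5, 4, 2)
```

The value of the 3-D simulation is in refining the inputs to this arithmetic: modeling the task virtually yields better estimates of time spent in each hazard zone before anyone enters the plant.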
Kinect for Windows is used to capture realistic movement for use in the Siemens Teamcenter solution for ALARA radiation planning.
The solution, piloted at a US nuclear power plant last year, is built on Siemens’ Teamcenter software, using its Tecnomatix Process Simulate product. Tecnomatix provides virtual 3-D human avatars, “Jack” and “Jill,” whose movements can be driven by motion input from a Kinect for Windows sensor. This solution is helping to usher in a new era of industrial planning applications for employee health and safety in the nuclear industry.
"We're really revolutionizing the industry," said Erica Simmons, global marketing manager for Energy, Oil, and Gas Industries at Siemens PLM Software. "For us, this was a new way to develop a product in tandem with the industry associations. We created a specific use case with off-the-shelf technology and tested and validated it with industry. What we have now is a new visual and interactive way of simulating potential radiation exposure which can lead to better health and safety strategies for the plant."
Traditional pencil-and-paper planning (left) compared to the Siemens PLM Software Process Simulate on Teamcenter solution (right) with “Jack” avatar and Kinect for Windows movement input.
The Siemens Tecnomatix process planning application, integrated with the Kinect for Windows system, will give nuclear plant management the ability to better manage individual worker radiation exposure and optimize steps to reduce overall team exposure. As a bonus, once tasks have been recorded by using “Jack,” the software can be used for training. Employees can learn and practice an optimized task by using Kinect for Windows and Siemens “Jack” and “Jill”—safely outside of the radiation zone—until they have mastered it and are ready to perform the actual work.
"We wanted to add something more for the user of this solution in addition to our 3-D human avatars and the hazard zones created by our visual cartography; this led us to exploring what we could do with the Kinect for Windows SDK for this use case," said Dr. Ulrich Raschke, director of Human Simulation Technologies at Siemens PLM Software. “User feedback has been good so far; the addition of the Kinect for Windows system adds another level of interactivity to our application."
This Siemens solution grew out of a collaborative effort with Electric Power Research Institute (EPRI) and Fiatech industry association, which identified the need for more technologically advanced estimation tools for worker radiation dosage. Kinect for Windows was incorporated when the developers were tailoring the avatar system to the solution and exploring ways to make the user experience much more interactive.
"Collaboration with several key stakeholders and industry experts led to this innovative solution," said Phung Tran, senior project manager at EPRI. "We're pleased the industry software providers are using it, and look forward to seeing the industry utilize these new tools."
“In fact,” Tran added, “the tool is not necessarily limited to radiation work planning. It could help improve the management and execution of many operation, maintenance, and project-based tasks.”
Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately.
A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.
Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here. Now you can untether yourself and move around a conference room naturally.
In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.
Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.
Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.
Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.
Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.
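Kinect Fusion achieves its accuracy by integrating many noisy depth frames into a single model. One core ingredient is a per-voxel running weighted average (the TSDF update rule), sketched minimally below; this is a simplification of the published KinectFusion method, not SDK code:

```python
# Minimal sketch of the per-voxel running weighted average at the heart
# of KinectFusion-style reconstruction. A simplification of the
# published method, not SDK code.

def fuse(value, weight, new_value, new_weight=1.0, max_weight=50.0):
    """Fold a new depth-derived sample into a voxel's running average.
    Returns the updated (value, weight); capping the weight keeps the
    model responsive to change."""
    fused = (value * weight + new_value * new_weight) / (weight + new_weight)
    return fused, min(weight + new_weight, max_weight)

# Three noisy observations of a surface about 1.00 m away converge:
v, w = 0.0, 0.0
for sample in (1.02, 0.99, 1.00):
    v, w = fuse(v, w, sample)
```

Averaging in this way cancels sensor noise frame by frame, which is why a fused model is far smoother than any single depth image.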