The following blog post was guest authored by Anup Chathoth, co-founder and CEO of Ubi Interactive.
Ubi Interactive is a Seattle startup that was one of 11 companies from around the world selected to take part in a three-month Microsoft Kinect Accelerator program in the spring of 2012. Since then, the company has developed the software with more than 100 users and is now accepting orders for the software.
Patrick Wirtz, an innovation manager for The Walsh Group, spends most of his time implementing technology that will enhance Walsh’s ability to work with clients. It’s a vital role at The Walsh Group, a general building construction organization founded in 1898 that has invested more than US$450 million in capital equipment and regularly employs more than 5,000 engineers and skilled tradespeople.
"It’s a powerful piece of technology," says Patrick Wirtz, shown here using Ubi in The Walsh Group offices. By setting up interactive 3-D blueprints on the walls, Walsh gives clients the ability to explore, virtually, a future building or facility.
In the construction industry, building information modeling (BIM) is a critical component of presentations to clients. BIM allows construction companies like The Walsh Group to represent the functional characteristics of a facility digitally. While BIM presentations are effective, Wirtz wanted something that would really “wow” his clients. He wanted a way for clients not only to see the drawings, but to bring the buildings to life by exploring the blueprints themselves.
Wirtz found the solution he had been seeking when he stumbled upon an article about Ubi. At Ubi Interactive, we provide the technology to transform any surface into an interactive touch screen. All the user needs is a computer running our software, a projector, and the Kinect for Windows sensor. Immediately, Wirtz knew Ubi was something he wanted to implement at Walsh: “I contacted the guys at Ubi and told them I am very interested in purchasing the product.” Wirtz was excited about the software and flew out to Seattle for a demo.
After interacting with the software, Wirtz was convinced that this technology could help The Walsh Group. “Ubi is futuristic-like technology,” he noted—but a technology that he and his colleagues are able to use today. Wirtz immediately saw the potential: Walsh’s building information models could now be interactive displays. Instead of merely presenting drawings to clients, Walsh can now set up an interactive 3-D blueprint on the wall. Clients can walk up to the blueprint and discover what the building will look like by touching and interacting with the display. In use at Walsh headquarters since June 2012, Ubi Interactive brings client engagement to an entirely new level.
Similarly, Evan Collins, a recent graduate of California Polytechnic State University, used the Ubi software as part of an architecture show he organized. The exhibition showcased 20 interactive displays that allowed the fifth-year architecture students to present their thesis projects in a way that was captivating to audience members. Collins said the interactive displays, “…allowed audience members to choose what content they interacted with instead of listening to a static slideshow presentation.”
Twenty Ubi Interactive displays at California Polytechnic State University
Wirtz’s and Collins’ cases are just two ways that people are currently using Ubi. Because the solution is so affordable, people from a wide range of industries have found useful applications for the Ubi software. Wirtz said, “I didn’t want to spend $10,000. I already had a projector and a computer. All I needed to purchase was the software and a $250 Kinect for Windows sensor. With this small investment, I can now turn any surface into a touch screen. It’s a powerful piece of technology.”
In addition to small- and mid-sized companies, several Fortune 500 enterprises like Microsoft and Intel are also using the software in their conference rooms. And the use of the technology goes beyond conference rooms.
At Ubi Interactive, it is our goal to make the world a more interactive place. We want human collaboration and information to be just one finger touch away, no matter where you are. By making it possible to turn any surface into a touch screen, we eliminate the need for screen hardware, reducing costs and enabling interactive displays in places where they were not previously feasible—such as on walls in public spaces. Our technology has the potential to revolutionize the way people live their lives around the globe. After private beta evaluation with more than 50 organizations, the Ubi software is now available for ordering at ubi-interactive.com.
Anup Chathoth
Co-Founder and CEO, Ubi Interactive
The momentum continues for Kinect for Windows. I am pleased to announce that we will be launching Kinect for Windows in 19 more countries in the coming months. We will have availability in Hong Kong, South Korea, and Taiwan in late May. In June, Kinect for Windows will be available in Austria, Belgium, Brazil, Denmark, Finland, India, the Netherlands, Norway, Portugal, Russia, Saudi Arabia, Singapore, South Africa, Sweden, Switzerland, and the United Arab Emirates.
We are also hard at work on our 1.5 release, which will be available at the end of May. Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, play back, and debug clips of users engaging with their applications. Also coming is what we call “seated” or “10-joint” skeletal tracking, which provides the capability to track the head, neck, and arms of either a seated or standing user. What makes this functionality especially exciting to me is that it will work in both default and near mode!
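For illustration, seated mode can be thought of as restricting a full skeleton to its 10 upper-body joints. The sketch below is a hypothetical stand-in, not the SDK's API; the joint names mirror the skeleton the SDK reports, but the filtering shown here is something the runtime handles for you:

```python
# The 10 upper-body joints reported in seated ("10-joint") mode.
# This filter is an illustrative sketch, not part of the SDK.
SEATED_JOINTS = {
    "Head", "ShoulderCenter",
    "ShoulderLeft", "ShoulderRight",
    "ElbowLeft", "ElbowRight",
    "WristLeft", "WristRight",
    "HandLeft", "HandRight",
}

def filter_seated(skeleton):
    """Keep only the joints that seated mode tracks.

    `skeleton` maps joint names to (x, y, z) positions in meters.
    """
    return {name: pos for name, pos in skeleton.items()
            if name in SEATED_JOINTS}
```

In practice an application built for seated mode simply ignores lower-body joints, which is what makes it robust for users at desks or in wheelchairs.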
Also included in our 1.5 release will be four new languages for speech recognition – French, Spanish, Italian, and Japanese. In addition, we will be releasing new language packs which enable speech recognition for the way a language is spoken in different regions: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain and Spanish/Mexico.
In a future blog post, I’ll discuss the features and capabilities we are releasing in more detail. We are excited by the enthusiasm for Kinect for Windows, and will continue to work on bringing Kinect for Windows to more countries, supporting more languages with our speech engine, and continuing to evolve our human tracking capabilities.
Craig Eisler
General Manager, Kinect for Windows
The Kinect for Windows team has been hard at work this summer, and I have some very exciting developments to share with you regarding our roadmap between now and the end of the year.
On October 8, Kinect for Windows is coming to China. China is a leader in business and technology innovation. We are very excited to make Kinect for Windows available in China so that developers and businesses there can innovate with Kinect for Windows and transform experiences through touch-free solutions.
Kinect for Windows hardware will be available in seven additional markets later this fall: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico.
In addition to making Kinect for Windows hardware available in eight new markets this fall, we will be releasing an update to the Kinect for Windows runtime and software development kit (SDK) on October 8. This release has numerous new features that deliver additional power to Kinect for Windows developers and business customers. We will share the full details when it’s released on October 8.
It has been a little more than seven months since we first launched Kinect for Windows in 12 markets. By the end of the year, Kinect for Windows will be available in 38 markets and we will have shipped two significant updates to the SDK and runtime beyond the initial release—and this is just the beginning. Microsoft has had a multi-decade commitment to natural user interface (NUI), and my team and I look forward to continuing to be an important part of that commitment. In coming years, I believe that we will get to experience an exciting new era where computing becomes invisible and all of us will be able to interact intuitively and naturally with the computers around us.
To commemorate the one-year anniversary of Kinect and the Kinect Effect, I sent an email to my team earlier this week. I’d like to quote for you what I said to them, “It all started with a revolutionary sensor and amazing software that turned voice and movement into magic. With that magical combination, last year the Interactive Entertainment Business at Microsoft showed the world how to re-imagine gaming. This year, we’re showing the world how to re-imagine entertainment. Next year, with Kinect for Windows, we will help the world re-imagine everything else.”
To mark the milestone, the Kinect for Windows team is celebrating with our own milestones: We’re starting up this blog, launching the official Kinect for Windows web site, and releasing beta 2 of the Kinect for Windows SDK. (And, yes, we will celebrate the anniversary more this evening—it’s been an amazing journey these past months!)
I know many of you are eagerly awaiting the Kinect for Windows commercial program coming in early 2012. My team is working hard to deliver a great product and I’m confident that it will be worth the wait.
We’ve already seen strong enthusiasm for Kinect among developers who have done amazing things with it in countless different ways, from education to healthcare, gaming to art installations, manufacturing to retail.
Currently, we have more than 200 companies taking part in our pilot program. They are telling us how Kinect for Windows will help them transform their products, their processes, their brands, and their businesses. Putting the power of Kinect + Windows into the hands of business leaders and technical visionaries will give them the tools they need to develop novel solutions for everything from training employees to visualizing data, from configuring a car to managing an assembly line.
The updated software development kit that we released today includes some great new features that help us get closer to realizing this vision, including faster skeletal tracking, improved accuracy in skeletal tracking and joint recognition, and the ability to plug and unplug your Kinect without losing work or productivity.
Every day, I come to work and learn about another amazing application that a partner or other developer is doing with Kinect for Windows. I look forward to next year, when the potential goes exponential and everyone’s ideas, including yours, are part of that equation.
If you haven’t done so already, download the SDK and re-imagine the world with us.
—Craig Eisler
General Manager, Kinect for Windows
Today at Engadget Expand, I announced that Kinect for Windows SDK 1.7 will be made available this coming Monday, March 18. This is our most significant update to the SDK since we released the first version a little over a year ago, and I can’t wait to see what businesses and developers do with the new features and enhancements.
On Monday, developers will be able to download the SDK, developer toolkit, and the new and improved Human Interface Guidelines (HIG) from our website. In the meantime, here’s a sneak peek:
Kinect Interactions give businesses and developers the tools to create intuitive, smooth, and polished applications that are ergonomic and intelligently based on the way people naturally move and gesture. The interactions include push-to-press buttons, grip-to-pan capabilities, and support for smart ways to accommodate multiple users and two-person interactions. These new tools are based on thousands of hours of research, development, and testing with a broad and diverse group of people. We wanted to save businesses and developers hours of development time while making it easier for them to create gesture-based experiences that are highly consistent from application to application and utterly simple for end users. With Kinect Interactions, businesses can more quickly develop customized, differentiated solutions that address important business needs and attract, engage, and delight their customers.
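As a rough illustration of the idea behind push-to-press (not the SDK's actual implementation, which ships ready-made and is tuned from that research), a push can be reduced to watching how far the tracked hand moves toward the sensor:

```python
def detect_push(hand_z_samples, threshold_m=0.10):
    """Return True when the hand has moved toward the sensor
    (decreasing z, in meters) by more than `threshold_m` over the
    sampled window -- a crude stand-in for push-to-press.

    `hand_z_samples` is a recent history of the hand joint's
    distance from the sensor, oldest sample first.
    """
    if len(hand_z_samples) < 2:
        return False
    return hand_z_samples[0] - min(hand_z_samples) > threshold_m
```

A real interaction model also has to handle press-and-release timing, hysteresis, and multiple users, which is exactly the work Kinect Interactions saves developers from redoing.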
Kinect for Windows Interactions transform how people interact with computers in settings ranging from retail to education, training, and physical therapy.
Kinect Fusion is one of the most affordable tools available today for creating accurate 3-D renderings of people and objects. Kinect Fusion fuses together multiple snapshots from the Kinect for Windows sensor to create accurate, full, 3-D models. Developers can move a Kinect for Windows sensor around a person, object, or environment and “paint” a 3-D image of the person or thing in real time. These 3-D images can then be used to enhance countless real-world scenarios, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things such as improved clothes shopping experiences and better-fitting orthotics. Kinect Fusion is something many of our partners have been asking for and we’re thrilled to be delivering it now.
Kinect Fusion enables developers to create accurate 3-D renderings in real time.
The updated SDK also includes an enhanced developer toolkit and additional developer resources.
Seeing is believing
We demonstrated Kinect Interactions and Kinect Fusion live, onstage at Engadget Expand. You can watch the webcast of those demos now—and then come back to download the latest SDK on March 18. It’s fully compatible with all previous commercial releases, so we encourage everyone to upgrade to the new version. There’s no reason not to!
As always, we are constantly evolving the technology and want to know what you think. And we love hearing about the solutions you’re developing with Kinect for Windows, so please join us at Facebook and Twitter.
The Kinect for Windows sensor, together with the SDK, can help you create engaging applications that take natural voice and gesture computing to the next level.
Bob Heddle, Director
Kinect for Windows
At BUILD in April, we told the world that the Kinect for Windows v2 sensor and SDK would be coming this summer, and with them, the ability for developers to start creating Windows Store apps with Kinect for the first time. Well, here in Redmond, Washington, it’s not summer yet. But today we are pleased to announce that developers can pre-order the Kinect for Windows v2 sensor. Developers who take advantage of this pre-order option will be able to start building solutions ahead of the general public.
Sensors purchased during the pre-order phase will be shipped in July, at which time we will also release a public beta of our software development kit (SDK). All of this will happen a few months ahead of general availability of sensors and the SDK, giving pre-order customers a head start on using the v2 sensor’s new and improved features, including increased depth-sensing capabilities, full 1080p video, improved skeletal tracking, and enhanced infrared technology.
Thousands of developers wanted to take part in our Developer Preview program but were unable to do so—in fact, we’re still receiving requests from all around the world. So for these and other developers who are eager to start using the Kinect for Windows v2, the pre-order option offers access to the new sensor ahead of general availability. Bear in mind, however, that we have limited quantities of pre-order sensors, so order while supplies last.
The v2 sensors will also be shipped in July to those who participated in the Developer Preview program. For these early adopters, it’s been an amazing six months: we’ve seen more stunning designs, promising prototypes, and early apps than we can count—from finger tracking to touch-free controls for assembly line workers to tools for monitoring the environment. At BUILD, we showed you what Reflexion Health and Freak’n Genius were able to achieve with the v2 sensor in just a matter of weeks. And in July, when the sensor and SDK are more broadly available, we can only imagine what’s next.
Kinect for Windows will continue to feature more innovative uses of the v2 technology on this blog in the coming months. As Microsoft Corporate Vice President and Chief Evangelist Steven Guggenheimer notes, “I love what the Kinect sensor and SDK can do. Getting the v2 sensor into the hands of more developers and getting the SDK more widely available is the next step.”
We are committed to a future where humans and technology can interact more seamlessly—in the living room, on their PCs, and beyond.
—The Kinect for Windows Team
Yes, it’s the moment many of you have been waiting for: Kinect for Windows SDK 1.7 is available for download! We’ve included a few photos of the key features: Kinect Interactions and Kinect Fusion. Or if you’re a developer, you can download the SDK and get started immediately.
A woman demonstrates the new Kinect Interactions, which are included in the Kinect for Windows SDK 1.7: counter-clockwise from top left: “push” to select, “grab” to scroll and pan, and wave to identify primary user. Two-handed zoom (top right) is not included but can be built with this new SDK.
Kinect Interactions are designed to let users intuitively do things like press their hand forward a few inches to push a button, or close their hands to “grip and pan” as seen here. Now you can untether yourself and move around a conference room naturally.
In this physical therapy scenario, Kinect for Windows enables a therapist to interact with the computer without leaving her patient’s side.
Customers can virtually try on merchandise, such as sunglasses, by using business solutions created with the new Kinect for Windows SDK 1.7. If colors, models, or sizes are not in stock, you can still see what they look like on you.
Kinect Fusion, a tool also included in Kinect for Windows SDK 1.7, can create highly accurate 3-D renderings of people and objects in real time.
Kinect Fusion makes it possible to create highly accurate 3-D renderings at a fraction of the price it would cost with traditional high-end 3-D scanners.
Kinect Fusion opens up a variety of new scenarios for businesses and developers, including augmented reality, 3-D printing, interior and industrial design, and body scanning for things like custom fitting and improved clothes shopping.
The Kinect for Windows Team
I’m very pleased to announce that the latest Kinect for Windows runtime and software development kit (SDK) have been released today. I am also thrilled to announce that the Kinect for Windows sensor is now available in China.
Developers and business leaders around the world are just beginning to realize what’s possible when the natural user interface capabilities of Kinect are made available for commercial use in Windows environments. I look forward to seeing the innovative things Chinese companies do with this voice and gesture technology, as well as the business and societal problems they are able to solve with it.
The updated SDK gives developers more powerful sensor data tools and better ease of use, while offering businesses the ability to deploy in more places. The updated SDK includes:
Extended sensor data access
Access to all this data means new experiences are possible: whole new scenarios open up, such as monitoring manufacturing processes with extended-range depth data. With access to the IR stream, building solutions that work in low-light settings—such as theaters and light-controlled museums—becomes a reality. And developers can tailor applications to different environments with the numerous color camera settings, which enhance an application’s ability to work well for end users.
One of the newly released samples, Basic Interactions – WPF, demonstrates a best-in-class UI based on the Kinect for Windows Human Interface Guidelines.
Improved developer tools
We are committed to continuing to make it easier and easier for developers to create amazing applications. That’s why we continue to invest in tools and resources like these. We want to do the heavy lifting behind the scenes so the technologists using our platform can focus on making their specific solutions great. For instance, people have been using our Human Interface Guidelines (HIG) to design more natural, intuitive interactions since we released them last May. Now, the Basic Interactions sample brings to life the best practices described in the HIG and can be easily repurposed.
Greater support for operating systems
Windows 8 compatibility and VM support now mean Kinect for Windows can be in more places, on more devices. We want our business customers to be able to build and deploy their solutions where they want, using the latest tools, operating systems, and programming languages available today.
This updated version of the SDK is fully compatible with previous commercial versions, so we recommend that all developers upgrade their applications to get access to the latest improvements and to ensure that Windows 8 deployments have a fully tested and supported experience.
As I mentioned in my previous blog post, over the next few months we will be making Kinect for Windows sensors available in seven more markets: Chile, Colombia, the Czech Republic, Greece, Hungary, Poland, and Puerto Rico. Stay tuned; we’ll bring you more updates on interesting applications and deployments in these and other markets as we learn about them in coming months.
This week, some 30,000 retailers from around the world descended on New York’s Javits Center for the 2014 edition of the National Retail Federation’s Annual Convention and Expo, better known as “Retail’s BIG Show.” With an exhibit space covering nearly four football fields and featuring more than 500 vendors, an exhibitor could have been overlooked easily—but not when your exhibit displayed retailing innovations that use the power of the Microsoft Kinect for Windows sensor and SDK. Here are some of the Kinect experiences that attracted attention on the exhibit floor.
NEC Corporation of America demonstrated a “smart shelf” application that makes the most of valuable retail space by tailoring the messaging on digital signage to fit the shopper. At the heart of this system is Kinect for Windows, which discerns shoppers who are interested in the display and uses analytics to determine such consumer attributes as age, gender, and level of engagement. On the back end, the data captured by Kinect is delivered to a dashboard where it can be further mined for business intelligence. Allen Ganz, a senior account development manager at NEC, praises the Kinect-based solution, noting that it “provides unprecedented actionable insights for retailers and brands at the point-of-purchase decision.”
Razorfish displayed two different Kinect-based scenarios, both of which highlight an immersive consumer experience that’s integrated across devices. The first scenario engages potential customers by involving them in a Kinect-driven beach soccer game. In this dual-screen experience, one customer has the role of striker, and uses his or her body movements—captured by the Kinect for Windows sensor—to dribble the ball and then kick it toward the goal. The other customer assumes the role of goalie; his or her avatar appears on the second display and its actions are controlled by the customer’s movements—again captured via the Kinect for Windows sensor—as he or she tries to block the shot. Customers who succeed accumulate points that can be redeemed for a real (not virtual) beverage from a connected vending machine. Customers can work up a sweat in this game, so the beverage is a much-appreciated reward. But the real reward goes to the retailer, as this compelling, gamified experience creates unique opportunities for sales associates to connect with the shoppers.
The second scenario from Razorfish also featured a beach theme. This sample experience is intended to take place in a surf shop, where customers design their own customized surfboard by using a Microsoft Surface. Then they use a Kinect-enabled digital signage application to capture images of the customized board against the background of one of the world’s top beaches. This image is immediately printed as a postcard, and a second copy is sent to the customer in an email. Here, too, the real goal is to engage customers, pulling them into an immersive experience that is personal, mobile, and social.
Above all, the Razorfish experiences help create a bond between the customer and a brand. “Kinect enables consumers to directly interact personally with a brand, resulting in a greater sense of brand loyalty,” notes Corey Schuman, a senior technical architect at Razorfish.
Yet another compelling Kinect-enabled customer experience was demonstrated by FaceCake, whose Swivel application turns the computer into a virtual dressing room where a shopper can try on clothes and accessories with a simple click. The customer poses in front of a Kinect for Windows sensor, which captures his or her image. Then the shopper selects items from a photo display of clothing and accessories, and the application displays the shopper “wearing” the selected items. So, a curious shopper can try on, say, various dress styles until she finds one she likes. Then she can add a necklace, scarf, or handbag to create an entire ensemble. She can even split the screen to compare her options, showing side-by-side images of the same dress accessorized with different hats. And yes, this app works for male shoppers, too.
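Under the hood, this kind of virtual try-on hinges on projecting a tracked 3-D joint into the 2-D camera image so the overlay lands in the right place. Here is a minimal pinhole-projection sketch; the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative defaults, not the sensor's calibrated values:

```python
def project_to_pixel(x, y, z, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Project a camera-space point (meters, camera y pointing up)
    to pixel coordinates with a simple pinhole model.

    Image v grows downward, hence the minus sign on the y term.
    The intrinsics here are illustrative, not calibrated values.
    """
    u = cx + fx * x / z
    v = cy - fy * y / z
    return u, v
```

An application would project, say, the head joint each frame and draw the sunglasses image centered on the resulting pixel, scaled inversely with depth so the accessory shrinks as the shopper steps back.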
The common theme in all these Kinect-enabled retail applications is customer engagement. Imagine seeing a digital sign respond to you personally, or getting involved in the creation of your own product or ideal ensemble. If you’re a customer, these are the kinds of interactive experiences that draw you in. In a world where every retailer is looking for new ways to attract and connect with customers, Kinect for Windows is engaging customers and helping them learn more about the products. The upshot is a satisfied customer who has made a stronger connection during the shopping experience, and a healthier bottom line for the retailer.
Revealed in November as a future addition to the Kinect for Windows SDK, Kinect Fusion made a big impression at the annual TechFest event hosted by Microsoft Research this week in Redmond, Washington.
Kinect Fusion pulls depth data that is generated by the Kinect for Windows sensor and, from the sequence of frames, constructs a highly detailed 3-D map of objects or environments. The tool averages readings over hundreds or even thousands of frames to create a rich level of detail.
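The averaging step can be pictured as a per-pixel combination of aligned depth frames. The toy sketch below skips the camera-pose alignment and volumetric integration that the real Kinect Fusion pipeline performs, and simply averages valid readings to suppress sensor noise:

```python
def fuse_depth_frames(frames):
    """Average aligned depth frames per pixel, skipping zero readings
    (Kinect reports 0 for pixels with no valid depth). A toy stand-in
    for the multi-frame denoising Kinect Fusion performs.

    `frames` is a list of 2-D lists of depth values (millimeters),
    all the same size and already aligned to one viewpoint.
    """
    height, width = len(frames[0]), len(frames[0][0])
    fused = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            vals = [f[y][x] for f in frames if f[y][x] > 0]
            fused[y][x] = sum(vals) / len(vals) if vals else 0.0
    return fused
```

Averaging hundreds of frames this way is what turns a single noisy depth map into the smooth, detailed surfaces seen in Kinect Fusion reconstructions.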
Kinect Fusion, shown during TechFest 2013, enables high-quality scanning and reconstruction of 3-D models using just a handheld Kinect for Windows sensor and the Kinect for Windows SDK.
"The amazing thing about this solution is how you can take an off-the-shelf Kinect for Windows sensor and create 3-D models rapidly," said Shahram Izadi, senior researcher at Microsoft Research Cambridge. "Normally when you think of Kinect, you think of a static sensor in a living room. But with Kinect Fusion, we allow the user to hold the camera, explore their space, and rapidly scan the world around them."
When scanning smaller objects, you also have the option to simply move the object instead of the sensor.
The Cambridge researchers and Kinect for Windows team collaborated closely on Kinect Fusion to construct a tool that can enable businesses and developers to devise new types of applications.
"This has been a wonderful example of collaboration between Microsoft Research and our product group," said Kinect for Windows Senior Program Manager Chris White. "We have worked shoulder-to-shoulder over the last year to bring this technology to our customers. The deep engagement that we have maintained with the original research team has allowed us to incorporate cutting-edge research, even beyond what was shown in the original Kinect Fusion paper."
"This kind of collaboration is one of the unique strengths of Microsoft, where we can bring together world-class researchers and world-class engineers to deliver real innovation," White added. "Kinect Fusion opens up a wide range of development possibilities—everything from gaming and augmented reality to industrial design. We're really excited to be able to include it in a future release of the Kinect for Windows SDK."
Kinect for Windows team