Someday, people might reminisce about the days when ATMs were the state of the art in offsite banking. That day might come sooner than expected, thanks in part to Kinect for Windows technology. Diebold, Incorporated, a worldwide leader in integrated service solutions, has unveiled a prototype of a standalone banking platform that promises to make self-service banking more convenient, intuitive, and secure for both the customer and the financial institution.

Called the Responsive Banking Concept, the prototype is equipped with touch screens and sensing devices that simplify and protect offsite banking transactions. Kinect for Windows is one of the key underlying technologies, providing motion and voice sensing that helps recognize customers and deliver personalized service for even complex transactions. The Responsive Banking Concept prototype made its debut on November 2 at the 2014 Money 20/20 conference, a global event for the financial services industry.

Designed to bring secure, personalized financial services to places where customers work and play, the full-scale Responsive Banking Concept could be installed in airports, shopping malls, and other high-traffic areas, while smaller modular versions could be placed in retail shops. So someday soon, a Kinect-enabled installation might be your new personal banker.
The Kinect for Windows Team
Today, we're extremely excited to announce some major news about Kinect:
You can find more details about these developments in Microsoft Technical Fellow Alex Kipman's post on the Official Microsoft Blog. As Alex says, "these updates are all part of our desire to make Kinect accessible and easy to use for every developer."
The Kinect for Windows Team
We recently traveled to the Netherlands’ capital for our latest developer hackathon. The venue, Pakhuis de Zwijger, a former refrigerated warehouse located on one of Amsterdam’s many canals, made for a unique setting. Developers from all over Europe came for the 28-hour event, which was hosted by Dare to Difr. There were some very innovative projects, and we couldn’t have been happier with the energy of the attendees and the quality of their work.
The participants’ energy and creativity resulted in innovative projects during the Kinect for Windows hackathon in Amsterdam (September 5–6, 2014).
Team Hoog+Diep took the top prize (€1000 and one Kinect for Windows v2 sensor and carrying bag per team member) for their app My First Little Toy Story 3D, which allows users to capture playful adventures with favorite toys and share them as videos with friends. The app tracks the movement of dinosaurs and helicopters while the user plays with them, then it “magically” makes the user disappear from the video before sharing it.
Team Hoog+Diep took first place for their augmented play app, My First Little Toy Story 3D.
Team AK earned second place (€500 and one Kinect for Windows v2 sensor and carrying bag per team member) for Clara, an app that provides real-time analytics for a retail store, showing how many shoppers came through and providing insights on customer behavior and product popularity.
Team motognosis won third place (one Kinect for Windows v2 sensor and carrying bag per team member) for their work on In exTremory, a “catch-the-shape” game for tremor analysis in clinical, rehabilitation, and home scenarios.
Other projects presented
Developers from all over Europe came for the 28-hour event.
Our next hackathon will take place in Vancouver, British Columbia, November 8–9; registration opens in October, so keep an eye on our blog.
Thanks to all the attendees of the Amsterdam event and to our wonderful hosts at Dare to Difr. I look forward to watching the projects progress and to seeing you all again at a future event!
Ben Lower, Developer Community Manager, Kinect for Windows
_________________*Denotes projects awarded an Honorable Mention by the judges
Today we released another update to the Kinect for Windows SDK 2.0 public preview. This release contains important product improvements that add up to a more stable and feature-rich product. This updated SDK lets you get serious about finalizing your applications for commercial deployment and, later this year, for availability in the Windows Store. Please install, enjoy, and let us know what you think.
A member of team Kwartzlab++ demonstrates his team's project VR Builder at the Kinect Hackathon in Kitchener, Ontario.
Last week, we headed north to Canada for the latest stop on our Kinect Hackathon world tour: a three-day event (August 8–10) in Kitchener, Ontario, where developers gathered to develop applications* using Kinect for Windows v2. One of the three cities that make up the Regional Municipality of Waterloo, Kitchener has a booming tech community, fueled in part by the renowned computer science program at the University of Waterloo. So it was no surprise that the Kitchener attendees exhibited boundless energy and enormous creativity. Equally impressive was the hospitality of the people in Kitchener, especially Jennifer Janik and Rob Soosaar of Deep Realities, who were awesome hosts.
And the winners* are…
Team CleanSweep took first place.
Hard at work: members of team BearHunterNinja (left) and team Titan (right)
Other projects* presented
Thanks to everyone who came to the event in Kitchener. I hope to see you at another event in the future!
_____________________*The names of the hackathon projects and teams are determined solely by the participants and are not intended to be used commercially.
As Microsoft Most Valuable Professional (MVP) James Ashley points out in a recent blog post, it’s a whole lot easier to create 3D movies with the Kinect for Windows v2 sensor and its preview software development kit (SDK 2.0 public preview). For starters, the v2 sensor captures up to three times more depth information than the original sensor did. That means you have far more depth data from which to construct your 3D images.
The next big improvement is in the ability to map color to the 3D image. The original Kinect sensor used an SD camera for color capture, and the resulting low-resolution images made it difficult to match the color data to the depth data. (RGB+D, a tool created by James George, Jonathan Porter, and Jonathan Minard, overcame this problem.) Knowing that the v2 sensor has a high-definition (1080p) video camera, Ashley reasoned that he could use the camera's color images directly, without a workaround tool. He also planned to map the color data to depth positions in real time, a new capability built into the preview SDK.
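Under the hood, mapping color to depth comes down to back-projecting each depth pixel into 3D space and reprojecting that point into the color camera's image. The SDK performs this for you through its coordinate-mapping APIs; as a rough conceptual sketch in Python (the intrinsics and camera offset below are made-up illustrative values, not the sensor's real calibration), the math looks something like this:

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths, principal point) for the
# depth and color cameras -- illustrative values, not real calibration data.
DEPTH_FX, DEPTH_FY, DEPTH_CX, DEPTH_CY = 365.0, 365.0, 256.0, 212.0
COLOR_FX, COLOR_FY, COLOR_CX, COLOR_CY = 1050.0, 1050.0, 960.0, 540.0
# Assumed physical offset between the two cameras, in meters.
DEPTH_TO_COLOR_OFFSET = np.array([0.052, 0.0, 0.0])

def depth_pixel_to_color_pixel(u, v, depth_m):
    """Back-project a depth pixel to 3D, then reproject into the color image."""
    # Pinhole back-projection: pixel coordinates + depth -> 3D point.
    x = (u - DEPTH_CX) * depth_m / DEPTH_FX
    y = (v - DEPTH_CY) * depth_m / DEPTH_FY
    point = np.array([x, y, depth_m]) + DEPTH_TO_COLOR_OFFSET
    # Pinhole projection of that 3D point into the color camera's image plane.
    cu = COLOR_FX * point[0] / point[2] + COLOR_CX
    cv = COLOR_FY * point[1] / point[2] + COLOR_CY
    return cu, cv
```

Doing this per pixel for every frame is why the real-time mapping in the SDK matters: the runtime handles the calibrated math for you at frame rate.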
Ashley shot this 3D video of his daughter Sophia by using Kinect for Windows v2 and a standard laptop.
Putting these features together, Ashley wrote an app that enabled him to create 3D videos on a standard laptop (dual-core Intel Core i5, with 4 GB RAM and integrated Intel HD Graphics 4400). While he has no plans at present to commercialize the application, he opines that it could be a great way to bring real-time 3D to video chats.
Ashley also speculates that since the underlying principle is a point cloud, stills of the volumetric recording could be converted into surface meshes that can be read by CAD software or even turned into models that could be printed on a 3D printer. He also thinks it could be useful for recording biometric information in a physician’s office, or for recording precise 3D information at a crime scene, for later review.
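The point cloud underlying such a recording is straightforward to construct from a depth frame with the standard pinhole-camera model. A minimal sketch, again assuming illustrative (not real) camera intrinsics:

```python
import numpy as np

# Illustrative intrinsics; in practice these come from the sensor's calibration.
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def depth_frame_to_point_cloud(depth_m):
    """Convert an (H, W) depth image in meters to an (N, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# A tiny synthetic "depth frame": a flat wall 1.5 m from the sensor,
# at the v2 sensor's 512 x 424 depth resolution.
cloud = depth_frame_to_point_cloud(np.full((424, 512), 1.5))
```

A still of such a cloud is exactly the kind of data that meshing algorithms turn into the surface meshes Ashley describes for CAD or 3D printing.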
Those who want to learn more from Ashley about developing cool stuff with the v2 sensor should note that his book, Beginning Kinect Programming with Kinect for Windows v2, is due to be published in October.
Today, we are releasing an updated version of the Kinect for Windows SDK 2.0 public preview. This new SDK includes more than 200 improvements to the core SDK. Most notably, this release delivers the much-sought-after Kinect Fusion toolkit, which provides higher resolution camera tracking and performance. The updated SDK also includes substantial improvements in the tooling, specifically around Visual Gesture Builder (VGB) and Kinect Studio, and it offers 10 new samples (such as Discrete Gestures Basics, Face, and HD Face Basics) to get you coding faster. All of this adds up to a substantially more stable, more feature-rich product that lets you get serious about finalizing your applications for commercial deployment and, later this year, for availability in the Windows Store.
The SDK is free and there will be no fees for runtime licenses of commercial applications developed with the SDK.
If you’ve already downloaded the public preview, please be sure to take advantage of today’s updates. And for developers who haven’t used Kinect for Windows v2 yet, there’s no better time to get started!
The Kinect for Windows Team
You’re the hero, blasting your way through a hostile battlefield, dispatching villains right and left. You feel the power as you control your well-armed, sculpted character through the game. But there is always the nagging feeling: that avatar doesn’t really look like me. Wouldn’t it be great if you could create a fully animated 3D game character that was a recognizable version of yourself? Well, with the Kinect for Windows v2 sensor and Fuse from Mixamo, you can do just that—no prior knowledge of 3D modeling required. In almost no time, you’ll have a fully armed, animated version of you, ready to insert into selected games and game engines.
The magic begins with the Kinect for Windows v2 sensor. You simply pose in front of the sensor while its 1080p high-resolution camera captures six images of you: four of your body in static poses, and two of your face. With its enhanced depth sensing—up to three times greater than the original Kinect for Windows sensor—and its improved facial and body tracking, the v2 sensor captures your body in incredible, 3D detail. It tracks 25 joint positions and, with a mesh of 2,000 points, a wealth of facial detail.
You begin creating your personal 3D avatar by posing in front of the Kinect for Windows v2 sensor.
Once you have captured your image with the Kinect sensor, you simply upload it to Body Snap or a similar scanning program, which renders it as a 3D model. The model is then ready for download as an .obj file that meets Fuse's import requirements; the download takes place in Body Hub, which, like Body Snap, is a product of Body Labs.
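The .obj format is a simple plain-text mesh description, which is part of what makes it convenient for moving models between tools. As a minimal, hypothetical sketch (not Body Labs' actual exporter), writing a mesh's vertices and faces to an .obj file looks like this:

```python
def write_obj(path, vertices, faces):
    """Write a mesh to a Wavefront .obj file.

    vertices: list of (x, y, z) tuples; faces: lists of 0-based vertex
    indices (the .obj format itself uses 1-based indexing).
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

# A single triangle as a stand-in for a scanned body mesh.
write_obj("triangle.obj", [(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
```

A real scanned model adds texture coordinates, normals, and material references on top of this, but the vertex/face skeleton is the same.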
In Body Hub, your 3D model is prepared for download as an .obj file.
Next, you upload the 3D model to Fuse, where you can take advantage of more than 280 “blendshapes” that you can push and pull, sculpting your 3D avatar as much as you want. You can also change your hairstyle and your coloring, as well as choose from a large assortment of clothing.
With your model imported to Fuse, you can customize its shape, hair style, and coloring.
The customization process also gives you an extensive array of wardrobe choices.
Once you’ve customized your newly scanned image, you export it to Mixamo, where it gets automatically rigged and animated. The process is so simple that it seems almost unreal. Rigging prepares your static 3D model for animation by inserting a 3D skeleton and binding it to the skin of your avatar. Normally, you would need to be a highly skilled technical director to accomplish this, but with Mixamo, any gamer can rig a character. Now you’re ready to save and export your animated self into Garry’s Mod and Team Fortress 2—which are just the first two games that have community-made workflows for Fuse-created characters. Support for exporting directly from Fuse to other popular "modding" games is on the Fuse development roadmap.
On the left is a customized 3D avatar created from the scans of the gamer on the right.
The beauty of this system is not only its simplicity, but also its speed and relatively low cost. Within just minutes, you can create a fully rigged and animated 3D character. The Kinect for Windows v2 sensor costs just US$199 in the Microsoft Store, and Body Snap from Body Labs is free to download. Fuse can be purchased through Steam for $99, and includes two free auto-rigs per week.
In Mixamo, your avatar really comes to life, as auto-rigging makes it fully animated.
The speed and low cost of this system make it appealing to professional game developers and designers, too, especially since workflows exist for Unity, UDK, Unreal Engine, Source Engine, and Source Filmmaker.
Rigged and ready for action, your personal 3D avatar can be added to games and game engines, as in this shot from a game being developed with Unity.
The folks at Mixamo are committed to making character creation as easy and accessible as possible. “Mixamo’s mission is to make 3D content creation accessible for everyone, and this is another step in that direction,” says Stefano Corazza, CEO of Mixamo. “Kinect for Windows v2 and Fuse make it easier than ever for gamers and game developers to put their likeness into a game. In minutes, the 3D version of you can be running around in a 3D scene.”
And here's the payoff—the gamer plays the 3D avatar of himself. Now that’s putting yourself in the action!
The expertise and equipment required for 3D modeling have long thwarted players and developers who want to add more characters to games, but Kinect for Windows v2 plus Fuse is poised to break down this barrier. Soon, you could thrill to an animated version of yourself fulfilling your gaming desires, be it holding off alien hordes or building a virtual community. It’s just one more example of how Kinect for Windows technology and partnerships are enhancing entertainment and creativity.
Kinect for Windows Team
This past weekend, we were delighted to host a Kinect for Windows v2 hackathon on the Microsoft campus in Redmond, Washington. We saw some very cool, extremely ambitious projects. We were also joined by some special guests: Christian Schaller and Hannes Hofman of Metrilus came from Germany to share their finger tracking library with participants, and Adrian Ferrier, Mitch Altman, and Aaron Bryden came to show the progress they’ve made on their New York Hackathon project, lightspeed.

And the winners are…
Three other projects were recognized by the judges. Each team received a Kinect for Windows v2 sensor.
Other projects presented
Thanks again to everyone who came to the event in Redmond this past weekend! It was great to meet new people and to see innovative ideas put into action.
I hope to see you at another event in the future!
With the launch of the Kinect for Windows v2 public preview, we want to ensure that developers have access to the SDK so that you can start writing Kinect-based applications. As you may be aware, the Kinect for Windows SDK 2.0 public preview will run only on Windows 8 and Windows 8.1 64-bit systems. If you have a Windows 8 PC that meets the minimum requirements, you’re ready to go.
For our Macintosh developers, this may be bittersweet news, but we’re here to help. There are two options available for developers who have an Intel-based Mac: (1) install Windows to the Mac’s hard drive, or (2) install Windows to an external USB 3.0 drive. Many Mac users are aware of the first option, but the second is less well known.
First, you need to ensure that your hardware meets the minimum requirements for Kinect for Windows v2.
Due to the requirements for full USB 3.0 bandwidth and GPU Shader Model 5 (DirectX 11), virtualization products such as VMware Fusion, Parallels Desktop, and Oracle VirtualBox are not supported. If you’re not sure what hardware you have, you can find out on these Apple websites:
Installing Windows on the internal hard drive of your Intel-based Macintosh
We’re going to focus on getting Windows 8.1 installed, since this is typically the stumbling block. (If you need help installing Visual Studio or other applications on Windows, you can find resources online.)
Apple has provided a great option called Boot Camp. This tool will download the drivers for Windows, set up bootable media for installation, and guide you through the partitioning process. Please refer to Apple’s website on using this option:
Alternative to installing Windows on your primary drive
Boot Camp requires Windows to be installed on your internal hard drive. This might be impractical or impossible for a variety of reasons, including lack of available free space, technical failures during setup, or personal preferences.
An alternative is to install Windows to an external drive using Windows To Go, a feature of Windows 8 and 8.1 Enterprise. (Learn more about this feature in Windows 8.1 Enterprise.)
The "Hardware considerations for Windows To Go" section of the Windows To Go: Feature Overview page lists recommended USB 3.0 drives. These drives have additional security features that you may want to review with your systems administrators to ensure you are in compliance with your company’s security policies.
Getting started with Windows To Go
You will need to log in as an administrator. To start the Windows To Go tool, press Win+Q to open the search pane and enter Windows To Go:
Launch the Windows To Go application from the list. The main application window shows a list of the attached drives that you can use with the tool. As shown below, you may be alerted that a USB 3.0 drive is not Windows To Go certified. You can still use such a drive, but understand that it might not work or might affect performance; if you use a non-certified USB 3.0 drive, you will have to do your own testing to ensure it meets your needs. (Note: while not officially supported by Microsoft, we have used the Western Digital My Passport Ultra 500 GB and 1 TB drives at some of our developer hackathons to get people using Macs up and running with our dev tools on Windows.)
Select the drive you wish to use and click Next. If you have not already done so, insert the Windows 8.1 Enterprise installation disc at this time. If you have the .iso file, you can double-click it, or right-click it and select Mount, to use it as a virtual drive.
If you do not see an image in the list, click the Add search location button and browse your system to find the disc drive or mounted .iso partition:
It should now appear in the list, and you can select it and click Next.
If you need or wish to use BitLocker, you can enable it now; we will skip this step.
The confirmation screen will summarize the selections you have made. This is your last chance to ensure that you are using the correct drive. Please avail yourself of this opportunity, as the Windows To Go installation process will reformat the drive and you will not be able to recover any data that is currently on the drive. Once you have confirmed that you are using the correct drive, click Create to continue.
Once the creation step is complete, you are ready to reboot the system. But first, you’ll need to download the drivers necessary for running Windows on Macintosh hardware from the Apple support page, as, by default, Windows setup does not include these drivers.
I recommend that you create an Extras folder on your drive and copy the files you’ll need. As shown below, I downloaded and extracted the Boot Camp drivers in this folder, since this will be the first thing I’ll need after logging in for the first time.
Disconnect the hard drive from the Windows computer and connect it to your Mac. Be sure that you are using the USB 3.0 connection if you have both USB 2.0 and USB 3.0 hardware ports. Once the drive is connected, boot or restart your system while holding down the Option key. (Learn more about these startup key shortcuts for Intel-based Macs.)
During the initial setup, you will be asked to enter your product key, enter some default settings, and create an account. If your system has to reboot at any time, repeat the previous step to ensure that you return to the Windows To Go workspace. Once you have successfully logged in for the first time, install the Boot Camp drivers and any other applications you wish to use. Then you’ll have a fully operational Windows environment you can use for your Kinect for Windows development.
Carmine Sirignano
Developer Support Escalation Engineer
Kinect for Windows