Natural Interaction and Microsoft BI


Today, Microsoft released the official Kinect for Windows SDK. Here is something we have been experimenting with in the SQL Server BI product group that you might find interesting.

A while ago, at SQL PASS Summit 2011 in Seattle, WA, I demoed turning any random surface or wall into a touch screen using a single Microsoft Kinect device. I also demoed controlling Power View, Microsoft’s latest BI reporting tool for easy data exploration, visualization, and presentation, through natural language and simple gestures.

Let’s start with a video of the above scenarios (you will hear me say the word "Crescent" in the video; Crescent was the code name for Power View):

Using Kinect to turn any wall into a touch screen

Using the Skeletal Tracking feature of the official Kinect for Windows SDK, it is possible to track the position of “joints” such as arms and hands. The data points provided for each joint include depth. It is therefore possible to gather enough calibration data to correctly map the position of the hand to screen coordinates. It is also possible to calculate how far the projection surface is from the Kinect, allowing for the detection of the touch gesture:
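To illustrate the mapping step, here is a minimal sketch, assuming calibration collected pairs of tracked hand positions and the screen coordinates that were touched. This is not the demo’s actual code; the coordinate values and screen size are made up, and a least-squares affine fit stands in for whatever the prototype really used:

```python
# Hypothetical sketch: fit screen = [x, y, 1] @ A from calibration pairs,
# then project new hand positions onto the screen.
import numpy as np

def fit_affine(hand_pts, screen_pts):
    """Least-squares affine map from Kinect hand space to screen pixels."""
    hand = np.asarray(hand_pts, dtype=float)
    screen = np.asarray(screen_pts, dtype=float)
    H = np.column_stack([hand, np.ones(len(hand))])   # N x 3
    A, *_ = np.linalg.lstsq(H, screen, rcond=None)    # 3 x 2
    return A

def to_screen(A, hand_pt):
    x, y = hand_pt
    return np.array([x, y, 1.0]) @ A

# Synthetic calibration: pretend the user touched four known wall locations.
hand = [(0.2, 0.3), (0.8, 0.3), (0.2, 0.9), (0.8, 0.9)]
screen = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]
A = fit_affine(hand, screen)
print(to_screen(A, (0.5, 0.6)))  # roughly the centre of the screen
```

With four or more well-spread calibration touches, the least-squares fit averages out tracking jitter rather than trusting any single point.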
The part that is not demoed in the above video is the calibration stage. Currently, calibration takes a couple of minutes of touching the wall at different locations. One could imagine advancing the algorithm to remove the need for that level of calibration.
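The touch detection itself can be sketched in a similar way. Assuming calibration also recorded the hand’s 3D position while the wall was being touched, one could fit the wall plane from those points and flag a touch whenever the hand comes within a few centimetres of it. This is an illustrative sketch under those assumptions, not the prototype’s code:

```python
# Hypothetical sketch: fit the wall plane z = a*x + b*y + c from calibration
# touches, then report a "touch" when the hand is within a small threshold.
import numpy as np

def fit_wall_plane(touch_pts):
    pts = np.asarray(touch_pts, dtype=float)
    H = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(H, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def is_touching(coeffs, hand_pt, threshold=0.03):  # ~3 cm, Kinect depth in metres
    a, b, c = coeffs
    x, y, z = hand_pt
    return abs(z - (a * x + b * y + c)) < threshold

# Made-up calibration touches on a wall ~2 m from the sensor, slightly tilted.
wall = fit_wall_plane([(0, 0, 2.0), (1, 0, 2.1), (0, 1, 2.0), (1, 1, 2.1)])
print(is_touching(wall, (0.5, 0.5, 2.06)))  # hand at the wall
print(is_touching(wall, (0.5, 0.5, 1.70)))  # hand well in front of it
```

The threshold trades off false touches (too large) against missed ones (too small); in practice it would need tuning against the depth sensor’s noise at the wall’s distance.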

Using Kinect and Microsoft Speech to control Power View

In the second section of the video, you saw some really interesting ways of controlling Power View through voice. As you saw, we can navigate between the views, sort different data regions, and filter the selected view based on the attributes in the semantic model.

The voice is picked up by the microphone array built into Kinect and then passed on to Microsoft Speech for recognition. The rest is pretty straightforward: voice commands are converted to commands within Power View.
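That last step can be sketched as a simple lookup from the recognized phrase to a Power View action. The phrases, the context structure, and the actions below are all illustrative assumptions, not the demo’s actual command set:

```python
# Hypothetical sketch: dispatch a recognized phrase to an action that mutates
# a small view-state context. Unrecognized phrases are ignored.
COMMANDS = {
    "next view": lambda ctx: ctx.update(view=ctx["view"] + 1),
    "previous view": lambda ctx: ctx.update(view=ctx["view"] - 1),
    "sort by sales": lambda ctx: ctx.update(sort="sales"),
}

def handle_phrase(phrase, ctx):
    action = COMMANDS.get(phrase.lower().strip())
    if action is None:
        return False  # phrase not in the grammar; do nothing
    action(ctx)
    return True

ctx = {"view": 1, "sort": None}
handle_phrase("Next view", ctx)
print(ctx)  # {'view': 2, 'sort': None}
```

Constraining the recognizer to a small, fixed grammar like this is also what makes speech recognition reliable: the engine only has to distinguish a handful of phrases rather than free-form dictation.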

Pedram Rezaei

Comments
  • This is awesome!!

  • This is very nice. The testing was performed in a noise-free setting, though! How would the performance be in a conference room or a classroom where the noise is pretty high?

  • Sweeeeet :)

  • Reposting from Pedram Rezaei’s blog: “Today, Microsoft released the official Kinect for Windows SDK

  • That's a great demo showing the strong integration of MS products and technologies

  • Hey Pedram, looks fantastic. Your accent seems to have travelled :) (Or does the Kinect require you to speak American?)

  • Rabib, this is very much a prototype. We did, however, test this in front of a large audience and it worked just fine.

    James, Interesting, I should listen to it again.

  • You could use the Kinect's lasers to determine where the wall is. You could also use the projector to project a QR code and read it with the Kinect's camera. Can you put up your project on CodePlex and share it with the rest of us?

  • @Rabih, as a "human", I struggle in those kinds of environments!  If Kinect can do it, that would be a huge improvement :)

  • Ok, we shipped ! Kinect for Windows SDK 1.0 is now available to all the Windows developers to change

  • @Fredrik: Unfortunately the code for this cannot be shared at this time.

  • People have been using Open Kinect to do a variety of things, including playing instruments. This is probably too late :) YouTube has the proof..

  • Any plans to release this to CodePlex?

  • @Essam, unfortunately the code for this cannot be shared at this time.
