Yesterday I was giving a career talk to a group from Year Up in Microsoft’s Technology Center. (Picture below) A great bunch of people, and we had a good time. Well, I did, and they laughed in all the right places and ate a lot of pizza, so I assume they did as well.


I have been playing with some Kinect Sensor for Windows code lately, and one of the samples I picked up allows one to send keyboard signals to applications based on the user’s movements. “Aha,” I decided, “I can make my talk more cool by using hand signals to advance the PowerPoint slides.” Sounds great, right? So with almost no understanding of the program, I brought my Kinect Sensor and this sample code with me. The practice went great. I waved my hand and the slide advanced. I felt all Jedi Knight – “This is not the slide you want to look at.”

Then the students came in and I started my presentation. Now it turns out that I am not very good at standing still when I present. Nor do my hands stay quietly at my sides. You know where this is going, right? Yep, the PowerPoint did all sorts of interesting things, from advancing to retreating to jumping completely out of the presentation. I finally gave up and used a handheld clicker. What went wrong was that the software, a simple demo after all, didn’t have the “smarts” to tell the difference between a “please move to the next slide” motion and an “I’m just fidgety and can’t stand still while I talk” motion. Could that sort of smarts be programmed in? Yes, I think so. But it would take some work. Someone is going to do this work. I may even give it a try myself. But the point is that the computer, even with highly sophisticated sensors like those on the Kinect, is not really all that smart on its own.
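To give a feel for what that “smarts” might look like, here is a minimal sketch in Python of one way to filter out fidgeting: only count a hand movement as a swipe if the hand travels a minimum distance, within a short time window, and a cooldown has passed since the last swipe. This is a hypothetical illustration of the idea, not the actual sample’s code; the class name, thresholds, and the notion of feeding in one hand position per skeleton frame are all my own assumptions.

```python
class SwipeDetector:
    """Toy gesture filter: fires only when the hand travels far enough,
    fast enough, and a cooldown has elapsed -- so small fidgety motions
    are ignored. Hypothetical sketch, not Kinect SDK code."""

    def __init__(self, min_distance=0.4, max_duration=0.5, cooldown=2.0):
        self.min_distance = min_distance  # meters of horizontal travel required
        self.max_duration = max_duration  # seconds the swipe must complete within
        self.cooldown = cooldown          # seconds to ignore input after a swipe
        self.history = []                 # recent (timestamp, hand_x) samples
        self.last_fired = -cooldown       # allow a swipe immediately at startup

    def update(self, timestamp, hand_x):
        """Feed one frame's hand position; return 'next', 'previous', or None."""
        if timestamp - self.last_fired < self.cooldown:
            return None                   # still in cooldown: ignore everything
        self.history.append((timestamp, hand_x))
        # Keep only samples inside the swipe time window, so a slow drift
        # across the same distance never qualifies as a swipe.
        self.history = [(t, x) for (t, x) in self.history
                        if timestamp - t <= self.max_duration]
        _, start_x = self.history[0]
        travel = hand_x - start_x
        if abs(travel) >= self.min_distance:
            self.last_fired = timestamp
            self.history = []
            return "next" if travel > 0 else "previous"
        return None
```

A deliberate swipe sweeps the hand roughly half a meter in a fraction of a second, so it trips the distance threshold inside the window; idle gesturing while talking produces small or slow movements that either never cover the distance or age out of the window first. The cooldown keeps one sweeping gesture from advancing several slides.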

A co-worker and I were talking about what else, besides programming concepts, we could teach with the Kinect SDK. I’m thinking, based on this experience, that some basic artificial intelligence is one such extra topic. Of course we can teach about the math involved in depth perception, the techniques that are used with the RGB camera to pick out different objects, and many more such obvious things. But along the way I suspect we are going to find some unexpected lessons that need to be taught as well. I find this exciting. It’s a new world in user interfaces!
