Today marks the release of a video the Surface team put together highlighting the power of PixelSense™. Microsoft’s PixelSense, in the new Samsung SUR40 for Microsoft Surface, allows a display to recognize fingers, hands, and objects placed on the screen, enabling vision-based interaction without the use of cameras. The individual pixels in the display see what’s touching the screen and that information is immediately processed and interpreted.
Think of it like the connection between the eye and the brain. You need both, working together, to see. In this case, the eye is the sensor in the panel: it picks up the image and feeds it to the brain, which is our vision-input processor that recognizes the image and does something with it. Taken as a whole, this is PixelSense technology. We've gone behind the scenes to show you the creation of the technology and some of the people involved. It's a little longer than most web videos, but we wanted to go deeper than normal and really explain what's going on.
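To make the eye-and-brain split concrete, here is a toy sketch of that kind of pipeline: the "eye" is a grid of per-pixel brightness samples, and the "brain" groups bright pixels into blobs and reduces each one to a touch point. All names and values here are hypothetical illustrations, not PixelSense's actual implementation.

```python
THRESHOLD = 128  # hypothetical brightness above which a pixel counts as "touched"

def find_blobs(frame):
    """Group above-threshold pixels into connected blobs (4-connectivity)."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= THRESHOLD and not seen[r][c]:
                # flood-fill one connected region of bright pixels
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= THRESHOLD
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(pixels)
    return blobs

def blob_centroids(blobs):
    """Reduce each blob to a single (row, col) touch point."""
    return [(sum(y for y, _ in b) / len(b), sum(x for _, x in b) / len(b))
            for b in blobs]

# A tiny 5x6 "frame" from the sensor: two bright regions,
# as if two fingertips were resting on the screen.
frame = [
    [0,   0,   0,   0,   0,   0],
    [0, 200, 210,   0,   0,   0],
    [0, 205,   0,   0, 190,   0],
    [0,   0,   0,   0, 195,   0],
    [0,   0,   0,   0,   0,   0],
]
touches = blob_centroids(find_blobs(frame))  # two touch points
```

A real vision-input processor does far more (shape classification, tag recognition, tracking across frames), but the basic flow is the same: raw pixel data in, interpreted touch events out.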
In conjunction with the video, let’s walk through the high-level steps of how PixelSense actually works:
Right now PixelSense is only available in the Samsung SUR40 for Microsoft Surface and we believe it’s going to change the way you interact with touch-enabled content.
So the question to ask next... when are we getting the 2.0 SDK? It's now Summer '11, so hopefully soon. Keep up the good work.
I love this technology :)
@shaggygu, technically, summer extends up to September 21. I wouldn't be surprised at all if SDK 2.0 comes only after the Build conference on September 13, where the fate of WPF will be revealed.
@fred, good point. I hope the SDK, along with some WPF goodness, will be announced. Fingers crossed!
Hi, how does PixelSense handle intense light (i.e. reflectors above it) or changing light conditions? IR cameras often get confused by intense light, thinking that a reflector is an IR blob. How does PixelSense handle that? Any experience using the SUR40 in daylight?