Stepping back a little bit from my previous posts, I’d like to lay out some of the basic terminology you’ll see me and others using here.
Surface & surface
Surface with a big ‘S’ is our product name and refers to the entire hardware+software+services package. With a little ‘s’, surface refers to the part of the hardware that users actually interact with. You may also occasionally see references to “InteractiveSurface” which is the class in our SDK representing properties of the surface.
Things interacting with the surface could be fingers, gadgets, loyalty cards, paint brushes, game pieces, or really just about anything. For a long time, a popular interview question was “what would you call this generic group of things?” That led to some insightful conversations, but we eventually settled on calling them “contacts” because the only thing they all have in common is being temporarily in contact with the surface. There is theoretically no limit to how many contacts an app can be getting input from simultaneously, but we are using "dozens and dozens" for our v1 benchmarks. Specifically, the dev team has chosen 52. Why 52? I’ll save that for a later post ;)
As shown in the announcement demos, certain contacts (like loyalty cards) can be uniquely identified. We say that these contacts are “tagged”. There are a couple of formats & technologies for tagging objects that we support for v1, and several others are being investigated for the future. The format used in the announcement demos is what we call “domino” tags, and this Ars Technica article describes the format pretty well.
Gestures & Manipulations
What people commonly group together as "gestures", we actually break into two categories: Gestures & Manipulations. Gestures are interactions that can be recognized (but not necessarily processed) independently of the UI. Most of the UI’s response to a gesture occurs once the interaction is complete. Manipulations, on the other hand, are interactions that can only be recognized and interpreted in the context of UI elements, and those UI elements constantly change based on in-progress manipulations. The "golden rule" of manipulations is that whatever pixel(s) you touch on a UI element should stay under your finger(s) as you move (within certain physical constraints, of course - most items can't be skewed, some items only resize with a locked aspect ratio, etc).
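To make the "golden rule" concrete, here's a small sketch (in Python, just to illustrate the math - this is not our SDK, and the function name is mine) of the transform you'd derive from a two-finger manipulation. Given where two contacts started and where they are now, there's exactly one translate-rotate-scale transform that keeps the originally touched pixels under both fingers:

```python
import math

def two_contact_transform(p1, p2, q1, q2):
    """Given two contacts that started at p1/p2 and moved to q1/q2,
    return (scale, rotation, apply) where apply() maps any point of the
    UI element so the pixels under both contacts stay under them."""
    # vector between the contacts, before and after the move
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    wx, wy = q2[0] - q1[0], q2[1] - q1[1]
    scale = math.hypot(wx, wy) / math.hypot(vx, vy)
    rotation = math.atan2(wy, wx) - math.atan2(vy, vx)
    cos_r, sin_r = math.cos(rotation), math.sin(rotation)

    def apply(pt):
        # rotate and scale about p1, then translate p1 onto q1
        dx, dy = pt[0] - p1[0], pt[1] - p1[1]
        return (q1[0] + scale * (cos_r * dx - sin_r * dy),
                q1[1] + scale * (sin_r * dx + cos_r * dy))

    return scale, rotation, apply
```

By construction, apply(p1) == q1 and apply(p2) == q2, which is the golden rule for the two touched pixels; everything else on the element follows along rigidly (no skew).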
In the announcement videos, when you see someone press-and-hold to bring up a context menu or tap an album cover to flip it over, that’s a gesture. When you see someone move, rotate, and resize a photo, that’s a manipulation. These semantics aren’t really important to end users, and I probably won’t succeed at getting the rest of the world to share this terminology – but for developers working with our platform, the distinction is important because the way you incorporate gestures and manipulations into an application is quite different (more on that in a later post).