Continuing the list from my previous post...
The Surface is all about multiple simultaneous users interacting with each other and the Surface. The typical PC is designed for a single user.
There are a lot of implications to having multiple people use your application at once. Global state should be avoided: you wouldn't want one user to be able to delete all the photos from a photos application while another user is correcting redeye.
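To make the photo example concrete, here is a minimal sketch of scoping mutable state per user instead of globally. The class names, user IDs, and structure are all illustrative assumptions, not part of any real Surface API:

```python
# Hypothetical sketch: per-user state instead of global state.
# PhotoSession and the user-id keys are illustrative, not a real API.

class PhotoSession:
    """State owned by a single user at the table."""
    def __init__(self):
        self.selection = set()   # photos this user is working with
        self.tool = "view"       # e.g. "view", "redeye", "crop"

class PhotoApp:
    def __init__(self, photos):
        self.photos = set(photos)   # shared library (read-mostly)
        self.sessions = {}          # user id -> PhotoSession

    def session_for(self, user_id):
        return self.sessions.setdefault(user_id, PhotoSession())

    def delete_selected(self, user_id):
        """Destructive actions affect only the caller's own selection,
        never a global 'selected photos' set another user may be using."""
        sess = self.session_for(user_id)
        self.photos -= sess.selection
        sess.selection.clear()
```

Because each user's selection lives in their own session, one user deleting photos can never clobber what another user is in the middle of editing.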
You can use per-user UI to give a user their own section of the screen (like where an individual player would keep their cards and chips in a poker game.) This can take a lot of screen space. It also assumes you know how many users are actually using the app.
Sometimes you can depend on social interaction to mitigate multi-user problems (like people agreeing on a color change in the paint app.) Other times you can't. It's probably a good thing to reduce the amount of negotiation required between your users. As a parent I've had this made very clear whenever I ask my kids to share a single toy. There has been some interesting research on this.
There may be trouble with multiple users using, say, an instant messaging application. The client side of the IM service may only allow a single logon per machine. The server may only allow one logon per IP address. If your application uses local OS security features, your application may have difficulty getting at restricted resources for different users simultaneously.
There aren’t any magic fixes for this other than to architect your system to support this end to end.
On a desktop OS, drag and drop is a system modal operation. The APIs for this type of operation really only expect one drag and drop operation active at a time. On the Surface you can have many users dragging many different things from many “drag sources” to many “drop targets.”
System-wide we do not have any great solutions for this yet. Within a single application there are ways to accomplish this in a fairly clean manner (the Music demo has to deal with this.) The SDK team is producing samples to demonstrate one way to do this.
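The core idea behind the in-app approaches is to drop the single-modal-drag assumption and keep one drag record per contact (touch point). Here is a minimal sketch of that pattern; the class names and the `contains`/`receive` target interface are my own illustrative assumptions, not the Surface SDK:

```python
# Hypothetical sketch of many simultaneous drags inside one application,
# keyed by contact (touch) id, instead of one system-modal drag.

class RectTarget:
    """Axis-aligned rectangular drop target (illustrative)."""
    def __init__(self, left, top, right, bottom):
        self.bounds = (left, top, right, bottom)
        self.items = []

    def contains(self, pos):
        x, y = pos
        l, t, r, b = self.bounds
        return l <= x <= r and t <= y <= b

    def receive(self, item):
        self.items.append(item)

class DragManager:
    def __init__(self):
        self.active = {}   # contact_id -> (item, current_position)

    def begin_drag(self, contact_id, item, pos):
        self.active[contact_id] = (item, pos)

    def move(self, contact_id, pos):
        if contact_id in self.active:
            item, _ = self.active[contact_id]
            self.active[contact_id] = (item, pos)

    def end_drag(self, contact_id, drop_targets):
        """Drop the item on whichever target contains the final position."""
        item, pos = self.active.pop(contact_id)
        for target in drop_targets:
            if target.contains(pos):
                target.receive(item)
                return target
        return None  # dropped on empty space
```

Because each drag is keyed by its own contact id, two users dragging two different items never interfere with each other's operation.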
On a desktop OS, only one edit control on the system will receive keyboard input at any time. There is also only one edit control that will have an insertion caret.
On the Surface, this is not such a big deal since there is no hardware keyboard anyway. However if you want to enter text in your app using an on-screen keyboard, you will have difficulties using the standard Windows/WPF controls.
For the best Surface user experience you want to design applications to not require text input. For the times where text input is absolutely required we are producing a soft keyboard and related components that will make it easy for application developers to do text input.
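One way to think about the difference from the desktop model: instead of a single system-wide keyboard focus, each on-screen keyboard instance can hold its own target edit control, so each control keeps its own caret and two users can type at once. This is a hedged sketch of that idea only; the class names are invented and this is not the actual soft keyboard component mentioned above:

```python
# Hypothetical sketch: per-keyboard focus instead of system-wide focus.
# EditControl and OnScreenKeyboard are illustrative names, not a real API.

class EditControl:
    def __init__(self):
        self.text = ""
        self.caret = 0   # every control keeps its own insertion caret

    def insert(self, s):
        self.text = self.text[:self.caret] + s + self.text[self.caret:]
        self.caret += len(s)

class OnScreenKeyboard:
    """One keyboard per user; its target is local, not system-wide."""
    def __init__(self, target):
        self.target = target

    def press(self, s):
        self.target.insert(s)
```

With one keyboard bound to each control, keystrokes from two users land in two different fields simultaneously without fighting over a single focus.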
CES is right around the corner and the Surface team is getting ready to show off some new demo experiences. Over the past few weeks (and nights, and weekends, and mornings...ugh) we've been dogfooding these demos and making sure everything is tip-top. So, the Super Friends had their Hall of Justice...well, we have the Hall of Surface...
We are excited to share more potential applications for Surface, a few new scenarios where you might encounter Surface and even a potential integration opportunity with one of the other Microsoft teams.
That's all I'm saying now...you'll just have to wait until CES to learn more. We'll have three Surface units in the Microsoft booth and I'll be giving daily demos in the presentation theater. I'll also be doing a daily blog from the show floor (well maybe from the hotel). Swing on by and say HI.
- Robert Warnick
Let's dive into some design. Obviously, there's way too much to cover so I'll start small and specific and we'll gradually cover more ground over time. So, I'll start with general navigation and a key navigational element we call "Access Points".
In creating a navigation model for any system, you generally take stock of all the places users will go and things they will do and see (information architecture). How much of anything exists, how it's grouped, relationships, tasks, etc. all impose requirements on the navigation model you create. Fortunately for us, we're currently dealing with a fairly simple model: provide a place (our Lobby) for users to find and select an app, launch it, enjoy it, and then return to the Lobby to launch a different app.
We made a decision very early on that every app runs full-screen (perhaps a topic for another blog posting). Once again, keeping it simple. Because apps run full-screen, the Lobby would not be visible so we provide users with a mechanism to navigate back to the Lobby: our "Access Points".
These Access Points should be available to users in a consistent manner regardless of which app is running or the state of that app. However, how many should we have and where should we put them? Given the state of our project at the time such decisions were necessary, we were constrained to a software solution. Now, we know as well as anyone how valuable screen real-estate is, so of course we explored solutions in which our Access Points could live off-screen and appear on-screen only when called upon. We did come up with various clever solutions, but in the end we knew our product was destined for commercial venues. This meant we needed simple, walk-up-and-use solutions requiring zero instruction or learning. We landed on the inevitable conclusion that we had to have something continually visible and available, which meant consuming some valuable app screen real-estate!
So, Access Points on the screen, but how many and where? Well, we looked at scenarios in which four people were sitting around the table, one at each edge, to ensure each person had easy access. We also looked at it from an app designer / developer perspective. We researched all of our own prototype apps, demos, etc. and found that in most cases information and controls for a particular user sitting at one edge of the table were positioned toward the center of that table edge (or display edge). For example, imagine a card game wherein each player's cards are centered in front of him/her at his/her edge of the display. Accordingly, we narrowed our possible screen regions to the four corners. Every user has easy access to at least one and we leave all the central screen areas open for the apps.
Today, the Access Points are buttons residing in each corner of the screen, and designed to be semi-transparent so as not to compete with the app for the user's attention. To see them in action, check out this video. Side note: In the early Surface demos and videos, you won't see these access points. However, you'll know you're looking at something new and closer to our finished designs when you see them in our latest videos and demos.
Here's a video showing the access points in action...
Already looking beyond our version 1 launch, we're exploring additional functions and capabilities for these Access Points, but I can't talk about that just yet :-)
Happy holidays everyone,