In winter/spring of this year I was asked to do some application-development training for our internal teams in China and Japan, as well as for our external partners. In addition to my own experiences developing Surface apps, I talked with many team members about what they thought future developers should know. One result of this process was a list of ways developing for the Surface is different from the desktop. Over the next two posts I’ll share this list. I admit some of these things seem pretty obvious once you hear them. That’s often the mark of a very usable piece of information (at other times it’s just the mark of restating the obvious). Here goes:
The assumption that computer displays have one orientation starts high in the system and goes down deep. Even the computer in the projector has what it thinks is “up.” The OS, UI frameworks, and development tools all think you want an application where everything is oriented the same way.
We are highly reliant on WPF’s ability to rotate user interface elements in any direction you want. This ability to set a “transform” at any level in your UI is one of the primary reasons we decided to use WPF as the main platform for Surface application development. In developing for Surface we often put the bulk of the UI in a “UserControl” so it can be replicated for multiple users and oriented to face them. The Photos demo is a good example: each photo or video is a UserControl, and a transform is set on each one so that the photo is scaled, positioned, and rotated however the user wants.
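To make the transform idea concrete, here is a minimal sketch (not code from the Surface SDK; `photoControl` is a hypothetical UserControl instance) of composing scale, rotation, and translation on a single element in WPF:

```csharp
using System.Windows.Media;

// Compose the per-user transform: scale, then rotate, then position.
// TransformGroup applies its children in order.
var transform = new TransformGroup();
transform.Children.Add(new ScaleTransform(1.5, 1.5));      // the user's zoom level
transform.Children.Add(new RotateTransform(180));          // face a user on the far side
transform.Children.Add(new TranslateTransform(300, 200));  // position on the table

// photoControl is a hypothetical UserControl hosting one photo or video.
photoControl.RenderTransform = transform;
```

Because the transform sits on the control itself, everything inside it (buttons, captions, video playback) rotates and scales along with it for free.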
Designing the user interface without an orientation is also very difficult. It usually takes a few iterations before all the elements of the UI do not imply an orientation. A close look at the demo applications will reveal some places where the design assumes the user is on one side of the Surface.
There is a lot of similarity between a single “contact” on the Surface and the mouse pointer on a regular PC screen. Dragging your finger across the Surface is very similar to dragging a mouse. Unfortunately, the conventional computing system is built to expect just one mouse pointer. Even if you connect multiple mice to a single PC, you still just get one mouse pointer on the screen.
Fortunately, WPF is flexible enough to allow us to put Surface "contact" events into its event stream. So in addition to seeing the mouse events, your UI will see events generated from Surface interaction. WPF does not do much more for you at this point, though. What your UI does with a bunch of contacts moving over it is up to the UI. In a paint application it can simply draw lines on the screen that reflect the positions of the contacts it sees. An application like Photos has to do some math with all the contacts it sees so that the photo responds intuitively to the user. This can be a lot more complicated than things are in the single-mouse world. A goal of the SDK is to simplify this for Surface application developers by providing controls that give you the behavior you want without having to handle all the events directly. Robert Levy will talk more about this in his posts.
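To give a flavor of the “some math” involved, here is a hypothetical sketch (not SDK code) of deriving how a photo should move, rotate, and scale from two contacts tracked across consecutive frames:

```csharp
using System;
using System.Windows; // Point and Vector from WindowsBase

static class ManipulationMath
{
    // Given the positions of the same two contacts in the previous and
    // current frame, compute the translation, rotation, and scale the
    // photo should apply to follow the user's fingers.
    public static void ComputeManipulation(
        Point oldA, Point oldB,   // the two contacts last frame
        Point newA, Point newB,   // the same two contacts this frame
        out Vector translation, out double rotationDegrees, out double scale)
    {
        Vector oldSpan = oldB - oldA;
        Vector newSpan = newB - newA;

        // Scale: ratio of the distances between the two contacts.
        scale = newSpan.Length / oldSpan.Length;

        // Rotation: change in the angle of the line between the contacts.
        rotationDegrees =
            (Math.Atan2(newSpan.Y, newSpan.X) - Math.Atan2(oldSpan.Y, oldSpan.X))
            * 180.0 / Math.PI;

        // Translation: movement of the midpoint between the contacts.
        Point oldMid = oldA + oldSpan * 0.5;
        Point newMid = newA + newSpan * 0.5;
        translation = newMid - oldMid;
    }
}
```

With more than two contacts the idea is the same, but you typically fit a best-match transform across all of them, which is exactly the kind of work the SDK controls are meant to hide.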
More in Part 2.
"WPF is flexible enough to allow us to put Surface "contact" events into its event stream."
How do you do this? Can you give me an example?
You can do this using RoutedEvents. If you have input that corresponds to positions on the screen you go through the following (very high level) steps:
1) Use the Win32 WindowFromPoint to determine if this input should be routed to your window.
2) On the WPF side, you then use UIElement.InputHitTest on your window’s root UIElement to determine which descendant element should receive the input.
3) Then you send a RoutedEvent to that UIElement with whatever information is important.
4) The UIElement can then handle this however it wants. If it is a button UIElement and the RoutedEvent was about some contact with the screen over it, then the button can show itself in the pressed state.
There is a lot more that needs to go on if you want other people to be happy using this (Robert could cover that if there is interest), but that is basically all I did when I first got WPF apps working on Surface.
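The four steps above might look roughly like this in code. This is a sketch under assumptions, not the actual Surface implementation; `ContactRouter`, `ContactDownEvent`, and `RouteContact` are hypothetical names:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public static class ContactRouter
{
    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X, Y; }

    // Step 1: Win32 WindowFromPoint tells us which window is under the input.
    [DllImport("user32.dll")]
    static extern IntPtr WindowFromPoint(POINT pt);

    // Step 3: a custom RoutedEvent to carry the contact information.
    public static readonly RoutedEvent ContactDownEvent =
        EventManager.RegisterRoutedEvent("ContactDown", RoutingStrategy.Bubble,
            typeof(RoutedEventHandler), typeof(ContactRouter));

    public static void RouteContact(Window window, Point screenPoint)
    {
        // Step 1: only route input that is actually over our window.
        var pt = new POINT { X = (int)screenPoint.X, Y = (int)screenPoint.Y };
        if (WindowFromPoint(pt) != new WindowInteropHelper(window).Handle)
            return;

        // Step 2: hit-test from the root element to find the target element.
        Point clientPoint = window.PointFromScreen(screenPoint);
        IInputElement target = window.InputHitTest(clientPoint);

        // Steps 3 and 4: raise the routed event on the target; it bubbles
        // up the tree, and the element handles it however it wants.
        UIElement element = target as UIElement;
        if (element != null)
            element.RaiseEvent(new RoutedEventArgs(ContactDownEvent, element));
    }
}
```

A Button subclass (or an attached handler) could then listen for `ContactDownEvent` and show its pressed state, just as described in step 4.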
I have found a class named InputManager. It looks like it is responsible for all input in WPF. It has support for Mouse, Keyboard, and Stylus. I wonder if it could be used to implement "touch" devices?
I understand that a control like ScatterView is a custom control. But what about a Button? Do I have to inherit from Button and write my own code so that it responds to contacts? Or is it possible to emulate mouse input and use the standard WPF controls?
that's a great question, nesher. i'll try to cover that in a more detailed post about our WPF layer in a few weeks.
great news rlevy, I'm waiting for your post :)
"Unfortunately, the conventional computing system is built to expect just one mouse pointer."
Unfortunately, the conventional computing system uses only one mouse pointer by default. Support for multiple mouse pointers, or more generally multiple input devices, is available in the form of low-level raw input in Win32.
Microsoft does support multiple mice in Managed apps via the MultiPoint SDK - http://channel9.msdn.com/Showpost.aspx?postid=266221
Download at: http://www.microsoft.com/downloads/details.aspx?FamilyID=a137998b-e8d6-4fff-b805-2798d2c6e41d&DisplayLang=en
Adamhill, unfortunately MultiPoint does not support the touchpad and stylus on my Tablet PC (Toshiba Portege m400).
When can we expect Part 2?
Also, I would like to read more about the SDK.
The information you provided was really helpful. I am waiting for part 2 of your post.