I think that we – and by "we", I mean the developers and developer evangelist types at Microsoft – get touch and tablets (or slates, or pads, or whatever you’d like to call them) better than the Ars Technica article Ballmer (and Microsoft) still doesn't get the iPad (written by Peter Bright and posted in the One Microsoft Way section) implies. I believe that over the next few months, you’ll see some interesting touch-related work coming from Microsoft, and that we have a responsibility to help developers understand the differences between mouse-and-keyboard computing and touch computing.
In anticipation of this, I’ve been making my move towards touch- (and other sensor-based) computing over the past little while by migrating to the following devices:
The idea behind this purposeful move towards touch-equipped devices is to truly understand touch-based interfaces – which UI elements work and which ones don’t – and then to pass the lessons learned on to my audience: developers and designers, whether you build for the Microsoft platform or the platforms of the Esteemed Competition.
My own move towards touch-based devices is a microcosmic example of the larger changes taking place at The Empire. The move to touch interfaces is taking place on Microsoft computing platforms of all sizes:
As the Ars Technica article points out, one of the signs that we do get touch is the new interface design of Windows Phone 7. The design philosophy is built around touch (and other sensors), and the WP7 “design bible”, the Windows Phone User Interface Design and Interaction Guide [12 MB PDF], explains this philosophy beyond the mere technical details. Here’s the introduction to its section on WP7’s touch interface (any emphasis in the quote below is mine):
Touch input is a core experience of Windows Phone 7 and has inherent differences from traditional keyboard and mouse input systems. Designed for natural and intuitive user interaction, touch input in Windows Phone 7 enables users to interact with application content such as a photo or a web page. Touch input enables simple and consistent user touch gestures that imitate real life behavior, such as panning on a photo to move it. Single-touch gestures make interaction easier with one hand, but multi-touch gestures are also available to provide more advanced gesture functionality.
Application developers should strive to create unique and exciting experiences that encourage the discovery of content through the use of touch gestures. Users should enjoy the experience of navigating through the steps of a task as well as the completion of the task itself. Touch gestures should provide a delightful, more colorful, intuitive experience within applications.
Touch delights the senses as the user gets to see the interaction match the performance. The touch UI should always have aware and responsive performance, just like how real world objects respond to touch immediately, and applications on Windows Phone 7 should as well, by performing the action in real time and by providing immediate feedback that an event or process is occurring. Users should not have to wait as it breaks their immersion, flow, and concentration, especially as their gestures transition from one to the other. For example, a pan may turn into a flick or a tap can become a double tap, and the user should not be aware that the UI is switching gesture support.
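The guide’s point about seamless gesture transitions – a pan turning into a flick, a tap becoming a double tap – boils down to classifying touch input by movement, timing and release speed. Here’s a minimal sketch of that idea in Python. The thresholds and function names are purely illustrative assumptions on my part, not the values or API of WP7’s actual gesture engine:

```python
import math

# Illustrative thresholds only -- NOT the values used by Windows Phone 7.
TAP_MAX_MOVE = 10.0      # pixels: movement below this can still be a tap
TAP_MAX_TIME = 0.3       # seconds: contact shorter than this can be a tap
DOUBLE_TAP_GAP = 0.4     # seconds: max delay between taps in a double tap
FLICK_MIN_SPEED = 500.0  # pixels/second: a pan released this fast is a flick

def classify_stroke(samples):
    """Classify one finger-down..finger-up stroke.

    `samples` is a list of (t, x, y) tuples from touch-down to touch-up.
    Returns 'tap', 'pan', or 'flick'.
    """
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    distance = math.hypot(x1 - x0, y1 - y0)
    duration = max(t1 - t0, 1e-6)

    if distance < TAP_MAX_MOVE and duration < TAP_MAX_TIME:
        return "tap"

    # Speed over the final segment decides pan vs flick: a pan "becomes"
    # a flick only if the finger was still moving quickly at release.
    tp, xp, yp = samples[-2]
    release_speed = math.hypot(x1 - xp, y1 - yp) / max(t1 - tp, 1e-6)
    return "flick" if release_speed >= FLICK_MIN_SPEED else "pan"

def combine_taps(first, second_start_delay):
    """A tap followed quickly by another tap is reported as a double tap."""
    if first == "tap" and second_start_delay <= DOUBLE_TAP_GAP:
        return "double tap"
    return first
```

The key property the guide asks for falls out of this structure: the user never declares a gesture up front; the classifier only decides at release, so the UI can track the finger continuously and “switch gesture support” without the user noticing.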
The Windows Phone User Interface Design and Interaction Guide shows a deep understanding of the nuances of touch-based interfaces, and over the next few months, we’ll be covering them in great detail on this blog.
When the Surface, a.k.a. the “Big-Ass Table”, came out, a number of people asked why such a big, expensive thing was built and what practical purpose such a beast would serve.
For starters, there are a number of customers who use it: casinos in Vegas, bible study classes in megachurches, and – closer to home, by which I mean Canada – the company that did the security for President Obama’s visit to Ottawa, super-sexy Toronto design firm Teehan+Lax, the Ontario College of Art and Design, and Infusion, who’ve built applications such as Noront Resources’ GSI Surface tool and the security app Falcon Eye.
Equally important are the lessons about touch and other sensor input to be learned from a “concept” machine like the Surface, whose built-in camera systems allow for way more touch points than a resistive or capacitive touch screen does, as well as the ability to “see” objects on the tabletop. By being empirical – building such a computer, developing software for it and watching people interact with it – we learn way more about touch- and sensor-based computing than we could from mere theorizing.
I think Des Traynor captured our intent quite nicely in his article about Surface and other Microsoft efforts in the field of user interface:
When the Surface was released two years ago it was chastised by the public. The joke at the time was: “Apple and Microsoft both invest in multi-touch technology, Apple release the iPhone, Microsoft release a $15,000 coffee table!”.
But Surface wasn’t about “re-inventing the coffee table”, so much as it was prototyping a vision of the future of computing. There will come a time when “gathering around a laptop” will seem as ridiculous as connecting an ethernet cable; a time when everyone gathers around a multi-user computer to have a meeting or debate a design. With something like surface, Microsoft are preparing for that day.
A lot of the knowledge from Surface applications has been injected into Windows 7 in the form of the Windows 7 Touch Pack. This pack gives Windows 7 a touch-based API and a set of apps originally designed for the Surface, so that they can run on touch-enabled computers such as HP’s TouchSmart series, touch-enabled laptops like my own Dell Latitude XT2, as well as any computer connected to one of the new touch-enabled monitors (our manager John Oxley has one in his office).
The Ars Technica article goes on and on about Windows 7’s standard interface controls being too tiny for touch, but a quick look at the Touch Pack apps reveals that they don’t use the standard controls; rather, they use controls better-suited to touch. Here’s a screenshot of Surface Collage, the photo-collage application, running on my XT2:
No standard Windows controls here! You manipulate the photos directly using gestures, and the strip along the bottom is a photo list, which you also manipulate through gestures. The closest thing to a standard Windows control is the “close” button near the upper right-hand corner of the screen, which is larger than the typical “close” button – small enough to be out of the way, yet large enough to tap with a finger.
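Why does that close button need to be larger? A mouse pointer targets a single pixel, while a fingertip covers several millimetres of screen, so touch targets are sized in physical units and then converted to pixels for the display’s density. A quick back-of-the-envelope sketch (the ~9 mm figure is a commonly cited comfortable target size, not official guidance from the WP7 guide):

```python
def mm_to_pixels(mm, dpi):
    """Convert a physical length in millimetres to pixels at a given DPI.

    One inch is 25.4 mm, so pixels = mm / 25.4 * dpi.
    """
    return mm / 25.4 * dpi

# A fingertip contact patch is often quoted at roughly 7-9 mm across,
# so a comfortable touch target ends up far bigger than a mouse-sized
# close button. (Illustrative figures, not official guidance.)
desktop_target = mm_to_pixels(9, 96)   # typical desktop monitor: ~34 px
dense_target = mm_to_pixels(9, 200)    # denser tablet panel: ~71 px
```

The same 9 mm target needs roughly twice as many pixels on a dense panel, which is why touch-friendly controls are specified physically rather than as a fixed pixel size.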
Here’s another app from the Touch Pack, Surface Globe, also running on my XT2:
Once again, no standard Windows 7 controls here, but a map that you directly manipulate, augmented by finger-friendly controls.
The Touch Pack apps all follow this philosophy: when going touch, eschew the standard Windows 7 UI controls in favour of touch-friendly ones, then return to the bog-standard Windows 7 interface on exit. These apps show not just that we understand that touch computing is a different beast from mouse-and-keyboard computing, but that we also understand where the two intersect.
We’re working on what I like to call “the touch continuum”, which spans from pocket devices such as the Zune HD and Windows Phone, through portable computing with netbooks, laptops and, soon, tablets, to desktop, tabletop and wall-sized units. And yes, we get that new types of user input call for new user interfaces and give rise to new usage patterns. We’re aware of the challenges of touch (and other sensor) input, and over the next little while, you’ll see our answers to those challenges. Better still, we’ll share what we’ve learned in order to make you better developers and designers of software that uses these new interfaces.
This article also appears in Global Nerdy.
"I'm a snap-judger" is the short version of what I'm about to say:
When I reached the part of your article where you say you've immersed yourself in touch-technology, I felt the need to comment before continuing the reading. You listed various models of kit that operate on wildly different use-cases, have various levels of success in how well they employ "touch," and are pretty much all over the place in terms of relevance to the modern multitouch computing model.
Now, I get that this was a learning exercise, with the intent of passing the lessons learned on to your readers. But the immediate impression I get upon reading that you've immersed yourself in all these wildly varying "touch-based" devices is "This guy doesn't *get* touch-based computing yet, and kudos to him for trying, but while reading the rest of this article, the only question I'm looking to find an answer for is 'does he get it yet?'"
Beyond that, I've found myself nodding at everything "DrPizza" has said in the comments. Now I'll read the rest of the article.
The Touch Pack does *not* 'give Windows 7 a touch-based API' - it takes advantage of the built-in touch APIs (which I'm sure do take a lot of learning from Surface) to run versions of the fun apps Microsoft built for Surface to give end users who pay for touch systems at least a handful of apps that use touch until developers actually ship some touch-aware apps.
We have decades of research on touch technologies, not as much on good touch interfaces and a gulf between the legacy mouse-happy apps we still want to run and the kind of interface you need to make a touch-happy app. Yes, a stylus works just fine as a mouse - except for how tiring and fiddly it is to use. I speak as someone who loves tablet PC to bits, but I can see that there's a road to travel here and I'm hoping Win 8 has the map handy.
And if you're suggesting Win CE and WP7 as the iPad competition, you're missing the point of what Windows can actually do: run Windows apps and give me handwriting recognition.
@Bob: XP Tablet had an input panel for writing into any app, a line or two at a time – not ideal, but it was part of the OS and used the same handwriting recognition as Word and OneNote. It's the same in Win 7.
In response to Joey,
The .NET platform will not help the tablet situation until the proper touch frameworks are in place. Not shims, not patches and not workarounds. Windows Phone 7 may be a basis for a future tablet interface. But all of your examples are pretty much spackle on top of Windows 7. I think we all get that Microsoft has been working on touch interfaces and ways to use them for a while.
The problem is that they are always shackled to Windows Compatibility. To make a tablet, or slate or whatever is the PC term at Microsoft, that is usable means that you need to understand that not every application made in the last decade will work on it. I have to give Microsoft credit for Windows Phone 7 in that they drew a line in the sand. They need to do the same thing with their tablet/slate.
Wow, you guys are bound and determined to keep me working on a long weekend (it's a holiday here in Canada). As they say, "keep those cards and letters coming", and I'll answer as best I can.
Thanks to everyone who took the time to write in!
I figure you can look at it this way:
Either way, it's a good thing.
Mary Branscombe writes, “And if you're suggesting Win CE and WP7 as the iPad competition, you're missing the point of what Windows can actually do: run Windows apps and give me handwriting recognition.”
Saint-Exupéry writes, “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.”
OK, I get it: Microsoft will compete against the iPad with Windows 7 except that every user interface will be duplicated between what is appropriate for the mouse/keyboard/stylus people, and the touch people. Izzat right? Two independent interfaces that operate against synchronized data? Third party devs will have to qualify their apps for both interfaces? Both will be tuned to work effectively and look good on screen sizes ranging from 5" to 13"? Apps will anticipate something like the 2GB RAM environment that's about right-sized for running a couple dozen daemons plus a half dozen apps, with VM when the user has the corporate products database open in SQL Server?
On another forum, a Microsoftie complained that raising the Brooks' Mythical Man Month objection to the Kin project was dissed by Ms. Ho; surely if anybody on the planet knows more about software than some 30-year-old wives' tales, it would be Microsoft. And yet… the effort failed for trying to throw more bodies at a spiralled-out-of-control list of desiredments. Now, WinSlate7 is going to ignore some children's book writer, and rewrite an entire OS shell (is it the world's largest by an order of magnitude?), keep it in sync with the data of a different shell and cram it into a box that people will carry around with them in ways that they did not for the various tablet PCs?
Call me unimaginative, I guess. But I am staggered.
I don't know. It all sounds cool and all, until you have to spend lots of hours (with a few breaks here and there) manipulating a touchable device without a way to rest your arms. You can sit at a computer keyboard plus mouse for lots of hours because for the most part you can rest your arms on a desk-like surface. So far I haven't seen an example where the elbows would be down on something to manipulate the screen. "Minority Report" is all well and good in small bursts, but NOT sustained. Has Microsoft done any studies yet on how much faster work could be done via touch technology so that the rest periods can be more frequent and longer?
I'm familiar with that particular usability phenomenon, and I'm sure the UX team in Redmond is as well. I've heard it go by a few names, the most memorable of which is the "Gorilla Arm", which appears in the Jargon File as follows:
The side-effect that destroyed touch-screens as a mainstream input technology despite a promising start in the early 1980s. It seems the designers of all those spiffy touch-menu systems failed to notice that humans aren't designed to hold their arms in front of their faces making small motions. After more than a very few selections, the arm begins to feel sore, cramped, and oversized — the operator looks like a gorilla while using the touch screen and feels like one afterwards. This is now considered a classic cautionary tale to human-factors designers; “Remember the gorilla arm!” is shorthand for “How is this going to fly in real use?”.
The Gorilla Arm also sounded the death knell for light pens.
This may be the reason why Apple has concentrated its multi-touch efforts on things like trackpads and the iPhone/Pod/Pad, none of which require you to hold out your arm the way a touch monitor does. However, there's no reason why a touch screen can't be laid flat or nearly flat on a table, book-style, and there are many short-term uses for touch interfaces – such as kiosks, interactive signage and wall-mounted controls – that would only strain your arms over long-term use.
This one goes out to Peter "DrPizza" Bright, Walt French and a good number of other commenters, as well as John "Daring Fireball" Gruber, who linked to this article. It's an expansion of an earlier comment of mine, which went like this:
I'd like to remind everyone that Windows 7 isn't the only OS option available to us, as evidenced by Windows Phone. What's far more important, as far as I'm concerned, is the .NET framework, which is quite portable to the smallest of devices.
What I was trying to say is that I would personally prefer a Windows Phone 7-like approach to a Windows tablet. With Windows Phone 7, as well as with the iPhone/Pad/Pod, the idea wasn't to attempt to force-fit a desktop OS into a smaller system, but also not to start from scratch. Instead, a good chunk of the class libraries and framework used in the desktop OS were used to build an OS written with the constraints of mobile in mind. iOS apps are written in the same language using the same tools and use a lot of the same class libraries as OS X apps, and WP7 apps are written in C# and Visual Studio and use a subset of the full .NET framework. It's good all 'round: the mobile OS maker isn't starting completely from scratch, application developers aren't learning an entirely new system, and the end result isn't a bogged-down mobile device.
I hope that this is the approach being taken by the people working on the Windows tablet; I don't know because I don't work with that team. I do, however, have regular communication with the Windows Phone team and understand and agree with their approach.
Please keep those comments coming! I like to think of my job as a two-way street: I evangelize to developers -- as well as the users for whom they develop -- on behalf of Microsoft, but I also evangelize to Microsoft on your behalf. If you'd like to tell Microsoft something, this is a great place to do it.
@Joe Clark said:
the result will by default be inaccessible to blind people...What does the Esteemed Competition do?
iOS has a complete accessibility API. You can have the device "read" all of the controls on screen to you, and then you select which one you want to use.
Does the Nikon Coolpix S70 run Windows? If not, it should be listed under 'Esteemed competition' heading
This is a bit off topic, but I see you talking about how .NET gets stripped down on WP7. Please do not do that. I see no reason, barring non-touch WPF/WinForms being replaced by 'TouchForms', to strip the framework down as much as has been done with the Compact Framework. Seriously: stripping away Exception message strings to gain a measly 600 K, giving birth to a host of obscure assembly errors; partially implementing stuff like System.Diagnostics, or, even worse, making it incoherent in use, like XML deserialization and XPath support; obliterating truly useful stuff like System.Configuration... these are all glaring mistakes. And this is only what I've encountered myself on a single multi-device project.
We developers end up with an insane amount of lost time and frustration working around limitations or reinventing the wheel (obviously non-optimized and possibly differing in behavior on corner cases) to have really portable application libraries.
Of course, we probably don't need the ASP.NET stuff on WP7, but 90% of the rest should stay as it is in the regular framework. Classes with the same name in both frameworks should behave EXACTLY the same, and whole namespaces should contain the same stuff. Indeed, knowing what you don't have at hand on a platform should be as easy as looking at the assemblies/namespaces.
Rule of thumb: it should be stripped only if it's blatantly irrelevant to the platform.
It would appear that some within Microsoft "get it" with regard to user interface issues on a "slate" type computer as defined by Apple's iPad. Whether those who do "get it" within Microsoft will be able to convince upper management to rethink tying the baggage of Windows to a slate OS is doubtful, given Steve Ballmer's recent statements.
Peter Bright (aka DrPizza) has nailed Microsoft's collective failure to understand the scope of their problem in trying to compete in this new paradigm of personal computing put forth by Apple.
I'm posting this on my iPad, and after using one for a month I totally "get it". Somebody should give one to Steve Ballmer to use for a month. He would be a changed man, and a better person for it.
Does the Nikon Coolpix S70 run Windows? If not, it should be listed under 'Esteemed competition' heading.
No, but Nikon's a sponsor of our events and provides Microsoft Canada's Developer and Platform Evangelism team with cameras. Between that and their not being in direct competition with us, I don't think of them as competition.
If you want to take a message to Microsoft, here's my suggestion: bolting touch features onto Windows will not produce an effective mainstream competitor to the iPad. I think that the success of the iPad shows that, to be effective, touch has to be central to the entire platform. I'd go as far as to say that unmodified keyboard-and-mouse applications have no place on a touch platform.
Maybe there is a secret project inside Microsoft working on such a platform; however this definitely isn't the message that Ballmer was giving recently. He compared the iPad situation to the rise of Linux based Netbooks and how Windows was able to easily replace those OSs. He also mentioned 'tuning' Windows to work better on tablets. Both of these statements indicate to me that, at least in the short-term, the top of Microsoft really believes that Windows with Touch is the answer.
The annoying thing about this is that Microsoft really have something to offer in this area if they are prepared to truly commit. I think that an iPad-style user interface with the option of using a pen for handwriting/drawing would be a compelling alternative for a lot of people.