We’ve come a long way in engineering Windows 7 since we first provided an engineering preview of Windows 7 and the work we are doing to support the touch interface paradigm back at the D: All Things Digital conference. We chose to kick off the discussion about engineering Windows 7 with touch scenarios because we know this is a long-lead effort that requires work across the full ecosystem to fully realize the benefit. For Windows 7, touch support is engineered by building on the advances in input technology we began with the Tablet PC work on Windows XP. Touch in Windows 7 requires improvements in hardware, driver software, the core Windows user experience, and of course application support. By having this support in an open platform, consumers and developers will benefit from a wide variety of choices in hardware, software, and different PC form factors.

Quite a few folks have been a little skeptical of touch, often commenting about having fingerprints on their monitor or something along those lines. We think touch will become broadly available as the hardware evolves, and while it might be the primary input for some form factors (such as a wall-mounted display in a hospital, kiosk, or point of sale), it will also richly augment many scenarios such as reading on a convertible laptop or a “kitchen PC”. One of my favorite experiences recently was watching folks at a computer retailer experience one of the currently available all-in-one touch desktops and then moving to another all-in-one and continuing to interact with the screen—except the PC was not interacting back. The notion that you can touch a screen seems to be becoming second nature rather quickly.

This post is our first dedicated blog on the subject. This is a joint effort by several people from the touch team, mostly Reed Townsend, Dave Matthews, and Ian LeGrow. -Steven
Windows Touch is designed to enhance how you interact with a PC. For those of us that have been living and breathing touch for the last two years we’re excited to be able to deliver the capability to people using Windows 7. In this blog we’re going to talk about what we’ve done to make Windows touchable. We approached this from a number of different directions: key improvements to the core Windows UI, optimizing for touch in key experiences, working with hardware partners to provide robust and reliable touch PCs, and providing a multitouch platform for applications.
With Windows 7 we have enriched the Windows experience with touch, making touch a first-class way to interact with your PC alongside the mouse and keyboard. We focused on common activities and refined them thoughtfully with touch in mind. You will have the freedom of direct interaction, like being able to reach out and slowly scroll a web page then flick quickly to move through it. With new touch-optimized applications from creative software developers you will be able to immerse yourself as you explore your photos, browse the globe, or go after bad guys in your favorite games.
While providing this touchable experience, we made sure you are getting the full Windows 7 experience and not a sub-set just for touch. We’ve been asked if we are creating a new Touch UI, or “Touch Shell” for Windows – something like Media Center that completely replaces the UI of Windows with a version that is optimized for touch. As you can see from the beta, we are focused on bringing touch through the Windows experience and delivering an optimized touch interface where appropriate. A touch shell for launching only touch-specific applications would not meet customers’ needs – there would be too much switching between “touch” mode and Windows applications. Instead, we focused our efforts on augmenting the overall experience so that Windows works great with touch.
We took a variety of approaches – some broad, and some very targeted to support this goal:
Overall, the Windows Touch features are designed to work together to deliver a great end-to-end touch experience. For example, the goal with IE8 was to deliver a seamless touch browsing experience; this includes panning, zooming, URL entry, and several interface enhancements. For this reason, all the new touch features require the presence of a multi-touch digitizer – more on that further down.
The Windows Touch gestures are the basic actions you use to interact with Windows or an application using touch. As we noted above, because the gestures are built into the core of Windows, they are designed to work with all applications, even ones that were never designed with touch in mind.
Our mantra with gestures has been “Predictable + Reliable = Habits”. To be predictable the action should relate to the result – if you drag content down, the content should move down. To be reliable, the gesture should do roughly the same action everywhere, and the gesture needs to be responsive and robust to reasonable variations. If these conditions are met then people are far more likely to develop habits and use gestures without consciously thinking about it.
We’ve intentionally focused on this small set of system-wide gestures in Win7. By keeping the set small we reduce misrecognition errors – making them more reliable. We reduce latencies since we need less data to identify gestures. It’s also easier for all of us to remember a small set! The core gestures are:
For touch gestures, seeing them in action is important so here is a brief video showing the gestures in action:
In order to make the gestures reliable, we tuned the gesture detection engine with sample gesture input provided by real people using touch in pre-release builds; these tuned gestures are what you will see in the RC build. We have a rigorous process for tuning. Similar to our handwriting recognition data collection, we have tools to record the raw touch data from volunteers while they perform a set of scripted tasks. We collected thousands of samples from hundreds of people. These data were then mined looking for problems and optimization opportunities. The beauty of the system is that we can replay the test data after making any changes to the gesture engine, verifying improvements and guarding against regression in other areas.
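As a toy illustration of this replay loop (the names and the recognizer below are entirely hypothetical, not the actual Windows 7 tooling), the idea is to keep a labeled corpus of recorded traces and re-score the recognizer against it after every change:

```cpp
#include <cassert>
#include <cmath>
#include <string>
#include <utility>
#include <vector>

// A recorded sample: the raw touch trace plus the gesture the volunteer was
// asked to perform in the scripted task.
struct Sample {
    std::vector<std::pair<double, double>> points; // raw (x, y) positions, mm
    std::string expected;                          // labeled gesture
};

// Toy recognizer: calls a one-finger trace "scroll" if it is mostly vertical,
// "swipe" if mostly horizontal. The real gesture engine is far richer.
std::string recognize(const Sample& s) {
    double dx = s.points.back().first - s.points.front().first;
    double dy = s.points.back().second - s.points.front().second;
    return std::abs(dy) >= std::abs(dx) ? "scroll" : "swipe";
}

// Replay the whole corpus and return the recognition rate. Rerunning this
// after each change to the recognizer verifies improvements and guards
// against regressions elsewhere.
double replay(const std::vector<Sample>& corpus) {
    int hits = 0;
    for (const auto& s : corpus)
        if (recognize(s) == s.expected) ++hits;
    return corpus.empty() ? 0.0 : double(hits) / corpus.size();
}
```

The single recognition-rate number from `replay` is what makes before/after comparisons like the percentages quoted below possible.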
This has led to several important optimizations. For example, we found that zooms and rotates were sometimes confused. Detecting rotate gestures only in applications that use rotation has resulted in a 15% improvement in zoom detection.
Further analysis showed that many short gestures were going unrecognized. The gesture recognition heuristics needed to see 100ms or 5mm worth of data before making a decision about what gesture the user was performing. The concern that originally led to these limits was that making a decision about which gesture was being performed too early would lead to misrecognition. In fact, when we looked at the collected user data, we found we could remove those limits entirely – the gesture recognition heuristics performed very well in ambiguous situations. After applying the change and replaying the collected gesture sample data, we found zoom and rotate detection improved by about 6% each, and short scrolling improved by almost 20%!
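A minimal sketch of the gating idea (illustrative names only; the shipping recognizer is more involved): the old heuristics refused to classify until enough time or distance had accumulated, so a quick flick that ended before the gate opened was never recognized.

```cpp
#include <cassert>

// The gate the recognizer used before classifying: wait for min_ms of input
// time or min_mm of travel, whichever comes first. Setting both limits to
// zero models the Windows 7 change of removing them entirely.
struct Gate {
    double min_ms;
    double min_mm;
    bool ready(double elapsed_ms, double distance_mm) const {
        return elapsed_ms >= min_ms || distance_mm >= min_mm;
    }
};
```

A short flick of, say, 60 ms and 3 mm never satisfies the original 100 ms / 5 mm gate, but is classified immediately once the limits are removed.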
Gestures are built into the system in such a way that many applications with no awareness of touch respond appropriately; we have done this by creating default handlers that simulate the mouse or mouse wheel. Generally this gives a very good experience, but there are applications where some gestures don’t work smoothly or at all. In these cases the application needs to respond to the gesture messages directly.
In Windows, several experiences have been gesture enabled. We’ve spent a considerable amount of effort on IE8 – ensuring scrolling and zooming are smooth and that back and forward are at your fingertips. Media Center, which is a completely custom interface ideally suited to touch, added smooth touch scrolling in galleries and the home screen. The XPS Viewer has gesture support that could become a model for many document viewing apps. Scrolling and zoom work as you would expect. When zooming out beyond a single page, pages start to tile so you can view many at a time. When zoomed out in that fashion, double tapping on any page jumps back to the default view of that page. A two-finger tap restores the view to 100% magnification. These predictable behaviors become habit forming quickly.
A major benefit of the Windows ecosystem is diversity – PCs come in all shapes and sizes. To help ensure that there is a great Windows Touch experience across the many different types of PCs we have defined a set of measurements and tests for Windows Touch that are part of the Windows Logo. We’ve been working with touch hardware partners since the beginning of Windows 7 to define the requirements and ensure they are ready for launch.
Our approach has been to provide an abstraction of the underlying hardware technology. We’ve specified requirements for the quantitative aspects of the device, such as accuracy, sample rate, and resolution, based on what is needed to successfully enable the touch features. For example, we have determined the accuracy values necessary for people to successfully target common UI elements like close boxes, and the sample rate and resolution required to ensure quality gesture recognition.
The requirements form the basis for the Windows Touch logo program. For consumers, the logo tells you that the PC and all of its components are optimized for Windows. The component-level logo, which is what we grant to touch digitizers, helps OEMs choose a device that will deliver a great touch experience.
Based on the quantitative requirements, we built an interactive test suite that includes 43 separate tests, all validating the core requirements under different conditions. There are single point accuracy tests at various locations on the screen, including the corners which are often harder for accuracy but critical to Windows. There are also several dynamic tests where accuracy is measured while drawing lines on the screen – see the screenshot below of Test 7. In this test, two lines are simultaneously drawn using touch along the black line from the start to the end. The touch tracings must remain within 2.5 mm of the black line between the start and end points. The first image below shows a passing test where the entire tracing is green (apologies for the fuzziness – these are foot long tracings from a large screen that have been scaled down).
Figure 1: A passing line accuracy test from the Windows 7 Touch logo test tool
Not all devices pass the tests. Below is a screenshot of a device that is failing. This one has some noise – notice the deviation from the line in red. These errors need to be resolved before it would receive the logo. Errors like this can result in misrecognized gestures.
Figure 2: A failing line accuracy test from the Windows 7 Touch logo test tool
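The pass/fail core of these line tests can be sketched as follows. The 2.5 mm tolerance comes from the test described above; the function names and data layout are illustrative, not the logo tool’s actual code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Distance from point p to the segment a-b, in the same units (mm here).
double dist_to_segment(Pt p, Pt a, Pt b) {
    double vx = b.x - a.x, vy = b.y - a.y;
    double wx = p.x - a.x, wy = p.y - a.y;
    double len2 = vx * vx + vy * vy;
    double t = len2 > 0 ? (wx * vx + wy * vy) / len2 : 0.0;
    t = t < 0 ? 0 : (t > 1 ? 1 : t);                 // clamp to the segment
    double dx = p.x - (a.x + t * vx), dy = p.y - (a.y + t * vy);
    return std::sqrt(dx * dx + dy * dy);
}

// A tracing passes if every reported touch point stays within the tolerance
// of the reference line (2.5 mm in the logo test shown in the screenshots).
bool tracing_passes(const std::vector<Pt>& trace, Pt start, Pt end,
                    double tol_mm = 2.5) {
    for (const auto& p : trace)
        if (dist_to_segment(p, start, end) > tol_mm) return false;
    return true;
}
```

A noisy tracing like the one in Figure 2 fails because at least one sample strays beyond the tolerance band.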
To ensure repeatability of the tests, we’ve built a set of plastic jigs with tracing cut-outs, see photo below. This particular jig is used for 5 of the tests and measures accuracy while tracing an arc.
Figure 3. Plastic jigs with tracing cut-outs for testing.
The testing tool is available to our partners now; we’re working closely with several of them to help tune the performance of their devices to meet the requirements and deliver a great touch experience. We have set up an in-house testing facility that will be testing every device submitted for the logo.
With the Release Candidate, OEMs and IHVs will be able to finalize the logo process for systems designed for Windows 7. Today we already have several hardware partners that have provided us with devices and drivers for testing.
We also want to talk a little about the touch platform for software developers. Windows 7 provides a rich touch platform for applications. We’ve already mentioned gestures; there is also a lower-level platform that gives developers complete control over the touch experience. We think about it as a Good-Better-Best software stack.
The “good” bucket is what touch-unaware applications get for free from Windows 7. Windows provides default behaviors for many gestures, and will trigger those behaviors in your application in response to user input. For example, if someone tries touch scrolling over a window that is touch-unaware, we can detect the presence of various types of scrollbars and will scroll them. Similarly, when the user zooms, we inject messages that provide an approximation of the zoom gesture in many apps. As a developer you can ensure that the default gestures work just by using standard scrollbars and responding to ctrl-mouse wheel messages.
The “better” bucket is focused on adding direct gesture support and other small behavior and UI changes to make apps more touch-friendly. For instance, there is a new Win32 window message, WM_GESTURE (preliminary MSDN docs), that informs the application a gesture was performed over its window. Each message contains information about the gesture, such as how far the user is scrolling or zooming and where the center of the gesture is.
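For the zoom gesture, the documentation describes the message’s argument as carrying the distance between the two touch points, and an application scales its content by the ratio of successive distances. A portable sketch of that bookkeeping (the struct and names here are illustrative, not the Win32 API itself):

```cpp
#include <cassert>

// Models the documented GID_ZOOM handling pattern: each zoom message carries
// the current distance between the two fingers; the incremental scale factor
// is the ratio of this distance to the one in the previous message.
struct ZoomTracker {
    double last_distance = 0.0; // distance carried by the previous message

    // begin is true for the first message of the gesture (GF_BEGIN).
    // Returns the incremental scale factor to apply for this message.
    double on_zoom(double distance, bool begin) {
        double factor = (begin || last_distance <= 0.0)
                            ? 1.0
                            : distance / last_distance;
        last_distance = distance;
        return factor;
    }
};
```

Accumulating these per-message factors gives the overall zoom applied while the fingers are down.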
Applications that respond to gestures directly have full control over how they behave. For example, the default touch scrolling is designed to work in text-centric windows that scroll primarily vertically (like web pages or documents); dragging horizontally does selection rather than scrolling. In most applications this works well, but if an app scrolls primarily horizontally then the defaults would have to be overridden. Also, for some applications the default scroll can appear chunky. This is fine with a mouse wheel, but it feels unnatural with touch. Apps may also want to tune scrolling to end on boundaries, such as cells in a spreadsheet, or photos in a list. IE8 has a custom behavior where it opens a link in a new tab if you drag over it rather than click it.
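The boundary-tuned scrolling mentioned above can be sketched with a simple snapping helper (hypothetical, assuming uniformly sized items such as photos in a list):

```cpp
#include <cassert>
#include <cmath>

// When a touch scroll comes to rest, snap the final offset to the nearest
// item boundary (a photo width, a spreadsheet row height, and so on) so the
// view never stops mid-item. Units are whatever the app scrolls in (pixels).
double snap_to_boundary(double offset, double item_size) {
    if (item_size <= 0) return offset;
    return std::round(offset / item_size) * item_size;
}
```

An app would call this once, with the projected resting offset, as the scroll gesture ends.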
In addition to gestures, there are subtle optimizations applications can make for touch if they check to see if touch is in use. Many of the subtle touch behavior optimizations in Windows were enabled in this manner. Larger Jump List item spacing for touch, larger hot spots for triggering window arranging, and the press and hold behavior on the desktop Aero Peek button with touch are all features written with the mouse in mind, but when activated via touch use slightly different parameters.
Applications or features that fall into the “best” bucket are designed from the ground up to be great touch experiences. Apps in this bucket would build on top of WM_TOUCH – the window message that provides raw touch data to the application. Developers can use this to go beyond the core system gestures and build custom gesture support for their applications. They can also provide visualizations of the touch input (e.g. a raster editing application), build custom controls, and other things we haven’t thought of yet!
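As an example of the kind of custom gesture raw touch data enables, here is a sketch of two-finger-tap detection (the thresholds and data structure are invented for illustration; a real implementation would work from the per-contact data in each touch message):

```cpp
#include <cassert>

// Per-contact summary built up from raw touch events.
struct Contact {
    double down_ms, up_ms; // timestamps of touch-down and touch-up
    double moved_mm;       // total travel while the contact was down
};

// Two contacts form a two-finger tap when each is a tap (brief, barely
// moving) and their time on the screen overlaps.
bool is_two_finger_tap(const Contact& a, const Contact& b,
                       double max_duration_ms = 300, double max_move_mm = 2) {
    auto tap = [&](const Contact& c) {
        return (c.up_ms - c.down_ms) <= max_duration_ms &&
               c.moved_mm <= max_move_mm;
    };
    return tap(a) && tap(b) && a.down_ms < b.up_ms && b.down_ms < a.up_ms;
}
```

This is the shape of gesture that, for example, the XPS Viewer’s two-finger tap (restore to 100% magnification) relies on.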
We also provide a COM version of the Manipulations and Inertia APIs from Surface. The Manipulations API simplifies interactions where an arbitrary number of fingers are on an object into simple 2D affine transforms and also allows for multiple interactions to be occurring simultaneously. For instance, if you were writing a photo editing application, you could grab two photos at the same time using however many fingers you wanted and rotate, resize, and translate the photos within the app. Inertia provides a very basic physics model for applications and, in the example above, would allow you to “toss” the photos and have them decelerate and come to a stop naturally.
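The deceleration behavior can be sketched with basic kinematics (a simplified stand-in, not the actual Inertia API): a tossed object starts at the release velocity and slows at a constant rate until it stops, so it coasts a total distance of v²/(2a).

```cpp
#include <cassert>

// Minimal constant-deceleration model of a touch "toss".
struct Inertia {
    double velocity;     // px per ms at the moment of release
    double deceleration; // px per ms^2

    // Total distance the object coasts before coming to rest: v^2 / (2a).
    double coast_distance() const {
        return (velocity * velocity) / (2.0 * deceleration);
    }

    // Position after t ms, clamped at the stopping point.
    double position(double t) const {
        double stop_t = velocity / deceleration;
        if (t >= stop_t) return coast_distance();
        return velocity * t - 0.5 * deceleration * t * t;
    }
};
```

Sampling `position` each frame produces the natural-feeling slide-and-stop the paragraph above describes.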
We’ve previously demonstrated Microsoft Surface Globe, an interactive globe done in partnership with the Surface effort. Spinning the globe works as you would expect from a real-world globe, but with a touchable globe you can grab and stretch the view to zoom in, rotate, and move the view around. Interacting with the globe and exploring the world is the majority of the UI, and it is exceedingly easy to use with touch. Other features like search and adding markers to the map have also been designed with touch in mind.
Here’s another video to get an idea of what we’re talking about:
We’re eagerly looking forward to seeing new touch-optimized user interfaces and interactions. If you’re thinking about writing touch applications or adding touch support to your existing app, you should start with the MSDN documentation and samples.
We’ve noted several touch updates in the RC. If you have the Windows 7 Beta you can experiment with touch using a PC that supports multiple touch points. Please note that the multitouch PCs available today were developed while the Windows 7 requirements were still being defined, so while we believe they can support Windows 7’s requirements, only the maker of the PC can provide the logoed drivers for Windows 7 and support the PC on Windows 7. Keeping that caveat in mind, today there are a few multitouch PCs on the market:
To enable multitouch capabilities on these PCs running the Windows 7 Beta you will need to make sure you have the latest multitouch beta drivers. Remember these are pre-release drivers and are not supported by Microsoft, Dell or HP. And again, they still need to pass through the Windows Logo process we described above before they are final.
We often get asked about single-touch PCs. Will they work with Windows 7? There are many types of hardware available for touch, and many screens and PCs can provide single touch (usually based on resistive touch technology). A single-touch PC will have the same functionality on Windows 7 as it does on Vista, but it will not be extended with the new Windows 7 touch capabilities. As we noted earlier, Windows Touch in Windows 7 is comprised of a collection of touch enhancements, several of which require multitouch, that work together to deliver a great end-to-end touch experience.
As form factors change and the demands of our user interfaces change, input methods change and grow as well. We’re excited about the unique benefits touch offers the user, and the new places and new ways it enables PCs to be used. We expect PCs of all form factors and price points to provide touch support and so it makes sense that these PCs will be able to take advantage of the full range of Windows 7 capabilities.
Windows 7 is designed to provide efficient ways to use multitouch for the most common and important scenarios, while being a natural and intuitive complement to the mouse and keyboard people use today.
Keep in Touch!
- Windows Touch Team
@jcompagner: "Or is this touchpad multi touch purely a hardware/driver issue? Why is it then in the windows pc world not really happening?"
Works for me. Whomever you bought your laptop from should provide the correct driver. Who is it that failed to do so????
We can collectively take out this bad manufacturer to tar and feather. Who is it?
>> Tihiy relax, that's standard behavior of a default window procedure. (I think ;) )
No, it's not!
Touch seems to be not integrated into windows core libraries at all, with WinEvent hooks meaning that performance and stability is degraded (a bit, but matters). And that strange DefWindowProc leaking: no other sane message ever leaked memory in Windows.
It's a Dell Vostro 1700 (1.5 years old), and if you want to bash Dell then I guess they should be bashed for their very bad 64-bit driver support. I needed to get the driver from another of their systems (a Precision notebook). I would like Microsoft to pressure Dell to support both equally well.
But I guess the trackpad (the hardware) should also support multi-touch, and I guess that is not yet the case for the Vostro 1700.
What things can you do then? Scroll with 2 fingers and so on?
I don't care whether you can touch it or not - where the hell is it?
Disturbing news: InfoWorld: Microsoft slates May date for Windows 7 RC download
It's supposed to be not later than April 10 ..
I'm saving 2.5 GB of my 5 GB per month allowance and lose that if I can't get this by April 16.
I guess I'll have to try for 5057 just before my allowance runs out. I mean, I wouldn't save it, but don't know for sure if it's coming or not!
I haven't commented on posts here for a long time. And what do I read here now? Mainly criticism. And I agree with a big part of it. Why?
People living in parts of the world other than America like nice-looking things, but very often they look into other aspects too.
For example: I can browse Internet using for example Damn Small Linux (50 MB), I can modify Windows XP installation for less than 1 GB of HDD (and it will be still able to run majority/all of my applications), etc. etc.
Windows Vista/Seven bring various innovations, but they are still not as configurable or as small. They also add various new hardcoded limitations (with codecs too). For many people the nice-looking Seven interface will not be enough reason to switch to it because of these limits.
And when we speak about RAM: KolibriOS needs 1 or 2 MB of RAM. I agree that it's primitive. But some Linux environments can be trimmed to less than 100 or 50 MB. Saying that a few hundred MB of RAM is a "good result" is a misunderstanding, in my opinion.
This is like with American cars (which are not too popular in Europe): many people don't like them, because they're too big or because they have "different" design.
In my opinion: if MS continues this strategy, it can fail. The more limits are put in, the more people will be interested in alternatives.
You may not agree with this. That is your choice :)
Go buy Windows Mobile or Windows Embedded
You haven't commented on posts here for a long time?
Please continue not commenting
@tihiy: Calling the default windows procedure for any message you don't process has always been a requirement on Windows. Failing to do so can lead to leaks, crashes or other undefined behaviour.
"An application-defined window procedure should pass any messages that it does not process to the DefWindowProc function for default processing."
I'm not so sure I like having a single finger moving up and down scroll but side to side select. Those just seem like totally different actions, and so not very reliable, as you pointed out. I mean, if you use a scroll wheel that supports side-to-side scrolling, up and down scrolls and side to side also scrolls. What if I'm reading a long document and want to select a large section? With a mouse I normally have more up and down movement than side to side.
Why not just have one finger always scroll and two fingers do select? (The movement should let you know it's not a right click.)
Is there any difference between having your index finger on the screen and then touching the screen with your middle finger, versus having your middle finger down and touching the screen with your index? (i.e. the left/right side the second finger touches is significant) This would obviously mean you would need a left-handed/right-handed option in the control panel, but it would give more options for what the touch should do.
Are any .NET APIs planned for release, or changes to the .NET Framework to support multitouch?
I have a 3 month old Toshiba Satellite Pro, originally running Windows Vista.
My drivers are "up to date" and there is NO multi-touch supported in Windows Vista or 7. A near-identical 1-year-old model of Sat. Pro that my colleague is using supports multi-touch within a Linux environment, and I have searched for drivers to enable this hardware-supported function in Windows, but to no avail.
If anyone can link me through to Drivers that will work on a Toshiba, please do so.
If there are none, I move that MS strongly suggest that these drivers be created to coincide, at the latest, with the release of Win7 for all of the major companies.
This is a function that will be infinitely beneficial to all notebook users, not a select few touch users.
Just my thoughts,
- AeonSlayer / Simon
I'm very interested in the touch features of Windows 7. I was curious about what level of support / assistance there might be for people who have built their own touch devices? I know there's a reasonable community of people (e.g. places like http://nuigroup.com/) who are finding ways to build working devices - but I imagine the hard part would be getting the software libraries for these systems to somehow interface to whatever hardware abstraction layer Windows Touch uses. Will there be detailed information published on how the drivers / abstraction layer etc etc work?
I appreciate this would be a niche market, but people like this at the cutting edge would no doubt push the technology and also demonstrate to hardware companies that there is demand in the marketplace for touch devices.
I have quite a blocking problem with touch. When I install Windows 7 and touch the display, the cursor moves horizontally when I move my finger vertically.
I thought that calibration would help, but after touching all 16 crosses, touch becomes unusable, probably due to the unexpected swap of axes.
It would be great if Windows could adapt to such a situation.
PS. The calibration displays a UAC prompt even though the default UAC setting was kept.
@jcompagner: Yeah, scroll with two fingers, etc. It is an Eee netbook. I have the 1000 model (10in screen). There are three-finger gestures, but I don't bother. The two-finger scrolling is very handy though. These are going for under $400 if you want one.
The functionality looks good; nice job so far. I would encourage you to give it a bit more polish next. E.g. one person's suggestion about making a picture actually rotate instead of just snapping 90 degrees. I've seen some amazing physics on Surface - things bounce around, hit one another, moving objects have inertia, objects can be elastic, etc. You could apply some of those principles to moving and resizing windows that would make Windows 7 touch look very slick.
If the other Toshiba supports it on Linux, I guess it might be a Synaptics touchpad. If so, you might try downloading the driver yourself:
Even with the driver installed, there is likely an option setting to activate gestures.
Here is a description of how to use the gestures when activated: