Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

  • Engineering Windows 7

    Engineering the Windows 7 Boot Animation

    • 103 Comments

    As we connect through this blog and through all of those talking about Windows 7 it is clear that folks have a lot of passion around many topics.  We learned early on about the passion around the boot/startup sequence and how important it was for that to go by quickly! At the same time, we know that it is really dull to watch an HDD light blink when resuming a machine from hibernate or powering up a machine. To improve this first connection with people, we set out to improve the boot sequence—jazz it up if you will. It sounds pretty easy, but as we looked into solving this we found it is a pretty significant engineering challenge. Our goal was to have fun while having no impact on the boot performance of the system. To explain this engineering and describe the boot sequence, Karen Wong, a program manager on our Core User Experience feature team, authored this post. --Steven

    Design

    We use the word “personality” to refer to some of the characteristics of software that connect emotionally with people.  ‘Light’ and ‘energy’ are some of the terms we use to describe the personality of Windows 7. As we designed Windows 7 it became clear that in order to showcase these elements of Win7’s personality, we needed to go above and beyond what we did with Vista’s boot visuals.

    From a design perspective, we know that the visual presentation of a feature plays a key role in the user’s perception of performance and quality.  Our objective was to make Windows boot beautiful, inspired by the Windows 7 personality of light and energy; the way these forms reveal themselves in nature became our design palette.  Words such as “bioluminescence”, “organic”, “humble beauty”, and “atmosphere” came up frequently in our brainstorming sessions. We know that in isolation these might sound a bit corny, but this is all part of the overall goals of Windows 7.

    Over two dozen boot sequence designs were created, reviewed, and user tested to evaluate them against our goals.  Designs varied in the saturation and/or brightness of color, the complexity of motion, and lighting effects.  Here are some sketches from our design journey:

    Design sketches for the proposed boot animation

    The final design in Windows 7 shows energy approaching from four directions that joins to form a light projecting through a window (of course it is no coincidence that the Windows logo resembles a window!).  A subtle pulse indicates progress thereafter; another design detail that reinforces the liveliness of Windows 7’s personality.

    From a design perspective, this new boot sequence met all of our design goals, and we were excited to send it out into the world.  However, boot had to be more than just a pretty face.  From an engineering perspective, we had some clear challenges to overcome, as we knew that “time to desktop” was still the most important thing to users. Visual delight could not trump getting to the desktop quickly, and many of you have been critical of features dubbed “eye candy” – the boot sequence is not going to be one of those features, for sure.

    No compromise on performance

    If we had kept everything the same from Vista and simply updated the boot animation to the new Win7 look, we would not have achieved the new levels of performance and quality that we aspire to.  In fact, significant code changes were required in order to make the new boot animation even possible in Win7.

    In Vista, the boot loader uses a low-resolution 640x480 screen, and the file size required for the green animated progress bar is very small. Furthermore, the Vista boot screen has low color depth – 16 bits per pixel (bpp). We increased this in Win7 to 32 bpp, which enables the color richness you see in the new boot animation. Updates to the Vista boot progress indicator were performed by the CPU and were susceptible to I/O time, causing occasional glitches in the animation. With the low resolution screen, limited color depth, and susceptibility to glitches – we knew we had our work cut out for us if we wanted to build something fancier for Win7.

    We started with the Win7 boot loader using a different mechanism to display the boot animation. It gets a pointer to the frame buffer from the firmware (either BIOS or UEFI firmware) and displays a higher resolution image (1024 x 768). It animates the image while the kernel and boot-critical device drivers are loaded into memory. Since the native graphics driver for the display is not yet loaded into memory and initialized, the animation is run on the CPU, which updates the frame buffer for the graphics display directly. We made an additional optimization: having the CPU use write-combined caching to accelerate these updates.
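
    As a rough illustration only (this is not boot loader code – the real work happens against the firmware-provided frame buffer before any Windows process exists), the user-mode sketch below allocates a write-combined buffer standing in for a 1024x768, 32 bpp frame buffer and copies one hypothetical animation frame into a small region of it:

    #include <windows.h>
    #include <vector>
    #include <cstring>
    #include <cstdio>

    int main() {
        const int width = 1024, height = 768, bytesPerPixel = 4;   // 32 bpp at 1024x768

        // PAGE_WRITECOMBINE batches the CPU's writes, which helps when streaming
        // pixels to memory that the display hardware reads rather than the CPU.
        void* frameBuffer = VirtualAlloc(nullptr, width * height * bytesPerPixel,
                                         MEM_RESERVE | MEM_COMMIT,
                                         PAGE_READWRITE | PAGE_WRITECOMBINE);
        if (!frameBuffer) return 1;

        // One hypothetical 200x200 animation frame (a solid color for brevity).
        const int animW = 200, animH = 200;
        std::vector<unsigned char> frame(animW * animH * bytesPerPixel, 0x80);

        // Blit the frame into the centered animation region, one row at a time.
        const int x0 = (width - animW) / 2, y0 = (height - animH) / 2;
        auto* dst = static_cast<unsigned char*>(frameBuffer);
        for (int y = 0; y < animH; ++y) {
            std::memcpy(dst + ((y0 + y) * width + x0) * bytesPerPixel,
                        frame.data() + y * animW * bytesPerPixel,
                        animW * bytesPerPixel);
        }

        std::printf("copied one %dx%d frame into the frame buffer stand-in\n", animW, animH);
        VirtualFree(frameBuffer, 0, MEM_RELEASE);
        return 0;
    }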

    Michael Fortin’s blog entry on boot performance describes how the early stage of boot is I/O bound, as it is loading the kernel, device driver files, and other system component files.  We therefore limited the dimensions of the boot animation to a small region of the screen, to avoid introducing any delay during the early stage of boot.  A larger animation area would require loading a larger set of animation images, which adds to the file I/O.  The animation images are stored as bitmap resources and compressed using WIM image compression, which reduces the overall file size, thereby reducing the I/O required to read them in; it also reduces the on-disk footprint. Animating a smaller region of the screen and using a slightly lower frame rate also keep the CPU overhead of updating the frame buffer low enough that there is no added overhead to boot time.
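
    Some back-of-the-envelope arithmetic (our own illustration, with a hypothetical 200x200 animation region rather than the actual dimensions) shows why the size of the animated region matters so much for the per-frame cost:

    #include <cstdio>

    int main() {
        const long long bytesPerPixel = 4;                        // 32 bpp
        const long long fullScreen  = 1024 * 768 * bytesPerPixel;
        const long long smallRegion = 200 * 200 * bytesPerPixel;  // hypothetical region size

        std::printf("full screen : %lld bytes per frame (~%.1f MB)\n",
                    fullScreen, fullScreen / (1024.0 * 1024.0));
        std::printf("small region: %lld bytes per frame (~%.0f KB)\n",
                    smallRegion, smallRegion / 1024.0);
        return 0;
    }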

    Another change that improved not only the performance of boot but also its quality was reducing the number of graphics mode transitions. These transitions occur during initialization of the graphics subsystem and the Windows shell. In Vista they make the boot experience less smooth, as the display changes (flashes black) a few times before presenting the user with a logon screen (or the user’s desktop if there is only one system user).

    After looking deep into our boot architecture for performance and quality improvements to enable the new animation, we were pleasantly surprised that the act of beautifying the boot animation created a new opportunity to further decrease time to desktop.  In Vista, when a customer powered on the machine, the boot sequence included an animation of the Windows flag, or ‘pearl’, before reaching the login screen (or the desktop if the user is set to auto-login).  Due to Vista boot architecture constraints, this pearl animation could only play after boot code had already completed.

    Vista Boot Sequence, with Pearl Animation

    Now that the new boot visuals display a rich animation that reflects the Windows 7 personality, the pearl animation seemed out of date and redundant, and was removed.  As a result, we saved the time it takes to play this animation after boot is complete.

    Windows 7 Boot Sequence, Pearl Animation Removed

    You may also be wondering what happened to the startup sound.  In Vista, the sound had to be synchronized with the pearl animation to produce the highest quality experience.  This had a potential performance impact on some hardware, as it required the system’s sound stack to be loaded to complete the pearl sequence.  In cases where we were waiting for the system’s sound playback to be ready, a delay could occur in getting to the desktop.  As such, we changed the sound to play asynchronously, anytime after the logon screen loads.  On most hardware that we tested, this is right when the logon screen displays.  We heard customer feedback in Vista that the sound played and caught your attention, but boot was not yet complete.  So in addition to performance benefits, this change also improves the user experience by letting users know when their machine is ready for use.
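
    For the curious, the difference is roughly the one between the SND_ASYNC and SND_SYNC flags of the Win32 PlaySound API. The sketch below is only an approximation of the idea (the .wav path is an example, and this is not the actual logon code):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    int main() {
        // Asynchronous: the call returns immediately and playback continues in the
        // background, so the caller is never blocked waiting on the sound stack.
        PlaySoundW(L"C:\\Windows\\Media\\Windows Logon Sound.wav", nullptr,
                   SND_FILENAME | SND_ASYNC);

        // ... the caller keeps doing useful work here ...

        // Synchronous, for contrast: this variant would not return until playback ends.
        // PlaySoundW(L"C:\\Windows\\Media\\Windows Logon Sound.wav", nullptr,
        //            SND_FILENAME | SND_SYNC);

        Sleep(5000);  // keep this toy process alive long enough for the async sound
        return 0;
    }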

    The sum of the boot code optimizations and removal of the pearl animation from Vista enabled us to add a rich, high-quality animation during boot, with no increase in the time it takes a user to reach the desktop.

    Designing for a wide range of hardware

    The boot experience varies depending on the user’s hardware.  We made some design decisions to ensure the best visual experience across a wide range of hardware; however, the time it takes a system to get to the desktop is mainly hardware-dependent.

    For example, you may notice that there is a delay before the animation starts during boot, and this delay time varies depending on system hardware.  To show immediate feedback, we actually display text on the boot screen before Windows has had a chance to start all of the processors on the system. Only when that is complete can the animation run asynchronously to the rest of the I/O during boot (which, as discussed earlier, is necessary for optimal performance and quality).

    You may also notice that the Windows flag’s dimensions during boot may change slightly on different screen sizes.  Due to technical constraints in Win7, boot is always displayed in our recommended minimum resolution – 1024x768 – regardless of the system’s native resolution.  Today, most hardware is set to stretch the boot sequence to fill the screen, as opposed to centering it.  Consequently, the boot animation is usually stretched on screens with a different aspect ratio than 1024x768 (4:3); however, we did test the sequence on common aspect ratios to ensure that visual quality was preserved.

    Boot, Reboot and Resume from Hibernate

    With all this hard work to improve the boot experience, we couldn’t let it go to waste.  As such, users will also have this experience when they resume from hibernate. 

    Personalization

    We know many of you might be asking if you could include your own animation or customize this sequence. This is not something we will support in Windows 7.  We’ve talked about and shown a great many “personalization” elements of Windows 7 already, such as the new themepacks which you can try out in the beta. The reason for this should be pretty clear: we cannot guarantee the security of the system if arbitrary elements are loaded into memory at boot time. In the early stages of starting Windows, the system needs to be locked down and to execute along a very carefully monitored and known path, as tools such as firewalls and anti-virus checking are not yet available to secure the system. And of course, even though we’re sure everyone would follow the requirements around image size, content, and so on, for performance reasons we would not want to build in all the code necessary to guarantee that all third parties were doing so. One of our design goals for Windows 7 was to make sure there are ample opportunities to express yourself and to make sure your PC is really your PC, so we hope that you’ll understand why this element is one we need to keep consistent.

    This was a quick behind-the-scenes look at something that we hope you enjoy. With Windows 7 we set out to make the experience of starting a Windows PC a little more enjoyable, and from the feedback we’ve seen here and in other forums, we think we’re heading in the right direction. In addition to our efforts to make boot fast, we also have a goal to make the system robust enough that most of you will not see this new boot animation that often, and when you do it will be both enjoyable and fast!

    --Karen

  • Engineering Windows 7

    Advances in typography and text rendering in Windows 7

    • 51 Comments

    Even with pictures and videos so commonplace on PCs, many of us spend most of our time looking at and interacting with text. Yet few of us stop to think about the depth of technology required to render text well, and that this is an area that continues to benefit from improved technology in displays and graphics cards, as well as the APIs available to developers. In Windows 7, the support for text and fonts in GDI continues to provide the foundation for compatibility and application support. Building on the foundation of the modern DirectX graphics infrastructure, Windows 7 enhances the text output available to developers with DirectWrite. This is a new API subsystem, and one that over time you will see adopted more broadly by applications from Microsoft, independent software developers, and within Windows itself. This post will also talk about improvements to ClearType and to fonts, both available as part of the improvements to the GDI-based text APIs. This work was introduced at the PDC (pointers towards the end of the post). This post is by Worachai Chaoweeraprasit, a development lead on our Graphics feature team. --Steven

    One of the high-level goals of Windows 7 is to have even better graphics – graphics with higher fidelity. To that end, my team is looking into how to improve one of the most basic graphic elements in Windows, and that is text – the thing that’s always right in your face, but we hope you’ll never actually see it.

    The need for good text

    About 80% of the time people spend with their PC is spent either reading or writing. This should come as no surprise when you realize that text is essentially how the machine talks back to you, and until we have a technology that can inject thought directly into our brains, text will probably continue to be the way we receive information from the computer screen.

    Studies have shown that good text leads to better productivity. Essentially we are wired as humans to be incredibly good at capturing words and making a smooth, rapid transition between them – the basis of reading. We’re so good at it that we can do it unconsciously with incredible speed, provided the text is optimized for that process. This might explain why many can sink into a good book for hours, but some quickly become tired after staring at the computer screen for a while. Any visual factor that disrupts the reading process effectively slows us down. Good text, therefore, is text that is tuned to support the human reading process with as little distraction as possible.

    The evenness of the white surrounding each letter, word, line, and paragraph plays a huge role in keeping the pace of reading, while the black elements hold our attention together. A line too long, a word too tight, a paragraph too uneven – any of these conditions takes us farther and farther away from the message being delivered and closer and closer to the mere medium delivering it. The art of text is essentially to make the actual text itself disappear before your eyes, so that the ideas it delivers reappear in your head. The study of how to prepare proper text is known as typography. And, as a typographer would say: good typography is not to be seen; only the bad kind is. As a platform, the role of Windows is to deliver great presentation of text and to offer software developers great tools for creating the best presentation possible in the context of the software they develop.

    Improving current techniques

    People tend to develop habits, and often over time these become the preferred way of getting things done. The more mundane the activity, the more attached we become to it, and the harder it is to change. When it comes to the text on your screen – the same screen you look at day in and day out – it could quickly become jarring if that changed completely overnight, even for the better. So, how do we go about improving on what we have all become so used to? We want to improve what is there while continuing to support existing methods. But before we get to the improvement, let’s first take a closer look at what the current implementation really is and what challenges it has presented over the years.

    The current implementation is the product of a text rendering design based on device pixels. The dimensions of text at a given size eventually translate into a fixed number of pixels in the horizontal and vertical directions on the device surface. 10-point text translates to roughly 80 pixels of height on a typical 600 dpi printer, while the same text gets merely 13 pixels on a 96 dpi monitor. This physical screen condition is hardly adequate for the quality we seek for good text on screen.
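
    The arithmetic behind those numbers is the standard typography conversion (1 point = 1/72 inch, so pixels = points / 72 × dpi); a tiny sketch:

    #include <cstdio>

    static double PointsToPixels(double points, double dpi) {
        return points / 72.0 * dpi;   // 1 point = 1/72 inch
    }

    int main() {
        std::printf("10 pt at 600 dpi -> %.1f px\n", PointsToPixels(10, 600));  // ~83 px
        std::printf("10 pt at  96 dpi -> %.1f px\n", PointsToPixels(10, 96));   // ~13 px
        return 0;
    }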

    Fortunately, the advent of ClearType during the past decade has largely improved the clarity aspect of quality. ClearType leverages the anatomy of the LCD pixel structure and the characteristics of the human visual system to distribute the energy typically emitted by a whole display pixel across the neighboring sub-pixels of the LCD’s three color channels that make up each individual pixel, creating the visual illusion of higher resolution raster quality on a lower resolution device. As a result, ClearType text looks significantly sharper than typical text on an LCD display, mitigating a large portion of the quality problem on a display technology that would become hugely popular a few years later.

    Another pleasant aspect of the original ClearType design in Windows was that it improved the clarity of text without breaking application compatibility – that is, it changed neither the actual size of each individual glyph nor the distance between two adjacent glyphs. This is the reason one could turn it on or off at will without having to “store” the selected option in the document or application. It is entirely a per-user rendering preference. In Windows 7 we also improved the ClearType Text Tuner, in keeping with our theme of being in control of your PC experience, by providing even more granular choices when tuning ClearType (and of course you can still turn it off).

    But like many other things in the world, the coin has two faces. While this approach preserves backward compatibility, it is limited by its own leverage and unable to advance the state of the art. The width and height of each individual glyph and the nominal distance between two adjacent glyphs remain fixed to a rounded number of screen pixels at a given size.

    One of the graphics improvements we made in Windows 7, therefore, is to move away from the physical pixel model of the past and instead create a new design around what we call the “device independent pixel” unit (or “DIP”), a “virtual pixel” that is one ninety-sixth of an inch, expressed in a floating-point data type. In this model, a glyph (or any other geometric primitive for that matter) can be sized to fractional pixels and positioned anywhere between two pixels. The new ClearType improvement allows glyphs to be sized and placed at the screen sub-pixel nearest to their ideal position, creating a more natural looking word shape and making text on screen look a lot closer to print quality.

    The following figure shows a side-by-side comparison of the same word rendered with the original (today’s) ClearType (above) and with the Windows 7 improvement – Natural ClearType (below), which does require calling the new APIs to render. Notice the width of the letters in the word and the spacing between them, as well as how the more consistent width and spacing improve the overall appearance of the entire word. Note that all the letters are placed with their nominal spacing and there is no kerning adjustment applied here. A great article by Kevin Larson – a researcher in the Advanced Reading Technology team – discusses in detail the scientific aspects of word recognition.

    Comparing ClearType and Windows 7's Natural ClearType

    The ability to be more precise in approximating the screen placement of natural text also lends itself to a very nice side effect: text can now be placed on the line with no regard to the actual display device’s resolution. It means a UI designer can design an application UI knowing it’ll look the same on all other screens as it appears on his or her screen, regardless of what type of display device the users might have. This is also particularly handy for software localization, where the translated text produces the same layout everywhere.

    This improvement can also offer a more realistic view of a print document on screen, or make the screen document look closer to its print counterpart. It can also improve the quality of document zooming. Imagine a document zoom that goes in and out the same way the actual printed page would as you pull it closer to and farther from your eyes. It could mean a more joyful experience for online reading.

    Fonts and Font Management

    Fonts are the heart and soul of typography, much as photos are to photography. A lot more fonts ship with Windows these days, while even more are developed around the world. Windows Vista shipped with 40% more fonts compared to Windows XP, and Windows 7 is expected to ship with 40+ new fonts, just to underscore this trend. We’ve also added some additional viewing/categorization capabilities in the Windows 7 Explorer to improve working with a large library of related fonts.

    The default common controls’ font dialog and the font chunk in the Windows 7 Ribbon have also been updated to be more intelligently selective about which fonts to present based on the current user’s profile. Depending on a number of settings, including the current UI language, the user locale, and the current set of keyboard input locales, the font list hides fonts for languages not typically used by a user of a different culture and locale. For example, all the international fonts are automatically hidden from a typical English user to reduce clutter and promote better productivity in common system applications such as Notepad, WordPad and Paint. Third-party applications utilizing the Ribbon or the common controls’ font dialog get the same benefit. The user can still bring any desired font back into view by explicitly marking it in the Windows 7 Control Panel’s Fonts applet.

    Operating System    Fonts shipped “in-box”
    Windows XP SP2      133
    Windows Vista       191
    Windows 7           235 (currently planned)

    This growth, however, introduces some new opportunities for improvement. We’ve long treated fonts as system-wide resources. A font gets “installed” on the machine and is kept in a single flat namespace managed by the core part of the operating system. It may be interesting to some that the font named “Arial Black” isn’t really in the same grouping as “Arial Narrow” or “Arial”; as far as the operating system is concerned, they are just different fonts with different names. And because a font is uniquely identified by its name, you can’t have multiple versions of the same font installed at the same time.

    Because fonts are system resources, non-traditional uses of fonts – such as a font embedded within a document, or a font used exclusively by an application – are handled through a mechanism known as private installation, which involves making sure the font name is unique before installing it programmatically, while hiding it from other applications. As far as the operating system internals are concerned, a privately installed font is just like a publicly installed one.

    An important improvement in Windows 7’s new font system is the notion of a “font collection,” which allows fonts sharing the same usage to be partitioned into a separate namespace. The system collection is similar to what exists today and is created and managed by the system, whereas custom collections – as many as needed – can be created and managed entirely by the application program. This allows a document to have its own set of fonts local to it, and a third-party application or plug-in to ship with its own fonts used exclusively within the program. This partitioning not only reduces unnecessary system-wide font updates, allowing updates to happen locally as needed; it also allows access to multiple versions of the same font in different collections.

    The new font system also improves the way fonts are organized within a collection. It supports the notion of weight-width-slope variation, where fonts with the same stylistic root that vary in weight (thin, light, bold, black, etc.), width (wide, narrow, etc.), or slope (italic, oblique) are grouped together in the same font family. For instance, “Arial Narrow” becomes a variation, or face, in the “Arial” family. This grouping model is advocated by the CSS recommendation.
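
    A minimal sketch of how an application might see this model through DirectWrite (our illustration, not product code, with error handling trimmed): look up the “Arial” family in the system font collection and ask for a bold, condensed, italic face within it.

    #include <windows.h>
    #include <dwrite.h>
    #include <cstdio>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory* factory = nullptr;
        if (FAILED(DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                                       reinterpret_cast<IUnknown**>(&factory)))) return 1;

        IDWriteFontCollection* systemFonts = nullptr;
        factory->GetSystemFontCollection(&systemFonts, FALSE);

        UINT32 index = 0;
        BOOL exists = FALSE;
        systemFonts->FindFamilyName(L"Arial", &index, &exists);
        if (exists) {
            IDWriteFontFamily* family = nullptr;
            systemFonts->GetFontFamily(index, &family);

            // In this model "Arial Narrow Bold Italic" is just a face within "Arial".
            IDWriteFont* face = nullptr;
            if (SUCCEEDED(family->GetFirstMatchingFont(DWRITE_FONT_WEIGHT_BOLD,
                                                       DWRITE_FONT_STRETCH_CONDENSED,
                                                       DWRITE_FONT_STYLE_ITALIC,
                                                       &face))) {
                std::printf("found a matching face in the Arial family\n");
                face->Release();
            }
            family->Release();
        }
        systemFonts->Release();
        factory->Release();
        return 0;
    }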

    Font Art

    Fonts also represent art and artistic expression, and the technology used to create fonts is therefore the artist’s tool of expression. An important technology called OpenType emerged during the past decade. It enables new ways type design can be realized: OpenType allows a designer to define how glyphs interact and transform in stages, and then expose that behavior as an executable unit known as a “font feature” for programmable access by applications.

    OpenType was an offshoot of the TrueType Open technology Microsoft developed in 1994-95. The TrueType Open technology added the GSUB, GPOS, BASE, JSTF, and GDEF tables to the TrueType format. The primary usage at the time was to help with the creation of Arabic fonts, due to the inherent complexity of the task. Microsoft chose to rename the technology to OpenType in 1996, and Adobe added their CFF glyph outline format to the technology in the same year. Today OpenType is used to improve readability of text as well as to express new and exciting type design in various languages.

    However, despite its long-time presence and availability, the usage of OpenType in the Windows world remains largely confined to specialized programs. The Windows native graphics system has not fully embraced OpenType for mainstream text. This absence discourages many designers, as there is no standard way in Windows to test the features they produce. Likewise, its limited exposure doesn’t encourage discoverability for mainstream application developers. Improving this and transitioning to this improved rendering technology is a multi-step, multi-release investment, done so as to maximize the benefit while minimizing the disruption that might be introduced as incompatibilities. Windows 7 takes another step on this path. We know that for many who care deeply about this area there is a strong desire to move faster. We are doing our best to balance the speed of transition with the desire to maintain compatibility.

    Windows 7’s new text system not only uses available OpenType features internally but also allows access to any feature made available in the font through the high level programming interface, making it easier for application developers to discover and exercise font features in mainstream scenarios. Windows 7 also ships with a brand new OpenType font, “Gabriola”, developed by the well-respected typographer John Hudson. Gabriola makes heavy use of contextual letterforms and offers an unprecedented number of stylistic sets for different usages of the font on different occasions. The figure below enumerates all stylistic sets available in this font; notice the subtle and not-so-subtle ways each stylistic set is distinguished.

    Gabriola style set 1 of 3

    Gabriola style set 2 of 3

    Gabriola style set 3 of 3

    The figure below also demonstrates the power of contextual letterforms in the eighth rendition of Gabriola’s stylistic sets (“ss07”), producing different renderings of the same word depending on where it falls in the line.

    Gabriola rendered using contextual letterforms
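
    As a hedged sketch of how an application could turn on one of these stylistic sets through DirectWrite’s typography interface (assuming Gabriola is installed, as it is on Windows 7, and with error handling omitted for brevity):

    #include <windows.h>
    #include <dwrite.h>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory* factory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown**>(&factory));

        IDWriteTextFormat* format = nullptr;
        factory->CreateTextFormat(L"Gabriola", nullptr,
                                  DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL,
                                  DWRITE_FONT_STRETCH_NORMAL, 48.0f, L"en-us", &format);

        const wchar_t text[] = L"Engineering Windows 7";
        const UINT32 textLength = ARRAYSIZE(text) - 1;
        IDWriteTextLayout* layout = nullptr;
        factory->CreateTextLayout(text, textLength, format, 1024.0f, 200.0f, &layout);

        // Enable stylistic set 7 ("ss07") over the whole string; whatever draws this
        // layout then picks up Gabriola's contextual letterforms automatically.
        IDWriteTypography* typography = nullptr;
        factory->CreateTypography(&typography);
        DWRITE_FONT_FEATURE ss07 = { DWRITE_FONT_FEATURE_TAG_STYLISTIC_SET_7, 1 };
        typography->AddFontFeature(ss07);
        DWRITE_TEXT_RANGE all = { 0, textLength };
        layout->SetTypography(typography, all);

        // ... hand the layout to Direct2D, GDI, or a custom text renderer to draw ...

        typography->Release(); layout->Release(); format->Release(); factory->Release();
        return 0;
    }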

    New APIs

    Rendering text is complex and involved, even though it seems like something that should be straightforward. There are probably hundreds of ways to format text in a document, and often many paths that ultimately yield the same results. HTML/CSS is a complex standard and is a great example of the richness of how text may be formatted and typeset. Underneath the formatting logic lies the language requirement – the rules of writing for the language. Windows has long supported Unicode – another complex standard, for global data interchange – and supports an increasing number of Unicode scripts with every release. The mapping from the input text to the final glyphs in the font requires intricate transformations, which involve parsing font data and analyzing the language’s writing pattern. Once the glyphs are finalized, they are rasterized, merged, and filtered into the final visual on the display device.

    Due to this staged nature, different types of applications require different support from the text system. While a typical application, such as the legendary “Hello world” program, may be satisfied with just the ability to show some text to the user, the same level of support is hardly adequate for document preparation systems such as Microsoft Word and Adobe InDesign. Some of the more mature application code bases may also have to deal with different graphics systems. This makes it hard in practice for a text system tied to a particular graphics model to be widely useful across the wide variety of applications in the Windows ecosystem.

    It became obvious to us early on during the planning stage of Windows 7 that text processing is not homogeneous, and that different types of applications have different needs and require different levels of support. The appropriate level of programming access to the text functionality is as important as the functionality itself. The new text system in Windows 7 is assembled into a self-sufficient system called DirectWrite. The API is provided in four layers – the interfaces for font data, rendering support, language processing, and typesetting – each built upon the one below, with the lower layers making no requirements on the upper ones, and none depending on a specific graphics model. To illustrate the latter point, the figure below shows a sample application that uses the new typesetting interface and language processor, while the final rendering happens as extruded, filled 3D geometry generated from Direct2D, the 2D graphics environment also new to Windows 7. Both systems were introduced at PDC 2008 as the new graphics foundation in Windows 7.

    Sample text output in DirectWrite using Direct2D
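
    To make the layering point concrete, here is a minimal sketch (our illustration, not product code) in which the typesetting layer measures text with no graphics surface involved at all; the resulting layout could later be drawn through Direct2D, GDI, or GDI+:

    #include <windows.h>
    #include <dwrite.h>
    #include <cstdio>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory* factory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown**>(&factory));

        IDWriteTextFormat* format = nullptr;
        factory->CreateTextFormat(L"Segoe UI", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                                  DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                                  14.0f, L"en-us", &format);

        const wchar_t text[] = L"Hello, DirectWrite";
        IDWriteTextLayout* layout = nullptr;
        factory->CreateTextLayout(text, ARRAYSIZE(text) - 1, format,
                                  300.0f, 100.0f, &layout);    // layout box given in DIPs

        DWRITE_TEXT_METRICS metrics = {};
        layout->GetMetrics(&metrics);   // measured with no render target in sight
        std::printf("laid out %u line(s), %.2f x %.2f DIPs\n",
                    metrics.lineCount, metrics.width, metrics.height);

        layout->Release(); format->Release(); factory->Release();
        return 0;
    }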

    DirectWrite preserves developers’ investments in existing technologies such as GDI and GDI+ in three important respects. First, the layering design of DirectWrite described above allows a clean separation between the two fundamental processes of placing and rendering text. It enables applications to use DirectWrite to place text while having it rendered onto traditional graphics surfaces such as GDI and GDI+. The reverse scenario, in which the application uses GDI to place text while having it rendered through DirectWrite, is also naturally supported.

    The second aspect of compatibility comes from the fact that DirectWrite also supports all existing methods for placing and rendering text found in GDI. A DirectWrite application can place and render text in the same manner as GDI does without actually using GDI. Text placed and rendered under this compatibility mode is indistinguishable from GDI text from the user’s point of view, thus preserving the existing layout of application UI and text documents.

    Lastly, DirectWrite exposes a set of APIs that interoperate with GDI. An application selecting a GDI font object can turn it into a DirectWrite font object and vice versa. Since the font system sits at the bottom of the DirectWrite API layers, it provides a natural interoperability point that is fundamental enough to ensure a high degree of data preservation and correctness. Once the application acquires a DirectWrite font object, it can use it with any other DirectWrite API that requires one from that point onward. The conversion from a DirectWrite font object back to a GDI font object allows the rest of a GDI-based application to function with no change while still reaping the benefit of DirectWrite’s new and improved font model. As a real-world example, the XPS print rasterizer in Windows 7 is implemented on top of DirectWrite and uses DirectWrite’s interoperability APIs to convert back to GDI fonts as part of converting an XPS-based print job for a non-XPS printer driver. The Windows 7 XPS Viewer also uses DirectWrite alongside GDI+ rendering for its onscreen display.
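
    A small sketch of that interoperability point (our illustration, not the XPS rasterizer’s code): describe a font with a GDI LOGFONT, convert it to a DirectWrite font object, and round-trip it back.

    #include <windows.h>
    #include <dwrite.h>
    #include <wchar.h>
    #include <cstdio>
    #pragma comment(lib, "dwrite.lib")

    int main() {
        IDWriteFactory* factory = nullptr;
        DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                            reinterpret_cast<IUnknown**>(&factory));

        IDWriteGdiInterop* interop = nullptr;
        factory->GetGdiInterop(&interop);

        // Describe a font the GDI way ...
        LOGFONTW lf = {};
        lf.lfHeight = -20;
        wcscpy_s(lf.lfFaceName, L"Segoe UI");

        // ... convert it into a DirectWrite font object ...
        IDWriteFont* font = nullptr;
        if (SUCCEEDED(interop->CreateFontFromLOGFONT(&lf, &font))) {
            // ... and round-trip it back to a LOGFONT for the GDI side of the app.
            LOGFONTW roundTrip = {};
            BOOL isSystemFont = FALSE;
            interop->ConvertFontToLOGFONT(font, &roundTrip, &isSystemFont);
            std::printf("round-tripped face: %ls\n", roundTrip.lfFaceName);
            font->Release();
        }

        interop->Release();
        factory->Release();
        return 0;
    }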

    There’s a lot more to the details of the API. In the PDC session linked to above, Leonardo Blanco and Kam VedBrat go into the details of DirectWrite and Direct2D and how to develop applications such as this.

    The world has changed a lot since the first text APIs of Windows GDI, such as TextOut or ExtTextOut in Windows NT 3.1 (and the subsequent API additions). The evolution of support for text is a critical part of the underpinnings of Windows 7. We continue to improve this most “basic” element of a graphical operating system so that regardless of the language, script, or device used to render text, Windows will offer a great set of tools and APIs for developers and a great experience for end-users.

    --Worachai

  • Engineering Windows 7

    Recognizing Improvements in Windows 7 Handwriting

    • 28 Comments

    Microsoft has been working on handwriting recognition for over 15 years going back to the Pen extensions for Windows 3.0.  With the increased integration and broad availability of the handwriting components present in Windows Vista we continue to see increased use of handwriting with Windows PCs.  We see many customers using handwriting across a wide variety of applications including schools, hospitals, banking, insurance, government, and more.  It is exciting to see this natural form of interaction used in new scenarios.  Of course one thing we need to continue to do is improve the quality of recognition as well as the availability of recognizers in more languages around the world.  In this post, Yvonne, a Program Manager on our User Interface Platform team, provides a perspective on engineering new recognizers and recognition improvements in Windows 7.  --Steven

    Hi, my name is Yvonne and I’m a Program Manager on the Tablet PC and Handwriting Recognition team. This post is about the work we’ve done to improve recognition in handwriting for Windows 7.

    Microsoft has invested in pen-based computing since the early 1990s, and with the release of Windows Vista handwriting recognizers are available for 12 languages: English (US and UK), German, French, Spanish, Italian, Dutch, Brazilian Portuguese, Chinese (Simplified and Traditional), Japanese, and Korean. Customers frequently ask us when we plan to ship more languages and why a specific language is not yet supported. We are planning to ship new and improved languages for Windows 7, including Norwegian, Swedish, Finnish, Danish, Russian, and Polish, and the list continues to grow. Let’s explore what it takes to develop new handwriting recognizers.

    Windows has true cursive handwriting recognition – you don’t need to learn to write in a special way. In fact, we’ve taught (or “trained” as we say) Windows the handwriting styles of thousands of people, and Windows learns more about your style as you use it. Over the last 16 years we’ve developed powerful engines for recognizing handwriting, and we continue to tune them to make them more accurate and faster, and to add new capabilities, such as the ability to learn from you, introduced in Vista. Supporting a new language is much more than adding new dictionaries – each new language is a major investment. It starts with collecting native handwriting; next we analyze the data and go through iterations of training and tuning; and finally the system gets to you and continues to improve as you use it.

    Data Collection

    The development of a new handwriting recognizer starts with a huge data collection effort. We collect millions of words and characters of written text from tens of thousands of writers from all around the world.

    Before I describe our collection efforts, I would like to answer a question we are frequently asked: “Why can’t you just use an existing recognizer with a new dictionary?” One reason is that some languages have special characters or accents. But the overriding reason is that people in different regions of the world learn to write in different ways, even between countries with the same language like the UK and US. Characters that may look visually very similar to you can actually be quite different to the computer. This is why we need to collect real world data that captures exactly how characters, punctuation marks and other shapes are written.

    Setting up a data collection effort is challenging and time consuming because we want to ensure that we collect the “right kind of data”. We carefully choose our collection labs in the respective countries for which we develop recognizers.

    Before we start our data collection in the labs, we configure our collection tools, prepare documentation, and compile language scripts that will guide our volunteers through the collection process. Our scripts are carefully prepared by native speakers in the respective language to ensure that we collect only orthographically correct data, data from different writing styles, and data that covers all characters, numbers, symbols and signs that are relevant to a specific language. All of our scripts are proofread and edited before they are blessed to be used at the collection labs.

    Once our tools and scripts are ready, we open our labs and start to recruit volunteers to donate their handwriting samples. Our recruitment efforts ensure that we have balanced demographics, such as gender, age, left-handedness, and educational background, that represent the majority of the population for that country.

    A supervisor at the lab instructs the volunteers to copy the text as it is displayed in the collection tool in their own writing style. What is important to note is that we want to collect writing samples that accurately represent the person’s natural way of writing. We therefore encourage volunteers to treat “pen and tablet” like “pen and paper”. If one of the volunteers tends to write in big, curvy strokes, then we want to collect his/her big, curvy strokes during the collection session. High quality data in this context refers to data that was naturally written.

    Here is a snapshot of what our collection tool looks like:


    Figure 1: Collection Tool

    A collection session lasts between 60 and 90 minutes, at which point a volunteer has donated a significant amount of handwritten data without feeling fatigued. The donated data is then uploaded and stored in our database at Microsoft, ready for future use. The written samples contain important information like stroke order, start and end points, spacing, and other characteristics that are essential to train our new recognizer.

    Let’s take a look at some of our samples in our database to illustrate the great variation among ink samples:


    Figure 2: Ink samples illustrating different stroke orders.

    The screenshot shows how three different volunteers inked the word “black”. The different colors illustrate the exact stroke order in which the word was written. Our first two volunteers used five strokes to write the word “black”; our third volunteer used four strokes. Please also note how our third volunteer used only one stroke to ink the letters “ck”, while our first volunteer used three strokes for the same combination of letters. All of this information is used to train our recognizers.

    Neural Network and Language Model

    Once we have collected a sufficient amount of inked data, we split our data into a training set, used by our development team, and a “blind” set, used by our test team. The training set is then used to train the Neural Network, which is largely responsible for the magic that takes place during the recognition process. Good, naturally written data is essential in developing a high quality recognizer; the recognizer can’t be any better than its training set. The more high quality data we feed into our Neural Network, the better equipped we are to handle sloppy cursive handwriting.

    Our Neural Network is a Time-Delay Neural Network (TDNN), which can handle the connected letters of cursive scripts. A TDNN takes preceding and following stroke segments into consideration when computing the probabilities of letters, digits, and characters for each segment of ink. The output of the TDNN is powerful but not good enough when handwriting is sloppy. In order to come within reach of human recognition accuracy, we have to employ information that goes beyond the shape of the letter: we call this the Language Model context. The majority of this Language Model context comes in the form of the lexicon, which is a word list of valid spellings for a given language. For many languages, this is the same lexicon that the spellchecker uses. The TDNN and the lexicon work closely together to compute word probabilities and output the top suggestions for the given input.
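
    As a toy illustration of that interplay (this is emphatically not Microsoft’s recognizer – the segmentation, probabilities, and word list below are all made up), consider combining per-segment letter probabilities, of the kind a TDNN produces, with a small lexicon to pick the most likely word:

    #include <cmath>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Made-up "TDNN output": for each ink segment, a probability per letter.
        std::vector<std::map<char, double>> segments = {
            {{'b', 0.7}, {'l', 0.2}, {'h', 0.1}},
            {{'l', 0.6}, {'i', 0.3}, {'t', 0.1}},
            {{'a', 0.5}, {'o', 0.4}, {'u', 0.1}},
            {{'c', 0.8}, {'e', 0.2}},
            {{'k', 0.9}, {'h', 0.1}},
        };

        // Tiny stand-in for the lexicon (the language-model word list).
        const std::vector<std::string> lexicon = {"black", "block", "blank", "blade"};

        std::string best;
        double bestScore = -1e9;
        for (const std::string& word : lexicon) {
            if (word.size() != segments.size()) continue;  // naive one-letter-per-segment match
            double score = 0.0;                            // log-probability of the word
            for (size_t i = 0; i < word.size(); ++i) {
                auto it = segments[i].find(word[i]);
                double p = (it != segments[i].end()) ? it->second : 1e-4;  // smoothing
                score += std::log(p);
            }
            if (score > bestScore) { bestScore = score; best = word; }
        }
        std::printf("best lexicon match: %s\n", best.c_str());   // prints "black"
        return 0;
    }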

    Training the Neural Network is an involved process that takes time. We often experiment with borrowing data from other languages to increase the size of the training data, with the ultimate goal of boosting recognition accuracy. Borrowing characters from other languages does not always lead to success. As I mentioned above, stroke order, letter shape, writing styles, and letter size can differ significantly from country to country and can have a negative impact on the performance of the TDNN. It often takes us several rounds of training, re-training, and tuning before we find “the right formula” that will lead to high recognition accuracy.

    How do we know if we are headed in the right direction when we build a new recognizer? This is an important question that the test team and native speakers answer for us. The test team is responsible for generating our recognition accuracy metrics that reflect how good our recognizer is. These accuracy metrics are based on our blind test set which is the collected data that development could not use for training. In addition to our accuracy metrics, we work with native speakers in house and at our world-wide subsidiaries to get feedback and further input.

    Improving the recognizers through personalization

    In the previous paragraphs I outlined how we develop high quality recognizers that can handle a wide variety of writing styles. But there is more: each person can also train the recognizer on his or her unique writing style. The training that is done to teach the recognizer a personal writing style is the same training that happens before Microsoft ships the product. The only difference is that we are now collecting unique training data from a specific person (and not from thousands of people). We call this process “Personalization”.


    Figure 3: Personalization Wizard (Sentence module).

    As the screenshot of our Personalization wizard illustrates, a person is asked to write the requested sentence to provide his/her ink samples. The more data a person donates during the personalization process, the better the recognizer will become. In addition to providing writing samples based on specified sentences, a person can target specific recognition errors, shapes, and characters that will all be used for training. Our Personalization feature is complex and offers a variety of different modules that enable a person to optimally tune the recognizer. We are proud to announce that Personalization will be available for all Vista languages and all new Windows 7 languages. We encourage you to use this feature to improve your recognition accuracy.

    We continue to work on improving our recognizers, which also means that we are incorporating our customers’ feedback through online telemetry (anonymous, private, voluntary, and opt-in). In Windows Vista we released a new feature called “Report Handwriting Recognition Errors”, which gives people the opportunity to submit ink samples that the recognizer did not recognize correctly. After the person has corrected a word in the Tablet Input Panel (TIP), we enable a menu that allows the person to send the misrecognized ink together with its corrected version to our team.

    Here is a screenshot of what our error reporting tool looks like:


    Figure 4: With “Report Handwriting Recognition Errors” people can choose which of the misrecognized ink samples they want to submit.

    We receive approximately 2000 error reports per week. Each error report is stored in our database before we analyze it and use it to improve our next generation of recognizers. As you can imagine, real world data is extremely helpful because it is only this type of data that can reveal shortcomings of our recognizers.

    We value and appreciate every single error report. Keep sending us your feedback, so that we can use it to improve the magic of our present and future recognizers.

    Thank you,

    – Yvonne representing the handwriting recognition efforts

  • Engineering Windows 7

    UAC Feedback and Follow-Up

    • 198 Comments

    When we started the “E7” blog we were both excited and also a bit uneasy. The excitement is obvious. The unease is because at some point we knew we would mess up. We weren’t sure if we would mess up because we were blogging about a poorly designed feature or mess up because we were blogging poorly about a well-designed feature. To some it appears as though with the topic of UAC we’ve managed to do both. Our dialog is at that point where many do not feel listened to and also many feel various viewpoints are not well-informed. That’s not the dialog we set out to have and we’re going to do our best to improve.

    This post is an attempt to get both the blog right and the feature right. We don’t like where we are in terms of how folks are feeling and we don’t feel good – Windows 7 is too much fun and folks are having too much fun for us to be having the dialog we’re having. We hope this post allows us to get back to having fun!

    To start we’ll just show representative comments from the spectrum of feedback. We’ll then talk about the changes we’re making and also make sure we’re all on the same page regarding how we move forward. In terms of comments we’ve heard the following:

    @sroussey says:

    You have 95% of the people out there think you got it wrong, even if they are the ones that got it wrong. The problem is that they are the one's that buy and recommend your product. So do you give them a false sense of increased security by implementing the change (not unlike security by obscurity) and making them happy, or do you just fortify the real security boundaries?

    And @Thack says:

    Jon,

    Thanks for sharing your thoughts.  I understand your points.

    Now, I want add my voice to the call for one very simple change:

    Treat the UAC prompting level as a special case, such that ANY change to it, whether from the user or a program, generates a UAC prompt, regardless of the type of account the user has, and regardless of the current prompting level.

    That is all we are asking.  No other changes.  Leave the default level as it is, and keep UAC as it is.  We're just talking about the very specific case of CHANGES to the UAC prompting level.

    It will NOT be a big nuisance - most people only ever change the UAC level once (if at all).

    Despite your assurances, I REALLY WANT TO KNOW if anything tries to alter the UAC prompting level. 

    The fact that nobody has yet demonstrated how the putative malware can get into your machine is NO argument.  Somebody WILL get past those other boundaries eventually.

    Even if you aren't convinced by my argument, then the PR argument must be a no-brainer for Microsoft.

    PLEASE, Jon, it's just a small change that will gain a LOT of user confidence and a LOT of good PR.

    Thack

    With this feedback and a lot more we are going to deliver two changes to the Release Candidate that we’ll all see. First, the UAC control panel will run in a high integrity process, which requires elevation. That was already in the works before this discussion and doing this prevents all the mechanics around SendKeys and the like from working. Second, changing the level of the UAC will also prompt for confirmation.
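
    For readers curious what “high integrity” means in practice, the sketch below (our illustration only, not the UAC code) reads the current process’s integrity level from its token; an elevated process such as the updated UAC control panel runs at High integrity, while a standard process runs at Medium:

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE token = nullptr;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token)) return 1;

        BYTE buffer[sizeof(TOKEN_MANDATORY_LABEL) + SECURITY_MAX_SID_SIZE] = {};
        DWORD needed = 0;
        if (GetTokenInformation(token, TokenIntegrityLevel, buffer, sizeof(buffer), &needed)) {
            auto* label = reinterpret_cast<TOKEN_MANDATORY_LABEL*>(buffer);
            DWORD count = *GetSidSubAuthorityCount(label->Label.Sid);
            DWORD rid = *GetSidSubAuthority(label->Label.Sid, count - 1);
            std::printf("integrity level RID: 0x%04lx (high integrity = 0x%04lx)\n",
                        rid, static_cast<unsigned long>(SECURITY_MANDATORY_HIGH_RID));
        }
        CloseHandle(token);
        return 0;
    }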

    @mdaria510 says:

    Sometimes, inconsistency with your own ideals is a good thing. Make an exception, if only to put people's fears to rest.

    That sums up where we are heading. The first change was a bug fix, and we actually have a couple of others similar to that—this is a beta still, even if many of us are running it full time. The second change is due directly to the feedback we’re seeing. This “inconsistency” in the model is exactly the path we’re taking. The way we’re going to think about this is that the UAC setting is something like a password: to change your password you need to enter your old password.

    The feedback is that UAC is special because it could be used to silently disable future warnings if that change did not require elevation, so changing the UAC setting will now require elevation. To the points in the comments, we also don’t want to create a sense or expectation of security that is not there—you should still not download code and run it unless you trust the source. HTML, EXE, VBS, BAT, CMD and more are all code, and all have the potential to alter the environment (user settings, user files) running as a standard user or an administrator. We’re focused on helping people make sure that code doesn’t get on the machine without consent, and many third party tools can help more as well. We want people to be comfortable with the new UAC control and the new default setting, so we’ll make the changes outlined above, as the feedback has been clear.

    While we’re discussing this we want to make sure we’re all on the same page going forward in terms of how we will evaluate the security of Windows 7. Aside from the UAC setting, the discussions of the vulnerability aspects of the Windows 7 Beta have each started with getting code on the machine, which the mechanisms of Windows have prevented in the cases shown. We have also heard of security concerns that involve multiple steps to demonstrate a potential exploit. It is important to look at the first step—if the first step is “first get code running on the machine” then nothing after that is material, whether it is changing settings or anything else.  We will treat very seriously the ability to get code on a machine and run it without consent. As Jon’s post highlighted briefly, the work in Windows 7 is about the increased protections in place to secure your PC from acquiring and running code without your consent, and of course we continue to make sure Windows code is secure against both tampering with and circumvention of the protections in the system.

    We want to reiterate the security of the system overall. Windows 7 is SD3+C and is designed to be more secure than Vista—that’s our priority. None of us want Windows 7 to be perceived as less secure than Vista in any way, because our design point is to make sure it is more secure than Windows Vista, by default.

    We said we thought we were bound to make a mistake in the process of designing and blogging about Windows 7. We want to continue the dialog and hopefully everyone recognizes that engineering, perhaps especially engineering Windows 7, is sometimes going to be a lively discussion with a broad spectrum of viewpoints expressed. We don’t want the discussion to stop being so lively or the viewpoints to stop being expressed, but we do want the chance to learn and to be honest about what we learned and hope for the same in return. This blog has almost been like building an extra product for us, and we’re having a fantastic experience. Let’s all get back to work and to the dialog about Engineering Windows 7. And of course most importantly, we will continue to hear all points of view and share our point of view and work together to deliver a Windows 7 product that we can all feel good about.

    --Jon and Steven

  • Engineering Windows 7

    Update on UAC

    • 90 Comments

    Hi, Jon DeVaan here to talk to you about the recent UAC feedback we’ve been receiving.

    Most of our work finishing Windows 7 is focused on responding to feedback. The UAC feedback is interesting along a few dimensions of the engineering decision-making process. I thought that exploring those dimensions would make for an interesting E7 blog entry. This is our third discussion about UAC, and for those interested in the evolution of the feature in Windows it is worth seeing the two previous posts (post #1 and post #2) and also reading the comments from many of you.

    We are flattered by the response to the Windows 7 beta so far and working hard at further refining the product based on feedback and telemetry as we work towards the Release Candidate. For all of us working on Windows it is humbling to know that our work affects so many people around the world. The recent feedback is showing us just how much passion people have for Windows! Again we are humbled and excited to be a part of an amazing community of people working to bring the value of computing to a billion people around the world. Thank you very much for all of the thoughts and comments you have contributed so far.

    UAC is one of those features that has a broad spectrum of viewpoints, with advocates staking out both “ends” of the spectrum as well as all points in between, and often doing so rather stridently. In this case we might represent the ends of the spectrum as “security” on one end and “usability” on the other. Of course, in reality this is not a bi-polar issue; there is a spectrum of perfectly viable design points in between. Security experts around the world have lived with this basic tension forever, and there have certainly been systems designed to be so secure that they are secure from the very people who are supposed to benefit from them. A personal example: my bank recently changed the security regimen on its online banking site. It is so convoluted I am switching banks. Seriously!

    Clarifying Misperceptions

    As people have commented on our current UAC design (and people have commented on those comments) it is clear that there is conflation of a few things, and a set of misperceptions that need to be cleared up before we talk about the engineering decisions made on UAC. These engineering decisions have been made while we carry forth our secure development lifecycle principles pioneered in Windows XP SP2, and most importantly the principle of “secure by default” as part of SD3+C. Windows 7 upholds those principles and does so with a renewed focus on making sure everyone feels they are in control of their PC experience as we have talked about in many posts.

    The first issue to untangle is the difference between malware making it onto a PC and being run, versus what it can do once it is running. There has been no report of a way for malware to make it onto a PC without consent. All of the feedback so far concerns the behavior of UAC once malware has found its way onto the PC and is running. Microsoft’s position is that the reports about UAC do not constitute a vulnerability, because they have not shown a way for malware to get onto the machine in the first place without express consent. Some people have taken the “it’s not a vulnerability” position to mean we aren’t taking the other parts of the issue seriously. Please know we take all of the feedback we receive seriously.

    The word “vulnerability” has a very specific meaning in the security area. Microsoft has one of the leading security agencies in the world in the Microsoft Security Response Center (secure@microsoft.com), which monitors the greater ecosystem for security threats and manages the response to any threat or vulnerability related to Microsoft products. By any definition that is generally accepted across the worldwide security community, the recent feedback does not represent a vulnerability, since it does not allow the malicious software to reach the computer in the first place.

    It is worth pointing out the defenses that exist in Windows Vista that keep malware from getting on the PC in the first place. When using Internet Explorer (other browsers have similar security steps as well) to browse to a .vbs file or .exe file, for example, the person will see prompts like the ones shown below:

    [Screenshots: Internet Explorer security prompts shown when downloading a .vbs or .exe file]

    Internet Explorer 8 has also introduced many new features to thwart malware distribution (see http://blogs.msdn.com/ie/archive/2008/08/29/trustworthy-browsing-with-ie8-summary.aspx ). One of my favorites is the SmartScreen® Filter which helps people understand when they are about to visit a malicious site. There are other features visible and hidden that make getting malware onto a PC much more difficult.

    [Screenshot] A SmartScreen® display from IE 8

    Additionally, if one attempts to open an attachment in a modern email program (such as Windows Live Mail) the malware file is blocked:

    [Screenshot: Windows Live Mail blocking a malicious attachment]

    Much of the recent feedback has failed to take into account the ways that Windows 7 is better than Windows Vista at preventing malware from reaching the PC in the first place. In Windows 7 we have continued to focus on improving the ability to stop malware before it is installed or running on a PC.

    The second issue to untangle is about the difference in behavior between different UAC settings. In Windows 7, we have four settings for the UAC feature: “Never Notify,” “Notify me only when programs try to make changes to my computer (without desktop dimming),” “Notify me only when programs try to make changes to my computer (with desktop dimming),” and “Always Notify.” In Windows Vista there were only two choices, the equivalent of “Never Notify” and “Always Notify.” The Vista UI made it difficult for people to choose “Never Notify,” which effectively left people choosing between the two extremes of the implementation. Windows 7 offers you more choice and control over this feature, which is particularly interesting to many of you based on the feedback we have received.
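
    As an aside for the technically curious: the notification level ultimately corresponds to a pair of values in the registry under HKEY_LOCAL_MACHINE. The minimal sketch below simply reads those values; the mapping of slider positions to values given in the comment is our illustration of the common configurations, not an exhaustive or authoritative list.

    ```cpp
    // Minimal sketch: read the registry values behind the UAC notification slider.
    // (Illustrative only; the slider-to-value mapping in the comment is an assumption.)
    #include <windows.h>
    #include <stdio.h>

    int wmain()
    {
        const wchar_t *key = L"SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System";
        DWORD consent = 0, secureDesktop = 0, size = sizeof(DWORD);

        RegGetValueW(HKEY_LOCAL_MACHINE, key, L"ConsentPromptBehaviorAdmin",
                     RRF_RT_REG_DWORD, nullptr, &consent, &size);
        size = sizeof(DWORD);
        RegGetValueW(HKEY_LOCAL_MACHINE, key, L"PromptOnSecureDesktop",
                     RRF_RT_REG_DWORD, nullptr, &secureDesktop, &size);

        // Assumed mapping: 2/1 = Always Notify, 5/1 = default (with dimming),
        // 5/0 = notify without dimming, 0/0 = Never Notify.
        wprintf(L"ConsentPromptBehaviorAdmin=%lu, PromptOnSecureDesktop=%lu\n",
                consent, secureDesktop);
        return 0;
    }
    ```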

    The recent feedback on UAC is about the behavior of the “Notify me only when programs try to make changes to my computer” settings. The feedback has been clear that it is not related to UAC when set to “Always Notify.” So if anyone says something like, “UAC is broken,” it is easy to see they are mischaracterizing the feedback.

    The Purpose of UAC

    We are listening to the feedback on how “Notify me only when…” works in Windows 7. It is important to bring in some additional context when explaining our design choice. We chose our default settings to serve a broad range of customers, based on the feedback we have received about improving UAC as a whole. We have learned from our customers participating in the Customer Experience Improvement Program, Windows Feedback Panel, user surveys, in-field user testing, and in-house usability testing that the benefit of the information provided by the UAC consent dialog decreases substantially as the number of notifications increases. So for the general population, we know we have to present only key information to avoid the reflex to “answer yes”.

    One important thing to know is that UAC is not a security boundary. UAC helps people be more secure, but it is not a cure-all. UAC helps most by being the prompt before software is installed. This part of UAC is in full force when the “Notify me only when…” setting is used. UAC also prompts for other system-wide changes that require administrator privileges which, considered in the abstract, would seem to be an effective counter-measure to malware after it is running, but the practical experience is that its effect is limited. For example, clever malware will avoid operations that require elevation. There are other human behavior factors which were discussed in our earlier blog posts (post #1 and post #2).

    UAC also helps software developers improve their programs to run without requiring administrator privileges. The most effective way to secure a system against malware is to run with standard user privileges. As more software works well without administrator privileges, more people will run as standard user. We expect that anyone responsible for a set of Windows 7 machines (such as IT Administrators or the family helpdesk worker (like me!)) will administer them to use standard user accounts. The recent feedback has noted explicitly that running as standard user works well. Administrators also have Group Policy at their disposal to enforce the UAC setting to “Always Notify” if they choose to manage their machines with administrator accounts instead of standard user accounts.

    Recapping the discussion so far, we know that the recent feedback does not represent a security vulnerability because malicious software would already need to be running on the system. We know that Windows 7 and IE8 together provide improved protection for users to prevent malware from making it onto their machines. We know that the feedback does not apply to the “Always Notify” setting of UAC; and we know that UAC is not 100% effective at stopping malware once it is running. One might ask, why does the “Notify me only when…” setting exist, and why is it the default?

    Customer-Driven Engineering

    The creation of the “Notify me only when…” setting and our choice of it as the default is a design choice along the spectrum inherent in security design as mentioned above. Before we started Windows 7 we certainly had a lot of feedback about how the Vista UAC feature displayed too many prompts. The new UAC setting is designed to be responsive to this feedback. A lot of the recent feedback has been of the form of, “I’ll set it to ‘Always Notify,’ but ‘regular people’ also need to be more secure.” I am sure security-conscious people feel that way, and I am glad that Windows 7 has the setting that works great for their needs. But what do these so-called “regular people” want? How to choose the default, while honoring our secure design principles, for these people is a very interesting question.

    In making our choice for the default setting for the Windows 7 beta we monitored the behavior of two groups of regular people running the M3 build. Half were set to “Notify me only when…” and half to “Always Notify.” We analyzed the results and attitudes of these people to inform our choice. This study, along with our data from the Customer Experience Improvement Program, Windows Feedback Panel, user surveys, and in house usability testing, informed our choice for the beta, and informed the way we want to use telemetry from the beta to validate our final choice for the setting.

    A key metric that came out of the study was the threshold of two prompts during a session. (A session is the time from power up to power down, or a day, whichever is shorter.) If people see more than two prompts in a session they feel that the prompts are irritating and interfering with their use of the computer. In comparing the two groups we found that the group with the “Always Notify” setting was nearly four times as likely to have sessions with more than two prompts (a 1 in 6.7 chance vs a 1 in 24 chance). We gathered the statistic for how many people in the sample had malware make it onto their machine (as measured by Windows Defender cleanings) and found there was no meaningful difference in malware infestation rates between the two groups. We will continue to collect data during the beta to see if these results hold true in a much broader study.
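
    For the curious, the “nearly four times” figure follows directly from the two rates quoted above:

    $$\frac{1/6.7}{1/24} = \frac{24}{6.7} \approx 3.6$$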

    We are very happy with the positive feedback we have received about UAC from beta testers and individual users overall. This helps us validate our “regular people” focus in terms of the trade-offs we continue to consider in this design choice. We will continue to monitor the feedback and our telemetry data to continue to improve our design choices on UAC.

    So as you can see there is a lot of depth to the discussion of UAC and the improvements made in Windows 7 in UAC itself and in improving ways to prevent malware from ever reaching a PC. We are working hard to be responsive to the feedback we received from Vista to provide the right usability and security for people of all types. We believe we’ve made good progress and are listening carefully to the feedback on our UAC changes. Again please accept our most sincere thanks for the passion and feedback on Windows 7. While we cannot implement features the way each and every one of you might wish, we are listening and making a sincere effort to properly weigh all points of view. Our goal is to create a useful, usable, and secure Windows for all types of people.

    Jon

  • Engineering Windows 7

    Our Next Engineering Milestone

    • 129 Comments

    Many posts start with a thank you and I want to start this post with an extra special thank you on behalf of the entire Windows team for all the installs and usage we are seeing of the Windows 7 Beta. We’ve had millions of installations of Windows 7 from which we are receiving telemetry, which is simply incredible. And from those who click on the “Send Feedback” button we are receiving detailed bug reports and of course many suggestions. There is simply no way we could move from Beta through Final Release of Windows 7 without this type of breadth coverage and engagement from you in the development cycle. There’s been such an incredible response, with many folks even blogging about how they have moved to using Windows 7 Beta on all their machines and have been super happy. The question we get most often is “if the Beta expires in August what will I do—I don’t want to return to my old [sic] operating system.” For a Beta release, that is quite a compliment and we’re very appreciative of such a kind response.

    This post is about the path from where we are today, Beta, to our RTM (Release To Manufacturing), building on the discussion of this topic that started at the PDC. This post is in no way an announcement of a ship date, change in plans, or change in our previously described process, but rather it provides additional detail and a forward looking view of the path to RTM and General Availability. The motivation for this, in addition to the high level of interest in Windows 7, is that we’re now seeing how releasing Windows is not something that Microsoft does “solo”, but rather is something that we do as one part of the overall PC ecosystem. Obviously we have a big responsibility to do our part, one we take very seriously of course. The last stages of a Windows release are a partnership across the entire ecosystem working to make sure that the incredible variety of choices you have for PCs, software, and peripherals work together to bring you a complete and satisfying Windows 7 experience.

    The next milestone for the development of Windows 7 is the Release Candidate or “RC”. Historically the Release Candidate has signaled “we’re pretty close and we want people to start testing the release, especially because all the features are done.” As we have said before, with Windows 7 we chose a slightly different approach which we were clear up front about and are all now experiencing together and out in the open. The Pre-Beta from the PDC was a release where we said it was substantially API complete and even for the areas that were not in the release we detailed the APIs and experience in the sessions at the PDC. At that time we announced that the Beta test in early 2009 would be both API and feature complete, widely available, and would be the only Beta test. We continued this dialog with our hardware partners at WinHEC. We also said that many ecosystem partners (PC makers, software vendors, and hardware makers) will, as has been the case, continue to receive interim builds on a regular basis. This is where we stand today. We’ve released the feature complete Beta and have made it available broadly around the world (though we know folks have requested even more languages). As a development team we’re doing just what many of you do, which is choosing to run the Beta full time on many PCs at home and work (personally I have at least 9 different machines running it full time), and we’re running it on many thousands of individuals’ machines inside Microsoft, and thousands of machines in our labs as well.

    All the folks running the Beta are actively contributing to fixing it. We’re getting performance telemetry, application compatibility data, usage information, and details on device requirements among other areas. This data is very structured and very actionable. We have very high-bandwidth relationships with partners and good tools to help each other to deliver a great experience. One thing you might be seeing is that hardware and software vendors might be trying out updated drivers / software enhanced for Windows 7. For example, many of the anti-virus vendors already have released compatibility packs or updates that are automatically applied to your running installation. You might notice, for example, that many GPU chipsets are being recognized and Windows 7 downloads the updated WDDM 1.1 drivers. While the Windows Vista drivers work as expected, the new 1.1 drivers provide enhanced performance and a reduced memory footprint, which can make a big difference on 1GB shared memory machines. You might insert a device and receive a recently updated version of a driver as I did for a Logitech QuickCam. Another example some of you might have seen is that the Beta requires an updated version of Skype software currently in testing. When you go to install the old version you get an error message and then the problem and solutions user interface kicks in and you are redirected to the Beta site. This type of error handling is deployed in real time as we learn more and as the ecosystem builds out support. It is only because of our partnerships across the ecosystem that such efforts are possible, even during the Beta.

    Of course, it is worth reiterating that our design point is that devices and software that work on Windows Vista and are still supported by the manufacturer will work on Windows 7 with the same software. There are classes of software and devices that are Windows-version specific for a variety of reasons, as we have talked about, and we continue to work together to deliver great solutions for Windows 7. The ability to provide people the vast array of choices and the openness of the Windows platform make all of this a massive undertaking. We continue to work to improve this while also making sure we provide the opportunities for choice and differentiation that are critical to the health and variety of the overall ecosystem. This data and the work we’re doing together with partners is the critical work going on now to reach the Release Candidate phase.

    We’re also looking carefully at all the quality metrics we gather during the Beta. We investigate crashes, hangs, app compat issues, and also real-world performance of key scenarios. A very significant portion of our effort from Beta to RC is focused exclusively on quality and performance. We want to fix bugs experienced by customers in real usage as well as our broad base of test suites and automation. A key part of this work is to fix the bugs that people really encounter and we do so by focusing our efforts on the data we receive to drive the ordering and priority of which bugs to fix. As Internet Explorer has moved to Release Candidate, we’ve seen this at work and also read about it on the IE Blog.

    Of course the other work we’re doing is refining the final product based on all the real-world usage and feedback. We’ve received a lot of verbatim feedback regarding the user experience—whether that is default settings, keyboard shortcuts, or desired options to name a few things. Needless to say just working through, structuring, and “tallying” this feedback is a massive undertaking and we have folks dedicated to doing just that. At the peak we were receiving one “Send Feedback” note every 15 seconds! As we’ve talked about in this blog, we receive a lot of feedback where we must weigh the opinions we receive because we hear from all sides of an issue—that’s to be expected and really the core design challenge. We also receive feedback where we thought something was straightforward or would work fine, but in practice needed some tuning and refinement. Over the next weeks we’ll be blogging about some of these specific changes to the product. These changes are part of the process and part of the time we have scheduled between Beta and RC.

    So right now, every day we are researching issues, resolving them, and making sure those resolutions did not cause regressions (in performance, behavior, compatibility, or reliability). The path to Release Candidate is all about getting the product to a known and shippable state both from an internal and external (Beta usage and partner ecosystem readiness) standpoint.

    We will then provide the Release Candidate as a refresh for the Beta. We expect, based on our experience with the Beta, a broad set of folks to be pretty interested in trying it out.

    With the RC, this process of feedback based on telemetry then repeats itself. However at this milestone we will be very selective about what changes we make between the Release Candidate and the final product, and very clear in communicating them. We will act on the most critical issues. The point of the Release Candidate is to make sure everyone is ready for the release and that there is time between the Release Candidate and our release to PC makers and manufacturing to validate all the work that has gone on since the pre-Beta. Again, we expect very few changes to the code. We often “joke” that this is the point of lowest productivity for the development team because we all come to work focused on the product but we write almost no code. That’s the way it has to be—the ship is on the launch pad and all the tools are put away in the toolbox to be used only in case of the most critical issues.

    As stated up front, this is a partnership and the main thing going on during this phase of the project is really about ecosystem readiness. PC makers, software vendors, hardware makers all have their own lead times. The time to prepare new products, new configurations, software updates, and all the collateral that goes with that means that Windows 7 cannot hit the streets (so to speak) until everyone has time to be ready together. Think of all those web sites, download pages, how-to articles, training materials, and peripheral packages that need to be created—this takes time and knowing that the Release Candidate is the final code that we’re all testing out in the open is reassuring for the ecosystem. Our goal is that by being deliberate, predictable, and reliable, the full PC experience is available to customers.

    We also continue to build out our compatibility lists, starting with logo products, so that our http://www.microsoft.com/windows/compatibility site is a good resource for people starting with availability. All of these come together with the PC makers creating complete “images” of Windows 7 PCs, including the full software, hardware, and driver loads. This is sort of a rehearsal for the next steps.

    At that point the product is ready for release and that’s just what we will do. We might even follow that up with a bit of a celebration!

    There’s one extra step which is what we call General Availability or GA. This step is really the time it takes literally to “fill the channel” with Windows PCs that are pre-loaded with Windows 7 and stock the stores (online or in-person) with software. We know many folks would like us to make the RTM software available right away for download, but this release will follow our more established pattern. GA also allows us time to complete the localization and ready Windows for a truly worldwide delivery in a relatively small window of time, a smaller window for Windows 7 than any previous release. It is worth noting that the Release Candidate will continue to function long enough so no one should worry and everyone should feel free to keep running the Release Candidate.

    So to summarize briefly:

    • Pre-Beta – This release at the PDC introduced the developer community to Windows 7 and represents the platform complete release and disclosure of the features.
    • Beta – This release provided a couple of million folks the opportunity to use feature complete Windows 7 while also providing the telemetry and feedback necessary for us to validate the quality, reliability, compatibility, and experience of Windows 7. As we said, we are working with our partners across the ecosystem to make sure that testing and validation and development of Windows 7-based products begins to enter final phases as we move through the Beta.
    • Release Candidate (RC) – This release will be Windows 7 as we intend to ship it. We will continue to listen to feedback and telemetry with the focus on addressing only the most critical issues that arise. We will be very clear in communicating any changes that have a visible impact on the product. This release allows the whole ecosystem to reach a known state together and make sure that we are all ready together for the Release to Manufacturing. Once we get to RC, the whole ecosystem is in “dress rehearsal” mode for the next steps.
    • Release to Manufacturing (RTM) – This release is the final Windows 7 as we intend to make available to PC makers and for retail and volume license products.
    • General Availability (GA) – This is a business milestone and represents when you can buy Windows 7 pre-installed on PCs or as full packaged product.

    The obvious question is that we know the Pre-Beta was October 28, 2008, and the Beta was January 7th, so when is the Release Candidate and RTM? The answer is forthcoming. We are currently evaluating the feedback and telemetry and working to develop a robust schedule that gets us the right level of quality in a predictable manner. Believe me, we know many people want to know more specifics. We’re on a good path and we’re making progress. We are taking a quality-based approach to completing the product and won’t be driven by imposed deadlines. We have internal metrics and milestones and our partners continue to get builds routinely so even when we reach RC, we are doing so together as partners. And it relies, rather significantly, on all of you testing the Beta and our partners who are helping us get to the finish line together.

    Shipping Windows, as we hoped this shows, is really an industry-wide partnership. As we talked about in our first post, we’re promising to deliver the best release of Windows we possibly can and that’s our goal. Together, and with a little bit more patience, we’ll achieve that goal.

    We continue to be humbled by the response to Windows 7 and are heads down on delivering a product that continues to meet your needs and the needs of our whole industry.

    --Steven on behalf of the Windows 7 team

  • Engineering Windows 7

    Showcasing Windows 7 Platform with Applets

    • 29 Comments

    About every decade we make the big decision to update what we refer to as the applets (note we’ll use applet, application, program, and tool all interchangeably as we write about these) in Windows—historically Calc (Calculator), Paint (or MS Paint, Paint Brush) and WordPad (or Write), and also the new Sticky Notes applet in Windows 7. As an old-timer, whenever I think of these tools I think of all the history behind them and how they came about. I’m sure many folks have seen the now “classic” video of our (now) CEO showing off Windows to our sales force (the last word of this video is the clue that this video was done for inside Microsoft). Windows 7 seems like a great time to update these tools. The motivation for updating the applets this release is not the 10-year mark or just time to add some applet-specific features, but the new opportunities for developers to integrate their applications with the Windows 7 desktop experience. While many use the applets as primary tools, our view of these is much more about demonstrating the overall platform experience and providing guidance to developers about how to integrate and build on Windows 7, while at the same time providing “out of box” value for everyone. There’s no real “tension” over adding more and more features to these tools as our primary focus is on showing off what’s new in Windows—after all there are many full-featured tools available that provide similar functionality for free. So let’s not fill the comments with requests for more bitmap editing features or advanced scientific calculator features :-).

    The APIs discussed in this post are all described on MSDN in the updated developer area for Windows 7 where you can find the Windows 7 developer guide. Each of the areas discussed is also supported by the PDC and WinHEC sessions on those sites.

    This post was written by several folks on our applications and gadgets team with Riyaz Pishori, the group program manager, leading the effort. --Steven

    This blog post discusses some of the platform innovations in Windows 7 and how the Windows 7 applets have adopted and showcased them, giving developers and partners a sense of the platform features they can expect in Windows 7. It also discusses how the applets have been given a facelift, both in terms of their functionality and their user experience, by focusing on key Windows design principles and platform innovations. Finally, this post can serve as a pointer or guide for application developers and ISVs to get familiar with some of the key new Windows platform innovations, see them in action, and then figure out how they can build on these APIs in their own software.

    The post is organized by each subsystem, and how Windows applets are using that particular subsystem.

    Windows Ribbon

    The Windows Ribbon User Interface is the next-generation user interface for Windows development. It brings the now-familiar Office 2007 Ribbon user interface to Windows 7, making it available to application developers and ISVs.

    There are several advantages to adopting the Windows Ribbon user interface, many of which have been talked about in the Office 2007 blogs. The Ribbon provides a rich, graphical user interface for all commands in a single place, without the need to expose various functions and commands under different menus or toolbars. The Ribbon UI is direct and self-explanatory, and has a labelled grouping of logically related commands. While using an application built on the Ribbon UI platform, the user only needs to focus on the workflow and the context of the task, rather than worry about where a particular function is located or accessible. The Ribbon UI also takes care of layout and provides consistency, in contrast to toolbars, which the user can customize in size, location, and contents. It also has built-in and improved keyboard accessibility, and making the application DPI- and theme-aware becomes easier by using the Ribbon. Finally, developing and changing the user interface is very quick due to the XML-markup-based programming model of the Ribbon User Interface.

    Paint and Wordpad are two of the first consumers of the Windows Ribbon UI platform. In Windows 7, both these applications are enhanced with a set of new features, and the user interface of these applications also needed to be brought up to the Windows 7 experience and standards. The Windows Ribbon UI is a great fit for these applications to revamp their user experience and make it consistent, and to make these applications rich, fun, and easy to use. The tasks and commands in these applications were a natural fit for the Ribbon UI framework, and it also served as an opportunity for popular native Windows applications to showcase the Windows Ribbon UI platform to consumers, as well as developers and ISVs. Many have asked about Windows Explorer and IE also using the ribbon, which we did not plan on for Windows 7. Our Windows 7 focus was on the platform and demonstrating the platform for document-centric applications such as Paint and Wordpad.

    Both these applications showcase several elements of the Windows Ribbon UI. The Application Menu of both Paint and Wordpad exposes application-related commands that are typically available through the ‘File’ menu of an application. Both the applications have a core tab set that consists of ‘Home’, which exposes most of the commands in the application, and ‘View’, which exposes the image or document viewing options in the application. The commands in both these tabs are laid out logically in groups of related functionality.

    A quick access toolbar (QAT) is provided by both Paint and Wordpad, which comes with certain defaults like Save, Undo and Redo that are meaningful to the application. The user can customize the QAT by using the QAT drop-down, or right-click on any command or group in the ribbon and add it to the QAT.

    Several ribbon commands are used in both these applications, like command buttons, split buttons, galleries, drop-downs, check boxes and toggle buttons.

    Paint and WordPad ribbon

    Paint Application Menu

    Further, both applications provide a ‘Print preview’ mode, which shows a print preview of the image or the document in context. While in a mode, all the core tabs are removed and only the mode’s tab is displayed for the user to interact with. On exiting the mode, the user is returned to the core tab set.

    Paint also exposes a contextual tab for the Text tool, which is displayed only when a text control is drawn on the canvas. The contextual tab is shown next to the core tab set when the text tool is in focus, and is removed when the text is applied to the image on the canvas. The contextual tab set contains the tools that are specific and relevant only to the text tool.

    Both the applications provide live previews through ribbon galleries: for example, font size and font name in Wordpad and Paint while formatting text; bullets and lists in Wordpad; and color selection, outline size selection, and outline and fill styles for shapes in Paint. A live preview allows the user to see the changes instantaneously on mouse hover, and then apply those changes to a selection. These previews are one of the key elements of the ribbon UI and demonstrate why the metaphor is much more than a “big toolbar” and is really a new interaction style.

    By adopting the Ribbon User Interface, both the applications inherit built-in keyboard accessibility support using ribbon Keytips, have tooltips on all commands, and have ready support for DPI and Windows themes.

    Paint and Wordpad can serve as examples of how the Ribbon UI can be easily used in MFC applications. The Windows Ribbon presents new opportunities and options for developers and ISVs to develop applications with the Ribbon User Interface. The Windows Scenic Ribbon programming model and architecture emphasizes the separation of the markup file and the C++ code files to help developers decouple the presentation and customization of the UI from the underlying application code. The platform also promotes developer-designer workflow, where the developer can focus on the application logic, while the designer can work on the UI presentation and layout. The ribbon UI is a significant investment for us and you should expect to see us continue to use it more throughout Microsoft, including an implementation in the .NET Framework as was demonstrated by Scott Guthrie at the PDC, which will be built on Windows 7 natively in the future.
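
    To give a flavor of what hosting the ribbon looks like on the native side, here is a minimal sketch. It assumes the ribbon markup has already been compiled into the module's resources under the default name APPLICATION_RIBBON and that pApplication points to the developer's own IUIApplication implementation (not shown); it is an illustration, not the applets' actual code.

    ```cpp
    // Minimal sketch: host the Windows Ribbon framework in a Win32 window.
    // Assumes the compiled ribbon markup is a resource named "APPLICATION_RIBBON" and that
    // pApplication is the caller's IUIApplication implementation (not shown here).
    #include <windows.h>
    #include <UIRibbon.h>

    HRESULT InitializeRibbon(HWND hwnd, IUIApplication *pApplication, IUIFramework **ppFramework)
    {
        *ppFramework = nullptr;

        IUIFramework *pFramework = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_UIRibbonFramework, nullptr, CLSCTX_INPROC_SERVER,
                                      IID_PPV_ARGS(&pFramework));
        if (SUCCEEDED(hr))
            hr = pFramework->Initialize(hwnd, pApplication);   // bind the framework to this window
        if (SUCCEEDED(hr))
            hr = pFramework->LoadUI(GetModuleHandleW(nullptr), L"APPLICATION_RIBBON");

        if (SUCCEEDED(hr))
            *ppFramework = pFramework;
        else if (pFramework)
            pFramework->Release();
        return hr;
    }
    ```

    The separation described above is visible even in this tiny fragment: nothing about tabs, groups, or galleries appears in the code, because all of that lives in the markup.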

    Multi-touch platform

    Windows 7 provides support for multi-touch input data, as well as supporting multi-touch in Win32 via Windows messages. The investments in the multi-touch platform include the developer platform that exposes touch APIs to applications, enhancing the core user interface in Windows 7 to optimize for touch experiences, and providing multi-touch gestures for applications to consume. Developers on Windows 7 can build on these APIs and decide on the appropriate level of touch support they would like to provide in their software.
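
    As a rough sketch of what consuming raw touch data looks like in Win32 (DrawAt below is a hypothetical painting helper, and error handling is omitted), an application opts its window in and then decodes the touch points delivered with each WM_TOUCH message:

    ```cpp
    // Minimal sketch: receive raw multi-touch input in a Win32 window.
    // DrawAt() is a hypothetical painting helper; error handling is omitted.
    #define _WIN32_WINNT 0x0601
    #include <windows.h>
    #include <vector>

    void DrawAt(DWORD contactId, POINT pt);   // hypothetical: extend the stroke for this contact

    void EnableTouch(HWND hwnd)
    {
        RegisterTouchWindow(hwnd, 0);         // opt in to WM_TOUCH for this window
    }

    LRESULT OnTouch(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        UINT cInputs = LOWORD(wParam);
        std::vector<TOUCHINPUT> inputs(cInputs);

        if (GetTouchInputInfo(reinterpret_cast<HTOUCHINPUT>(lParam), cInputs,
                              inputs.data(), sizeof(TOUCHINPUT)))
        {
            for (const TOUCHINPUT &ti : inputs)
            {
                // Touch coordinates arrive in hundredths of a pixel, in screen coordinates.
                POINT pt = { ti.x / 100, ti.y / 100 };
                ScreenToClient(hwnd, &pt);
                if (ti.dwFlags & (TOUCHEVENTF_DOWN | TOUCHEVENTF_MOVE))
                    DrawAt(ti.dwID, pt);      // one independent stroke per contact (dwID)
            }
            CloseTouchInputHandle(reinterpret_cast<HTOUCHINPUT>(lParam));
        }
        return 0;
    }
    ```

    Applications that only need the pre-packaged gestures (zoom, pan, and so on) can instead handle WM_GESTURE, which Windows sends by default to windows that have not opted in to raw touch.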

    Wordpad enhances the document reading experience by using the multi-touch platform and the zoom and pan gestures. Zooming, panning, and inertia let the user get to a particular piece of content very quickly in an intuitive fashion. By using the zoom gesture, the user can zoom in or out of the document, which is akin to using the zoom slider at the right of the Wordpad status bar. On multi-touch capable hardware, the user can zoom in and out of the document by placing their fingers anywhere within the document window and executing the zoom gesture. Wordpad also supports the pan gesture to pan through the pages of a document that is open in Wordpad. By executing the pan gesture, the user can scroll down or scroll up a document similar to using the scroll bar of the Wordpad application.

    In Paint, multi-touch data is used to allow users to paint with multiple fingers. It is an example of an application that allows multi-touch input without the use of gestures. For Paint’s functionality, providing multiple-finger painting was more compelling and enriching than allowing for zoom, pan, rotate, or other gestures that act on the picture in a read-only mode rather than an edit mode. New brushes in Paint are multi-touch enabled; they handle touch input from multiple fingers and allow the user to draw strokes on the canvas simultaneously as each finger is dragged. These brushes are also pressure-sensitive, thereby providing a realistic experience with touch by varying the stroke width based on the pressure on the screen. While adopting the multi-touch platform to enhance the end-user experience in Paint, conscious design decisions were made to preserve the single-touch experience for functionality where a multi-touch scenario does not apply, such as the color picker, magnifier, and text tool.

    By building with the multi-touch APIs, Paint and Wordpad have created more natural and intuitive interfaces on touch-enabled hardware and show “out of the box” how different capabilities can be exposed by developers in their software.

    Taskbar

    Sticky Notes (or just Notes) is an extension of a TabletPC applet available in Windows 7. One of the things that was key to the Notes experience on the desktop was the ability to quickly take all the notes away and get them back, while still making sure it is really easy to create a new note. We achieved this by having a single top-level window for the Sticky Notes application. You can minimize all your notes and view a stack of notes in the taskbar preview with a single click. The stacked preview has been achieved using the new thumbnail preview APIs that enable apps to override the default taskbar previews, which are essentially a redirected snapshot of the top-level application window, and provide their own. This enables applications to decouple their previews from the top-level application window and provide a more productive preview based on the scenario. For example, this was very valuable in Sticky Notes scenarios where a quick peek at the note that was last touched provides for quite a productive workflow. The taskbar also caches the preview thumbnail images, so once the preview is given to the taskbar the application does not need to keep it around; the application does, however, need to send an updated preview whenever it changes.

     Sticky Notes preview on Taskbar
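
    A simplified sketch of the override mechanism is shown below; CreateNotePreviewBitmap is a hypothetical helper that renders a note into a bitmap, and the code is illustrative rather than the Sticky Notes implementation. A related message, WM_DWMSENDICONICLIVEPREVIEWBITMAP, covers the full-size preview shown on hover.

    ```cpp
    // Minimal sketch: supply a custom taskbar thumbnail instead of the default window snapshot.
    // CreateNotePreviewBitmap() is a hypothetical helper; error handling is omitted.
    #define _WIN32_WINNT 0x0601
    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    HBITMAP CreateNotePreviewBitmap(int maxWidth, int maxHeight);   // hypothetical

    void EnableCustomThumbnail(HWND hwnd)
    {
        BOOL fEnable = TRUE;
        // Tell DWM this window will provide its own (iconic) thumbnail bitmaps.
        DwmSetWindowAttribute(hwnd, DWMWA_FORCE_ICONIC_REPRESENTATION, &fEnable, sizeof(fEnable));
        DwmSetWindowAttribute(hwnd, DWMWA_HAS_ICONIC_BITMAP, &fEnable, sizeof(fEnable));
    }

    // Called from the window procedure in response to WM_DWMSENDICONICTHUMBNAIL.
    LRESULT OnDwmSendIconicThumbnail(HWND hwnd, LPARAM lParam)
    {
        // lParam packs the maximum thumbnail size requested by the shell.
        HBITMAP hbm = CreateNotePreviewBitmap(HIWORD(lParam), LOWORD(lParam));
        DwmSetIconicThumbnail(hwnd, hbm, 0);    // hand the bitmap to DWM; the shell caches a copy
        DeleteObject(hbm);
        return 0;
    }
    ```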

    Another nifty customization end-point on the taskbar is the destination menu (aka jump list). This menu comes up when a user right-clicks on the application in the taskbar or hovers over the application icon in the Start Menu. The Sticky Notes application does not have a single main application window – this makes the application feel really lightweight and fits in well with the Windows 7 philosophy of creating simple and powerful user experiences. The challenge then was exposing functionality such as the ability to create a new note from a central location, or potentially other custom “tasks”. The destination menu helped expose these scenarios in a simple yet discoverable way.

    Sticky Notes destination menu
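
    Wiring a custom task (such as a hypothetical "New note" command) into the destination menu goes roughly as follows; the executable path and the /newnote argument are placeholders, not the actual Sticky Notes code.

    ```cpp
    // Minimal sketch: add a custom task to an application's destination menu (jump list).
    // The .exe path and "/newnote" argument are placeholders. Error handling is abbreviated.
    #include <windows.h>
    #include <shobjidl.h>
    #include <shlobj.h>
    #include <propkey.h>
    #include <propvarutil.h>
    #pragma comment(lib, "propsys.lib")

    HRESULT AddNewNoteTask()
    {
        ICustomDestinationList *pcdl = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_DestinationList, nullptr, CLSCTX_INPROC_SERVER,
                                      IID_PPV_ARGS(&pcdl));
        if (FAILED(hr)) return hr;

        UINT cMinSlots = 0;
        IObjectArray *poaRemoved = nullptr;
        hr = pcdl->BeginList(&cMinSlots, IID_PPV_ARGS(&poaRemoved));

        // Build a collection holding one IShellLink task.
        IObjectCollection *poc = nullptr;
        if (SUCCEEDED(hr))
            hr = CoCreateInstance(CLSID_EnumerableObjectCollection, nullptr, CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&poc));
        IShellLinkW *psl = nullptr;
        if (SUCCEEDED(hr))
            hr = CoCreateInstance(CLSID_ShellLink, nullptr, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&psl));
        if (SUCCEEDED(hr)) hr = psl->SetPath(L"C:\\Program Files\\MyApp\\MyApp.exe"); // placeholder
        if (SUCCEEDED(hr)) hr = psl->SetArguments(L"/newnote");                       // placeholder verb

        // The visible title of a task lives in the link's property store (PKEY_Title).
        IPropertyStore *pps = nullptr;
        PROPVARIANT pv;
        if (SUCCEEDED(hr)) hr = psl->QueryInterface(IID_PPV_ARGS(&pps));
        if (SUCCEEDED(hr)) hr = InitPropVariantFromString(L"New note", &pv);
        if (SUCCEEDED(hr)) { pps->SetValue(PKEY_Title, pv); pps->Commit(); PropVariantClear(&pv); }

        IObjectArray *poa = nullptr;
        if (SUCCEEDED(hr)) hr = poc->AddObject(psl);
        if (SUCCEEDED(hr)) hr = poc->QueryInterface(IID_PPV_ARGS(&poa));
        if (SUCCEEDED(hr)) hr = pcdl->AddUserTasks(poa);
        if (SUCCEEDED(hr)) hr = pcdl->CommitList();

        if (poa) poa->Release();
        if (pps) pps->Release();
        if (psl) psl->Release();
        if (poc) poc->Release();
        if (poaRemoved) poaRemoved->Release();
        pcdl->Release();
        return hr;
    }
    ```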


    The new taskbar functionality and built-in extensibility have the potential to make it a lot easier for people to work with applications and scenarios in a more productive and efficient manner when developers integrate their software with the Windows desktop.

    Search

    Building on the long history of Search in Windows and the significant enhancements in Windows 7, there are APIs available to developers to deeply integrate their content types with the desktop search user experience affordances in Windows 7. Sticky Notes shows one example of how these APIs can be used.

    The Sticky Notes application now allows users to get back to their notes by simply searching for content through the inline search within the Start Menu. This is in line with allowing users to reach the relevant note as quickly as possible, even when the application is closed. Even though search could be done for both text and ink content, it is restricted to text because of lower success rates with varied handwriting styles in ink. The application registers a protocol handler that generates a URL for each note. The Sticky Notes filter handler is asked for the content associated with each note, which is then indexed by the Search infrastructure. These indexes are then used to perform quick lookups when the user searches through the Search interfaces provided by the Windows Shell. When a user clicks on a result, Search invokes the associated application with the URL that the protocol handler generated and that the filter handler associated with the content it sent to the Search indexer.

     

    Search from start menu and a result displayed from Sticky Notes

    The search platform also allows the filter handler to specify the language of each chunk of content passed to it, which overrides the default Search heuristics used to detect the language. This significantly increases Search accuracy and thereby enhances internationalization support across the entire ecosystem.

    The reason Sticky Notes implemented a protocol handler in addition to a filter handler is that it implements its own integrated storage schema on top of the Windows file system – all the notes are represented by a single .snt file. The protocol handler generates URLs to individual entities (in this case, notes); the filter handler picks out content for each of these URLs and gives it to Search for indexing.
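
    To make the two-handler split concrete, here is a skeletal sketch of the filter-handler side. The note-walking helpers are hypothetical stand-ins, the IUnknown plumbing and remaining IFilter methods are omitted, and this is not the shipping Sticky Notes filter.

    ```cpp
    // Skeletal sketch of a filter handler for a hypothetical single-store format like .snt.
    // IUnknown plumbing and the remaining IFilter methods (Init, GetValue, BindRegion) are omitted;
    // MoveToNextNote, CurrentNoteLocale, and CopyNoteText are hypothetical helpers.
    #include <windows.h>
    #include <filter.h>
    #include <filterr.h>

    class CNoteFilter : public IFilter
    {
        ULONG m_idChunk = 0;
        bool  MoveToNextNote();                        // hypothetical: advance to the next note
        LCID  CurrentNoteLocale();                     // hypothetical: language of the current note
        SCODE CopyNoteText(ULONG *pcwc, WCHAR *awc);   // hypothetical: stream out the note's text

    public:
        SCODE STDMETHODCALLTYPE GetChunk(STAT_CHUNK *pStat) override
        {
            if (!MoveToNextNote())
                return FILTER_E_END_OF_CHUNKS;          // no more notes in the store

            pStat->idChunk        = ++m_idChunk;
            pStat->flags          = CHUNK_TEXT;         // this chunk carries text content
            pStat->breakType      = CHUNK_EOS;
            pStat->locale         = CurrentNoteLocale();// lets Search skip its language heuristics
            pStat->idChunkSource  = pStat->idChunk;
            pStat->cwcStartSource = 0;
            pStat->cwcLenSource   = 0;
            // pStat->attribute would be set to the contents property the indexer expects (omitted).
            return S_OK;
        }

        SCODE STDMETHODCALLTYPE GetText(ULONG *pcwcBuffer, WCHAR *awcBuffer) override
        {
            // Copies up to *pcwcBuffer characters of the current note into awcBuffer and updates
            // the count; returns FILTER_E_NO_MORE_TEXT once the note's text is fully delivered.
            return CopyNoteText(pcwcBuffer, awcBuffer);
        }
    };
    ```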

    This demonstrates the ease with which applications can plug into the search platform in Windows 7, and add search handlers that can enhance the overall user experience from the app as well as the platform.

    Real-Time Stylus

    Real-Time Stylus (RTS) is infrastructure that provides access to the stylus events coming from pen or touch digitizers. It provides information about strokes and points and provides access to ink-related events. Using RTS, applications can get access to stylus information and develop compelling end-user scenarios and experiences.
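
    In native code the setup is roughly the following sketch; pPlugin would be the application's own IStylusSyncPlugin implementation (not shown), whose Packets method receives the x, y, and, where available, pressure data in real time. Treat this as an illustration of the pattern rather than the applets' code.

    ```cpp
    // Minimal sketch: attach a RealTimeStylus object to a window and plug in a packet listener.
    // pPlugin is the caller's IStylusSyncPlugin implementation (not shown); error handling is terse.
    #include <windows.h>
    #include <rtscom.h>

    HRESULT AttachRealTimeStylus(HWND hwnd, IStylusSyncPlugin *pPlugin, IRealTimeStylus **ppRts)
    {
        *ppRts = nullptr;

        IRealTimeStylus *pRts = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_RealTimeStylus, nullptr, CLSCTX_INPROC_SERVER,
                                      IID_PPV_ARGS(&pRts));
        if (SUCCEEDED(hr))
            hr = pRts->put_HWND(reinterpret_cast<HANDLE_PTR>(hwnd)); // collect input for this window
        if (SUCCEEDED(hr))
            hr = pRts->AddStylusSyncPlugin(0, pPlugin);              // plugin's Packets() sees each point
        if (SUCCEEDED(hr))
            hr = pRts->put_Enabled(TRUE);                            // start the packet stream

        if (SUCCEEDED(hr))
            *ppRts = pRts;
        else if (pRts)
            pRts->Release();
        return hr;
    }
    ```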

    Sticky Notes now allows users to ink and type on notes depending on the availability of inking hardware. Users can use keyboard input to type on notes and use the stylus to ink on notes. Though the experience has been designed with the expectation that users will use either ink or text on a particular note, it does allow users to ink and type on the same note; however, these surfaces are maintained independently of each other. Sticky Notes also auto-grows the note while inking on it, providing a real-time experience of the note adjusting its size to fit the inked content.

    Real Time Stylus (RTS) is used for inking features provided in Sticky Notes. Inking gestures are also available to applications, and the scratch out gesture has been implemented in Sticky Notes to delete content.

    Scratch gesture

    Sticky Note with both typed text and inked content

    In addition, Paint uses RTS to get a stream of positional input from mouse, stylus, or touch, which is used for drawing strokes on the canvas. Paint also captures additional input variables like pressure and touch surface area when such input is available from the digitizer, and maps these inputs into the stroke algorithms that are used to generate Paint strokes on the canvas. Using this algorithm, the user is able to modulate stroke width and other parameters based on the pressure or touch area on the canvas.

    Paint with new brush strokes

    Using RTS allows the development of applications and software that can build on the inking platform and provide ways to interact with the application that go beyond mouse or keyboard. Using stylus, inking and gestures, developers can create interactive experiences for end-users.

    Restart and Recovery

    The Windows Error Reporting (WER) infrastructure is a set of feedback technologies built into Windows 7 and earlier versions of Windows client and server. WER allows applications to register for notification of application failures and to capture this data for end-users who agree to report it. This data can be accessed and analyzed and can be used to monitor error trends and download debug information to help developers and ISVs determine the root cause of application failures.

    WER can add value to software development at various stages: during development, during beta testing by getting early feedback from end-users, after the release of the product by analyzing and prioritizing the top fixes, and at the end of life of the product.

    Related to failure recovery, applications can also register with WER to be restarted when a Windows patch terminates the application, when an update reboots the computer, or when the application crashes, hangs, or stops responding. Applications can optionally register for recovery of lost data and can develop their own mechanism for recovery.
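
    The registration itself amounts to a couple of calls; in the sketch below the "/restore" switch and the SaveOpenNotes helper are hypothetical placeholders, not the applets' actual code.

    ```cpp
    // Minimal sketch: opt in to Restart and Recovery.
    // The "/restore" command line and SaveOpenNotes() are hypothetical placeholders.
    #include <windows.h>

    void SaveOpenNotes();   // hypothetical: persist any unsaved data

    DWORD WINAPI RecoveryCallback(PVOID /*context*/)
    {
        BOOL cancelled = FALSE;
        ApplicationRecoveryInProgress(&cancelled);   // ping WER so it knows the app is still saving
        if (!cancelled)
            SaveOpenNotes();
        ApplicationRecoveryFinished(!cancelled);     // report whether recovery succeeded
        return 0;
    }

    void RegisterForRestartAndRecovery()
    {
        // Relaunch the application (with a hint to restore state) after a crash, hang,
        // patch-driven termination, or update-driven reboot.
        RegisterApplicationRestart(L"/restore", 0);

        // Ask WER to call back before termination so unsaved data can be written out.
        RegisterApplicationRecoveryCallback(RecoveryCallback, nullptr,
                                            RECOVERY_DEFAULT_PING_INTERVAL, 0);
    }
    ```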

    Several Windows applications adopt the WER infrastructure to collect and analyze data. Calculator, Paint, and Wordpad register for restart and additionally recover the current data in the sessions of the application that were running. Sticky Notes also registers for restart and recovery, and returns the user to the set of notes open on the desktop. With WER, end-users allow Windows to capture and collect problem data and are then returned to the applications in the same state they were in earlier.

    As you can see, our primary effort for the applets in Windows 7 is to showcase some of the new platform APIs and innovations available to developers. As you get to try out these applications you will see that while showcasing the Windows 7 platform innovations, we have also added some commonly requested features and functionality. Some of them are: check and correct, calculation modes, and templates in Calculator; new brushes, shapes, and multi-touch support in Paint; open standards support in Wordpad; and ink and text, taskbar, and search integration in Sticky Notes. Maybe we won’t wait 10 years to update these again :-)

    --Riyaz Pishori and team

  • Engineering Windows 7

    Disk Defragmentation – Background and Engineering the Windows 7 Improvements

    • 89 Comments

    One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And from blogs we saw a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. So many folks came to know that if your machine is slow you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is if you turn your machine off every night then it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little bit more detail about the process, the perceptions and reality of performance gains, as well as the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. --Steven

    In this blog, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is, and its applicability.

    Within the storage and memory hierarchy comprising the hardware pipeline between the hard disk and CPU, hard disks are relatively slower and have relatively higher latency. Read/write times from and to a hard disk are measured in milliseconds (typically, 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can compute data in less than 10 nanoseconds (on average), once the data is in the L1 memory cache of the processor.

    This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.

    Graph of Historical Trends of CPU and IOPS Performance

    Chart of Performance Improvements of Various Technologies

    In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.

    Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:

    1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.
    2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.

    Both rules have a readily understood rationale:

    1. Each time an I/O is issued by the CPU, multiple software and hardware components have to do work to satisfy the request. This contributes toward increased latency, i.e., the amount of time until the request is satisfied. This latency is often directly experienced by users when reading data and leads to increased user frustration if expectations are not met.
    2. Movement of mechanical parts contributes substantially to incurred latency. For hard disks, the “rotational time” (time taken for the disk platter to rotate in order to get the right portion of the disk positioned under the disk head) and the “seek time” (time taken by the head to move so that it is positioned to be able to read/write the targeted track) are the two major culprits. By reading or writing in large chunks, the incurred costs are amortized over the larger amount of data that is transferred – in other words, the “per unit” data transfer costs decrease.

    File systems such as NTFS work quite hard to try and satisfy the above rules. As an example, consider the case when I listen to the song “Hotel California” by the Eagles (one of my all time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try and find enough contiguous free space to be able to place the 5MB of data “together” on the disk, since logically related data (e.g. contents of the same file or directory) is more likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used) which, in turn, will minimize mechanical movement of hard disk drive components and also ensure fewer issued I/Os.

    Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.

    So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:

    1. Any logically related content that was fragmented can be placed adjacently
    2. Free space can be coalesced so that new content written to the disk can be done so efficiently

    The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.

    Example of disk blocks being defragmented.

    Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism, whether, as in Windows, it’s a separate, schedulable task or, whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.

    Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often. Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. This got to the point where some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and percentage disk utilization for the average consumer is likely to be lower, causing relatively less fragmentation. Further, computers can utilize more RAM cheaply (often, enough to be able to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness. Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate fragmentation impact and deliver better responsiveness.

    So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember various products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.

    The incidence and extent of fragmentation in average home computers varies quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:

    • Availability of a system that has been “aged” to create fragmentation in a typical or representative manner. But, as noted above, there is no single, representative behavior. For example, the frequency and extent of fragmentation on a computer used primarily for web browsing will be different than a computer used as a file server.
    • Selection of meaningful disk-bound metrics, e.g., boot time and first-time application launch after boot.
    • Repeated measurements that are statistically meaningful

    Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.

    In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed. In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag are when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit. In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
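
    A rough back-of-the-envelope calculation illustrates why ever-larger fragments stop mattering. The figures below (a 15 ms combined seek and rotational delay, 80 MB/s of sequential throughput) are illustrative assumptions rather than measurements; with them, the extra seek needed to reach a 64MB fragment adds only about 2% to the time spent reading it:

    $$\frac{t_{\text{seek}}}{t_{\text{seek}} + t_{\text{transfer}}} \approx \frac{15\ \text{ms}}{15\ \text{ms} + \frac{64\ \text{MB}}{80\ \text{MB/s}}} = \frac{15\ \text{ms}}{815\ \text{ms}} \approx 2\%$$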

    Note that a concept that is relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex, and its real impact requires comprehensive evaluation of the entire system to accurately address. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware & software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing towards system responsiveness that must be considered beyond a simple count of existing fragments.

    The defragmentation engine and experience in Windows 7 has been revamped based on continuous and holistic analysis of impact on system responsiveness:

    In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively. Further, defragmentation can be safely terminated any time during the process and on all volumes very simply (if required). The two screenshots below illustrate the ease-of-monitoring:

    New Windows 7 Defrag User Interface

    New Windows 7 Defrag User Interface

     

    In Windows XP, defragmentation had to be a user-initiated (manual) activity i.e. it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time. Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel with no more waiting for one volume to be defragmented before initiating the same operation on some other volume! The screen shot below shows how defragmentation can be concurrently scheduled on multiple volumes:

    Windows 7 Defrag Schedule

    Windows 7 Defrag Disk Selection
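
    Purely as an illustration of the parallelism (the built-in schedule normally takes care of this for you), here is a minimal Python sketch of launching the command-line defragmenter against several volumes at once; it assumes an elevated prompt and that the in-box defrag.exe is on the PATH, and the volume letters are placeholders:

    ```python
    import subprocess

    # Placeholder volume list; adjust to the volumes present on your system.
    volumes = ["C:", "D:", "E:"]

    # Start one defrag.exe process per volume so the volumes are worked on
    # concurrently, then wait for each one to finish (requires elevation).
    procs = {vol: subprocess.Popen(["defrag", vol]) for vol in volumes}
    for vol, proc in procs.items():
        proc.wait()
        print(f"defrag {vol} exited with code {proc.returncode}")
    ```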

    Among the other changes under the hood in Windows 7 are the following:

    • Defragmentation in Windows 7 is more comprehensive – many files that could not be re-located in Windows Vista or earlier versions can now be optimally re-placed. In particular, a lot of work was done to make various NTFS metadata files movable. This ability to relocate NTFS metadata files also benefits volume shrink, since it enables the system to pack all files and file system metadata more closely and free up space “at the end” which can be reclaimed if required.
    • If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and in fact, could decrease overall media lifetime in certain cases.
    • By default, defragmentation is disabled on Windows Server 2008 R2 (the Windows 7 server release). Given the variability of server workloads, defragmentation should be enabled and scheduled only by an administrator who understands those workloads.

    Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch free listening to the Eagles :-).

    Rajeev and Matt

  • Engineering Windows 7

    Follow-up: Accessibility in Windows 7

    • 18 Comments

    We’ve seen some comments recently posted on a previous post on accessibility and a member of the User Interface Platform team wanted to offer some thoughts on the topic.  Brett is a senior test lead who leads our efforts testing the Accessibility of Windows 7.  --Steven

     

    Hi, my name is Brett and I am the test lead for the Windows 7 Accessibility team. Back in November my colleague Michael wrote a blog post about the work our team is doing for Windows 7; I'm following up on that post and on some recent comments about our new screen Magnifier. On a personal note I would like to mention that I'm a person with low vision and depend on some of the technologies that my team produces to help me in my work.

    I've been using Windows 7 for my day-to-day work for several months – something we call "dogfooding", which means using our own pre-release products long before the public ever sees a beta. Running Windows 7 as my primary operating system, I have found our new Magnifier to be very useful to me.

    Now, about our Magnifier. As you can imagine, the appeal of the many features in Windows varies from person to person; we often say that it is like making pizza for a billion people. The same is true for the features my team owns. I've read many comments about Magnifier since we released our Windows 7 beta: some are from people that have really benefited from our new work, some have suggestions, and others have concerns. I will say thanks for the feedback; we appreciate all types. Those of you that have benefited are mostly people that need basic magnification and appreciate the easy ability to zoom in and out as needed; I fall into this category myself. Those of you that need magnification in combination with custom colors, high-contrast or some screen readers probably haven't been able to benefit from the new Magnifier, so for you we've made sure that the Vista magnifier continues to work. Let me explain a little more about what we've done in Windows 7.

    To go into more detail about our implementation I need to start with our graphics system in Windows. Over the last several years GPU technology has made huge advances and in Vista we finally made the leap to a modern hardware accelerated graphics system, what we call Aero, which takes advantage of the GPU. We often use the term Aero to refer to the specific elements of Windows visuals, such as transparency and gradients. In practice it is more than that, the modern graphics rendering (technically the desktop window manager along with the DirectX APIs) is not just for aesthetics but for all forms of rendering including text, 2D, and 3D all using modern hardware assisted graphics and a much richer API. It takes time, however, for the diverse ecosystem to adopt this technology, perhaps even over the course of several OS releases. It also takes time for Windows and time for software developers and hardware manufacturers to adopt new technologies; so for a time we will have (and fully support) a mix of both old and new. For example, some screen readers do the great things they do by capturing the data that goes through the original Windows graphics system (GDI) and building their off-screen UI models which is why they need to turn off the new rendering. On the other hand, our new Magnifier is integrated deeply into the desktop window manager (“Aero”) to leverage this graphics horsepower and deliver smooth full-screen multi-monitor magnification.

    While, as this demonstrates, these advances aren’t seamless, in Windows 7 my team has worked to make sure that we maintain Vista functionality and compatibility while making new investments. Magnifier is an example of this, we utilize the power of the GPU where we can to bring new capabilities to a broad spectrum of customers, and when Aero needs to be off, whether for screen readers, high-contrast or other needs, we maintain the existing capabilities in the product. And by maintaining compatibility as much as possible, many of the tools you depend on today will continue to work with Windows 7.

    So, is Magnifier better for everyone? Not for everyone, but certainly for many people – and more than that, I can honestly say that we have made advances to accessibility for everyone in Windows 7. As Michael noted in his posting, we invested in several areas: there's not only the Magnifier and on-screen keyboard work, there is also significant work on the underlying accessibility APIs. We also actively support the community and recently made a grant to NV Access to help them improve their open source screen reader support for Windows 7 and Internet Explorer 8.

    Thanks for reading, and thanks for your comments,

    -Brett                                 

  • Engineering Windows 7

    Engineering the Windows 7 “Windows Experience Index”

    • 78 Comments

    We’re busy going through tons of telemetry from the many people that have downloaded and installed the Windows 7 beta around the world. We’re super excited to see the excitement around kicking the tires. Since most folks on the beta are well-versed in the hardware they use and very tuned into the choices they make, we’ve received a few questions about the Windows Experience Index (WEI) in Windows 7 and how that has been changed and improved in Windows 7 to take into account new hardware available for each of the major classes in the metric. In this post Michael Fortin returns to dive into the engineering details of the WEI.

    The WEI was introduced in Windows Vista to provide one means across PCs to measure the relative performance of key hardware components. Like any index or benchmark, it is best used as a relative measure and should not be used to compare one measure to another. Unlike many other measures, the WEI merely measures the relative capability of components. The WEI only runs for a short time and does not measure the interactions of components under a software load, but rather characteristics of your hardware. As such it does not (and cannot) measure how a system will perform under your own usage scenarios. Thus the WEI does not measure the performance of a system, but merely the relative hardware capabilities when running Windows 7.

    We do want to caution folks against trying to generalize an "absolute" WEI as necessary for a given individual. We each have different tolerances or, more importantly, expectations for how a PC should perform, and the same WEI might mean very different things to different individuals. To personalize this, I do about 90% of my work on a PC with a WEI of 2.0, primarily driven by the relatively low score for the gaming graphics component on my very low cost laptop. I run Outlook (with ~2GB of email), Internet Explorer (with a dozen tabs), Excel (with long lists of people on the development team), PowerPoint, Messenger (with video), and often I am running one of several LOB applications written in .NET. With this type of workload on a Windows 7 PC with that WEI, I feel my own brain and fingers continue to be my "bottleneck". At the other end of the spectrum is my holiday gift machine which is a 25" all-in-one with a WEI of 5.1 (though still limited by gaming graphics, with subscores of 7.2, 7.2, 6.2, 5.1, 5.9). This machine runs Windows 7 64-bit and I definitely don't keep it very busy even though I run MediaCenter in a window all the time, have a bunch of desktop gadgets, and run the PC as our print server (I use about 25% of available RAM and the CPU almost never gets above 10%).

    –Steven

    The overall Windows Experience Index (WEI) is defined to be the lowest of the five top-level WEI subscores, where each subscore is computed using a set of rules and a suite of system assessment tests. The five areas scored in Windows 7 are the same as they were in Vista and include:

    • Processor
    • Memory (RAM)
    • Graphics (general desktop work)
    • Gaming Graphics (typically 3D)
    • Primary Hard Disk
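
    To make the "lowest subscore wins" rule concrete, here is a tiny sketch using the example numbers quoted in the introduction above (the mapping of each number to a component other than gaming graphics is assumed for illustration; only the minimum matters):

    ```python
    # The overall Windows Experience Index is simply the lowest of the five subscores.
    subscores = {
        "Processor": 7.2,            # assumed mapping for illustration
        "Memory (RAM)": 7.2,
        "Graphics": 6.2,
        "Gaming Graphics": 5.1,      # the limiting component in the example above
        "Primary Hard Disk": 5.9,
    }

    overall = min(subscores.values())
    limiting = min(subscores, key=subscores.get)
    print(f"Overall WEI: {overall} (limited by {limiting})")   # -> 5.1, Gaming Graphics
    ```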

    Though the scoring areas are the same, the ranges have changed. In Vista, the WEI scores ranged from 1.0 to 5.9. In Windows 7, the range has been extended upward to 7.9. The scoring rules for devices have also changed from Vista to reflect experience and feedback comparing closely rated devices with differing quality of actual use (i.e. to make the rating more indicative of actual use.) We know during the beta some folks have noticed that the score changed (relative to Vista) for one or more components in their system and this tuning, which we will describe here, is responsible for the change.

    For a given score range, we hope our customers will be able to utilize some general guidelines to help understand the experiences a particular PC can be expected to deliver well, relatively speaking. These Vista-era general guidelines for systems in the 1.0, 2.0, 3.0, 4.0 and 5.0 ranges still apply to Windows 7. But, as noted above, Windows 7 has added levels 6.0 and 7.0; meaning 7.9 is the maximum score possible. These new levels were designed to capture the rather substantial improvements we are seeing in key technologies as they enter the mainstream, such as solid state disks, multi-core processors, and higher end graphics adapters. Additionally, the amount of memory in a system is a determining factor.

    For these new levels, we're working to add guidelines for each level. As an example for gaming users, we expect systems with gaming graphics scores in the 6.0 to 6.9 range to support DX10 graphics and deliver good frame rates at typical screen resolutions (like 40-50 frames per second at 1280x1024). In the range of 7.0 to 7.9, we would expect higher frame rates at even higher screen resolutions. Obviously, the specifics of each game have much to do with this, and the WEI scores are also meant to help game developers decide how best to scale their experience on a given system. Graphics is an area where there is both the widest variety of scores readily available in hardware and also the widest breadth of expectations. The extremes at which CAD, HD video, photography, and gamers push graphics compared to the average business user or a consumer (doing many of these same things as an avocation rather than vocation) are significant.

    Of course, adding new levels doesn’t explain why a Vista system or component that used to score 4.0 or higher is now obtaining a score of 2.9. In most cases, large score drops will be due to the addition of some new disk tests in Windows 7 as that is where we’ve seen both interesting real world learning and substantial changes in the hardware landscape.

    With respect to disk scores, as discussed in our recent post on Windows Performance, we’ve been developing a comprehensive performance feedback loop for quite some time. With that loop, we’ve been able to capture thousands of detailed traces covering periods of time where the computer’s current user indicated an application, or Windows, was experiencing severe responsiveness problems. In analyzing these traces we saw a connection to disk I/O and we often found typical 4KB disk reads to take longer than expected, much, much longer in fact (10x to 30x). Instead of taking 10s of milliseconds to complete, we’d often find sequences where individual disk reads took many hundreds of milliseconds to finish. When sequences of these accumulate, higher level application responsiveness can suffer dramatically.

    With the problem recognized, we synthesized many of the I/O sequences and undertook a large study on many, many disk drives, including solid state drives. While we did find a good number of drives to be excellent, we unfortunately also found many to have significant challenges under this type of load, which based on telemetry is rather common. In particular, we found the first generation of solid state drives to be broadly challenged when confronted with these commonly seen client I/O sequences.

    An example problematic sequence consists of a series of sequential and random I/Os intermixed with one or more flushes. During these sequences, many of the random writes complete in unrealistically short periods of time (say 500 microseconds). Very short I/O completion times indicate caching; the actual work of moving the bits to spinning media, or to flash cells, is postponed. After a period of returning success very quickly, a backlog of deferred work is built up. What happens next is different from drive to drive. Some drives continue to consistently respond to reads as expected, no matter the earlier issued and postponed writes/flushes, which yields good performance and no perceived problems for the person using the PC. On other drives, however, reads are often held off for very lengthy periods as the drive apparently attempts to clear its backlog of work, and this results in a perceived "blocking" state or almost a "locked system". To validate this, on some systems, we replaced poor performing disks with known good disks and observed dramatically improved performance. In a few cases, updating the drive's firmware was sufficient to very noticeably improve responsiveness.
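
    For the curious, here is a deliberately simplified sketch of that kind of mixed pattern (this is not the actual assessment code; the file name and sizes are placeholders, and unlike the real tests it does not bypass the operating system's file cache):

    ```python
    import os, random, time

    PATH = "scratch.bin"                 # placeholder file on the disk under test
    FILE_BYTES = 64 * 1024 * 1024
    BLOCK = 4096

    # Create a scratch file to operate on.
    with open(PATH, "wb") as f:
        f.write(os.urandom(FILE_BYTES))

    slow = []
    with open(PATH, "r+b", buffering=0) as f:
        for i in range(2000):
            # Random 4KB write, with a flush every so often to build up and then
            # force out a backlog of deferred work on the drive.
            f.seek(random.randrange(0, FILE_BYTES - BLOCK))
            f.write(os.urandom(BLOCK))
            if i % 64 == 0:
                os.fsync(f.fileno())

            # Time a 4KB read issued in the middle of that write/flush traffic.
            f.seek(random.randrange(0, FILE_BYTES - BLOCK))
            start = time.perf_counter()
            f.read(BLOCK)
            elapsed_ms = (time.perf_counter() - start) * 1000
            if elapsed_ms > 100:         # reads should normally take tens of ms at most
                slow.append((i, elapsed_ms))

    print(f"{len(slow)} of 2000 reads took longer than 100 ms")
    ```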

    To reflect this real world learning, in the Windows 7 Beta code, we have capped scores for drives which appear to exhibit the problematic behavior (during the scoring) and are using our feedback system to send back information to us to further evaluate these results. Scores of 1.9, 2.0, 2.9 and 3.0 for the system disk are possible because of our current capping rules. Internally, we feel confident in the beta disk assessment and these caps based on the data we have observed so far. Of course, we expect to learn from data coming from the broader beta population and from feedback and conversations we have with drive manufacturers.

    For those who obtain low disk scores but are otherwise satisfied with the performance, we aren't recommending any action (of course, the WEI is not a tool to recommend hardware changes of any kind). It is entirely possible that the sequence of I/Os being issued for your common workload and applications isn't encountering the issues we are noting. As we've said, the WEI is a metric, but only you can apply that metric to your computing needs.

    Earlier, I made note of the fact that our new levels, 6 and 7, were added to recognize the improved experiences one might have with newer hardware, particularly SSDs, graphics adapters, and multi-core processors. With respect to SSDs, the focus of the newer tests is on random I/O rates and their avoidance of the long latency issues noted above. As a note, the tests don’t specifically check to see if the underlying storage device is an SSD or not. We run them no matter the device type and any device capable of sustaining very high random I/O rates will score well.

    For graphics adapters, both DX9 and DX10 assessments can be run now. In Vista, the tests were specific to DX9. To obtain scores in the 6 or 7 ranges, a graphics adapter must obtain very good performance scores, support DX10, and the driver must be a WDDM 1.1 driver (which you might have noticed being downloaded during the Windows 7 beta). For WDDM 1.0 drivers, only the DX9 assessments will be run, thus capping the overall score at 5.9.
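
    A hedged sketch of that capping rule (illustrative only, not the actual scoring code; the 5.9 ceiling and the DX10/WDDM 1.1 requirement come straight from the paragraph above):

    ```python
    def graphics_subscore(dx9_score, dx10_score, supports_dx10, wddm_version):
        """With a WDDM 1.0 driver only the DX9 assessment runs, so the subscore
        cannot exceed 5.9; reaching the 6.x/7.x ranges requires DX10 support and
        a WDDM 1.1 driver."""
        if wddm_version < 1.1 or not supports_dx10:
            return min(dx9_score, 5.9)
        return dx10_score

    print(graphics_subscore(6.8, None, supports_dx10=False, wddm_version=1.0))  # 5.9
    print(graphics_subscore(6.8, 6.4, supports_dx10=True, wddm_version=1.1))    # 6.4
    ```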

    For multi-core processors, both single threaded and multi-threaded scenarios are run. With levels 6 and 7, we aim to indicate that these systems will be rarely CPU bound for typical use and quite suitable for demanding processing tasks and multi-tasking. As examples, we anticipate many quad core processors will be able to score in the high 6 to low 7 ranges, and 8 core systems to be able to approach 7.9. The scoring has taken into account the very latest micro-processors available.

    For many key hardware partners, we’ve of course made available additional details on the changes and why they were made. We continue to actively work with them to incorporate appropriate feedback.

    --Michael Fortin

  • Engineering Windows 7

    User Account Control (UAC) – quick update

    • 59 Comments

    There’s been a ton of interest in how we have improved user account control (UAC) and so we thought we’d offer a quick update for folks. We know most of you have discovered this and picked a setting that works for you, and we're happy with the feedback we've seen.  This just goes into the details on the choice of defaults.  --Steven

    In an earlier blog post we discussed the why of UAC and its implications for Windows, the ecosystem, and our customers. We also talked about what we needed to do moving forward to address the data and feedback we’ve received. This blog post will provide additional detail on our response and what you can expect to see in the upcoming beta build in early 2009.

    As mentioned in our previous post, and your comments supported this, the goals for UAC are good and important ones. User Account Control was created with the intention of putting you in control of your system, reducing cost of ownership over time, and improving the software ecosystem. It is important not to abandon these goals. Instead, we want to address feedback we’ve received and build on the telemetry we have using those to improve the overall experience without losing sight of the goals with which we agree.

    For those of you using 6801 you have started to see the benefits of prompt reduction and our new and improved dialog designs. You also have seen our efforts to give the user greater control of their system – the new UAC Control Panel. The administrator now has more control over the level of notification received from UAC. Look for the UAC Control Panel to appear in Start Search, Action Center, Getting Started, and even directly from the UAC prompt itself. Of course, the familiar ways to access it from Vista are still present.

    User Account Control control panel.

    Figure 1: UAC Control Panel

    The UAC Control Panel enables you to choose between four different settings (the short sketch after this list shows one way to inspect the corresponding policy values from a script):

    1. Always notify on every system change. This is Vista behavior – a UAC prompt will result when any system-level change is made (Windows settings, software installation, etc.)
    2. Notify me only when programs try to make changes to my computer. This setting does not prompt when you change Windows settings, such as control panel and administration tasks.
    3. Notify me only when programs try to make changes to my computer, without using the Secure Desktop. This is the same as #2, but the UAC prompt appears on the normal desktop instead of the Secure Desktop. While this is useful for certain video drivers which make the desktop switch slowly, note that the Secure Desktop is a barrier to software that might try to spoof your response.
    4. Never notify. This turns off UAC altogether.
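
    For those who prefer scripting, the slider is reflected in a handful of policy values in the registry. Here is a minimal, read-only Python sketch that inspects them (the value names are real registry values, but exactly how their combinations map to the four levels above is not spelled out here, and the script only runs on Windows):

    ```python
    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"

    # Read-only peek at the policy values the UAC settings UI manipulates.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        for name in ("EnableLUA",
                     "ConsentPromptBehaviorAdmin",
                     "PromptOnSecureDesktop"):
            value, _ = winreg.QueryValueEx(key, name)
            print(f"{name} = {value}")
    ```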

    We know from the feedback we've received that our customers are looking for a better balance of control versus the amount of notifications they see. As we mentioned in our last post, we have a large number of admin (aka developer) customers looking for this balance, and our data shows us that most machines (75%) run with a single account with full admin privileges.

    Distribution of number of accounts per PC

    Figure 2. Percentage of machines (servers excluded) with one or more user accounts from January 2008 to June 2008.

    For the in-box default, we are focusing on these customers, and we have chosen number 2, “Notify me only when programs try to make changes to my computer”. This setting does not prompt when you change Windows settings (control panels, etc.), but instead enables you to focus on administrative changes being requested by non-Windows applications (like installing new software). For people who want greater control in changing Windows settings frequently, without the additional notifications, this setting results in fewer overall prompts and enables customers to zero in on the key remaining notifications that they do see.

    This default setting provides the right degree of change notification that a broad range of customers desire. At the same time we've made it easy and readily discoverable for the administrator to adjust the setting to provide more or fewer notifications via the new control panel (and policy). As with all of our default choices we will continue to closely monitor the feedback and data that come in through the beta before finalizing for ship.

    --UAC, Kernel, and Security program managers

  • Engineering Windows 7

    Primer on Device Support and Testing for Windows 7

    • 58 Comments

    As most folks (finally) get the beta and start to set aside some time to install and try out Windows 7, we thought it would be a good idea to start to talk about how we support devices through testing and work across the PC ecosystem. This is a big undertaking and one that we take very seriously. As we talked about at the PDC, this is also an area where we learned some things which we want to apply to Engineering Windows 7. While this is a massive effort across the entire Windows organization, Grant George, the VP of Test for the Windows Experience, is taking the lead in authoring this post. We think this is a deep topic and I know folks want to know more so consider this a kick-off for more to come down the road. –Steven

    Devices and Drivers in Windows

    One of the most important responsibilities in a release of Windows is our support of, and compatibility with, all of the devices and their associated drivers that our users have. The abstraction layer in Windows to connect software and hardware is a crucial part of the operating system. That layer is surfaced through our driver model, which provides the interface for all of our partners in the multi-faceted hardware ecosystem. Windows supports a vast range of devices today – audio devices (speakers, headsets…), display devices (monitors…), print, fax and scan devices, connectivity to digital cameras, portable media devices of all shapes, sizes and functions, and more. Windows is an open platform for companies across the globe who develop and deliver these devices to the marketplace and our users – and our job is to make sure we understand that ecosystem and those choices and verify those devices and drivers work well for our customers – which includes partnering with those device providers throughout the engineering of Windows 7.

    Drivers provide the interface between a device and the Windows operating system – and are citizens of the WDM (Windows Driver Model). WDM was initially created as an intermediary layer of kernel mode drivers to ease the authoring of drivers for Windows. There are different types of drivers; the two most common are class drivers, which support an array of devices of a similar hardware class where hardware manufacturers make their products compatible with standard protocols for interaction with the operating system, and device-specific drivers, which are provided by the device manufacturer for a specific device and sometimes a specific version of that device.

    Partner Support

    Support for our hardware partners comes in the form of the Windows Driver Kit (WDK) and, for certification, the Windows Logo Kit (WLK). The WDK enables the development of device drivers and as of Vista replaced the previous Windows Driver Development Kit (DDK). The WDK contains all of the DDK components plus Windows Driver Foundation (WDF) and the Installable File System kit (IFS). The Driver Test Manager (DTM) is another component here, but is separate from the WDK. The Windows Logo Kit (WLK) aids in certifying devices for Windows (it contains automated tests as well as a run-time framework for those tests). These tests are run and passed by our hardware vendor partners in order to use the Microsoft "Designed for Windows™" logo on devices. This certification process helps us and our hardware partners ensure a specific level of quality and compatibility for devices interacting with the Windows operating system. Hardware devices and drivers that pass the logo kit's tests qualify for the Windows logo, driver distribution on Windows Update, and can be referenced in the online Windows Marketplace.

    Validation and Testing

    With Windows 7 we have modified driver model validation, new and legacy device testing, and driver testing. Compared to Vista, we now place much more emphasis on validating the driver platform and verifying legacy devices and their associated drivers throughout our product engineering cycle. Data based on installed base for each device represents an integral part of testing, and we gather this data from a variety of sources including the voluntary, opt-in, anonymous telemetry in addition to sources such as sales data and IHV roadmaps. We have centralized and standardized the testing mechanics of the lab approach to this area of the product in a way that yields much earlier issue/bug discovery than in past releases. We have also ramped up our efforts to communicate platform or interface changes earlier with our external hardware partners to help them ensure their test cycles align with our schedule. In addition, we draw a more robust correlation between the real-world usage data, including recent trends, and prominence of each device and the prioritization it is given in our test labs. This is especially important for new and emerging devices that will come to market right before and just after we release Windows 7 to our customers.

    Another important element in bringing a high quality experience to our Windows 7 users in device and driver connectivity and capability is the staging of our overall engineering process in Windows 7. For this release all of our engineering teams have followed a well-structured and staged development process. The development/coding of new features and capabilities in Windows 7 was broken out into three distinct phases (milestones) with dedicated integration and stabilization time at the end of each of these three coding phases. This included ensuring our code base remained highly stable throughout the development of Windows 7 and that our device and driver test validation was a constant part of those milestones. Larry discussed this in his post as some might recall. Program Managers, Developers and Testers all worked in super close partnership throughout the coding phases. Our work with external partners – especially our device manufacturer partners – was also enhanced through early forums we provided for them to learn about the changes in Windows 7 and also work closely with us on validation. Much more focus has been put on planning and then executing - planning the work and then working the plan. Our belief is that this yields much more predictability to developing and delivering our new features in Windows 7, both from a feature content and overall schedule standpoint. We recognize that this raised the bar on how our external partners see us execute and deliver on that plan when we say we will, but we also hope it increases their confidence in how they engage with us in validating the device experience during our development and delivery of Windows 7.

    Determining Which Devices to Test

    Our program management team helps us drive device market share analysis. Most of their data comes from our Customer Experience Improvement Program. This gives us data on the actual hardware in use across our customer base. For example, there are over 16,000 unique 4-part hardware IDs for display devices alone. Like many things, we understand that it only takes a single device not functioning well to degrade an overall Windows experience or upgrade, and we definitely want to reinforce this shared understanding.

    New devices typically have a small initial user base, but the driver will often be mostly new code (or the first time a code-base has seen a new device). As the device enters the mainstream, market share grows and most manufacturers continue to develop and improve their drivers. This is why, for our customers and for our own testing, it's important to always have the latest drivers for a given device.

    Over a device's lifetime, we work closely with our external device partners and represent, as faithfully as possible in our test labs, a prioritized mix of old and new devices to ensure they continue to work well with Windows. By paying very close attention to trends in the marketplace across our device classes, we can make guided decisions in the context of these areas:

    • Critical and mainstream devices we must support out-of-the-box
    • Which drivers we must make available on Windows Update
    • On which devices and drivers to focus our testing

    Another benefit of close market tracking is creating an equivalence-based view of a device family.

    Equivalence Classes

    We use the notion of equivalence classes to help us define and prioritize our hardware (device) test matrix. Creating equivalence classes involves grouping things into sets based on equivalent properties across related devices. For example, imagine if we worked for a chemical company and it was our job to test a car polish additive on actual automobiles. Given a fixed test budget, we would want to maximize the number of makes and models we test our product on. We begin by analyzing the current market space so we can make the best choices for our test matrix.

    Let’s say the first test car we analyze is a blue 2003 Ford Mustang. We also know that the same blue paint is used on all of Ford’s 2003 and 2004 models and is also used on all of Mazda’s 2005 models. This means our first automobile represents several entries in our table based on equivalence:

    Test ID | Make | Model | Color | Year
    1 | Ford | Mustang | Blue | 2003
    2 | Ford | * | Blue | 2004
    3 | Mazda | * | Blue | 2005

     

    Now let’s look at a silver 2001 Mercedes C240. We know that Mercedes and Chrysler have a relationship and upon further investigation we find Chrysler used the same silver paint on their 2006 through 2009 models. Now our equivalence class based test matrix looks like this:

    Test ID | Make | Model | Color | Year
    1 | Ford | Mustang | Blue | 2003
    2 | Ford | * | Blue | 2004
    3 | Mazda | * | Blue | 2005
    4 | Mercedes | C240 | Silver | 2001
    5 | Chrysler | * | Silver | 2006
    6 | Chrysler | * | Silver | 2007
    7 | Chrysler | * | Silver | 2008
    8 | Chrysler | * | Silver | 2009

    By carefully analyzing each actual automobile, we have established an equivalence relationship that we can leverage to maximize implicit test coverage. Testing one make and model is theoretically equivalent to testing many. Of course we recognize that in the real world different companies might use different techniques for applying paint, as one variable, so there are subtleties that require additional information to properly classify attributes for testing purposes.

    Testing computer devices is very similar. Even though there are thousands of different devices on the market, many of them share major components, are die-shrinks of a previous revision, or differ only in terms of memory, clock-rate, pixel count, connector, or even the type of heat sink. Take for example display devices. There are over 16,000 display devices on the market. But the equivalence view reveals that 90% of the market is represented by about 60 different GPUs. By adding a few more to a carefully constructed test matrix based on equivalence it is possible to represent over 99% of all GPUs. Driver writers also leverage equivalence by targeting drivers at a range of hardware. Driver install packages indicate devices they support via hardware IDs.
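
    A toy sketch of building such a matrix from an equivalence view (the device names, GPU groupings, and market-share numbers below are made up purely for illustration):

    ```python
    from collections import defaultdict

    # Hypothetical display devices: (retail name, underlying GPU, market share %).
    devices = [
        ("Board A 512MB",  "GPU-1", 9.0),
        ("Board A 1GB",    "GPU-1", 6.0),
        ("Board B OEM",    "GPU-2", 12.0),
        ("Board B Retail", "GPU-2", 3.0),
        ("Board C",        "GPU-3", 0.5),
    ]

    # Group by GPU: testing one representative per group gives implicit coverage
    # of every device that shares that GPU.
    groups = defaultdict(list)
    for name, gpu, share in devices:
        groups[gpu].append((name, share))

    test_matrix = []
    for gpu, members in groups.items():
        representative = max(members, key=lambda m: m[1])[0]   # highest-share SKU
        implicit_coverage = sum(share for _, share in members)
        test_matrix.append((gpu, representative, implicit_coverage))

    for gpu, rep, coverage in sorted(test_matrix, key=lambda row: -row[2]):
        print(f"test {rep:<15} to cover {gpu} ({coverage:.1f}% of the market)")
    ```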

    All modern computer devices are assigned a unique hardware ID based on the device vendor, type, and class. Most IDs (PCI, PC Card, USB, and IEEE 1394 devices) are assigned by the industry standards body associated with that device type.

    Let’s look at the device ID of my display adapter:

    PCI\VEN_10DE&DEV_0611&SUBSYS_C8013842&REV_A2

    If I visit PCI-SIG (the standards body associated with all PCI device ID assignment) and do a search on 10DE, I'm told this is an NVIDIA PCI ID. If I look further on my system in

    C:\Windows\System32\DriverStore\FileRepository

    I can find NVidia drivers (folders that start with nv_lh). If I open one of the driver .INF files on my machine I see this tell-tale line:

    NVIDIA_G92.DEV_0611.1 = "NVIDIA GeForce 8800 GT"

    Further inspection of the driver .INF file tells me that the same G92 GPU is used for all of these devices:

    • NVIDIA GeForce 8800 GTS 512
    • NVIDIA GeForce 8800 GT
    • NVIDIA GeForce 9800 GX2
    • NVIDIA GeForce 8800 GS
    • NVIDIA GeForce 9600 GSO
    • NVIDIA GeForce 8800 GT
    • NVIDIA GeForce 9800 GTX
    • NVIDIA Quadro FX 3700

    A bit of online research reveals other interesting information: “The 8800 GT, codenamed G92, was released on October 29, 2007. The card is the first to transition to 65 nm process, and supports PCI-Express 2.0.[13] It has a single-slot cooler as opposed to the double slot cooler on the 8800 GTS and GTX, and uses less power than GTS and GTX due to its 65 nm process.” -WikiPedia

    So in theory, if I was to run a test on my display adapter, there’s a good chance I’d get the same results as I would on any of these other related devices.
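
    As a small illustration of how a 4-part PCI hardware ID like the one above breaks down, here is a hedged parsing sketch (the comments simply restate the walkthrough above):

    ```python
    hw_id = r"PCI\VEN_10DE&DEV_0611&SUBSYS_C8013842&REV_A2"

    # Split "PCI\VEN_xxxx&DEV_xxxx&SUBSYS_xxxxxxxx&REV_xx" into its fields.
    bus, _, rest = hw_id.partition("\\")
    fields = dict(part.split("_", 1) for part in rest.split("&"))

    print("bus:       ", bus)              # PCI
    print("vendor:    ", fields["VEN"])    # 10DE -> assigned to NVIDIA by PCI-SIG
    print("device:    ", fields["DEV"])    # 0611 -> the G92-based GeForce 8800 GT above
    print("subsystem: ", fields["SUBSYS"]) # board/OEM-specific subsystem ID
    print("revision:  ", fields["REV"])
    ```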

    Driver Goals for Windows 7

    One of our primary goals for Windows 7 is to be compatible with all Vista-certified drivers and to ensure that people have a seamless upgrade experience. This breaks down into several requirements that guide how we test:

    • Drivers for basic functionality are in-box (by in-box we mean available as part of the installation of Windows). This includes drivers for mainstream storage, network, input, and display devices so the OS can be installed and the user can get online where, if needed, additional drivers can be acquired from Windows Update.
    • Drivers update and/or install with minimal end user effort.
    • When drivers are upgraded, the new drivers don't introduce new problems.
    • Drivers are reliable.

    One question we are asked about quite a bit is the availability of drivers. There are three primary reasons folks end up looking for drivers: a clean installation of Windows, attaching a device to a new computer, and wanting an updated driver. We definitely recognize that for the readers of this blog, both as enthusiasts and often the support/IT infrastructure for corporations, friends, and families, the ability to acquire drivers and reliably update machines is something of a "hobby" we all love to hate. We all want the latest and greatest—no more and no less.

    A clean installation is one we are all definitely valuing during the beta phase of Windows 7. It should be clear that a clean install, as important as it is to many of us, is not a routine/mainstream experience. Nevertheless, the combination of in-box drivers and those available via Windows Update will serve a very broad set of PCs (for example, you should see most of the drivers installed for the new Atom-based machines if you do a clean install). On the other hand, some drivers for PCs are only available from the PC maker and for a variety of reasons are not available for download from Windows Update or even the device manufacturer’s site. For example, mobile graphics drivers are generally available only from the PC maker and not from the graphics component maker—this is a decision they make because of the way these chipsets are delivered for each PC maker.

    Obviously attaching an existing device to a new PC is a common occurrence. In this case you may have long ago lost the CD/DVD that came with a device and you just plug it in (because you ignored the warning saying "please run the setup program first"). Again, our goal is to provide these drivers via Windows Update. Often IHVs have updates or significantly large downloads that for a number of reasons are not appropriate to deliver via Windows Update. In that case we can also alert you, often with a link, to seek the driver from the vendor of the device.

    Updating drivers is something we are all familiar with as we often read "get the latest driver" to address issues. We all see this particularly in the enthusiast gamer space where newer drivers also improve performance or offer more features, in addition to improving overall quality. The primary way to get updated drivers is generally through optional updates in Windows Update, though again many times the latest and greatest must be downloaded directly from an IHV (independent hardware vendor) site.

    Our goal is clearly to make sure that drivers for the broadest set of devices are available and high quality. There are many equal partners that contribute to delivering a PC and all the associated devices and we work hard to develop a systematic way to reach the broadest set of customers with high quality software and support.

    Scale of Device and Driver Testing in Windows 7

    The table below provides examples of some of the explicit devices we have directly tested thus far during the development of Windows 7. This is just a sampling of that direct testing - many more devices have been directly tested that are not shown here or are covered through equivalence classing.

    This information is available in many sources, such as the WHQL web site that lists all qualified devices. For the purposes of this blog we thought it would be fun to provide a list here which we think will most certainly serve as the basis for discussion.

    Manufacturer

    Description

    Family

    Altec Lansing

    T515

    Audio

    AMD (ATI)

    Radeon 9200

    Display

    AMD (ATI)

    FireGL 3100

    Display

    AMD (ATI)

    Radeon X300/X550/X1050 Series

    Display

    AMD (ATI)

    Radeon 9800 Pro

    Display

    AMD (ATI)

    FireGL V3100

    Display

    AMD (ATI)

    Radeon Xpress Series

    Display

    AMD (ATI)

    Radeon Xpress Series

    Display

    AMD (ATI)

    Radeon Xpress 1200

    Display

    AMD (ATI)

    Radeon X700 PRO

    Display

    AMD (ATI)

    Radeon X1200

    Display

    AMD (ATI)

    Radeon X800 CrossFire Edition

    Display

    AMD (ATI)

    Mobility Radeon X300

    Display

    AMD (ATI)

    Radeon X850 CrossFire Edition

    Display

    AMD (ATI)

    Radeon X1550

    Display

    AMD (ATI)

    Radeon X1950 Series

    Display

    AMD (ATI)

    Mobility Radeon X1300

    Display

    AMD (ATI)

    Mobility Radeon X1400

    Display

    AMD (ATI)

    Mobility Radeon HD3200

    Display

    AMD (ATI)

    Radeon HD 2600 XT

    Display

    AMD (ATI)

    Radeon HD 3850

    Display

    AMD (ATI)

    Radeon HD 3870

    Display

    AMD (ATI)

    Radeon HD 3200

    Display

    AMD (ATI)

    Radeon HD 2400

    Display

    AMD (ATI)

    FireGL 6000

    Display

    AMD (ATI)

    FireGL 8200

    Display

    AMD (ATI)

    Radeon HD 2900 XT

    Display

    AMD (ATI)

    Radeon HD 2600

    Display

    AMD (ATI)

    Radeon HD 4850

    Display

    AMD (ATI)

    Radeon HD4670

    Display

    AMD (ATI)

    ATI Technologies, Inc. RAGE XL PCI

    Display

    AMD (ATI)

    RADEON 7000 Series

    Display

    Analog Devices

    AD1884

    Audio

    Analog Devices

    AD1984

    Audio

    Analog Devices

    AD1981

    Audio

    Analog Devices

    ADI1986A

    Audio

    Analog Devices

    ADI1988B

    Audio

    Analog Devices Inc.

    ADI AC97

    Audio

    Apple

    iPhone headset

    Audio

    Apple

    iSight 640x480 Firewire

    VidCap

    Archos

    Archos605(WiFi)

    Portable Device

    ATI

    ATI HDMI

    Audio

    BlueAnt

    X5 Stereo BT Headset

    Audio

    Brother

    HL-5140

    Print / Scan

    Brother

    HL-2070

    Print / Scan

    Brother

    MFC-8440

    Print / Scan

    Brother

    MFC-5840c

    Print / Scan

    Brother

    HL-5150

    Print / Scan

    Brother

    MFC-8840

    Print / Scan

    Brother

    HL-6050D

    Print / Scan

    Brother

    IntelliFax-5750e

    Print / Scan

    Brother

    IntelliFax-5750

    Print / Scan

    Canon

    Canon A720IS

    Portable Device

    Canon

    Digital Rebel XT

    Portable Device

    Canon

    A420\410

    Portable Device

    Canon

    SD430

    Portable Device

    Canon

    Pixma MP140

    Print / Scan

    Canon

    Pixma iP1800

    Print / Scan

    Canon

    Pixma iP1700

    Print / Scan

    Canon

    Pixma iP2500

    Print / Scan

    Canon

    Pixma MP210

    Print / Scan

    Canon

    Pixma MP160

    Print / Scan

    Canon

    Pixma iP1500

    Print / Scan

    Canon

    Pixma iP1600

    Print / Scan

    Canon

    Pixma iP4200

    Print / Scan

    Canon

    Pixma iP3500

    Print / Scan

    Canon

    Pixma iP4500

    Print / Scan

    Canon

    Pixma MP180

    Print / Scan

    Canon

    Pixma iP2000

    Print / Scan

    Canon

    i475D

    Print / Scan

    Canon

    Pixma MP150

    Print / Scan

    Canon

    i250

    Print / Scan

    Canon

    Pixma MP520

    Print / Scan

    Canon

    S450

    Print / Scan

    Canon

    MultiPass MP390

    Print / Scan

    Canon

    Pixma MP500

    Print / Scan

    Canon

    Pixma MX300

    Print / Scan

    Canon

    Pixma iP1000

    Print / Scan

    Canon

    Pixma MP610

    Print / Scan

    Canon

    MultiPass MP190

    Print / Scan

    Canon

    Pixma iP6210D

    Print / Scan

    Canon

    Pixma iP5200

    Print / Scan

    Canon

    Pixma iP3300

    Print / Scan

    Canon

    Pixma iP3000

    Print / Scan

    Canon

    Pixma MP510

    Print / Scan

    Canon

    Pixma iP90

    Print / Scan

    Canon

    i350

    Print / Scan

    Canon

    Pixma iP6600D

    Print / Scan

    Canon

    Pixma MP830

    Print / Scan

    Canon

    BJC-6000

    Print / Scan

    Canon

    i550

    Print / Scan

    Canon

    Pixma MP170

    Print / Scan

    Canon

    Pixma MP460

    Print / Scan

    Canon

    Pixma MP600

    Print / Scan

    Canon

    Pixma iP4300

    Print / Scan

    Canon

    i860

    Print / Scan

    Canon

    Pixma MP110

    Print / Scan

    Canon

    i320

    Print / Scan

    Canon

    Pixma iP6220D

    Print / Scan

    Canon

    Pixma MP130

    Print / Scan

    Canon

    Pixma iP6310D

    Print / Scan

    Canon

    i960/i965

    Print / Scan

    Canon

    Pixma MP950

    Print / Scan

    Canon

    Selphy Series

    Print / Scan

    Canon

    i560

    Print / Scan

    Canon

    Pixma iP8500

    Print / Scan

    Canon

    MultiPass MP370

    Print / Scan

    Canon

    Pixma iP4000

    Print / Scan

    Canon

    i9900

    Print / Scan

    Canon

    Pixma iX4000

    Print / Scan

    Canon

    i865

    Print / Scan

    Canon

    Pixma mini260

    Print / Scan

    Canon

    Pixma iX5000

    Print / Scan

    Canon

    i850

    Print / Scan

    Canon

    S530D

    Print / Scan

    Canon

    Pixma MP800R

    Print / Scan

    Canon

    Pixma iP5200R

    Print / Scan

    Canon

    i470D Photo Printer

    Print / Scan

    Canon

    S600

    Print / Scan

    Canon

    BJC-85

    Print / Scan

    Canon

    Pixma iP6000

    Print / Scan

    Canon

    S9000

    Print / Scan

    Canon

    Pixma MP750

    Print / Scan

    Canon

    Pixma MP780

    Print / Scan

    Canon

    S630

    Print / Scan

    Canon

    MultiPass MP1000

    Print / Scan

    Canon

    S520

    Print / Scan

    Canon

    Pixma MP810

    Print / Scan

    Canon

    Pixma iP5000

    Print / Scan

    Canon

    Pixma iP6700D

    Print / Scan

    Canon

    Pixma iP80

    Print / Scan

    Canon

    SD600

    Portable Device

    Canon Inc.

    PowerShot A720 IS

    Portable Device

    CASIO COMPUTER CO.,LTD.

    EX-Z1200

    Portable Device

    Chrontel

    Chrontel HDMI

    Audio

    Conexant

    Venice

    Audio

    Creative

    MP3+ (SB0270)

    Audio

    Creative

    Xmod

    Audio

    Creative

    Live! Cam Optia AF

    VidCap

    Creative

    WebCam Live! USB

    VidCap

    Creative

    Webcam NoteBook 640x480 USB

    VidCap

    Creative

    WebCam Instant 352x288 USB

    VidCap

    Creative

    WebCam NX Pro 640x480 USB

    VidCap

    Creative

    WEBCAM NX

    VidCap

    Creative

    Live! Cam Notebook Pro 640K USB 2.0

    VidCap

    Creative

    Live! Cam Video IM Pro VGA USB 2.0

    VidCap

    Creative

    Webcam Live Ultra 640x480 USB 2.0 Manual Focus Ring

    VidCap

    Creative Labs, Inc.

    Live! Series

    Audio

    Creative Labs, Inc.

    Audigy Series

    Audio

    Creative Labs, Inc.

    X-Fi Series

    Audio

    Creative Technology Ltd

    Nano Plus

    Portable Device

    Creative Technology Ltd

    NOMAD MuVo TX

    Portable Device

    Creative Technology Ltd

    Zen Vision M

    Portable Device

    Creative Technology Ltd

    Vision W

    Portable Device

    Creative Technology Ltd

    Sleek

    Portable Device

    Creative Technology Ltd

    PMC v2

    Portable Device

    Dell

    Axim X51v

    Portable Device

    Dell

    AiO 810

    Print / Scan

    Dell

    A924

    Print / Scan

    Dell

    J740

    Print / Scan

    Dell

    1600n

    Print / Scan

    Dell

    A922

    Print / Scan

    Dell

    A940

    Print / Scan

    Dell

    LP 1720dn

    Print / Scan

    Dell

    3100cn

    Print / Scan

    Dell

    W5300N

    Print / Scan

    Denon

    S-52

    Media Sharing

    Dixim

    media server

    Media Sharing

    Dlink

    DSM-210

    Media Sharing

    Dlink

    DSM - 520

    Media Sharing

    Dlink

    DSM - 510

    Media Sharing

    Drobo

    Drobo NAS

    Media Sharing

    Epson

    Stylus Color C88+

    Print / Scan

    Epson

    Stylus Color C84/C85

    Print / Scan

    Epson

    Stylus Color C86/C87

    Print / Scan

    Epson

    Stylus Color C64

    Print / Scan

    Epson

    Stylus Photo R265

    Print / Scan

    Epson

    LQ-570/670

    Print / Scan

    Epson

    FX-880

    Print / Scan

    Epson

    Stylus Photo R220

    Print / Scan

    Epson

    LQ-300

    Print / Scan

    Epson

    Stylus Photo R320

    Print / Scan

    Epson

    Stylus CX6600/6500/6900

    Print / Scan

    Epson

    Stylus CX5400

    Print / Scan

    Epson

    Stylus Photo 1270

    Print / Scan

    Epson

    LQ-1070+

    Print / Scan

    Epson

    Stylus Photo R200

    Print / Scan

    Epson

    Stylus Photo 1280/1290

    Print / Scan

    Epson

    Stylus Color 900/N

    Print / Scan

    Epson

    Stylus Color C62

    Print / Scan

    Epson

    ActionPrinter 5000+

    Print / Scan

    Epson

    Stylus Photo 820

    Print / Scan

    Epson

    Stylus Color 660

    Print / Scan

    Epson

    Stylus Color 640

    Print / Scan

    Epson

    AcuLaser 2600N

    Print / Scan

    Epson

    FX-2170

    Print / Scan

    Epson

    FX-2190

    Print / Scan

    FujiFilm

    F30

    Portable Device

    General Electric

    EasyCam USB PC Camera 640x480

    VidCap

    GN\Jabra

    GN9330

    Audio

    GN\Jabra

    GN9350

    Audio

    GN\Jabra

    GN2000USB

    Audio

    HP

    HD TV

    Media Sharing

    HP

    Photosmart R717

    Portable Device

    HP

    Deskjet D1400 series

    Print / Scan

    HP

    Deskjet F380

    Print / Scan

    HP

    Deskjet F4100

    Print / Scan

    HP

    LaserJet 1018

    Print / Scan

    HP

    LaserJet 1020

    Print / Scan

    HP

    Photosmart C3180

    Print / Scan

    HP

    Deskjet D2400 Series

    Print / Scan

    HP

    LaserJet P2015

    Print / Scan

    HP

    Officejet K550

    Print / Scan

    HP

    PSC 1410

    Print / Scan

    HP

    Deskjet F2100 series

    Print / Scan

    HP

    PSC 1315

    Print / Scan

    HP

    Deskjet 5440

    Print / Scan

    HP

    Color LaserJet 2600

    Print / Scan

    HP

    Officejet 5700

    Print / Scan

    HP

    PSC 1510

    Print / Scan

    HP

    Photosmart C4200

    Print / Scan

    HP

    Deskjet 5150

    Print / Scan

    HP

    Deskjet 930C/935C

    Print / Scan

    HP

    Deskjet 5940

    Print / Scan

    HP

    Photosmart C4180

    Print / Scan

    HP

    Deskjet D2330

    Print / Scan

    HP

    LaserJet 1022

    Print / Scan

    HP

    Deskjet 3745

    Print / Scan

    HP

    Deskjet 5550

    Print / Scan

    HP

    Photosmart C5200

    Print / Scan

    HP

    Officejet 5610

    Print / Scan

    HP

    Deskjet D2360

    Print / Scan

    HP

    Deskjet 3900 Series

    Print / Scan

    HP

    Photosmart C5180

    Print / Scan

    HP

    Deskjet 5740

    Print / Scan

    HP

    Deskjet D4200 Series

    Print / Scan

    HP

    Deskjet 6122

    Print / Scan

    HP

    Deskjet 950C

    Print / Scan

    HP

    Deskjet 940C

    Print / Scan

    HP

    PSC 1610

    Print / Scan

    HP

    Photosmart D5160

    Print / Scan

    HP

    Officejet 6200 Series

    Print / Scan

    HP

    Deskjet 3845

    Print / Scan

    HP

    Deskjet 3650

    Print / Scan

    HP

    PSC 2355

    Print / Scan

    HP

    Officejet 6300 Series

    Print / Scan

    HP

    LaserJet P2014

    Print / Scan

    HP

    LaserJet 1300

    Print / Scan

    HP

    Officejet Pro L7500

    Print / Scan

    HP

    Officejet Pro L7600

    Print / Scan

    HP

    PSC 1350

    Print / Scan

    HP

    Deskjet 9800

    Print / Scan

    HP

    Photosmart 2575

    Print / Scan

    HP

    Deskjet 450ci

    Print / Scan

    HP

    Officejet 4215

    Print / Scan

    HP

    LaserJet 1160

    Print / Scan

    HP

    Deskjet 5650

    Print / Scan

    HP

    Officejet 7400 Series

    Print / Scan

    HP

    Deskjet 3740

    Print / Scan

    HP

    Officejet 5510 Series

    Print / Scan

    HP

    Photosmart 3210

    Print / Scan

    HP

    Officejet 7300 Series

    Print / Scan

    HP

    Photosmart 7850

    Print / Scan

    HP

    Deskjet 832C

    Print / Scan

    HP

    Deskjet 1220C

    Print / Scan

    HP

    LaserJet 3030 MFP

    Print / Scan

    HP

    Photosmart A616

    Print / Scan

    HP

    LaserJet 3055

    Print / Scan

    HP

    Deskjet 720C

    Print / Scan

    HP

    Photosmart 7260

    Print / Scan

    HP

    Deskjet 3320

    Print / Scan

    HP

    Deskjet 970C

    Print / Scan

    HP

    Photosmart A440

    Print / Scan

    HP

    Deskjet 695C/697C

    Print / Scan

    HP

    Photosmart A516

    Print / Scan

    HP

    Deskjet 6540

    Print / Scan

    HP

    Deskjet 6940

    Print / Scan

    HP

    PSC 2510

    Print / Scan

    HP

    Officejet 6100 Series

    Print / Scan

    HP

    Manufacturer | Device | Category
    HP | Deskjet 6840 | Print / Scan
    HP | Photosmart A430 | Print / Scan
    HP | Photosmart 7450 | Print / Scan
    HP | Deskjet 812C/815C | Print / Scan
    HP | Photosmart 375 | Print / Scan
    HP | Officejet V40 Series | Print / Scan
    HP | Deskjet 840/843/845 | Print / Scan
    HP | Photosmart D7400 Series | Print / Scan
    HP | PSC 950 Series | Print / Scan
    HP | Officejet G Series | Print / Scan
    HP | LaserJet 1015 | Print / Scan
    HP | Photosmart 7960 | Print / Scan
    HP | Deskjet 895C | Print / Scan
    HP | Photosmart 8450 | Print / Scan
    HP | Photosmart Pro B8350 | Print / Scan
    HP | Deskjet 1180c | Print / Scan
    HP | LaserJet 4345 MFP | Print / Scan
    HP | LaserJet 4250 | Print / Scan
    HP | LaserJet P3005 | Print / Scan
    HP | LaserJet 5200 | Print / Scan
    HP | LaserJet 4350n | Print / Scan
    HP | Color LaserJet 4700 | Print / Scan
    HP | LaserJet 2300 | Print / Scan
    HP | LaserJet 4000 | Print / Scan
    HP | Color LaserJet 5550 | Print / Scan
    HP | Color LaserJet 3800 | Print / Scan
    HP | LaserJet 4050 | Print / Scan
    HP | Color LaserJet 3600 | Print / Scan
    HP | LaserJet 9050 | Print / Scan
    HP | LaserJet 2100 | Print / Scan
    HP | LaserJet 4240 | Print / Scan
    HP | LaserJet 2200 | Print / Scan
    HP | Color LaserJet 3000 | Print / Scan
    HP | LaserJet 4100 | Print / Scan
    HP | LaserJet 5000 | Print / Scan
    HP | Business Inkjet 1200D | Print / Scan
    HP | Color LaserJet 4550 | Print / Scan
    HP | Color LaserJet 4600 | Print / Scan
    HP | Color LaserJet CP4005 | Print / Scan
    HP | Color LaserJet 3700 | Print / Scan
    HP | Color LaserJet 3500 | Print / Scan
    HP | LaserJet 9000 MFP | Print / Scan
    HP | LaserJet 4 Plus | Print / Scan
    HP | LaserJet III | Print / Scan
    HP | LaserJet 6MP | Print / Scan
    HP | Color LaserJet 1500L | Print / Scan
    HP | PSC 1315 | Print / Scan
    HP | Officejet 5610 | Print / Scan
    HP | PSC 1350 | Print / Scan
    HP | LaserJet 4345 MFP | Print / Scan
    HTC | TyTN II | Portable Device
    IDT | STAC9220(9223)7680 | Audio
    IDT | STAC9220(9223)7681 | Audio
    IDT | STAC9227X(D)7618 | Audio
    IDT | STAC9227X(D)7619 | Audio
    IDT | STAC9225(Sony)7662 | Audio
    IDT | STAC9225(Sony)7664 | Audio
    IDT | STAC9225(Sony)7661 | Audio
    IDT | STAC9200 | Audio
    IDT | STAC9228 | Audio
    IDT | STAC9205 | Audio
    IDT | STAC9250 | Audio
    Insignia | NS-BTHDP | Audio
    Insignia | NS-DV4G | Portable Device
    Insignia | NS-DA2G | Portable Device
    Intel | Intel HDMI | Audio
    Intel | i965GX/G35 | Display
    Intel | G3x | Display
    Intel | i4G | Display
    Intel | i45GM | Display
    Intel | i915GM | Display
    Intel | i915G | Display
    Intel | i945G | Display
    Intel | i945GM | Display
    Intel | Q3x | Display
    Intel | i965G | Display
    Intel | i965GM | Display
    Iriver | ClixGen2 | Portable Device
    Iriver | IriverClix2_FWv1.14 | Portable Device
    Iriver | U10 Series | Portable Device
    Iriver | Clix | Portable Device
    Jabra | BT620S | Audio
    Jabra | BT8010 | Audio
    Jabra | BT3030 | Audio
    Jasco | Minicam Pro | VidCap
    Kodak | Easyshare LS420 | Portable Device
    Konica Minolta | magicolor 5450 | Print / Scan
    Kyocera Mita | FS-6900 | Print / Scan
    LABTEC | LABTEC WEBCAM PRO 961358 | VidCap
    LABTEC | Web Cam Plus 352x288 USB 2.0 Manual Focus Motion Detection | VidCap
    Lexmark | Z845 | Print / Scan
    Lexmark | Z1300 | Print / Scan
    Lexmark | X2550 | Print / Scan
    Lexmark | X1270 | Print / Scan
    Lexmark | X2470 | Print / Scan
    Lexmark | Z735 | Print / Scan
    Lexmark | E120n | Print / Scan
    Lexmark | X3550 | Print / Scan
    Lexmark | Z715 | Print / Scan
    Lexmark | Z42 Color JetPrinter | Print / Scan
    Lexmark | X5470 | Print / Scan
    Lexmark | Z816 | Print / Scan
    Lexmark | Z615 | Print / Scan
    Lexmark | X2250 | Print / Scan
    Lexmark | P915 | Print / Scan
    Lexmark | X7170 | Print / Scan
    Lexmark | X4550 | Print / Scan
    Lexmark | X6170 | Print / Scan
    Lexmark | X6150 | Print / Scan
    Lexmark | E232 | Print / Scan
    Lexmark | 2490 | Print / Scan
    Lexmark | P3150 | Print / Scan
    Lexmark | X5150 | Print / Scan
    Lexmark | E323 | Print / Scan
    Lexmark | P315 | Print / Scan
    Lexmark | Z25 Color JetPrinter | Print / Scan
    Lexmark | 2491 | Print / Scan
    Lexmark | X215 | Print / Scan
    Lexmark | X4250 | Print / Scan
    Lexmark | E321 | Print / Scan
    Lexmark | Z45 Color JetPrinter | Print / Scan
    Lexmark | X83 | Print / Scan
    Lexmark | C524 | Print / Scan
    Lexmark | E450D | Print / Scan
    Lexmark | T640 | Print / Scan
    Lexmark | X634 | Print / Scan
    Lexmark | W840 | Print / Scan
    Lexmark | X632 | Print / Scan
    Lexmark | X620 | Print / Scan
    Lexmark | X630 | Print / Scan
    Lexmark | T642 | Print / Scan
    Lexmark | W812 | Print / Scan
    Lexmark | X1270 | Print / Scan
    LG | HBS-200 | Audio
    Logitech | QuickCam Pro 9000 | Audio
    Logitech | QuickCam Pro 9000 | VidCap
    Logitech | Quickcam Communicate STX VGA Fixed Focus USB 2.0 | VidCap
    Logitech | QuickCam Chat VGA w/Image Capture USB 2.0 | VidCap
    Logitech | 961400-0403 QuickCam Notebook Deluxe 1.3MP MF USB 2.0 | VidCap
    Logitech | QuickCam Pro 4000 640x480 USB 2.0 | VidCap
    Logitech | QuickCam Pro 5000 640x480 USB 2.0 | VidCap
    Logitech | Quickcam Vision Pro1 | VidCap
    Logitech | Quickcam Vision Pro2 | VidCap
    Logitech | 961403 QuickCam Fusion 1.3MP USB 2.0 | VidCap
    Logitech | QuickCam Messenger 640x480 USB | VidCap
    Logitech | QuickCam Messenger Refresh 640x480 USB | VidCap
    Logitech | QuickCam Notebooks Pro 1.3MP USB 2.0 | VidCap
    Logitech | QuickCam Zoom 640x480 USB | VidCap
    Logitech | QuickCam Communicate 640x480 USB 2.0 | VidCap
    Logitech | QuickCam Orbit MP 1.3MP USB 2.0 | VidCap
    Logitech | QUICKCAMFORNB | VidCap
    Logitech | QuickCam Orbit 640x480 USB 2.0 | VidCap
    Logitech | QuickCam for Notebooks Pro | VidCap
    Lubix | UBHS-LC1 | Audio
    Matrox | M9120 | Display
    Microsoft | NX-3000 | Audio
    Microsoft | VX-7000 | Audio
    Microsoft | NX-6000 | Audio
    Microsoft | VX-6000 | Audio
    Microsoft | VX-3000 | Audio
    Microsoft | VX-1000 | Audio
    Microsoft | LX-3000 | Audio
    Microsoft | ZX-6000 | Audio
    Microsoft | Mic Array | Audio
    Microsoft | XBox 360 | Media Sharing
    Microsoft | LifeCam VX-1000 VGA USB 2.0 | VidCap
    Microsoft | Lifecam NX-6000 | VidCap
    Microsoft | LifeCam VX-6000 1.3MP USB 2.0 | VidCap
    Microsoft | LifeCam VX-3000 1.3MP USB 2.0 | VidCap
    Microsoft | Xbox Live Vision (Xbox 360) | VidCap
    Microsoft | Lifecam VX-7000 | VidCap
    Microsoft | Lifecam NX-3000 | VidCap
    Momento | Wireless Picture Frame | Media Sharing
    Motorola | S9 | Audio
    Motorola | HT820 | Audio
    Motorola | H670 | Audio
    Motorola | HS850 | Audio
    Motorola | H500 | Audio
    Motorola | DJ S805 | Audio
    NEC | UTR-UC-1 | Audio
    Nero8 Home Media | media server | Media Sharing
    Nikon | CoolPix S1 | Portable Device
    Nokia | BH800 | Audio
    Nokia | N95 | Media Sharing
    Nokia | N95 | Portable Device
    Nokia | 5300 | Portable Device
    nVidia | nVidia HDMI | Audio
    Nvidia | GeForce 7600GT | Display
    Nvidia | GeForce 7800GT | Display
    Nvidia | Geforce 8200 | Display
    Nvidia | GeForce 7400 Go | Display
    Nvidia | Geforce 7950 GX2 | Display
    Nvidia | Geforce 8800GTS | Display
    Nvidia | Geforce 8800GTX | Display
    Nvidia | Geforce 8400 GS | Display
    Nvidia | GeForce 8400M GS | Display
    Nvidia | Geforce 8600 GT | Display
    Nvidia | Quador NVS 130m | Display
    Nvidia | Quadro 570 | Display
    Nvidia | Quadro 570m | Display
    Nvidia | GeForce 9600 GT | Display
    Nvidia | GeForce 8800 GT | Display
    Nvidia | Geforce 8400GS (G98) | Display
    Nvidia | Geforce 9800 X2 | Display
    Nvidia | Geforce GTX 260 | Display
    Nvidia | GeForce4 MX 420 | Display
    Nvidia | GeForce FX 5200 | Display
    Nvidia | Geforce FX 5900 | Display
    Nvidia | GeForce 6150 | Display
    Nvidia | GeForce 6100 | Display
    Nvidia | GeForce 6200 | Display
    Nvidia | GeForce 7050 | Display
    Nvidia | GeForce 6800 | Display
    Nvidia | GeForce Go 6150 | Display
    Oki | Microline 320/Turbo | Print / Scan
    Oki | Microline 184 Turbo | Print / Scan
    Oki | Microline 391/Turbo | Print / Scan
    Oki | Microline 321/Turbo | Print / Scan
    Oki | Microline 590 | Print / Scan
    Panasonic | KX-P2130 | Print / Scan
    Panasonic | KX-P2023 | Print / Scan
    Parrot | Boombox | Audio
    Philips | Stereo Mic | Audio
    Philips | GoGear 30GB | Portable Device
    Plantronics | Pulsar 590A/E | Audio
    Plantronics | Pulsar 260 | Audio
    Plantronics | Discovery 655 or 665 | Audio
    Plantronics | SupraPluc DA45 | Audio
    Polycom | CX400 | Audio
    Realtek | Realtek 262 HD Audio codec | Audio
    Realtek | Realtek 268 HD Audio codec | Audio
    Realtek | Realtek 660 HD Audio codec | Audio
    Realtek | Realtek 862 HD Audio codec | Audio
    Realtek | Realtek 883 HD Audio codec | Audio
    Realtek | Realtek 888 HD Audio codec | Audio
    Realtek | Realtek 885 HD Audio codec | Audio
    Realtek | Realtek 882 HD Audio codec | Audio
    Realtek | Realtek 861 HD Audio codec | Audio
    Realtek | Realtek 662 HD Audio codec | Audio
    Realtek Semiconductor Corp | Realtek AC97 | Audio
    Rhapsody | music Jukebox | Media Sharing
    RIO | Rio Carbon | Portable Device
    Roku | Radio Soundbridge | Media Sharing
    Roku | SoundbridgeM1000 | Media Sharing
    S3 | GammaChrome G700 | Display
    S3 | GammaChrome G700 | Display
    S3 | S3 Graphics Chrome 440/430 Series | Display
    S3 | S3 Graphics Chrome 440/430 Series | Display
    Samsung | WEP-210 | Audio
    Samsung | YP-Z5 | Portable Device
    Samsung | ML-1610 | Print / Scan
    Samsung | SF-5100 | Print / Scan
    Samsung | ML-1710 | Print / Scan
    SanDisk Corporation | Sansa E260 | Portable Device
    SanDisk Corporation | Sansa View Mp3 Player | Portable Device
    SanDisk Corporation | Sansa m250 | Portable Device
    SI | 1392 HDMI | Audio
    SigmaTel, Inc. | Sigmatel AC97 | Audio
    SiS | Xabre | Display
    SiS | Mirage3 | Display
    Sonos | Zone player ZP80 | Media Sharing
    Sony | DR-BT22 | Audio
    Sony | PS3 | Media Sharing
    Sony | DSC-T200 | Portable Device
    Sony Corporation | WALKMAN NWZ-A816 | Portable Device
    Sony Ericsson | W910i | Portable Device
    Toshiba | Gigabeat | Portable Device
    Toshiba | Gigabeat V2 PMC | Portable Device
    Turtle Beach | Audio Advantage Micro | Audio
    Tversity Inc | media server | Media Sharing
    Twonky Media | media server | Media Sharing
    Via | DeltaChrome G700 | Display
    Western Digital | External harddrive | Media Sharing
    Xerox | Phaser 6120 | Print / Scan
    Xerox | Phaser 4510 | Print / Scan

  • Engineering Windows 7

    Windows 7 Energy Efficiency

    • 81 Comments

    Happy New Year!  The following post continues our discussion of fundamentals with a focus on power management.  Power Management (or energy efficiency) is something that every contributor to the PC Ecosystem must always address—the energy efficiency of a running PC is limited by the weakest component.  In engineering Windows 7 we had an explicit focus on the energy usage patterns of the running system and will continue to work with hardware and software makers to realize the collective benefit of all of this work.  While we talk about the balancing of needs in every area, energy consumption is probably the most easily visualized—when we test running systems we connect them to power meters and watch a very clear number change as we run tests.  (If you’ve seen the film Apollo 13 then you’ve seen a similar (albeit much more mission critical) struggle with a power budget.)  This post is by Dean DeWhitt from the program management team on our Kernel team. --Steven     PS: Quite a few of us are at CES this week!

    Energy efficiency is one of the most active topics in modern computing today. As evidence, consider that processor and chipset vendors are marketing products on “performance per watt”, instead of just processor clock frequency and benchmark performance. Perhaps you have seen a press release for one of the many industry consortiums focused on “Green Computing”--reducing the power consumption and environmental impact of computing. Finally, battery life continues to be a major purchasing and usability factor for mobile PCs. These related energy efficiency efforts in the PC industry result in an ever-increasing interest in how Windows manages power.

    In engineering Windows 7, our goal is to deliver the capabilities and features users want from a Windows PC while reducing power consumption over previous releases. Windows already provides a rich set of energy saving features, including the ability to turn off the display and automatically put the system to sleep when the user is not interacting with the computer. For Windows 7, we are building upon the investments in these areas by extending the existing capabilities and focusing on reducing power consumption when the system is idle. Although Windows is responsible for managing the power state of many devices, including the processor, hard drive and display, the remaining devices and software running on the computer have just as much (if not more) impact on power consumption and battery life.  This is a challenge for everyone contributing to the PC experience.

    When we talk about energy efficiency and power consumption, we like to break down the problem area into 3 main components:

    • Base Hardware Platform: The processor, chipset, and memory; in the case of mobile platforms, this also includes the battery capacity. The base hardware platform can have a significant impact on the power consumption of the platform—maximum processor speed, the number of cores, whether the processor is designed for mobile devices, and the amount of RAM are all factors.
    • Windows: The PC operating system is responsible for managing many of the devices in the system, making smart tradeoffs between performance and power consumption based on usage and allowing the end-user to dictate power management policy through power plans and settings. The challenges in this area are to properly manage device power and to ensure new Windows features are as efficient as possible in the amount of system resources (CPU, memory and disk) they use.
    • Extensions: Extensions is a general category that includes other devices, drivers, services and applications. Devices, drivers and other software can have a significant impact on power consumption, and a single application can impact battery life by 20% or more.

    Realizing great energy efficiency from a Windows PC requires efforts in each of these areas. A problem with any single component in any area can have a significant impact on power consumption. Thus we have to take a platform-wide approach to energy efficiency, paying special attention to each component on the platform.

    Base Hardware Platform

    The base hardware platform is really dictated by the system manufacturer. The customer gets the ultimate choice when they buy a system—the customer can buy a system with ultra-efficient hardware components or can buy a system with components that favor performance over power consumption. There are desktop and mobile PCs in all kinds of form factors, with varying capabilities and power consumption levels. Some mobile PCs have a normal 3 or 6-cell battery, while others have an extended 9-cell battery or another external battery that can be added to the computer. The challenge for Windows is to be energy efficient across the wide range of hardware in the Windows ecosystem. Looking at a modern laptop, here is where the power goes:

    Laptop power consumption.

    Desktops will have a similar power distribution, although higher in watts. The display also accounts for a large share of the energy consumed by a desktop PC.

    Operating System

    The Windows operating system can have just as big an influence as any other component in the platform. In engineering Windows 7 our goal is to make sure Windows provides a great foundation and energy saving opportunities within the operating system starting with configuration of power policy settings.

    The first place most users encounter Windows power management is through Power Options in control panel, or the battery meter on a mobile PC. For as long as Windows has had power management, Windows has had power schemes or power plans. The power plans allow you to easily change from one set of power settings to another, depending on your preferences.

    Power Options control panel.

    Within a power plan, you can change a variety of Windows power-saving features, including the inactivity timers for turning off the display and for automatically putting the system to sleep, or you can even create a custom power plan with the exact settings you want. The display and sleep idle features are very important for power savings and battery life. As the chart above shows, the display can consume approximately 40% of the power budget on a typical mobile PC and anywhere from 30-100+ Watts on a desktop PC.

    PC OEMs, especially makers of laptops, will often develop a custom set of power schemes that work to take advantage of differentiated hardware and unique software available on a specific model.  So often you will see power schemes that carry the name of your PC OEM in the title.  These have been developed by the OEM who is just as committed to energy efficiency.

    Quick tips: The easiest way to save power on a desktop PC is to reduce the display idle timeout to something very aggressive, such as 2 or 5 minutes. If you have a screen saver enabled, disable it to allow the display to turn off. On a mobile PC, the easiest way to extend battery life is to reduce the brightness of the display in addition to the idle timeout.  Also note that many of the new all-in-one machines use laptop components and thus, from a power management perspective, look like laptops.
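
    For developers or IT folks who prefer to make this kind of change from code rather than from the control panel, the same display timeout is reachable through the public power policy APIs. The sketch below is only an illustration of those APIs (PowerGetActiveScheme, PowerWriteACValueIndex, and PowerSetActiveScheme from powrprof.h); it is not how the Power Options control panel itself is implemented, and the five-minute value is just an example.

        // Illustrative sketch: set the AC "turn off the display after" timeout on the
        // currently active power plan to 5 minutes. Error handling is kept minimal.
        #include <initguid.h>
        #include <windows.h>
        #include <powrprof.h>
        #include <stdio.h>

        #pragma comment(lib, "PowrProf.lib")

        int main()
        {
            GUID *activeScheme = NULL;

            // Ask Windows which power plan is currently active.
            if (PowerGetActiveScheme(NULL, &activeScheme) != ERROR_SUCCESS)
                return 1;

            // Write the display power-down timeout (in seconds) used while on AC power.
            DWORD status = PowerWriteACValueIndex(NULL, activeScheme,
                                                  &GUID_VIDEO_SUBGROUP,
                                                  &GUID_VIDEO_POWERDOWN_TIMEOUT,
                                                  5 * 60);

            // Re-apply the plan so the new timeout takes effect immediately.
            if (status == ERROR_SUCCESS)
                status = PowerSetActiveScheme(NULL, activeScheme);

            printf("Display timeout update %s.\n",
                   status == ERROR_SUCCESS ? "applied" : "failed");

            LocalFree(activeScheme);
            return status == ERROR_SUCCESS ? 0 : 1;
        }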

    Windows manages processor performance dynamically, providing a boost when required and conserving power based on the current workload. For example, when the system is mostly idle, such as when I’m typing this blog post, there is no need to run the processor in its maximum performance mode; instead the processor voltage and frequency can be reduced to a lower value to save power. Similarly, the hard disk drive and a variety of other devices can be placed in low-power modes or turned off completely to save power when not in use.

    For Windows 7, we’re refining the user experiences for power management, focusing on reducing idle power consumption and supporting new device power modes.

    There are two reasons to optimize idle power consumption on the system. First, there are various times throughout the day when the PC is idle, and the more the system gets to idle and stays idle, the less power it uses. Second, idle power consumption is the ‘base’ power consumption for all other workloads. A system which consumes 15W at idle will consume additional power on top of that while in use for other workloads. By reducing the idle power consumption on the platform we will improve most other scenarios as well.

    The first step in reducing idle power is optimizing the amount of processor, memory and disk utilization. Reducing processor utilization is the most important, because the processor has a wide range of power consumption. When truly idle, the processor power consumption can be as low as 100-300mW. But, when fully busy, the processor can consume up to 35W. This large range means that even small amounts of processor activity can have a significant impact on overall power consumption and battery life. There are several areas of investment in Windows 7 that help reduce processor utilization and thereby enable longer periods of time in which the processor can enter low power modes. One of these investments is in the services running on the platform: having services start only when they are required, which we refer to as “Trigger-Start”. While these services are efficient and have minimal impact by themselves, the additive effect of several services can add up. We are looking at smart ways to manage these services within Windows while also making our investments in this area extensible for others who write services, so they can take advantage of this infrastructure. (Also note these are the same features that contribute to improvements in boot time as well.)

    To further help reduce idle power, we are focusing on core processor power management improvements. Windows scales processor performance based on the current amount of utilization, and making sure Windows only increases processor performance when absolutely required can have a big impact on power consumption.

    We have made several investments in the area of device power management including enhancements to USB device classes to enable selective suspend across a broad range of devices including audio, biometrics, scanners, and smart cards. These investments available in Windows 7 enable more energy efficient PC designs. We have also invested in improvements to power management for networking devices, both wired and wireless.

    While many of our investments in the core infrastructure improve energy efficiency across several scenarios, in Windows 7 we also focused on several key customer scenarios to identify resource utilization improvements to extend battery life on mobile platforms. One of these scenarios that we identified was media playback. The optimizations for DVD playback include reducing processor and graphics utilization, audio improvements, and optical disk drive enhancements. These improvements are already paying off and showing a significant increase in battery life across a broad range of mobile platforms, which we demonstrated at the WinHEC conference.

    Extensions

    Graphics devices, USB devices, device drivers, background services and installed applications are all extensions to Windows. Large improvements in power consumption and energy efficiency can be realized by improving the efficiency of platform extensions.

    For example, consider a single USB device that does not support Selective Suspend. That USB device itself may have very low power consumption (e.g., a fingerprint reader), but until that device enters the suspend state, the processor and chipset must poll the device at a very high frequency to see if there is new data. That polling prevents the processor from entering low power idle states, and on a typical business-class notebook reduces battery life by 20-25%.

    Devices are not the only area that requires effort for great energy efficiency. Application and service software can also have a big impact on power consumption. Take, for example, an application that increases the platform timer resolution using the timeBeginPeriod API. The platform timer tick resolution will be increased and the processor will not be able to efficiently use low power idle modes. We have observed that a single application that keeps the timer resolution increased to 1ms can have up to a 10% impact on battery life on a typical notebook PC.
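
    To make that pattern concrete, here is a minimal sketch of the well-behaved version of the scenario described above. timeBeginPeriod and timeEndPeriod are the actual winmm APIs; the work in the middle is a made-up placeholder. The point is simply that a raised timer resolution should be scoped to the work that genuinely needs it and then released.

        // Sketch: raise the platform timer resolution only for the duration of work that
        // needs 1ms granularity, then restore it so the processor can reach deep idle states.
        #include <windows.h>
        #include <mmsystem.h>

        #pragma comment(lib, "winmm.lib")

        static void DoLatencySensitiveWork()
        {
            Sleep(2000);  // placeholder for work that genuinely needs fine-grained timers
        }

        int main()
        {
            // Request a 1ms timer resolution for the duration of the sensitive work only.
            if (timeBeginPeriod(1) == TIMERR_NOERROR)
            {
                DoLatencySensitiveWork();

                // Always pair with timeEndPeriod; leaving the resolution raised keeps the
                // processor out of low-power idle modes and can cost noticeable battery life.
                timeEndPeriod(1);
            }
            return 0;
        }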

    We’re committed to helping improve the energy efficiency of Windows platform extensions by working closely with our partners. The strategy we’re employing is to provide rich tools to identify energy efficiency problems in hardware and software. For Windows 7, we’ve added a new inbox utility that provides an HTML report of energy efficiency issues—a “Top 10” checklist of power problems. If you want to try it out on Windows 7, run powercfg /energy at an elevated command prompt. Be sure to close any open applications and documents before running powercfg—this utility is designed to find energy efficiency problems when the system is idle. powercfg with the /energy parameter can detect USB devices that are not suspending and applications that have increased the platform timer resolution.

    For more advanced analysis, we have provided the Windows Performance Toolkit (http://www.microsoft.com/whdc/system/sysperf/perftools.mspx). The toolkit makes it very easy for software developers to observe the resource utilization of their applications, resolve performance bottlenecks and identify issues impacting energy efficiency.

    What about turning my PC off?

    So far, we have been talking about how to save power while the PC is ON. But, there are power savings to gain by entering low power modes when the PC is not in use. Many users simply Shut Down their computer when it is not in use, yet others use Sleep and sometimes Hibernate on mobile PCs. Windows features each of these power-saving modes so you can choose the right mode for how you use the system:

    • Sleep: All of the open programs, documents and files are preserved in system RAM and the rest of the system is powered off. Because only memory is powered, Sleep consumes a very small amount of power—typically less than 1W on a mobile PC and typically less than 3W on a desktop PC. The primary benefit of Sleep is that resume is very fast—most systems resume from sleep in less than 2 seconds.
    • Hibernate: All of the open programs, documents and files are copied from system RAM to the hard drive. The resulting file is called the Hiberfile. After RAM is copied into the Hiberfile, all of the PC is powered off. Hibernate is most often used on mobile PCs because it consumes nearly 0W on most laptops, and even if the battery does eventually drain, all of the open programs and documents are saved in the Hiberfile. As RAM continues to grow, and as some PCs have limited storage, Hibernate might not be the best option for folks.  (As a quick tip, the disk cleanup wizard, or powercfg –hibernate off, can remove the disk space pre-allocated to hibernate). 
    • Shut Down: This is a normal Windows shutdown; nothing is saved to memory or disk, and the system boots again the next time it is powered on.
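
    For completeness, the Sleep and Hibernate transitions described above can also be requested from software. The sketch below uses the public SetSuspendState API from powrprof.h; the command-line handling is invented for the example, and depending on system configuration the calling process may also need the shutdown privilege.

        // Sketch: request Sleep or Hibernate programmatically via SetSuspendState.
        // Pass "hibernate" on the command line to hibernate; otherwise the system sleeps.
        #include <windows.h>
        #include <powrprof.h>
        #include <string.h>

        #pragma comment(lib, "PowrProf.lib")

        int main(int argc, char **argv)
        {
            // First parameter: FALSE = Sleep (state kept in RAM), TRUE = Hibernate (state on disk).
            BOOLEAN hibernate = (argc > 1 && strcmp(argv[1], "hibernate") == 0);

            // Remaining parameters: do not force the transition, and leave wake events enabled.
            return SetSuspendState(hibernate, FALSE, FALSE) ? 0 : 1;
        }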

    Using an example desktop PC, we measured power consumption for Sleep, Hibernate, Shut Down and the basic ON state, with just the desktop shown and no open programs. We also measured resume latency—the amount of time to get the system back to the ON state.

    Comparing Sleep, Hibernate, and Boot Power v. Time to On

    The chart makes it pretty clear why we focus on Sleep reliability and performance, and encourage most people to use it when they are not using their computer. Sleep consumes nearly the same amount of power as Shut Down, but resumes the system in less than 2 seconds, instead of going through the boot process.  You can see that boot takes a significant amount of power, so when considering whether to turn off your machine to save power or to put it into a low power state, think about how long your machine will be out of use.  Nevertheless, as we’ve talked about in previous posts, boot (and shutdown) are obviously very important performance scenarios as we engineer Windows 7.
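
    To make the "how long will it be out of use" question concrete, here is a small back-of-the-envelope estimate. The wattage and boot-energy numbers below are invented placeholders for illustration only, not measurements from the chart above; plug in numbers from your own power meter to get a meaningful answer.

        // Rough break-even estimate: below this idle duration, staying asleep uses less
        // total energy than shutting down and booting again. All numbers are assumptions.
        #include <cstdio>

        int main()
        {
            const double sleepWatts      = 1.0;    // assumed draw while asleep (W)
            const double offWatts        = 0.5;    // assumed draw while shut down (W)
            const double bootExtraJoules = 2000.0; // assumed extra energy for shutdown + boot (J)

            // Sleeping costs (sleepWatts - offWatts) extra watts for the whole idle period;
            // shutting down costs a one-time bootExtraJoules. Break-even is where they match.
            const double breakEvenSeconds = bootExtraJoules / (sleepWatts - offWatts);
            printf("Break-even idle time: about %.0f minutes\n", breakEvenSeconds / 60.0);
            return 0;
        }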

    Next Steps

    We are committed to continuously improving the energy efficiency of Windows PCs, and have made significant improvements to core platform power management for Windows 7, as well as tools to identify where power is consumed. We still have more work to do, and look forward to our upcoming Beta release and monitoring incoming CEIP telemetry for energy efficiency and power management results.  Of course we continue to work very closely with the other members of the ecosystem as we all have much to contribute to energy efficiency—from the manufacturing, usage, and end of life of a PC, software, and peripherals.

    --Dean

  • Engineering Windows 7

    At Home with HomeGroup in Windows 7

    • 79 Comments

    Like many places we’ve spent the past few weeks under quite a bit of snow, which is pretty unusual for Seattle!  Most of us on the team took advantage of the snow time to install test builds of Windows 7 on our home machines as we finalize the beta for early 2009—I know I felt like I installed it on 7000 different machines.  We’re definitely looking forward to seeing folks kick the tires on the beta when it is available. For more information on the beta, please stay tuned to http://www.microsoft.com/windows/windows-7 which is where we will post information about participation.

    This post is about a Windows 7 feature that covers a lot of territory—it is about networking, user interface, sharing, media, printing, storage, search, and more.  HomeGroup is a way of bringing all these features together in a way that makes it possible for a new level of coolness in a home with multiple PCs running Windows 7.  A lot of us are the sysadmins for our own homes and for many others (friends and family).  We set up network topologies, configure machines, and set things up so they work—HomeGroup is designed to make that easier so it can be done without a volunteer sysadmin.  It makes for some challenges in how to describe the feature since the lack of such a feature has each of us creating our own private best practices or our own techniques for creating and maintaining a home network.  HomeGroup is about making this easier (or possible for everyone else) and at the same time giving you the tools to customize and manage—and no matter what, under the hood the file and printer sharing, media sharing, and networking you are already familiar with is there should you wish to stick with the familiar ways. HomeGroup is a deep feature that builds on a lot of new infrastructure/plumbing new to Windows 7, though in this post we’ll talk about it from the experience of setting up a network. 

    This is a feature you should just use and see working, rather than try to read about, since it covers so much territory in writing.

    This post is by Jerry Koh, a lead program manager in the Core User Experience team, with help from a number of folks across the dev team.  --Steven

    PS: From all of us on the Windows team, we wish you a very Happy New Year!

    You probably have seen or heard about HomeGroup by now. We demonstrated it at PDC this year during Steven’s keynote, it was mentioned a few times at WinHec, and some of you may have even tried it on your PCs with the PDC pre-beta build of Windows 7. HomeGroup represents a new end-to-end approach to sharing in the home, an area in which Windows has provided many features before --- the intuitive end to end is what’s new. HomeGroup recognizes and groups your Windows 7 PCs in a “simple to set up” secure group that enables open access to media and digital memories in your home. With HomeGroup, you can share files in the home, stream music to your XBOX 360 or other devices, and print to the home printer without worrying about technical setup or even understanding how it all works.

    This blog post is designed to give you a behind-the-scenes look at how we designed HomeGroup.

    Designed with you in mind

    The HomeGroup design goal, like other Windows 7 features, is informed by customer data and input. Whether from the Customer Experience Improvement Program (CEIP), the Windows Feedback Panel, focus groups or usability sessions, the data we collect enables us to focus on key areas where people feel the most pain. To begin figuring out how to solve file and printer sharing problems in the home, we started by looking at how people interact within a home environment. We wanted to learn not only how people used computers in the home, but also what social and behavioral norms were acceptable to see if there were parallels that we could bring into our design. We found the following:

    • People don’t allow strangers into their homes and usually lock their exterior doors. People within the confines of the home are typically considered to be trusted.
    • Within the home, doors to rooms are usually not locked, allowing members of the household to have free access. Books, photographs, magazines, CDs, and DVDs are often freely shared.
    • Social norms prevent most people from snooping into areas where they shouldn’t and, if needed, adding locks to rooms or drawers is relatively easy.

    The social model of the home also reflected how people want to share. When we discussed file and printer sharing in the home (or the concept of doing so), we found that people classify their content generally into four different buckets: private, public, parentally sensitive, and children’s stuff. Private content consists of business and financial data and is considered private mainly because people fear it will be accidentally deleted as the number of people who have access to it increases.

    People are typically quick to point out that they don’t have entertainment content they consider private, and they’re very open to free access to this content within the family. Families with children are often concerned about parentally sensitive content (inappropriate music, videos, etc.).  With digital cameras and camcorders dropping in price and being widely adopted, parents are primarily concerned about accidental deletion or loss of original copies of digital memories.

    These observations were very interesting to us; a model that mirrored real-world expectations for sharing could be more natural to people than something that layered different questions around security, permissions or rights. So we approached the HomeGroup sharing model with the concept of open access in the home. But, how can we define what the “home” really is? What assumptions can we make about security?

    Wireless, user passwords and when are you “at home”?

    One of the key advances we’ve had in home networking technology has been wireless. Standards like 802.11 have taken the home network by storm. Wireless router sales to consumers are higher than ever, and are projected to continue growing. As a wider segment of people buy wireless routers, concerns about security start to build up. When configured incorrectly, wireless networks can leave your entire home network vulnerable to malicious people or nosy neighbors. While there have been efforts to help people become more aware of securing wireless networks -- such as the “Windows Rally program” and various “Windows Connect Now” technologies--the general public still lags behind in setting up security for their wireless networks. We know from our customer data that more than half of all wireless networks, whether by choice or oversight, are set up as unsecured, and we know many of you are the first line of defense in helping your friends and families set up a secure home network. While trends all point to more awareness and improvement in the future, it isn’t clear whether we would ever reach 100% security on these networks. So how can we make sure home networks are secured?

    Another interesting factor is the usage of passwords on user accounts in the home. While people are more sensitive to security than ever before, we also observed that many don’t want to set up passwords for their Windows user accounts. They feel that it is a barrier to their use of the computer and yet another thing for them to remember or lose (as an aside, passwords are often viewed as a performance bottleneck in the home). From the data we obtained from the Windows Feedback Panel, a majority of users actually don’t use passwords in the home, opting for the simple model of opening the laptop lid and using Windows quickly. This parallels usage patterns on cell phones, where setting passwords on them would just be a deal-breaker for most people.

    A majority of the computers in our panel only had one primary user. While we all know that laptop sales have overtaken desktop sales in the last couple of years, this data tells us that people are buying PCs more for specific people rather than for a shared location. With laptops, the mobility factor has contributed to the “one person/one computer” landscape, again mirroring cell phone ownership patterns in which users almost never share a personal cell phone.  Clearly, as notebook lineups come to include even less expensive options, this will only increase, though we recognize it is still rather a luxury in most of the world.

    Percent of PCs with given number of accounts.

    So we wanted to find a model that could secure home sharing for people who don’t use passwords and could also take into account the more personal nature of PC usage.

    First we needed to figure out that people were “at home”. Luckily we didn’t need to look very far for some useful technology in this area. Windows Vista introduced a concept known as network location awareness (NLA). This enables the system to recognize when you’ve changed network locations, and it tags the location with a simple “Home”, “Work” or “Public” designation. While it was somewhat of a mystery in Vista what such a designation actually did (unless you read all the words), as we will see, this infrastructure became increasingly important as we built out the HomeGroup scenario.  In addition to ensuring the right firewall settings are configured for these locations, NLA also enabled us to be smarter about starting Windows services that are targeted at specific network locations. For example, the network discovery service does not start if you’re in a public location. However, Windows Vista didn’t have much distinction between the “work” and “home” network locations; they were essentially the same in terms of which firewall ports were opened and which Windows services were started.

    In Windows 7, we extended the concept of NLA and made “work” and “home” more distinct. In Windows 7, when you select the “home” network profile, we know that you are “at home”, and will start the essential services required for successful file and printer sharing in the home. This provides an intuitive entry point into HomeGroup, and once you are “at home” we start looking for (via network discovery) other Win7 PCs in the home. If you already have a HomeGroup active, we offer you the ability to join it; if not, you can create one.

    Set Up Windows Network Location dialog
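
    As an aside for developers, the network category that drives this behavior is exposed through the public Network List Manager COM API introduced with NLA. The sketch below only illustrates that public API (netlistmgr.h); it is not HomeGroup's internal plumbing, and the output formatting is arbitrary.

        // Sketch: query the category (private/home-or-work, public, or domain) of the
        // currently connected network(s) via the Network List Manager COM API.
        #include <windows.h>
        #include <netlistmgr.h>
        #include <cstdio>

        #pragma comment(lib, "ole32.lib")

        int main()
        {
            CoInitializeEx(NULL, COINIT_MULTITHREADED);

            INetworkListManager *mgr = NULL;
            if (SUCCEEDED(CoCreateInstance(__uuidof(NetworkListManager), NULL, CLSCTX_ALL,
                                           __uuidof(INetworkListManager), (void **)&mgr)))
            {
                IEnumNetworks *networks = NULL;
                if (SUCCEEDED(mgr->GetNetworks(NLM_ENUM_NETWORK_CONNECTED, &networks)))
                {
                    INetwork *net = NULL;
                    ULONG fetched = 0;
                    while (networks->Next(1, &net, &fetched) == S_OK)
                    {
                        NLM_NETWORK_CATEGORY category;
                        if (SUCCEEDED(net->GetCategory(&category)))
                        {
                            // NLM_NETWORK_CATEGORY_PRIVATE covers the Home and Work locations;
                            // the other values are PUBLIC and DOMAIN_AUTHENTICATED.
                            printf("Connected network category: %d\n", (int)category);
                        }
                        net->Release();
                    }
                    networks->Release();
                }
                mgr->Release();
            }

            CoUninitialize();
            return 0;
        }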

    Now that we know your PC is at home, we need to make sure that your data is secured from prying eyes. While wireless security is full of acronyms and technical solutions to security (WEP, WPA, TKIP, etc., to name a few), the fundamental model of wireless security is fairly simple for people to understand. The use of a physical key (copied several times) to enter one’s home is mirrored by the concept of typing in a shared key to gain access to the home wireless network. In the HomeGroup case, Windows will provide you with a pre-generated password out of the box, which you would hand over to any member of the home, and they could then join the group.

    Homegroup password reminder.

    While a password is provided by default, people can, at any time, visit HomeGroup in Control Panel to change their password to something they prefer. This flexible system performed very well in testing. When faced with the default password, people wrote it down, and shared it with others to set up the HomeGroup. You may ask, why don’t we enable people to set their own passwords by default? The answer is actually quite ironic, since that was our initial design. In testing, this concept raised quite a bit of alarm with people. It seems that most people generally have 1 or 2 passwords that they use for all their online or offline activities. When asked to input a user password for their HomeGroup, they gravitated towards using one of those, and then reacted with alarm when they realized that this password needs to be shared with other users in the home! People generally reacted better to the auto-generated password, since they knew to write it down and hand it around. The other interesting benefit we got from this was a reduction in the amount of time people would spend on the UI that introduced them to the HomeGroup concept. With a user-generated password, they had to grasp the HomeGroup concept, think about what password to set, and decide whether to accept the shared libraries default. Without having to provide a password, people had more time to understand HomeGroup, and their sharing decision – leading to a much more streamlined, private, and secure design.

    A home of equals with open access to libraries

    In addition to balancing security with ease of use, we also wanted to account for PCs becoming more personal. For this reason, we adopted the concept that each person in the HomeGroup is a peer of the others. Each person can thus join and leave the HomeGroup as they wish. Each person brings with them their choice of media/memories or files to share with the rest of the home. With a system based on equals and peers, the big benefit is a lack of management overhead; you don’t need one person to bear the management task of maintaining the group and dealing with membership tasks. This eliminates a primary source of complexity. All you need to gain entry is the shared password (just like the house key that each family member has).

    With a home full of equals, what would they share? As mentioned above, our customers indicated a desire to share media, both music and photos, they want to quickly and easily access within the home. So that is exactly what we implemented. HomeGroup will enable sharing the pictures, music, and video libraries from your Windows 7 PC by default. Another blog post will go into more detail on how libraries work, but in a nutshell, they provide Windows with a way to aggregate multiple physical locations on a computer into one unified view. This is a very powerful addition to the way you organize your data in Windows. Your Pictures library can now contain your <username>\pictures folder, the Public\pictures folder, as well as the f:\foo folder that contains other pictures (and perhaps is on a USB external hard drive). Viewing your picture library locally gives you a unified view of all the pictures in these locations and enables you to search, sort, and organize them in the same way you would within a folder, while also making sure you save new items to the right place physically.
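
    For those curious how this looks in code, Windows 7 also exposes libraries to developers through a public Shell API. The sketch below uses that API (IShellLibrary and the SHLoadLibraryFromKnownFolder/SHAddFolderPathToLibrary helpers from shobjidl.h) to add another physical folder to the Pictures library; the F:\foo path is just the example from this post, and error handling is minimal.

        // Sketch: add an extra folder (for example, one on an external drive) to the
        // user's Pictures library so it appears in the same aggregated view.
        #include <initguid.h>
        #include <windows.h>
        #include <shlobj.h>
        #include <shobjidl.h>

        #pragma comment(lib, "shell32.lib")
        #pragma comment(lib, "ole32.lib")

        int main()
        {
            CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

            IShellLibrary *library = NULL;
            HRESULT hr = SHLoadLibraryFromKnownFolder(FOLDERID_PicturesLibrary,
                                                      STGM_READWRITE,
                                                      IID_PPV_ARGS(&library));
            if (SUCCEEDED(hr))
            {
                // Add the example folder full of pictures as another library location.
                hr = SHAddFolderPathToLibrary(library, L"F:\\foo");

                // Persist the change so Explorer, WMP and Media Center all see it.
                if (SUCCEEDED(hr))
                    hr = library->Commit();

                library->Release();
            }

            CoUninitialize();
            return SUCCEEDED(hr) ? 0 : 1;
        }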

    In addition to media, some people might want to share their documents. We enable you to do this when you create or join a HomeGroup. This is great for people who want to collaborate with their family or in families where open access to documents is not a concern. The content is shared as “read-only” and can be selectively changed in Windows Explorer. We want the system to work the way you expect it to, with enough flexibility to do whatever you want later.

    Setting up Homegroup options.

    Easy to use

    Now that we have made it easy to set up, the next step is to make it easy to use. There were two aspects here that we want to emphasize for this post:

    1. Discovery of what is shared to me
    2. Access and usage of content that is shared to me

    In Windows Vista, this discovery was done through the network folder, which provides a complete, but highly technical, view of the resources available to you on the network. In addition, the network folder also contains other devices and additional media libraries that were shared on the network. This was confusing and difficult to understand for typical people. For example, if you shared your Pictures folder, it was actually found under the computer in \\<computername>\users\<username>\pictures. Typically, people would not know to look into that path for the correct folder.

    The concept of “libraries” introduced in Windows 7 gives us the design point to improve access to content across the network. While libraries aggregate the view on a local computer, if their locations were shared out to the network, they resulted in a more complicated view in the network folder for our users. Each location would be shared as a separate path, so taking the example above, sharing out the Pictures library means that you’ll see three shares under \\computername: Users\<username>\pictures, Users\public\pictures and foo. People would not benefit from the power of libraries on a network. Therefore, we made the concept of libraries work well even across a home network. We did this in two ways.

    First, people should have the same experience viewing a library whether on a local computer or across the network in a HomeGroup. We made sure that when you share the Pictures library in Windows 7, not only are all locations of the library shared, but the library resource is also shared and can be consumed by other computers in the HomeGroup. Effectively, members in a HomeGroup would see just one unified library with its aggregated views.

    Second, we found that accessing these resources in the network folder was too many clicks away and sufficiently buried such that people would find it impossible to discover. So we created a new HomeGroup node on the navigation pane in Windows Explorer. When you join a HomeGroup, other HomeGroup Win7 PCs will appear under the HomeGroup node in the Windows Explorer navigation pane. They’re one click away and always at your fingertips. In our tests, this really opened up discovery and usage of content throughout the HomeGroup. People easily discovered music on another computer, played it back, or looked at photos. Consumption of media thus becomes something easy and habit-forming in the home, all by joining a HomeGroup.

    Windows Explorer with Homegroup visible.

    With the introduction of libraries, we also had an opportunity to remove some of the confusion between specialized media libraries that are created by Windows Media Player (WMP) or Windows Media Center (WMC). In previous versions of Windows, WMP would scan the entire hard drive on the computer to find media files and add them into a media library, but in Windows 7 this no longer has to happen. Since you already have Windows Explorer libraries, WMP and WMC just use those. If you add new locations to the libraries in Windows Explorer, WMP and WMC now automatically just pick them up since they are using the same common library for the content. We thus eliminated the need for people to manage multiple views of their data using different user experiences. In addition, WMP will also show the media libraries shared by the HomeGroup as nodes in the WMP navigation pane, mirroring the discovery and access model of Windows Explorer. So the same set of HomeGroup users you see in Explorer by default will also be shown to you in WMP.

    Media Player with Homegroup

    Similar to WMP, in WMC, there is a new “shared” section when browsing media like recorded TV, pictures, music and video. HomeGroup computers show up in this section and can be accessed easily. The content of those libraries that have been shared with the HomeGroup will show up and be accessible in WMC. This includes music, pictures and videos, but also recorded TV--which means that you can now browse and stream non-DRM TV (that was recorded on another computer in your home) from your laptop!

    Media Center and Homegroup sharing.

    In addition to sharing out your media by default, we also wanted to make sharing additional content to the HomeGroup simple. In the past you had to worry about setting access control, as well as managing user passwords to make sharing work in the home. As we better understood how people interacted and worked at home, we realized that most were OK with enabling general access to all members of the household. So we built a few shortcuts into the sharing experience to enable this. Windows Explorer now features a new “share with” menu in the command bar:

    Share with context menu in Windows Explorer

    This enables you to select a library or folder and quickly share it with the home. It even enables you to make content writable by home members with one click, thus making it easy for people at home to collaborate on pictures or documents. This enables scenarios like importing digital photographs on one computer and editing them on another computer without making a copy. Once you share a folder with the home, it also shows up under the user in the HomeGroup node. This makes it incredibly easy to share anything on your computer with others in the home, and have them easily find and use it. We also recognize that some people need a way to quickly take some of this content off the network and make it private. The “share with” menu includes a shortcut to “share with nobody.” This option removes access to any content that has been previously shared and makes it private, thus enabling us to deliver on another requirement we observed people have in the home.

    Printers and other devices work with HomeGroup as well

    So what about devices? We’ve heard from you that sharing printers needs to be much simpler. While we have made it super easy to add printers to Windows, we needed to bring this simplicity to the home network. USB printers are still tied to a specific PC and can’t be shared out very easily. People typically email files to themselves to retrieve on another computer, or use USB keys to move their files to the computer with the printer. That had to change.

    In a HomeGroup, if you have installed a USB printer that has a Windows logo, the other people on the HomeGroup would get this printer automatically installed on their computers. They won’t see a prompt, they won’t need to answer any questions – it would just show up, and “just work.” For non-Windows logoed printers, we need to ask the user for permission to install the printer. HomeGroup members will see a prompt that a printer has been found in the HomeGroup. Clicking on this prompt installs the driver. The reason we had to do this was to ensure that users consent to 3rd party code that hasn’t been through the rigors of the logo program. One of the big benefits of this system is that you no longer need to find, download, and install the driver manually on multiple computers. The driver (for the correct architecture) is just copied from the computer that has the physical printer attached. This saves time and network bandwidth. With a HomeGroup, there will no longer be a need to think about sharing a printer. If you attach one to a computer in the HomeGroup, everyone else will get it installed and ready to use.

    Changing Homegroup settings to include printers.

    In addition to printers, devices like photo frames, game consoles (such as the Xbox 360), and media receivers (like the Roku Soundbridge) can benefit from some of the easy setup, as well as all the shared media in the home. For setup, we have reduced all the UI within Windows that deals with these devices to one simple checkbox:

    Enabling Homegroup sharing of devices such as photo frames, game consoles, and media receivers.

    Once you are part of a HomeGroup, we turn on Windows Media Player streaming support, so not only will your computer detect other WMP libraries on the network and allow playback from them, but devices will also be able to consume the shared media content. Another blog post will go into more detail on an exciting new feature called “play to”, which is also automatically enabled in a HomeGroup. It lets you send media from your PC to any supported picture frame or media receiver without ever dealing with the minimal UI on these devices; you can see it in the demonstration during the Day 1 keynote at WinHEC. If you check a box in HomeGroup in Control Panel, all existing and future devices in the home will detect and consume the media on the HomeGroup computer. All these previously complicated settings are now simplified with HomeGroup.

    Domain-joined computers can be part of a HomeGroup

    The laptop buying trend doesn’t stop at home. Large corporations are also moving toward buying laptops for their employees. There is research out there that outlines productivity improvements with employees using laptops. This makes sense as most of these laptop-wielding employees bring their computers home and put in those extra email hours. However, most corporations require that their laptops be joined to a corporate domain. This enables system administrators to manage and maintain these computers. Domain-joined laptops are thus subject to more restrictions than regular home computers are. It’s hard to even locate another PC on the home network to access or share files, let alone configure your domain-joined computer to print to a printer at home.

    With HomeGroup, we wanted to see if we could make things a little easier for these computers to come home. With more and more people working from home or having the option to these days, we wanted to see if they could enjoy some of the media content they have on the other PCs in the HomeGroup while they work. So in Windows 7, your domain-joined computer can join and participate in a HomeGroup. This enables the domain-joined computer to consume the media available on Windows 7 PCs in the home, watch TV through WMC, listen to music via WMP, or print to the printer on another HomeGroup PC, all by entering the same key you provide to other computers in the HomeGroup.

    The only difference is that sensitive content on the corporate laptop is never shared to the other HomeGroup computers. In essence, the domain-joined computer can see out (and consume) but no one can see in. We believe this meets the need for corporations to maintain security over documents while enabling our customers to enjoy a fun and interesting work environment at home, with access to all their media and home printers while they work. All you need is an existing HomeGroup, a domain-joined computer, and you can be rocking to your favorite tunes on your home network, while you catch up on all your important work.

    Of course the ability to join a HomeGroup is a policy that can be managed by corporate domains as you would expect.

    Create a HomeGroup with the Beta

    Phew! I hope this post has given you some insight into some of our design decisions, as well as the capabilities of the feature. HomeGroup will highlight some of the cool capabilities Windows has had for a long time in a friendly and easy fashion and also build on some of the new plumbing and infrastructure in Windows 7, and we are very excited with its possibilities. It is important to note that none of this would be possible without the help of people around the world who have provided us with opportunities to listen to their feedback, observe their actions, and take note of their needs.

    We know there will be lots of discussion around this feature once folks have had a chance to explore it.  It represents a new model for something that has arguably been very difficult to set up, so for most people seeing all of this work will be a first, and many of us reading this blog will be “mapping” our existing model onto this new experience.  The best thing to do is just see if you can let Windows 7 run and do the work.  After some use you can then dive into the customization and configuration available to you.

    To set up a HomeGroup you will need to install Windows 7 Beta on more than one PC on the same network and be sure to select Home as the network location if you want to automatically create (or join) a HomeGroup. 

    Thanks,

    Jerry

  • Engineering Windows 7

    Continuing our discussion on performance

    • 129 Comments

    We've talked some about performance in this blog and recently many folks have been blogging and writing about the topic as well. We thought it would be a good time to offer some more behind-the-scenes views on how we have been working on and thinking about performance because it is such an interesting topic for the folks reading this blog. Of course I've been using some pretty low-powered machines lately so performance is top of mind for me as well. But for fun I am writing this on my early holiday present--my new home machine is a 64-bit all-in-one desktop machine with a quad core CPU, discrete graphics, 8GB of memory, and hardware RAID, all running a pretty new build of Windows 7 upgraded as soon as I finished the out of box experience. Michael Fortin and I authored this post. --Steven

    Our beta isn't even out the door yet and many are already dusting off their benchmarks and giving them a whirl. As a reminder, we are encouraging folks to hold off on benchmarking our pre-production builds. Yet we've come to expect it will happen, and we realize it will lead many to conclude one thing or another, and at the same time we appreciate those of you who take the time to remind folks of the pre-ship status of the code. Nevertheless we're happy that many are seeing good results thus far. We're not yet as happy as we believe we will be when we finish the product as we continue to work on all the fundamental capabilities of Windows 7 as well as all the new features folks are excited about.

    Writing about performance in this blog is nearly as tricky as measuring it. As we've seen, directional statements are taken further than we might intend, and at the same time there are seemingly infinite ways to measure performance and just as many ways to perceive the same data. Ultimately, performance is something each individual feels is right--whether that means adequate or stellar might vary scenario to scenario, individual to individual. Some of the mail we've received has been clear about performance:

    • Boot-very very fast in all applications ( open-load applications) especially so many simultaneously!!!!! Hence, massive multicore ( quad-octa core cpu) , gpgpu for all!!!!!!!!!!!!
    • This is right time to do this properly, the users want speed, we'll give them speed.
    • i want to be able to run windows 7 extremely fast and still look good graphically on a asus aspire one netbook with these specs-1.5 ghz intel atom processor (single core) 1gb of ram
    • I hope that in addition to improvements in the gui and heart (I hope massive multicore + 64-bit + Directx 11 ..extreme performance, etc) for windows 7, modified the feature Flip 3d In Windows 7!!!!! Try to make a Flip 3D feature, really efficient and sensible in windows 7.
    • With regard to the performance thing, could you look at ways to reduce the penalty of having a lot of fonts installed.
    • From performance, boot up, explorer speed and UI experience , I hope the next version of windows delivers something new and innovating. I was playing with the new UI on the HP TouchPC and I have to say they did a great 1.0 job on the touch interface controls.
    • I do keep my fingers crossed for Windows 7 to be dramatically better in its performance than Windows Vista.
    • The biggest feature I see a lot of people wanting is performance.

    You can also see through some of these quotes that performance means something different to different people. As user-interface folks know, perceived performance and actual performance can often be different things. I [Steven] remember when I was writing a portion of the Windows UI for Visual C++ and when I benchmarked against Borland C++ at the time, we were definitely faster (measured by seconds). However the reviews consistently mentioned Borland as being faster and providing feedback in the form of counts of lines compiled flying by. So I coded up a line count display that flashed a lot of numbers at you while compiling (literally flashy so it looked like it couldn't keep up). In clock times it actually consumed a non-zero amount of time so we got "slower" but the reviewers then started giving us credit for being faster. So in this case slower actually got faster.

    There's another story from the past that is the flip side of this: the scrolling speed in Microsoft Word for DOS (and also Excel for Windows--same dynamic). BillG always pushed hard on visible performance in the "early" days and scrolling speed was one of those things that never seemed to be fast enough. Well, clever folks worked hard on the problem and subsequently made scrolling too fast--literally to the point that we had to slow it down so you didn't always end up going from page 1 to the end of the document just because you held down the page down key. It is great to be fast, but sometimes there is "too much speed".

    We have seen the feedback about what to turn off or adjust for better performance. In many ways what we're seeing are folks hoping to find the things that cause the performance to be less than they would like. I had an email conversation with someone recently trying to pinpoint the performance issues on a new laptop. Just by talking it through, it became clear the laptop was pretty "clean" (~40 processes, half the 1GB of RAM free, <5% CPU at idle, etc.) and after a few back and forths it became clear it was the internet connection (dial-up) that was actually the biggest bottleneck in the system. Many encourage us to turn off animations, graphics, or even color as there is a belief that these can be the root of performance problems. We've talked about the registry, disk space utilization, and even color depth as topics where folks see potential performance issues.

    It is important to consider that performance is inherently a time/space tradeoff (computer science sense, not science fiction sense), and on laptops there is the added dimension of power consumption (or CPU utilization). Given infinite memory, of course many algorithms would be very different than the ones we use. In finite memory, performance is impacted greatly by the overall working set of a scenario. So in many cases when we talk about performance we are just as much talking about reducing the amount of memory consumed as we are talking about the clock time. Some parts of the OS are much more tunable in terms of the memory they use, which then improves the overall performance of the system (because there is less paging). Other parts of the system are much more about the number of instructions executed (because perhaps every operation goes through that code path). We work a great deal on both!

    The reality of measuring and improving performance is one where we are focused at several "levels" in Windows 7: micro-benchmarks, specific scenarios, system tuning. Each of these plays a critical role in how we are engineering Windows 7 and while any single one can be measured it is not the case that one can easily conclude the performance of the system from a measurement.

    Micro-benchmarks. Micro-benchmarks are the sort of tests that stress a specific subsystem at extreme levels. Often these are areas of the code that are hard to see the performance of during usage as they go by very fast or account for a small percentage of time during overall execution. So tests are designed to stress part of the system. Many parts of the system are subjected to micro-benchmarking such as the file system, networking, memory management, 2D and 3D graphics, etc. A good example here is the work we do to enable fast file copying. There is a lot of low level code that accounts for a (very significant) number of conditions when copying files around, and that code is most directly executed through XCOPY in a command window (or an API). Of course the majority of copy operations take place through the explorer and along with that comes a progress indicator, cancellable operation, counting up bytes to copy, etc. All of those have some cost along with the benefit. The goal of micro-benchmarks is to enable us to best understand the best possible case and then compare it to the most usable case. Advanced folks always have access to the command line for more power, control, and flexibility. It is tempting to measure the performance of the system by looking at improvements in micro-benchmarks, but time and time again this proves to be inadequate as routine usage covers a much broader code path and time is spent in many places. For Internet Explorer 8 we did a blog post on performance that went into this type of issue relative to script performance. At the other end of the spectrum we definitely understand the performance of micro-benchmarks on some subsystems will be, and should be, carefully measured--the performance of DirectX graphics is an area that gamers rely on, for example. It is worth noting that many micro-benchmarks also depend heavily on a combination of Windows OS, hardware, and specific drivers.
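    To make the micro-benchmark idea concrete, here is a minimal sketch of the sort of harness one could write to time the raw copy path in isolation--no progress UI, no Explorer. The paths and iteration count are illustrative only, and this is not the harness we use internally.

    ```cpp
    // Hypothetical micro-benchmark sketch: time the raw CopyFile path with no
    // shell UI involved.  File names and the iteration count are illustrative.
    #include <windows.h>
    #include <stdio.h>

    int wmain()
    {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);

        const wchar_t *src = L"C:\\temp\\source.bin";   // assumed test file
        const int iterations = 50;

        QueryPerformanceCounter(&start);
        for (int i = 0; i < iterations; ++i)
        {
            wchar_t dest[MAX_PATH];
            swprintf_s(dest, L"C:\\temp\\copy%03d.bin", i);
            CopyFileW(src, dest, FALSE);   // raw API path, no progress UI
        }
        QueryPerformanceCounter(&end);

        double ms = (end.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
        wprintf(L"%d copies took %.1f ms (%.2f ms/copy)\n",
                iterations, ms, ms / iterations);
        return 0;
    }
    ```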

    Specific scenarios. Most people experience the performance of a PC through high level actions such as booting, standby/resume, and launching common applications. These are topics we have covered in previous posts to some degree. In engineering Windows 7, each team has focused on a set of specific scenarios that we wanted to make better. This type of work should be demonstrable without any elaborate setup or additional tools. This work often involves tuning the code path for the number of instructions executed, looking at the data allocated for the common case, or understanding all the OS APIs called (for example registry lookups). One example that comes to mind is the work that we have going on to reduce the time to reinsert a USB device. This is particularly noticeable for UFDs (USB flash drives) or memory cards. Windows of course allows the whole subsystem to be plumbed by unique drivers for a specific card reader or UFD; even if most of the time they are the same, we still have to account for the variety in the ecosystem. At the start of the project we looked at a full profile of the code executed when inserting a UFD and worked this scenario end-to-end. Then systematically each of the "hot spots" was worked through. Another example along these lines was playback of DVD movies, which involves not only the storage subsystem but the graphics subsystem as well. The neat thing about this scenario is that you also want to optimize for CPU utilization (which you might not even notice while playing back the movie) as that dictates the power consumption.

    System tuning. A significant amount of performance work falls under the umbrella of system tuning. To ascertain what work we do in this area, we routinely look at the overall performance of the system relative to the same tests on previous builds and previous releases of Windows. We're looking for things that we can do to remove operations that take a lot of time/space/power or things that have "grown" in one of those dimensions. We have build-to-build testing we do to make sure we do not regress, and of course every developer is responsible for making sure their area improves as well. We left no stone unturned in terms of investigating opportunities to improve. One of the areas many will notice immediately when looking at the pre-beta or beta of Windows 7 is the memory usage (as measured by task manager, itself a measurement that can be misunderstood) of the desktop window manager. For Windows 7, a substantial amount of architectural work went into reducing the amount of memory consumed by the subsystem. We did this work while also maintaining compatibility with Windows Vista drivers. We did similar work on the desktop search engine where we reduced not just the memory footprint, but the I/O footprint as well. One of the most complex areas to work on was the improvements in the taskbar and start menu. These improvements involved substantial work on critical sections ("blocking" areas of the code), registry I/O, as well as overall code paths. The goal of this work is to make sure these UI elements are always available and feel snappy.
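    As a side note on the "task manager can be misunderstood" point, a process really has several memory numbers. Here is a small sketch (not our internal tooling) that queries two of them for a given PID--working set versus private commit--using the public GetProcessMemoryInfo API; which PID you pass (the desktop window manager's, for instance) is up to you.

    ```cpp
    // Sketch: query two different "memory" numbers for a process to show why a
    // single Task Manager column can be misleading.  The PID is an assumption--
    // pass the PID of any process you are curious about on the command line.
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #pragma comment(lib, "psapi.lib")

    int wmain(int argc, wchar_t *argv[])
    {
        if (argc < 2) { wprintf(L"usage: memquery <pid>\n"); return 1; }
        DWORD pid = _wtoi(argv[1]);

        HANDLE process = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                     FALSE, pid);
        if (!process) { wprintf(L"OpenProcess failed: %lu\n", GetLastError()); return 1; }

        PROCESS_MEMORY_COUNTERS_EX counters = { sizeof(counters) };
        if (GetProcessMemoryInfo(process, (PROCESS_MEMORY_COUNTERS *)&counters,
                                 sizeof(counters)))
        {
            // Working set: physical pages currently resident (shared pages included).
            // Private usage: commit charge unique to this process.
            wprintf(L"Working set:   %Iu KB\n", counters.WorkingSetSize / 1024);
            wprintf(L"Private bytes: %Iu KB\n", counters.PrivateUsage / 1024);
        }
        CloseHandle(process);
        return 0;
    }
    ```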

    It is worth noting that there are broad based measures of performance as well that drive the user interface of a selection of applications. These too have their place--they are best used to compare different underlying hardware or drivers with the same version of Windows. The reason for this is that automation itself is often version dependent and because automation happens in a less than natural manner, there can be a tendency to measure these variances rather than any actual perceptible performance changes. The classic example is the code path for drawing a menu drop down--adding some instructions that might make the menu more accessible or more appealing would be impossible to perceive by a human, but an automated system that drives the menu at super human speed would see a change in "performance". In this type of situation the effect of a micro-benchmark is magnified in a manner inconsistent with actual usage patterns. This is just a word of caution on how to consider such measurements.

    Given this focus across different types of measurement it is important to understand that the overall goal we have for Windows 7 is for you to experience a system that is as good as you expect it to be. The perception of performance is just as important as specific benchmarks and so we have to look to a broad set of tools as above to make sure we are operating with a complete picture of performance.

    In addition to these broad strategies there are some specific tools we've put in place. One of these tools, PerfTrack, takes the role of data to the next level with regard to performance and so will play a significant role in the beta. In addition, it is worth reminding folks about the broad set of efforts that go into engineering for performance:

    • We’ve been building out and maintaining a series of runs that measure thousands of little and big things. We’ve been running these before developer check-ins and maintaining performance and responsiveness at a level above which all that self-host our builds will find acceptable. These gates have kept the performance and responsiveness of our daily builds at a high enough level that thousands have found it possible to run their main systems on Windows 7 for extended periods of time, doing their normal daily work.
    • We’ve been driving down footprint, reducing our service costs, improving the efficiency of key code paths, refactoring locks to improve scalability, reducing hangs, improving our I/O efficiency and much more. These are scenario driven based on real world execution paths we know from our telemetry to be common.
    • We’ve been partnering closely with the top OEMs, ISVs and IHVs. Our tools have been made public, we’ve held numerous training sessions, and we’ve been focusing heavily on shipping systems in an effort to insure customers get great performing systems out of the box, with great battery life too.
    • Within the Windows dev team, we’ve placed a simple trace capturing tool on everyone’s desktop. This desktop tool allows each person to run 24x7 with performance tracing enabled. If anything seems slow or sluggish, they can immediately save the last minute-or-so of activity and send it for automated analysis. Additionally, a team of people visually inspect the traces for new issues or issues not yet decipherable by our automation. The traces are incredibly rich and allow us to get to the root of top issues most of the time.
    • For all Pre-Beta, Beta and RTM users, we’ve developed a new form of instrumentation and have used it to instrument over 500 locations in the operating system and inbox applications. This new instrumentation is simple in concept, but revolutionary in result. The tool is called PerfTrack, and it has helped confirm our belief that the client benchmarks aren’t too informative about real user responsiveness issues.

    Perftrack is a very flexible, low overhead, dynamically configurable telemetry system. For key scenarios throughout Windows 7, there exist “Start” and “Stop” events that bracket the scenario. Scenarios can be pretty much anything, including common things like opening a file, browsing to a web page, opening the control panel, searching for a document, or booting the computer. Again, there are over 500 instrumented scenarios in Windows 7 for Beta.

    Obviously, the time between the Start and Stop events is meant to represent the responsiveness of the scenario, and clearly we’re using our telemetry infrastructure to send these metrics back to us for analysis. Perftrack’s uniqueness comes not just from what it measures but from the ability to go beyond just observing the occurrence of problematic response times. Perftrack allows us to “dial up” requests for more information, in the form of traces.
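    PerfTrack itself is internal instrumentation, so we can't show its API here; the sketch below is only a conceptual illustration of the bracket-and-threshold idea using a hypothetical ScenarioTimer class. None of these names, nor the 5-second threshold, are the real PerfTrack interfaces.

    ```cpp
    // Conceptual sketch only: a hypothetical "Start/Stop with a threshold"
    // bracket around a scenario.  Names and the 5000 ms threshold are made up.
    #include <windows.h>
    #include <stdio.h>

    class ScenarioTimer
    {
        LARGE_INTEGER m_start, m_freq;
        const wchar_t *m_name;
        double m_thresholdMs;
    public:
        ScenarioTimer(const wchar_t *name, double thresholdMs)
            : m_name(name), m_thresholdMs(thresholdMs)
        {
            QueryPerformanceFrequency(&m_freq);
            QueryPerformanceCounter(&m_start);     // the "Start" event
        }
        ~ScenarioTimer()                           // the "Stop" event
        {
            LARGE_INTEGER end;
            QueryPerformanceCounter(&end);
            double ms = (end.QuadPart - m_start.QuadPart) * 1000.0 / m_freq.QuadPart;
            if (ms > m_thresholdMs)
            {
                // In a real system this is where a richer trace could be
                // captured and (with consent) sent back for analysis.
                wprintf(L"Scenario '%s' took %.0f ms (over threshold)\n", m_name, ms);
            }
        }
    };

    void OpenXyz()
    {
        ScenarioTimer timer(L"Open XYZ", 5000.0);  // bracket the scenario
        Sleep(100);                                // ... the real work goes here
    }
    ```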

    Let’s consider the distribution below and, for fun, let's pretend the scenario is opening XYZ. For this scenario, the feature team chose to set some goals for responsiveness. With their chosen goals, green depicts times they considered acceptable, yellow represents times they deemed marginal, and red denotes the poor times. The times are in milliseconds and shown along the X axis. The Hit Count is shown on the Y axis.

    Graph measuring responsiveness goals and real world data.

    As can be seen, there are many instances where this scenario took more than 5 seconds to complete. With this kind of a distribution, the performance team would recommend that we “dial up” a request for 100+ traces from systems that have experienced a lengthy open in the past. In our “dialed up” request, we would set a “threshold” time that we thought was interesting. Additionally, we may opt to filter on machines with a certain amount of RAM, a certain class of processor, the presence of a specific driver, or any number of other things. Clients meeting the criteria would then, upon hitting the “Start” event, configure and enable tracing quickly, and potentially send the trace back to us if the “Stop” event occurred after our specified “threshold” of time.

    As you might imagine, a good deal of engineering work went into making this end to end telemetry and feedback system work. Teams all across the Windows division have contributed to make this system a reality and I can assure you we’ll never approach performance the same now that we have these capabilities.

    As a result of focusing on traces and fixing the very real issues revealed by them, we’ve seen significant improvements in actual responsiveness and have received numerous accolades on Windows 7. Additionally, I’d like to point out that these traces have served to further confirm what we’ve long believed to be the case.

    This post provides an overview of the ways we have thought about performance with some specifics about how we measure it throughout the engineering of Windows 7. We believe that throughout the beta we will continue to have great telemetry to help make sure we are achieving our goals and that people perceive Windows 7 to perform well relative to their expectations.

    We know many folks will continue to use stop watches, micro-benchmarks, or to drive automated tests. These each have their place in your own analysis and also in our engineering. We thought given all the interest we would talk more about how we measure things and how we're engineering the product.

    --Steven and Michael

  • Engineering Windows 7

    Accessibility in Windows 7

    • 64 Comments

    This post is from Michael Bernstein, a development lead on the User Interface Platform team where he focuses on accessibility. Accessibility is the term we apply to the APIs and features that enable Windows to be used, to be accessible, by as many people as possible so that, regardless of physical or cognitive abilities, everyone has the ability to access the functions of Windows. To enable this, Windows includes both built-in accessibility utilities as well as APIs used by third party assistive technology aids and by application developers to make sure their software is also accessible. This is a topic that is extremely important to Microsoft and one that is a key tenet in the engineering of Windows 7. Microsoft also has a corporate-wide group dedicated to making sure that PCs are easier to see, hear, and use. You can read more about Microsoft’s accessibility initiatives on http://www.microsoft.com/enable/. --Steven

    Hi, I’m the development lead for Accessibility and Speech Recognition experiences in Windows 7, and I wanted to write about how we thought about Accessibility in Windows 7. 

    We wanted to make Windows 7 the most accessible operating system that Microsoft has ever produced.  It became clear as we planned this release, however, that the notion of Accessibility is not as simple as it may appear.  It is tempting to think about Accessibility like Security: either you have a known failure, or your system is believed to be secure/accessible.  This definition turns out to be limited, though.  How do you deal with the fact that the needs of customers who are blind are very different from the needs of customers who are deaf?  The needs of customers who are blind are even different from those of customers with reduced vision: a magnification tool is useless for one group and crucial for the other. And what do we make of cases where something is technically accessible but practically frustrating, like a common user scenario that takes 36 keystrokes to execute?  Clearly, Accessibility wasn’t going to boil down to a simple yes/no question.  It is really more like a particular kind of usability, but usability for a specific set of audiences with individual needs.

    Since the questions we were asking were complex, the answers ended up being complex, too.  We chose a four-part strategy to improve Accessibility in Windows 7.

    I. Build a firm foundation with UI Automation

    In Windows Vista, Microsoft delivered a new core component for Accessibility called UI Automation.  UI Automation enables a user’s assistive technology (AT) to programmatically drive the UI of an application, and allows applications to expose their accessible functionality in a richer way than was possible in previous versions of Windows.  More questions can be asked about a piece of UI, and that UI can be manipulated in richer ways.  UI Automation also introduced the idea of Control Patterns: any given piece of UI can decide how it should be controlled.  Buttons expose the Invoke pattern, indicating that they can be pushed; Combo Boxes expose ExpandCollapse, indicating that they can be opened and closed.  We let different controls be different, instead of trying to force them all into the same mold.  All this was introduced in Windows Vista and adoption is still ongoing.

    In Windows 7, we invested in improving the performance of the UI Automation system and created a new, native-code API for UI Automation to make sure that it can be used effectively by a wide range of assistive technology software.  Now applications written in C++, as well as those written using the .NET Framework, can take advantage of UI Automation. 
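    To give a flavor of what the native-code API looks like from the client side, here is a minimal sketch that finds an element named "OK" under the desktop and asks it for the Invoke control pattern. The element name, and the assumption that it supports Invoke, are for illustration only--real AT code does a great deal more.

    ```cpp
    // Minimal UI Automation client sketch (native C++ API, Windows 7).
    // Finds the first element named "OK" and invokes it.
    #include <windows.h>
    #include <uiautomation.h>

    int wmain()
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);

        IUIAutomation *automation = NULL;
        HRESULT hr = CoCreateInstance(__uuidof(CUIAutomation), NULL,
                                      CLSCTX_INPROC_SERVER,
                                      __uuidof(IUIAutomation), (void **)&automation);
        if (SUCCEEDED(hr))
        {
            IUIAutomationElement *root = NULL;
            automation->GetRootElement(&root);

            // Build a condition: Name == "OK" (the name is a made-up example).
            VARIANT name; name.vt = VT_BSTR; name.bstrVal = SysAllocString(L"OK");
            IUIAutomationCondition *condition = NULL;
            automation->CreatePropertyCondition(UIA_NamePropertyId, name, &condition);

            IUIAutomationElement *button = NULL;
            if (root) root->FindFirst(TreeScope_Descendants, condition, &button);
            if (button)
            {
                // Ask for the Invoke control pattern and "push" the button.
                IUIAutomationInvokePattern *invoke = NULL;
                button->GetCurrentPatternAs(UIA_InvokePatternId,
                                            __uuidof(IUIAutomationInvokePattern),
                                            (void **)&invoke);
                if (invoke) { invoke->Invoke(); invoke->Release(); }
                button->Release();
            }
            if (condition) condition->Release();
            VariantClear(&name);
            if (root) root->Release();
            automation->Release();
        }
        CoUninitialize();
        return 0;
    }
    ```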

    We also did a bunch of work to make sure that the UI Automation system was integrated even more closely with the legacy Microsoft Active Accessibility (MSAA) system and developed new bridging techniques between the best of the new and the old technologies. UI Automation clients can read Accessibility information from MSAA applications, and vice versa, to ensure maximum Accessibility regardless of which accessibility API an application used originally. Since the UI Automation and MSAA systems cooperate closely in many scenarios, we decided to name the combination of the two, calling it the Windows Automation API. This architecture forms the foundation for the rest of our Accessibility effort, and we’re pleased to have this Accessibility foundation in Windows 7.

    II. Improve our included Accessibility utilities

    We also improved the Accessibility utilities that we include in the box with Windows.  Microsoft works closely with many different AT software companies who deliver software to make Windows more accessible to customers with disabilities, but we also include a set of utilities to make sure that our customers’ early experiences are accessible, even before installing any other software. We decided to enhance two of those utilities in Windows 7: the On-Screen Keyboard and the Magnifier.

    The most noticeable change to the On-Screen Keyboard is the improved look and feel, but there are also more subtle enhancements.  The appearance of this utility had not changed since Windows XP; our customers were also asking for it to be resizable.  We addressed both of these by working closely with Tablet developers to share a common code base between the Tablet Soft Keyboard and the On-Screen Keyboard.  Both keyboards now have an attractive appearance that is in tune with Windows 7 and both are now resizable.  The keyboards still are distinct, though, because customers use them differently: Tablet users may want to switch dynamically between handwriting and typing, whereas On-Screen Keyboard users may need modes where they can hover or scan to keys, if they have disabilities that prevent them from clicking.  Along these lines, we also added basic text prediction to help customers with disabilities enter text more quickly.  If you have ever tried typing with an on-screen keyboard, you can appreciate how significantly text prediction can improve text entry speed.

    The Magnifier came in for a deeper overhaul.  The Magnifier in Windows Vista and Windows XP was not an intuitive experience: when you pointed at part of the screen, the magnified content appeared in a separate window, usually docked at the top of the screen.  You had to point at one place and look at another.  We considered two basic solutions to this problem: you could zoom into the entire screen or you could make the magnified area follow the pointer while leaving the rest of the screen the same.  These became our two primary modes for the Windows 7 Magnifier: Full-screen mode and Lens mode.

    Full-screen mode is great when you want to increase the size of everything on the screen at once.  As you move the mouse or keyboard focus around the middle of the screen, the view stays still; if you move towards the edge, the Magnifier scrolls the view to keep up.  One downside of this mode is that you can lose track of your context.  To address that usability issue, we added a context animation that zooms out to show you where your work area is relative to the whole screen, and then zooms back in. 

    Lens mode, on the other hand, is nice when you just want to zoom in on one particular thing.  In this mode, the lens centers on the mouse pointer, which feels much like using a magnifying glass.  You can re-size the lens to be very wide and short, which can be nice if you are reading a document and want to magnify it line by line.  We based our design on the popular Microsoft IntelliPoint magnifier, a design you can now enjoy with any mouse.

    We also addressed customer feedback about the Magnifier window taking up too much space on the screen.  We moved the most commonly used controls like zoom in/out to a small toolbar, which fades out to a semi-transparent watermark when you aren’t using it.  The remaining options are available in an Options dialog when you need them.  Last, we gave almost everything a keyboard shortcut, so if you really don’t want to see the UI, you don’t have to use it.  Win-+ will zoom you in any time you are using Windows 7.
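    For AT developers, the same full-screen magnification engine is also exposed programmatically through the Magnification API in Windows 7. A minimal sketch follows; the 2x factor, the zero offsets, and the five-second pause are arbitrary, and error handling is trimmed.

    ```cpp
    // Sketch: driving the full-screen magnifier via the Magnification API
    // (magnification.dll, Windows 7).  Values chosen purely for illustration.
    #include <windows.h>
    #include <magnification.h>
    #pragma comment(lib, "magnification.lib")

    int wmain()
    {
        if (!MagInitialize())
            return 1;

        // Zoom the whole screen to 200%, keeping the top-left corner on screen.
        MagSetFullscreenTransform(2.0f, 0, 0);
        Sleep(5000);                       // look around for a few seconds

        // Back to 100% and clean up.
        MagSetFullscreenTransform(1.0f, 0, 0);
        MagUninitialize();
        return 0;
    }
    ```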

    These tools directly improve Accessibility for customers with low vision and dexterity disabilities. It should be obvious, but making the PC easier to see or interact with benefits everyone, and so these two examples also show the broad appeal of AT tools – at the PDC we showed both the On-Screen Keyboard and the Magnifier and I think it is fair to say everyone saw the benefit of using these tools themselves, regardless of abilities.

    III. Make it easier to build Accessible software

    Windows APIs cannot provide Accessibility all by themselves; it is vital for Windows-based applications to do their part in providing Accessibility data for AT programs to use.  For example, a screen reader may sound excellent, but if it can’t read your favorite web browser, what good is that?  Assistive tools like screen readers and magnifiers are clients of the Accessibility system, while the applications that you want to use, like web browsers and word processors, are providers.  It takes both to make the whole experience accessible--you need both a high-quality client and a well-written provider to have a good Accessible experience.  There are far more providers than clients in the software ecosystem, so it is hard for us to work one-on-one with every provider to make sure they are well-written.

    To address this challenge, our team developed the UI Accessibility Checker (AccChecker for short) and UI Automation Verify (UIA Verify) utilities, which can scan an application (a provider, really) and report on common Accessibility problems.  Software developers can use AccChecker and UIA Verify to detect problems in their provider code before a customer ever uses it.  Quality assurance engineers can use them to verify the quality of their firm’s work.  We believe this is so important that we released AccChecker and UIA Verify as open-source software to make it available to the widest possible audience.  If you are not a programmer, you may never use these utilities directly, but you may well benefit from the bugs they helped to eliminate before they ever reached you. 

    IV. Plan for Accessibility from Day 1

    To make sure that Windows features themselves were good providers, we borrowed an idea from the Software Development Lifecycle, risk assessment.  Before a line of code was written, each planned Windows 7 feature was rated on its Accessibility risk.  Features that use more basic, off-the-shelf common controls are usually more accessible because Windows provides built-in providers for off-the-shelf components; features that do fancy, custom drawing have more work to do.  This planning process made each team aware of how much accessibility risk it was taking on, so that they could plan appropriately.  Once the features were all rated, the list was sorted by risk so that our team could reach out to teams with high-risk features and make sure that they had the resources and tools they needed to make their feature properly accessible.  We also ensured that they received more hands-on testing and validation.  As a result, most Windows features are more accessible than they have been in previous releases, making for a better overall customer experience.

    To wrap up, we've emphasized Accessibility in engineering Windows 7.  We’ve made good progress on improving the core architecture for Accessibility and enhancing the included tools like On-Screen Keyboard and Magnifier.  The AccChecker and UIA Verify tools have made it much easier to validate applications to ensure that they will be compatible with current assistive tools as well as future tools based on the Windows Automation API.  Our approach to Accessibility for the features and providers in Windows itself has become more thorough, consistent and integrated, thanks to the hard work of hundreds of engineers across the company.  We’re proud of what we have accomplished in Windows 7 and hope that it will help customers with disabilities to realize their full potential and have a more enjoyable experience with Windows.

    --Michael

  • Engineering Windows 7

    The Windows 7 Taskbar

    • 202 Comments

    Happy Birthday Windows!  Given all the interest in the most used user-interface of Windows we thought it would be good to take a look back and see how we got to Windows 7.  --Steven

    We were very excited to unveil elements of the Windows 7 desktop at this year’s Professional Developers Conference (as seen in the Welcome to the Windows 7 Desktop session, among others). In previous posts (User Interface: Starting, Launching, and Switching and Follow-up: Starting, Launching, and Switching) we looked at the history, anatomy and areas for improvement of the taskbar. In this post, we will continue the conversation. Don’t let looks fool you though—the UI may feel new to Windows for some of you or old hat for others, but rest assured it represents a careful evolution that strives to address customer feedback while retaining its familiar Windows DNA.

    It was 23 years ago on November 20, 1985 when Windows first shipped. As it just so happens, this first Microsoft graphical shell actually holds relevance to this post as it surfaced one of the industry’s first taskbar-like concepts.

    Windows 1.01

    Fig 1 Windows 1.01: Icons at the bottom of the screen represent running windows

    Windows 1.0 supported zoomed (full-screen), tiled and icon (minimized) windows. Since there was no support for overlapping [that big debate between charless and billg, Steven], a dedicated portion of the desktop was kept visible at the bottom of the screen to surface non-tiled and non-zoomed windows. By minimizing a window or dragging it to the bottom of the screen, the person was able to populate this rudimentary taskbar with a large icon corresponding to the running window. She could then get back to this window by clicking or dragging this icon to the desktop. As simple as this mechanism seems today, it cemented an important concept that is with us even in Windows 7—when people switch between tasks, they are really switching between windows. Although it took Windows 95 to introduce a mature taskbar with launching, switching and notification functionality, the experience of surfacing and switching between windows via a dedicated region at the bottom of the screen is as ancient as Windows 1.0.

    Setting Goals

    In the previous taskbar posts, we discussed some high-level principles we defined after digesting the mountain of data and feedback on the taskbar. Here’s a more detailed look at the goals we identified and how we began to frame feature concepts.

    Things you use all the time are at your fingertips

    It is easy to get to the programs and destinations you use all the time, with less mouse movement and fewer clicks.

    Accessing commonly used programs within a single click required us to enrich Quick Launch by increasing its presence on the taskbar and making more top-level room for pinned items. We began looking into how Quick Launch interacted with the taskband and how launching and switching were sometimes separate and other times duplicative. For example, almost all single-instance programs in Windows interpret an attempt to re-launch them as a switch if they are already running. So, clicking Outlook’s icon in Quick Launch would merely switch to the program if it was already running and present in the taskband. To make room for more items on the taskbar, we knew we had to remove some of the redundancy and free up valuable real-estate.

    When researching and modeling a person’s workflow, we came to realize that there were three basic steps that a person frequently repeats. First, she finds the program and launches it. Then, she uses the program’s UI to open a file she wants to work on. Finally, she gets to work. We asked ourselves whether we could help people jump directly to these items by skipping the first two steps. We called the files, folders, links, websites and other items that programs create or consume “destinations,” as they represent where the person is ultimately navigating to. We decided that these destinations should also be easily accessible from the taskbar. However, for real success and adoption, we needed to think through how destinations could be effectively surfaced to the person without the need for manual customization or requiring developers to do lots of work.

    Manage your windows with confidence

    You can switch to the right window quickly without mistakes and effortlessly position windows the way you want them.

    This goal spoke to the very heart of the taskbar—the ability to switch between windows. This challenged us with seeking a more predictable method of surfacing windows on the taskbar, meaningful use of text and a reliable method of helping people consistently switch with confidence. We’ve had text on the taskbar for years and Vista introduced thumbnails, but customer feedback informed us that there was room for improvement. Interestingly, we found inspiration in old features such as Windows XP’s window grouping and Alt-Tab’s visual layout of individual windows.

    During our investigation, we also spent time looking into why a person would switch windows in the first place. Two interesting scenarios emerged—one in which she needs to get some information from a window (e.g. getting a phone number) and another in which she needs to interact with a window’s options (e.g. controlling background music). We wondered whether we could address these task switching cases in a novel way—by actually removing the need to switch completely.

    You are in control

    The desktop reflects your style. You get to personalize the experience, choosing what is important to you, including how and when you receive notifications.

    By far the biggest target of feedback, the Notification Area had to put control back in the hands of people. We decided that instead of the opt-out model that required the person to clean up this area, we would start with a clean experience. Only system icons appear by default, and people can then customize this area to their liking.

    Clean and lightweight

    The desktop experience feels organized, lightweight, open and is a pleasure to use. Visuals and animations are delighters the first time and every time.

    A successful product is more than the utility it serves—it is also an experience. From the very start we wanted the taskbar, and the desktop as a whole, to draw an emotional response from the person. This required a set of scoped delighters that demoed well and retained their appeal over time. We began to define a personality for the UI using terms such as “glass and energy,” Chi, authenticity and many others. These investigations helped define a visual and animation language that we could then apply to several aspects of Windows 7. Expect a future blog post that delves much deeper into this important design process—much of which Sam discussed in his PDC session.

    The Taskbar, Evolved

    The Windows 7 taskbar is about launching with ease, switching with confidence and all the while remaining in control. The UI is made up of several key features that complete common end-to-end scenarios. Let’s dive into each of these elements and how they work.

    Refreshed Look

    The taskbar has undergone a facelift. We’ve enabled large icons by default (as seen in Windows 1.0 and also an option of Quick Launch since Windows 95 with IE 4). This affords a richer icon language, improves identification of programs and improves targeting for both the mouse and touch. Yet, one of the most important advantages large icons provide is a means to promote the taskbar as the central place to launch everyday tasks. We joke that the new taskbar is the “beachfront property of the Windows OS” and in turn, we are already seeing many people populating the UI with their commonly used programs. Somewhat of a visual trick, the taskbar is only 10 pixels (at 96 DPI) higher than its Vista counterpart (when used as a single row, since multiple rows are still supported, along with positioning around the screen edges).

    Windows 7 taskbar

    Fig 2. The Windows 7 taskbar: Default settings include large icons, no text and glass surface

    To mitigate its slightly increased height and the larger icons, we decided to impart the UI with a more prominent glass treatment. This also allows us to better showcase the person’s color preference (you’ll recall that in a previous post we revealed that almost 30% of sessions have personalized glass). We also changed the Vista behavior so that when a window is maximized, both the taskbar and the window’s title bar continue to remain open and translucent. We received lots of feedback on Vista that many people didn’t like these UIs turning opaque and dark.

    Pinning

    You can still pin programs to the taskbar by dragging them or via a context menu, just like you have always done with Quick Launch. Destinations can also be pinned via a drag/drop, but they are designed to be surfaced differently as we’ll see under the Jump List section.

    Unification

    If one increases the size of Quick Launch, one must then determine what to do with the taskband. As previously discussed, we observed that under many scenarios of single-instance programs, launching and switching were equivalent. Hence, we decided to standardize this behavior and have program launchers turn into window switchers when they are launched. Effectively, we unified Quick Launch and the taskband. While some other operating systems have similar concepts, one difference with our approach is that our default experience always optimizes for a single representation on the taskbar. This means that regardless of a window’s state (e.g. minimized, maximized or restored) there are no new or duplicate buttons created. Also, the default taskbar doesn’t allow destinations to be pinned to the top-level which prevents duplication of a pinned file and a running window with that same file open. When we say there is “one button to rule them all” we’re serious. This approach to a single, unified button keeps the taskbar uncluttered and gives the person a single place to find what she’s looking for.

    Combining launching and switching also made it easier to provide the most requested feature—the ability to move taskbar buttons. Quick Launch has always allowed this, but combining this mechanism with the taskband naturally extended rearrange functionality to running windows.
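    For developers, one piece of Windows 7 plumbing behind this single-button model is the Application User Model ID: windows, shortcuts and Jump Lists that share the same ID collapse onto the same taskbar button. A minimal sketch (the ID string is a made-up example):

    ```cpp
    // Sketch: setting an explicit Application User Model ID so that all of a
    // program's windows (and its pinned shortcut, if it carries the same ID)
    // group under one taskbar button.  The ID string is illustrative only.
    #include <windows.h>
    #include <shobjidl.h>   // SetCurrentProcessExplicitAppUserModelID (Windows 7)
    #pragma comment(lib, "shell32.lib")

    void SetTaskbarIdentity()
    {
        // Call early in startup, before any windows are created.
        SetCurrentProcessExplicitAppUserModelID(L"Contoso.SampleApp.Main.1");
    }
    ```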

    Interactive, Grouped Thumbnails

    Vista showed thumbnails when the user hovers over a taskbar button and Windows 7 improves upon this design. Unlike Vista, these thumbnails are now an extension of their corresponding button so the person can click on these visual aids to switch to a given window. The thumbnail is also a more accurate representation of a window, complete with an icon in the top left corner, window text and even the ubiquitous close button in the top right.

    Windows 7 Taskbar Thumbnails

    Fig 3. Thumbnails: Grouped, interactive thumbnails make it easier to manage windows

    One of the most important functions of the taskbar is to surface individual windows so people can easily switch between them. Having unified the program launcher and window switcher, the next logical step was to determine how multiple windows of a program could be combined and presented. We looked no further than a feature introduced in Windows XP called window grouping. When the taskbar became full, windows of a program could collapse into a single menu. However, there were a few challenges with the design. First, the behavior isn’t predictable. People don’t really understand when this scaling mechanism is triggered. Second, a listview of windows isn’t always the best way to represent these items. Finally, opening the menu always required a click, which slowed some people down. Our solution was to combine buttons by default for a predictable experience, to use grouped thumbnails and to have these thumbnails appear on hover as well as on click. Think of this approach as a contextual Alt-tab surfaced directly off the taskbar. When the person brings her mouse to a taskbar button, all the thumbnails of a program appear simultaneously, making for an organized, lightweight switching model. To polish off the experience, we show a visual cue of stacked tiles that provides feedback on whether there are multiple windows running for a program. We also recognized that a set of people may still wish to see individual buttons for each window, and an option permits this behavior.

    With the Windows 7 taskbar, there is a single place to go regardless of whether the program is not running, running with one window or running with several windows. Rich thumbnails provide more intuitive ways of managing and switching between windows.

    Aero Peek

    Here’s a riddle for you—what’s the best size for a window’s preview that will guarantee you can accurately identify it? Grouped thumbnails look and feel great, but we know these small previews don’t always provide enough information to identify a window. Sure, they work great for pictures, but not so well for emails or documents. The answer is simply to show the actual window—complete with its real content, real size and real location. That’s the concept behind Aero Peek.

    When the taskbar doesn’t offer enough information via text or a thumbnail, the person simply moves the mouse over a taskbar thumbnail and voilà—the corresponding window appears on the desktop and all other windows fade away into glass sheets. Once you see the window you want, just click to restore it. Not only does this make finding a window a breeze, it may also remove the need to switch altogether for scenarios in which one just needs a quick glance to glean information. Peek works on the desktop too. Show Desktop has been moved to the far right of the taskbar where one can still click on this button to switch to the desktop. The control enjoys a Fitts magic corner which makes it very easy to target. If you just move your mouse over the control, all windows on the desktop turn to glass allowing the desktop to be seen. It’s now easy to glance at a stock or weather gadget, or to check to see if a file is on the desktop.

    Windows 7 Aero Peek

    Fig 4. Aero Peek: Hovering over a thumbnail peeks at its corresponding window on the desktop

    We spent a lot of time analyzing different aspects of Peek. For example, we recognized that when people are using the feature, they won’t be necessarily focused on the taskbar as they look at windows on the desktop. An early prototype triggered Peek directly off the top-level of the taskbar but this revealed issues. Moving the mouse across a small region to trigger different previews exited Peek since the natural arc of hand motion resulted in the mouse falling off the taskbar. By only triggering Peek off the thumbnails, we gained much more room for the mouse to arc and we also reduced accidental triggers.
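    Programs also get a small amount of control over how they participate in Peek through DWM window attributes. As a hedged sketch, a window that should stay visible rather than fade to a glass sheet while you peek at something else (a gadget-like window, for instance) can opt out like this; the window handle is assumed to be a valid top-level window.

    ```cpp
    // Sketch: exclude a window from Aero Peek so it is not faded to glass
    // when the user peeks at another window or at the desktop (Windows 7).
    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    void ExcludeFromPeek(HWND hwnd)
    {
        BOOL exclude = TRUE;
        DwmSetWindowAttribute(hwnd, DWMWA_EXCLUDED_FROM_PEEK,
                              &exclude, sizeof(exclude));
    }
    ```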

    Jump Lists

    As far back as Windows 1.0, there has always been a system menu that shows contextual controls for running windows and their programs. This menu is accessible by right-clicking on a taskband button or in the top left corner of most windows. By default, the menu exposes window controls such as close. (Random trivia—ever wonder why the system menu off a taskbar button always shows close in bold when close isn’t the double-click behavior? Well, the answer is that double-clicking the top left region of most windows will close it and the bolded option makes sense in this context. The same menu just happens to be hosted in both locations.) Over the years, some programs have extended the system menu to surface relevant tasks. For example, Command Prompt reveals tasks such as editing options, defaults and properties in its system menu. However, this is a bit of a free-for-all for programs to opt in or not, resulting in an inconsistent experience for people. Another blow to this scenario is that the system menu is only accessible when the program is running. This makes sense since the default commands are about window management, but what if you wanted to access a program’s tasks even if it isn’t running?

    As we discussed under the goals section, we thought about the various steps people have to take to accomplish tasks and whether we could reduce them. Be it getting to a destination or accessing the commands of a program, we wanted to make it easier for people to jump to the things they are trying to accomplish. Jump Lists are a new feature of the Windows 7 taskbar that accomplish just this. Think of this feature as a mini Start Menu for each program or an evolved version of the system menu. Jump Lists surface commonly used nouns (destinations) and verbs (tasks) of a program. There are several advantages this new approach provides. First, you don’t even need to start the program to quickly launch a file or access a task. Second, destinations don’t take up valuable space on the taskbar; they are automatically organized by their respective program in a simple list. Should one have ten programs pinned or running on her taskbar, this means she could have quick access to over 150 destinations she uses all the time, without even the need to customize the UI! Since the Jump List shows lots of text for each of its items, gone are the days of having identical icons on your taskbar that are indistinguishable without a tooltip. Should you wish to keep a specific destination around, you can simply pin it to the list.

    Windows 7 Jump List

    Fig 5. Jump List: Right-clicking on Word gives quick access to recently used documents

    To make sure we provide a consistent and valuable experience out-of-the-box, we decided to pre-populate Jump Lists and also allow programs to customize the experience. By default, the menu contains the program’s shortcut, the ability to toggle pinning, the ability to close one or all windows and a program’s recent destinations (assuming they use the Common File Dialog, register their file type or use the Recent Items API). Programs are able to replace the default MRU (Most Recently Used) list with a system-maintained MFU (Most Frequently Used) list, should their destinations be very volatile. For example, while Word will benefit from a MRU just like the one in their File Menu, Windows Explorer has opted to enable the MFU because people tend to visit many paths throughout a session. Programs are also able to provide their own custom destination list when they have a greater expertise of the person’s behavior (e.g. IE exposes their own history). Still others like Windows Live Messenger and Media Player surface tasks or a mix of tasks and destinations.
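    For developers, Jump List customization is exposed in the Windows 7 SDK through ICustomDestinationList. Here is a minimal sketch that simply opts a program into the system-maintained Recent category; error handling is trimmed and COM is assumed to be initialized already.

    ```cpp
    // Sketch: opting into the default "Recent" category of a Jump List.
    // Items land in the list via the common file dialog, a registered file
    // type, or an explicit SHAddToRecentDocs call.
    #include <windows.h>
    #include <shobjidl.h>

    void PublishJumpList()
    {
        ICustomDestinationList *list = NULL;
        if (FAILED(CoCreateInstance(CLSID_DestinationList, NULL,
                                    CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&list))))
            return;

        UINT maxSlots = 0;
        IObjectArray *removed = NULL;
        if (SUCCEEDED(list->BeginList(&maxSlots, IID_PPV_ARGS(&removed))))
        {
            // Surface the system-maintained recent documents for this program.
            list->AppendKnownCategory(KDC_RECENT);
            list->CommitList();
            removed->Release();
        }
        list->Release();
    }
    ```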

    In case we haven’t yet impressed it upon you, the taskbar is about a single place to launch and switch. Jump Lists offer another important piece of the puzzle as they surface valuable destinations and tasks off a program’s unified taskbar button.

    Custom Window Switchers

    All the major web browsers offer tabs and a method of managing these tabs. One could argue tab toolbars are really like taskbars since they facilitate switching. These TDI (Tabbed Document Interface) and MDI (Multiple Document Interface) programs have always resorted to creating their own internal window management systems as the Windows taskbar was not optimized to help their scenarios. Some programs like Excel did custom work to surface their child windows on the taskbar, but this approach was somewhat of a hack.

    Since the new taskbar already groups individual windows of a program under a single button, we can now offer a standard way for programs that have child windows to expose them. Again, the taskbar offers a single, consistent place to access real windows as well as child windows. These custom window switchers also behave as regular windows on the taskbar with rich thumbnails and even Aero Peek.

    Thumbnail Toolbars

    In the earlier taskbar posts, we discussed how Windows Media Player’s deskband offers valuable background music controls, but only a mere 3% of sessions ever enjoy the functionality. The new taskbar exposes a feature called Thumbnail Toolbars that surface up to seven window controls right in context of taskbar buttons. Unlike a Jump List that applies globally to a program, this toolbar is contextual to just a specific window. By embracing this new feature, Media Player can now reach a majority of people.

    Windows 7 Thumbnail Toolbar

    Fig 6. Thumbnail Toolbar: Window controls easily accessible in context of a taskbar thumbnail

    Thumbnail Toolbars leave the taskbar uncluttered and allow relevant tasks to be conveniently accessible directly from a taskbar thumbnail. Surfacing tasks reduces the need to switch to a window.
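    In the Windows 7 SDK this surfaces through ITaskbarList3. Here is a hedged sketch of adding a single button to a window’s thumbnail; the command id, icon and tooltip are illustrative, and the click comes back to the window as a WM_COMMAND message.

    ```cpp
    // Sketch: add one button to a window's taskbar thumbnail (Windows 7).
    // Assumes COM is initialized and the taskbar button already exists.
    #include <windows.h>
    #include <shobjidl.h>

    void AddThumbnailToolbar(HWND hwnd, HICON playIcon)
    {
        ITaskbarList3 *taskbar = NULL;
        if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER,
                                    IID_PPV_ARGS(&taskbar))))
            return;
        taskbar->HrInit();

        THUMBBUTTON button = {};
        button.dwMask  = (THUMBBUTTONMASK)(THB_ICON | THB_TOOLTIP | THB_FLAGS);
        button.iId     = 1001;                 // arbitrary command id
        button.hIcon   = playIcon;
        button.dwFlags = THBF_ENABLED;
        wcscpy_s(button.szTip, L"Play/Pause");

        taskbar->ThumbBarAddButtons(hwnd, 1, &button);   // at most seven buttons
        taskbar->Release();
    }
    ```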

    Notification Area

    We’re happy to announce that the Notification Area is back under your control. By default, only a select few system icons are shown while all others appear in a menu. Simply drag icons on or off the taskbar to control the experience. Better yet, every balloon tip that appears in the system has a little wrench icon that allows one to quickly “swat” an annoying alert by immediately seeing what is causing the notification and a direct way to disable it.

    Windows 7 Notification Overflow

    Fig 7. Notification Overflow: By default icons appear in an overflow area that you can then promote

    Interestingly a very popular change to Notification Area isn’t about reducing noise, but rather showing more information. The default taskbar now reveals both the time and the date. Finally!

    Overlay Icons and Progress Bars

    Cleaning up the Notification Area prompted us to consider other ways that programs can surface important information. We’ve always had overlay icons throughout Windows (e.g. to show shortcuts in Explorer) so we decided to bring this functionality to the taskbar. An icon can now be shown over a program’s taskbar button. Furthermore, programs can also give feedback about progress by having their taskbar button turn into a progress bar.

    Windows 7 Progress Bar

    Fig 8. Progress Bars: Explorer utilizes taskbar progress to show a copy operation in progress

    A program can now easily show an icon or progress in context of its taskbar button which furthers the one place, one button philosophy of the taskbar.
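    These also come through ITaskbarList3 in the Windows 7 SDK. A minimal sketch of reporting progress and badging the button with an overlay icon follows; the window handle and icon are assumed to come from the hosting program, and COM is assumed to be initialized.

    ```cpp
    // Sketch: show progress and an overlay icon on a taskbar button (Windows 7).
    #include <windows.h>
    #include <shobjidl.h>

    void ShowCopyProgress(HWND hwnd, HICON overlayIcon, ULONGLONG done, ULONGLONG total)
    {
        ITaskbarList3 *taskbar = NULL;
        if (FAILED(CoCreateInstance(CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER,
                                    IID_PPV_ARGS(&taskbar))))
            return;
        taskbar->HrInit();

        // Turn the taskbar button into a progress bar...
        taskbar->SetProgressState(hwnd, TBPF_NORMAL);
        taskbar->SetProgressValue(hwnd, done, total);

        // ...and badge it with a small status icon (pass NULL later to clear it).
        taskbar->SetOverlayIcon(hwnd, overlayIcon, L"Copying");

        taskbar->Release();
    }
    ```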

    Color Hot-track

    Color hot-track is a small touch that typifies the new taskbar’s personality. When a person moves her mouse over a running program on the taskbar, she will be pleasantly surprised to find that a light source tracks her mouse and the color of the light is actually based on the icon itself. We calculate the most dominant RGB of the icon and dynamically paint the button with this color. Color hot-track provides a delight factor, it offers feedback that a program is running and it showcases a program’s icon. We’ve always believed that programs light up the Windows platform and now, we’re returning the favor.

    Windows 7 Color Hot-track

    Fig 9. Color Hot-track: moving the mouse across a running window reveals a dynamically colored light effect
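    The exact calculation isn’t something we’ll detail here, but as a rough sketch of the idea (and not the actual Windows code), a “dominant” color can be picked by bucketing an icon’s opaque pixels into a coarse RGB histogram and taking the fullest bucket:

    ```cpp
    // Not the actual Windows algorithm--just one way to pick an approximate
    // "dominant" color from 32-bpp ARGB icon pixels.
    #include <windows.h>
    #include <map>

    COLORREF DominantColor(const DWORD *pixels, int count)   // pixels are ARGB
    {
        std::map<DWORD, int> histogram;
        for (int i = 0; i < count; ++i)
        {
            DWORD argb = pixels[i];
            if ((argb >> 24) < 128) continue;           // skip transparent pixels
            DWORD key = argb & 0x00C0C0C0;              // keep top 2 bits per channel
            ++histogram[key];
        }
        DWORD best = 0; int bestCount = 0;
        for (std::map<DWORD, int>::const_iterator it = histogram.begin();
             it != histogram.end(); ++it)
            if (it->second > bestCount) { best = it->first; bestCount = it->second; }
        return RGB((best >> 16) & 0xFF, (best >> 8) & 0xFF, best & 0xFF);
    }
    ```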

    Start Menu

    Vista introduced several changes to the Start Menu so we decided to minimize churn to this UI in Windows 7. Notable improvements include the availability of Jump Lists and a better power button that defaults to Shutdown, but makes it easy to customize.

    Different, Yet Familiar

    Despite all the features of the new taskbar, it is worthwhile noting the UI retains its familiarity. We like to describe our work as evolutionary, not revolutionary. The taskbar continues to be a launch surface, a window switcher and a whisperer of notifications. Whether one is relatively new to Windows or a seasoned pro, we realize change comes at a cost. It is for this reason that we took the time to carefully evaluate feedback, we performed numerous studies to validate our designs and finally, we will continue to provide scoped settings that keep the UI flexible.

    We hope this post provided more insight into the new Windows 7 taskbar. Expect future discussions on our design process, how we tested our features and advanced functionality for all you enthusiasts.

    - Chaitanya

  • Engineering Windows 7

    Disk Space

    • 103 Comments

    This post is about disk space and the disk space “consumed” by Windows 7. Disk space is the sort of thing where everyone wants to use less, but the cost of using a bit more relative to the benefits has generally been a positive tradeoff. Things have changed recently with the availability of solid-state drives in capacities significantly smaller than the trend in spinning drives. Traditionally, most software, including Windows, would not hesitate to consume 100MB for a specific (justified) need when looking at a 60GB (or 1,500GB) drive; with desirable machines shipping with 16GB of solid-state storage, we are looking carefully at the disk space used by Windows—both at setup time and also as a PC “ages”. We also had a specific session at WinHEC on solid-state drives that might be interesting to folks. This post is authored by Michael Beck, a program manager in the core OS deployment feature team. --Steven

    Let’s talk about “footprint”. For the purposes of this post, when I say “footprint” I’m talking about the total amount of physical disk space used by Windows. This includes not only the Windows binaries, but all disk space consumed or reserved for system operations. Later in this entry, I’ll discuss in detail how the disk footprint is consumed by various Windows technologies.

    A number of comments have asked about disk footprint and what to expect in terms of Windows 7’s usage of disk space. Like many of the design issues we have talked about, disk space is also one where there are tradeoffs involved, so this post goes into the details of some of those tradeoffs and also discusses some of the feedback we have received. It should be noted that we are not at the point where we are committing to system requirements for Windows 7, so consider this background and engineering focus.

    To structure this post we’ll take two important points of feedback or questions we have received:

    • What does the WinSxS directory contain, why is it so big, and can I just delete it?
    • Where does all the disk space go for Windows components?

    We’ll then talk about the focus and engineering of Windows 7.

    WinSxS directory

    We definitely get a lot of questions about the new (to Vista) Windows SxS directory (%System Root%\winsxs) and many folks believe this is a big consumer of disk space, as just bringing up the properties on a newly installed system shows over 3000 files and over 3.5 GB of disk consumed. Over time this directory grows to even higher numbers. Yikes--below is an example from Steven's home PC.

    Example properties sheet for WinSxS directory.

    “Modularizing” the operating system was an engineering goal in Windows Vista. This was to solve a number of issues in legacy Windows related to installation, servicing and reliability. The Windows SxS directory represents the “installation and servicing state” of all system components. But in reality it doesn’t actually consume as much disk space as it appears when using the built-in tools (DIR and Explorer) to measure disk space used. The fact that we make it tricky for you to know how much space is actually consumed in a directory is definitely a fair point!

    In practice, nearly every file in the WinSxS directory is a “hard link” to the physical files elsewhere on the system—meaning that the files are not actually in this directory. For instance, in WinSxS there might be a file called advapi32.dll that takes up >700K; however, what’s being reported is a hard link to the actual file that lives in Windows\System32, and the file will be counted twice (or more) when simply looking at the individual directories from Windows Explorer.
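    To see hard links in action outside of WinSxS, here is a small sketch: create a second name for a file and then read the link count off the original. The paths are illustrative only.

    ```cpp
    // Sketch: a hard link is another directory entry for the same file data.
    #include <windows.h>
    #include <stdio.h>

    int wmain()
    {
        // Two directory entries, one copy of the bytes on disk.
        CreateHardLinkW(L"C:\\temp\\alias.dll", L"C:\\temp\\original.dll", NULL);

        HANDLE file = CreateFileW(L"C:\\temp\\original.dll", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (file != INVALID_HANDLE_VALUE)
        {
            BY_HANDLE_FILE_INFORMATION info = {};
            if (GetFileInformationByHandle(file, &info))
                wprintf(L"This file has %lu directory entries (hard links).\n",
                        info.nNumberOfLinks);
            CloseHandle(file);
        }
        return 0;
    }
    ```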

    The value of this is that the servicing platform (the tools that deliver patches and service packs) in Windows can query the WinSxS directory to determine a number of key details about the state of the system, like what’s installed or available to be installed (optional components, more on those later), what versions, and what updates are on the system to help determine applicability of Windows patches to your specific system. This functionality gives us increased servicing reliability and performance, and supports future engineering efforts providing additional system layering and greater configurability.

    The WinSxS directory also enables offline servicing, and makes Windows Vista “safe for imaging”. Prior to Windows Vista, inbox deployment support was through “Setup” only. IT professionals would install a single system, and then leverage any number of 3rd party tools to capture the installed state as a general image they then deployed to multiple systems. Windows wasn’t built to be “image aware”. This meant that greater than 80% of systems were deployed and serviced using a technology that wasn’t supported natively, and required IT departments to create custom solutions to deploy and manage Windows effectively. In addition, state stored in the WinSxS directory can be queried “offline”, meaning the image doesn’t have to be booted or running, and patches can be applied to it. These two features of WinSxS give great flexibility and cost reductions to IT departments who deploy Windows Vista, making it easier to create and then service standard corporate images offline.

    While it’s true that WinSxS does consume some disk space by simply existing, and there are a number of metadata files, folders, manifests, and catalogs in it, it’s significantly smaller than reported. The actual amount of storage consumed varies, but on a typical system it is about 400MB. While that is not small, we think the robustness provided for servicing is a reasonable tradeoff.

    So why does the shell report hard links the way it does? Hard links work to optimize disk footprint for duplicate files all over the system. Application developers can use this functionality to optimize the disk consumption of their applications as well. It’s critical that any path expected by an application appear as a physical file in the file system to support the appropriate loading of the actual file. In this case, the shell is just another application reporting on the files it sees. As a result of this confusion and a desire to reduce disk footprint, many folks have endeavored to just delete this directory to save space.

    There have been several blogs and even some “underground” tools that tell you it’s ok to delete the WinSxS directory, and it’s certainly true that after installation, you can remove it from the system and it will appear that the system boots and runs fine. But as described above, this is a very bad practice, as you’re removing the ability to reliably service all operating system components and the ability to update or configure optional components on your system. Windows Vista only supports the WinSxS directory on the physical drive in its originally installed location. Given the data described above, the risks of removing or relocating this directory far outweigh the gains.

    Where does the disk space go?

    As we all know adding new functionality consumes additional disk space--in Windows or any software. In reality, “code” takes up a relatively small percentage of the overall Windows footprint.  The actual code required for a Windows Vista Ultimate install is just over 2GB, with the rest of the footprint going to “data” broadly defined. Let’s dig deeper into the use of storage in a Windows Vista installation and what we mean by "data".

    Reliability and security were core considerations during the engineering process that built Windows Vista. Much of the growth in footprint comes from a number of core reliability features that users depend on for system recovery, performance, data protection, and troubleshooting. Some of these include system restore, hibernation, the page file, registry backup, and logging. Each of these represents “backup state” that is available to the system to recover from any number of situations, some planned and others not. Because we know that different customers will want to make different tradeoffs of disk space relative to recovery (especially on small footprint devices), with Windows 7 we want to make sure you have more control than you currently do to decide ahead of time how much disk space to use for these mechanisms, and we will also tune our defaults to be more sensitive to overall consumption given the changing nature of storage.

    System restore and hibernation are features that help users confidently recover their system and prevent data loss in a number of situations, such as low battery (hibernation) or a bad application installation and other machine corruption (system restore). Combined, these features consume a large percentage of the footprint. Because of the amount of space they use, they are easy to identify and to make decisions about.

    System restore protects users by taking snapshots of the system prior to changes and at regular intervals. In Windows Vista, system restore is configured to consume a minimum of 300MB, and up to 15% of the physical disk. As that space fills up with restore points, System Restore deletes older restore points to make room for new ones. The more space you have, the greater the number of restore points available to “roll back” to. We have definitely heard the feedback from Windows Vista customers around system restore and recognize that it takes significant space and is not easy to tune. Some have already seen that the pre-beta of Windows 7 provides an interface to manage this space better.
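
    To put that 15% ceiling in perspective, it is roughly 2.4GB on a 16GB solid-state drive but 75GB on a 500GB drive. The sketch below is only an illustration using the public Win32 API (not how System Restore itself computes its quota); it reads the size of the C: volume and prints what the ceiling works out to.

    ```cpp
    #include <windows.h>
    #include <stdio.h>

    // Illustration only: read the total size of the volume holding C:\ and
    // compute the Vista-era "up to 15% of the physical disk" figure from the post.
    int wmain()
    {
        ULARGE_INTEGER freeToCaller = {}, total = {}, totalFree = {};
        if (GetDiskFreeSpaceExW(L"C:\\", &freeToCaller, &total, &totalFree))
        {
            unsigned long long ceilingBytes = total.QuadPart * 15 / 100;
            wprintf(L"Volume size: %llu GB, 15%% ceiling for restore points: ~%llu GB\n",
                    total.QuadPart / (1024ull * 1024 * 1024),
                    ceilingBytes / (1024ull * 1024 * 1024));
        }
        return 0;
    }
    ```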

    Hibernate is primarily used on mobile PCs and saves your work to the hard disk and puts the computer in an extremely low power state.  Hibernate is used on mobile PCs when the battery drains below a certain threshold or when turning the computer off without using Shut Down to extend battery life as much as possible.  On Windows Vista, Hibernate is also automatically used with Sleep on desktop PCs to keep a backup copy of open programs and work. This feature is called Hybrid Sleep and is used to save state to the hard disk in case power fails while the computer is sleeping.  Hibernate writes all of the content in memory (RAM) to a file on the hard drive named Hiberfil.sys.  Therefore, the size of the reserved Hiberfil.sys is equal to the amount of RAM in the machine.  In the Windows Vista timeframe, the amount of RAM being built into computers has increased significantly, so the disk footprint of Hibernate is more noticeable than before. This space must be reserved up front to guarantee that in a critical low battery situation, the system can easily write memory contents to the disk.  Any mobile PC user who has experienced their computer automatically entering Hibernate when the battery is critically low can appreciate the peace of mind this footprint growth provides. While we're talking about RAM and disk footprint in the same paragraph, Mark Russinovich has a post this week on virtual memory and how big the page file could/should/can be that you might find interesting.
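
    Since hiberfil.sys is sized to hold the contents of physical memory, you can estimate its reserved footprint from the installed RAM. The sketch below is an illustration using the public Win32 API, not a Windows tool; it reports total physical memory, which approximates the hiberfil.sys reservation on a default configuration.

    ```cpp
    #include <windows.h>
    #include <stdio.h>

    // Illustration only: hiberfil.sys holds a copy of physical memory, so the
    // RAM total reported here approximates the disk space reserved for hibernation.
    int wmain()
    {
        MEMORYSTATUSEX ms = {};
        ms.dwLength = sizeof(ms);
        if (GlobalMemoryStatusEx(&ms))
        {
            wprintf(L"Physical RAM (and approximate hiberfil.sys size): %llu MB\n",
                    ms.ullTotalPhys / (1024ull * 1024ull));
        }
        return 0;
    }
    ```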

    Now it’s clear that in the description above, I don’t account for the entire footprint required by Windows Vista. For instance, we also include many sample files, videos, and high resolution backgrounds that allow users to easily customize their experience and try out new features. Still, we’ve covered a couple of the more common questions out there.

    It’s important that we consider more than just the size of the system once deployed; we must also look at how the system grows over time as services write logs, updates and service packs are installed, system snapshots are taken, and so on. For many, the “growth” over time of the installation proves to be the most perplexing—and we hear that and need to do better to (a) make smarter choices and (b) make it clearer what space is being consumed and can be reclaimed.

    The following table provides one view of the installation footprint of a Windows Vista Premium/Ultimate installation. This includes the full installation, but to make it digestible it has been broken down into some logical categories and also highlights some specific features. Part of the reason to highlight specific features is to illustrate the “costs” of items that have been raised as questions (or as questionable).

    Table of disk space utilization of Windows Vista SP1.

    Here are some items worth calling out:

    • ~1GB driver support. Windows Vista works with thousands and thousands of different devices. The ability to plug in almost any device, even your old printer, and have it recognized and installed automatically is something customers have come to expect from Windows. We receive lots of feedback wanting to remove some or all of these drivers, and each release we carefully scrub the “in-box” device support relative to what we see from telemetry in terms of devices in use. The ability to install a printer or USB device while offline is a key value, especially with laptops representing over half of all PCs being sold. In the future we can possibly assume “always go to Windows Update”, but we’re not there yet in most places in the world.
    • ~1GB of system growth in serviced and superseded components to support robust rollback and recovery after installing critical security and functionality updates. We receive a lot of positive feedback about the robustness of servicing, but at the same time, the desire to roll back a specific fix for any variety of reasons remains an important robustness and reliability measure. We also understand the feedback we have received regarding the disk requirements to install SP1 on top of RTM. We hope folks in need of disk space are aware of the vsp1cln.exe utility in the system32 directory.
    • ~1GB hibernation support is necessary in order to prevent data loss when a machine has been in standby for many hours. This can be removed via the Disk Cleanup wizard or via an elevated command prompt (powercfg /h off).
    • ~315MB of fonts. Windows users speak many different languages, often on the same PC, and wish Windows to “speak” to them. Windows Vista contains native font support to allow users with systems defaulted to one language to be able to read documents or websites in another. As we know, however, fonts are easy to delete should you desire.
    • ~52MB of log files. Whether it is the event log, servicing logs, or device installation logs (or more), this space is consumed and becomes critical when trying to diagnose a failure. These logs are often used by our support personnel or corporate helpdesks to diagnose a specific failure.

    Engineering Windows 7

    Windows disk space consumption has trended larger over time. While not desirable, the degree to which this has been allowed is due in large part to ever-increasing hard drive capacity, combined with customer needs and an engineering focus centered on recoverability, data protection, increasing breadth of device support, and demand for innovative new features. However, the proliferation of Solid State Drives (SSDs) has challenged this trend, and is pushing us to consider disk footprint in a much more thoughtful way and take that into account for Windows 7.

    This doesn’t mean that we’re going to stop adding great features or make Windows less reliable or recoverable. As we look to the future, it’s critical that as we innovate, we treat the disk space consumed by our work as a valuable resource and have a clearer design for how Windows uses it. We want to make sure that we are making smart choices for the vast majority of customers and, for those desiring more control, provide places to fine-tune these choices as appropriate. This design goal isn’t about one type of machine or one specific design; all Windows editions benefit from efforts that reduce the overall footprint.

    For example, as we consider the driver support discussed above, Windows Vista with SP1 installs almost 1GB of drivers on the system to support plug and play of devices. This local cache can get out of date as IHVs release updates to their drivers, and as a result, users are pushed to Windows Update to get the latest version once the device is installed.

    Why not extend the PnP user experience to include (or only use) the Windows Update cache of drivers and save some disk space? This has several benefits:

    1. Because mobile PCs rarely lack a network connection, they can simply get the new driver from the web.
    2. People don’t have to install the driver twice when a newer driver exists, because today they end up making the round trip to the web anyway.

    With this example it’s easy to see how engineering for a minimal footprint might actually deliver a better experience for people when attaching new devices to their systems. At the same time, we want to be careful about going too far too soon. We get a tremendous amount of feedback regarding the “plug and play” experience or feedback about costly download times (if download is at all possible). For Windows 7 we are going to continue to be deliberate in what we include based on the telemetry of real world devices and reducing the inbox set to cover the most popular devices around the world. At the same time we will continue a very significant effort around having the best available Windows Update site for all devices we can possibly support.

    Windows features installed by default make sense in most cases to support many scenarios. Still, we should consider how we make some features/components (such as Media Center) optional when they are not required, rather than installing them by default on every system. We’re committed to making more features of Windows optionally installed. As you might notice today in Windows, when you choose to add a feature that was not installed, Windows does not require a source (a DVD or network location). This is because the feature is stashed away as part of a complete Windows install—this is itself a feature. We will always keep features available and will always service them even when components are not installed—that way, if you add a component later, you do not risk adding a piece of code that might have been exploited earlier. This is another important way we keep Windows up to date and secure, even for optional features.

    System growth over time is an area where we need to provide more “transparency”. For instance, Windows will archive previous versions of updated system components to allow robust rollback. A new system will install patches as Windows Update makes them available, just as expected by design. When a Service Pack or other large update that contains or supersedes previous patches is installed, we can simply recover the space used by the old updates sometime after the update is successfully installed.

    Windows writes logs in many places to aid in troubleshooting and these logs can grow very large. For instance, when an application crashes, Windows will archive a very large dump file to support analysis of the failure. There are many good reasons for this behavior, but as we change our mindset towards footprint, we need to extend our scenarios to include discussions of how to manage the growth, and recover the disk space consumed whenever possible. Other areas where we are considering the default disk space reserved include System restore and hibernation. On a disk constrained system, the 1GB or more reserved to support hibernation is costly and there may be ways to shrink the size of hiberfil.sys. System restore should be configurable, and default in all cases to the minimally useful number of snapshots vs. a blanket 15% of the system disk.

    At WinHEC we had several machines on display with 16GB drives/partitions and on those you could see there was plenty of free disk space. Like all the benchmarks, measuring disk space on the pre-beta is not something we’re encouraging at this time.

    In conclusion, as we develop Windows 7 it’s likely that the system footprint will be smaller than Windows Vista’s, thanks to engineering efforts across the team, which should allow for greater flexibility in system designs by PC manufacturers. We will do so with more attention to defaults and more control available to OEMs, end users, and IT pros, and without compromising the reliability and robustness of Windows overall.

    -Michael Beck

  • Engineering Windows 7

    Action Center

    • 112 Comments

    We’re back! We’ve had a pretty incredible couple of weeks at the PDC and WinHEC. Based on what we talked about you can imagine we are all rather busy as we transition from milestone 3 to beta. We trust many of you are enjoying 6801 (or perhaps we should say 6801+). Over the next few weeks we’re going to start posting on the engineering and design of the specifics of different aspects of Windows 7 that we’ve talked about. Some posts will be very detailed and others will be a bit more high level and cover more territory. In all cases, we’ll be watching the comments carefully and also looking for opportunities on follow up posts. Thank you!

    One of the big themes of Windows 7 from a design perspective (as you might have seen in Sam’s PDC session and certainly a topic we have talked about here) is making sure that you are “in control” of what is happening on your PC. This post, by senior program manager Sean Gilmour, is about “notifications” or the balloon popups that come from the system tray. In Vista we offered some controls over this area and in Windows 7 we have worked hard to make this an area that defaults to more well-behaved functionality and is also much more tunable to your needs. By improving how Windows itself uses the APIs and “guidelines” we want to encourage other ISVs to do the same. This topic is a great example of how the whole ecosystem comes into the picture as well, and so we hope developers reading this will see the passion around the topic and the desire for software on Windows to take the steps necessary to honor your intent. --Steven

    The notification area has been talked about a couple times in previous posts (User Interface: Starting, Launching, and Switching and Follow-up: Starting, Launching, and Switching). This post is going to go into a bit more detail regarding notification balloons as well as one of the ways we’re working to quiet the system in Windows 7.

    Where We're At Today

    Windows can be a busy place – with many things vying for your attention, even while you’re trying to do work. One thing we hear a lot about from you is the system notification balloons – those little pop-ups that appear above icons in the notification area (typically the right side of the taskbar, near the clock). In this post I’ll be talking about notifications sent using the Shell_NotifyIcon function provided in Windows (a minimal sketch of how an application sends one appears after the quotes below), not custom notifications, often called “toast”, like the notifications presented by many applications (some, like Outlook, even from Microsoft). We see these in instant messenger programs, printer notifications, auto updaters, wifi and Bluetooth utilities, and more – these often use custom methods to present these “balloons” from the system tray, not necessarily the Windows API. People have made their feelings loud and clear – Windows is too noisy and the noise distracts from the work at hand. Here are some quotes from the Windows Feedback Panel that illustrate that point.

    “Too many notification messages, esp. re: security (eg. Firewall), activation”

    “Notifications telling me my system is secure, when I know it is secure, are annoying”

    “I'm tired of error messages and pop ups.”

    And some posts from the blog discussions

    @Jalf writes “Having 20 icons and a balloon notification every 30th second taking up space at the taskbar where it's *always* taking up space is just not cool.”

    @Lyesmith writes “The single biggest annoyance in the taskbar is notification balloons.”
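
    As promised above, here is a minimal sketch of what sending one of these balloons through Shell_NotifyIcon looks like in Win32 code. It is an illustration only: the window handle, icon id, and message text are hypothetical, and a real application would also remove the icon and handle the callback message (link with shell32.lib).

    ```cpp
    #include <windows.h>
    #include <shellapi.h>
    #include <wchar.h>

    // Illustration only: add a notification area icon for the given window and
    // show a single balloon through the same Shell_NotifyIcon API.
    void ShowBalloonSketch(HWND hwnd)
    {
        NOTIFYICONDATAW nid = {};
        nid.cbSize = sizeof(nid);
        nid.hWnd   = hwnd;
        nid.uID    = 1;                              // application-chosen icon id
        nid.uFlags = NIF_MESSAGE | NIF_ICON | NIF_TIP;
        nid.uCallbackMessage = WM_APP + 1;           // hypothetical callback message
        nid.hIcon  = LoadIcon(nullptr, IDI_APPLICATION);
        wcscpy_s(nid.szTip, L"Sample tray icon");
        Shell_NotifyIconW(NIM_ADD, &nid);            // put the icon in the notification area

        nid.uFlags = NIF_INFO;                       // now send one balloon
        wcscpy_s(nid.szInfoTitle, L"Update available");
        wcscpy_s(nid.szInfo, L"Click to install the latest definitions.");
        nid.dwInfoFlags = NIIF_INFO;                 // standard "information" balloon
        Shell_NotifyIconW(NIM_MODIFY, &nid);
    }
    ```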

    So how noisy is the system? First a quick definition: a ‘session’ is the period of time between log-on and log-off, or 24 hours, whichever is shorter. As you can see from the following chart, 60% of sessions experience at least one notification. That doesn’t sound all that bad, but if you dig a bit deeper you realize that 37% of sessions see two or more notifications and 25% of sessions see three or more notifications. That’s a lot of distractions interrupting your work.

    Figure 1: Number of notifications sent per session as a percentage of total sessions - August through September, 2008

    So we know how much noise notifications create, but how effective are notifications? Well, as the following chart of notification click-through rates shows, the more notifications there are, the less effective each one becomes.

    Figure 2: Notification click-through rate - August through September, 2008

    So, as shown in the above chart, used sparingly and in the right context, notification balloons can be rather useful. Unfortunately, that isn’t what is happening today. Instead the notification area often feels like a constant scrolling billboard of messages, some important, many not. So what’s the answer? It’s a big area to tackle – there are system notifications, third-party notifications, and custom notifications. For Windows 7 we chose to focus on making sure Windows and its in-box components notify you responsibly and don’t contribute to the noise in the system. Ideally the ISV community will follow suit and, as you’ve seen in some sessions, we’re doing this work in Windows Live, for example. One of the reasons we focused internally was data showing that Windows components are responsible for at least 28% of the notifications presented. Additionally, we were able to identify seven Windows components that are mostly responsible for that noise. In all, 20 applications account for 62% of the notifications presented. The following chart shows the break-out.

    Figure 3: Which software accounts for notifications - August through September, 2008

     

    Windows 7

    Our effort to quiet the system and make sure you are in control took the following approach:

    • Working across Windows 7 to reduce unnecessary notifications
    • Putting you in control of the notifications you see
    • Creating Action Center with the following goals
      • Reduce the number of notification balloons sent to you and make the ones that are sent more meaningful
      • Provide a contextual way to address the issues with a single click
      • Reduce the user-interface clutter in the system to streamline solving system issues

    While there are many other efforts going on around notifications and the notification area, I’m going to focus on Action Center. In a nutshell, Action Center is a central location for dealing with messages about your system and the starting point for diagnosing and solving issues with your system. You can think of Action Center as a message queue displaying the items that need your attention, which you can manage on your schedule. It serves as an aggregate for ten components in Windows Vista that contributed a large number of somewhat questionably effective notification balloons, but notifications that could not just be eliminated. At the heart of the Action Center effort is the idea that your time is extremely valuable and should never be wasted. To that end we took three steps.

    First we looked hard at the messages we were sending and worked to reduce balloons and clarify messages. We took the following steps:

    • Putting messages into one of two categories – normal or important. Normal messages simply appear in the Action Center control panel. Important messages send a notification balloon as well as appearing in the Action Center.
    • Setting a high bar for important messages. A message is only deemed important if the security of the system or the integrity of your data is at risk.
    • Reducing the frequency of notifications so that you’re not seeing them pop up “all the time”
    • Looking at all the messages and asking the hard questions –“is this something you really need to know about?”

    The last filter led to our second step. We decided that all messages need to have an action associated with them - a solution, if you will, to whatever problem we were presenting to you. This meant cutting any FYI, Action Success, and Confirmation messages. It also meant that the way we presented these messages would be action based. For example, we replaced “Antivirus is out of date” with “Update Antivirus Signatures.” We believe that we should let people know specifically how to resolve an issue instead of making them guess or read lots of text. This is the heart of the other goal of Action Center – to help people solve system issues quickly and conveniently.
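
    To make the model concrete, here is a hypothetical sketch of the kind of two-category, action-first message structure described above. It is not the Windows implementation; the types, names, and sample messages are invented for illustration, and printing to the console stands in for showing a balloon.

    ```cpp
    #include <string>
    #include <vector>
    #include <iostream>

    // Hypothetical model: every message carries the action to take, and only
    // "important" messages (security or data integrity at risk) also produce a balloon.
    enum class Severity { Normal, Important };

    struct ActionCenterMessage
    {
        Severity     severity;
        std::wstring action;   // action-based title, e.g. L"Update antivirus signatures"
        std::wstring details;  // longer explanation shown in the control panel view
    };

    void Publish(std::vector<ActionCenterMessage>& queue, const ActionCenterMessage& msg)
    {
        queue.push_back(msg);                        // every message lands in the queue
        if (msg.severity == Severity::Important)
            std::wcout << L"[balloon] " << msg.action << L"\n";   // stand-in for a real balloon
    }

    int main()
    {
        std::vector<ActionCenterMessage> queue;
        Publish(queue, { Severity::Normal,    L"Set up backup",
                         L"Your files are not being backed up." });
        Publish(queue, { Severity::Important, L"Update antivirus signatures",
                         L"Antivirus definitions are out of date." });
    }
    ```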

    Finally, we designed the user experience (UX) of the Action Center in two parts. The first and most immediately visible is the system icon in the notification area, which is a "lighthouse" in 6801. In the spirit of our efforts, this icon replaces five notification area icons from Vista, further reducing the clutter and noise in the system. The lighthouse icon provides a high level view of the number of messages in Action Center and their importance. It also has a fly-out menu on single left click which lists the four most recent notifications and supports you acting on messages contextually. We give people the ability to click on a notification in that fly-out menu and immediately go to the UI to solve the issue. Again, the focus is solving issues instead of simply notifying.

    Figure 4: Action Center notification area icon and fly-out menu

    The second part of the UX is the control panel, which builds upon the icon and fly-out by serving as a repository for all messages as well as providing more details about the issue and the solution. It is also action based so the layout emphasizes messages and the corresponding solutions with even more detail. Additional actions are available if you expand the UI to view them. Finally, we know that we won’t always have messages about the issues a person might be having on their machine. To make sure you can solve those issues, we provide top level links to Troubleshooter and Recovery options.

    Figure 5: Action Center Control Panel with a few messages queued up

    Action Center boils down to understanding that your time is valuable and that it is your PC: you want to control it, not be controlled by it. We reduced messages, focused on solving issues rather than just telling you about them, and streamlined the experience so you can focus on what you want to do, not what Windows needs you to do. We are aiming to get most sessions down to zero notifications from Windows itself. This reduction in notifications could significantly increase the possibility that a notification balloon will be effective in delivering its message and prompting user action, as shown in Figure 2 (notification click-through).

    We will of course be evangelizing to ISVs the goal of following this direction and reducing notification balloons – and we believe we’ve taken the first steps to making Windows a quieter place. Hopefully you will find it less distracting and easier to work with.

    Sean Gilmour, senior program manager

  • Engineering Windows 7

    Back from the PDC…next up, WinHEC

    • 108 Comments

    This has been an amazingly special week for the Windows 7 team.  We’re all incredibly appreciative of the reception of Windows 7 this week at the PDC.  Thank you!

    All of us on the team have been closely watching the news reports and blogs of those who have been “kicking the tires” of the Windows 7 pre-beta.  The reception has been fantastic and we’re humbled by the excitement and enthusiasm for the release.  We know we have a ton of work ahead of us to get to beta and then the path to RTM, and the reception has definitely given us an extra special motivation (though we were already pretty motivated).

    Next week is our conference dedicated to the hardware partners in the ecosystem we have talked about.  Called WinHEC (Windows Hardware Engineering Conference), we’ll have another series of sessions and keynotes.  Jon DeVaan will be taking the lead as we dive into the details of “fundamentals” and the work we are doing with some of the many partners involved in Windows 7.  WinHEC also has a strong focus on Windows Server 2008 R2 (the server built off the Windows 7 kernel).  These sessions will all be available online as well.

    So with all the shows, we’re taking a short break from the blog, as the folks who do the presenting are also the writers (myself included).

    Below is a list of all the sessions on Windows 7 from the PDC.  Please take some time to have a look as the information is very detailed for sure.  How about using the comments on this post to ask questions about the sessions that you’d like to see more details on down the road?  That would be really helpful for us to target our posts.

    Many of you have written asking about the beta and how to sign up or download it.  The best source for information on that will be the site http://www.microsoft.com/windows/windows-7 which our product marketing team owns and will keep up to date as the beta information is available.  Also note that the Windows Vista blog which is where you will see announcements / news has been updated to reflect the inclusion of Windows 7.  This blog is now known as the Windows Blog.

    One of the very fun moments for me at the PDC was an “Open Space” session on the floor of the “Big Room” which was an open-microphone discussion.  Channel9 captured this and might be a fun watch.  See http://channel9.msdn.com/posts/Charles/Steven-Sinofsky-at-the-PDC2008-Open-Space/

    For those of you interested in the Windows 7 APIs and what’s new for developers there is an overview document that you might find valuable.  See Windows 7 Developer Guide on MSDN.

    Thank you very much for all the emails you have sent.  I always share them with the team and really appreciate it.

    Presentation URL
    KYN02 Day Two #1 - Ray Ozzie, Steven Sinofsky, Scott Guthrie and David Treadwell (Windows 7 starts +17:00 minutes) http://channel9.msdn.com/pdc2008/KYN02/
    PC01 Windows 7: Web Services in Native Code http://channel9.msdn.com/pdc2008/PC01/
    PC02 Windows 7: Extending Battery Life with Energy Efficient Applications http://channel9.msdn.com/pdc2008/PC02/
    PC03 Windows 7: Developing Multi-touch Applications http://channel9.msdn.com/pdc2008/PC03/
    PC04 Windows 7: Writing Your Application to Shine on Modern Graphics Hardware http://channel9.msdn.com/pdc2008/PC04/
    PC13 Windows 7: Building Great Audio Communications Applications http://channel9.msdn.com/pdc2008/PC13/
    PC14 Windows 7 Scenic Ribbon: The next generation user experience for presenting commands in Win32 applications. http://channel9.msdn.com/pdc2008/PC14/
    PC15 Windows 7: Benefiting from Documents and Printing Convergence http://channel9.msdn.com/pdc2008/PC15/
    PC16 Windows 7: Empower users to find, visualize and organize their data with Libraries and the Explorer http://channel9.msdn.com/pdc2008/PC16/
    PC18 Windows 7: Introducing Direct2D and DirectWrite http://channel9.msdn.com/pdc2008/PC18/
    PC19 Windows 7: Designing Efficient Background Processes http://channel9.msdn.com/pdc2008/PC19/
    PC22 Windows 7: Design Principles for Windows 7 http://channel9.msdn.com/pdc2008/PC22/
    PC23 Windows 7: Integrate with the Windows 7 Desktop http://channel9.msdn.com/pdc2008/PC23/
    PC24 Windows 7: Welcome to the Windows 7 Desktop http://channel9.msdn.com/pdc2008/PC24/
    PC25 Windows 7: The Sensor and Location Platform: Building Context-Aware Applications http://channel9.msdn.com/pdc2008/PC25/
    PC42 Windows 7: Deploying Your Application with Windows Installer (MSI) and ClickOnce http://channel9.msdn.com/pdc2008/PC42/
    PC43 Deep Dive: What's New with user32 and comctl32 in Win32 http://channel9.msdn.com/pdc2008/PC43/
    PC44 Windows 7: Programming Sync Providers That Work Great with Windows http://channel9.msdn.com/pdc2008/PC44/
    PC50 Windows 7: Using Instrumentation and Diagnostics to Develop High Quality Software http://channel9.msdn.com/pdc2008/PC50/
    PC51 Windows 7: Best Practices for Developing for Windows Standard User http://channel9.msdn.com/pdc2008/PC51/
    PC52 Windows 7: Writing World-Ready Applications http://channel9.msdn.com/pdc2008/PC52/
    ES20 Developing Applications for More Than 64 Logical Processors in Windows Server 2008 R2 http://channel9.msdn.com/pdc2008/ES20/

    See you on this blog soon enough!

    --Steven

  • Engineering Windows 7

    Follow-up: Windows Desktop Search

    • 77 Comments

    The discussion and email about desktop search offered an opportunity for us to have a deeper architectural discussion about engineering Windows 7.  There were a number of comments suggesting alternate implementation methods so we thought we’d discuss another approach and the various pros and cons associated with it.  It offers a good example of the engineering balance we are striving for with Windows 7.  Chris McConnell wrote this follow-up.  --Steven (See you at the PDC in a week!)

    Thanks for all the great feedback on our first blog post on Windows Desktop Search.  I’ve summarized a number of points that have been made and added some comments about the architectural choices we have made and why.

    Integration with the File System

    As some posters have pointed out, one possible implementation is to integrate indexing with the file system so that updating a file immediately updates the indices.  Windows Desktop Search takes a different approach.  There are two aspects of file system integration: knowing when a file changes and actually updating the indices before a file is considered “closed” and available.  On an NTFS file system, the indexer is notified whenever a file changes.  The indexer never scans the NTFS file system except during the initial index.  It is on the second point (updating the indices immediately when a file is closed) that we made a different choice.  Updating immediately has the benefit that a file is not available until it is indexed, but it also comes with a number of potential disadvantages.  We chose to decouple indexing from file system operations because it allows for more flexibility while still being almost real-time.  Here are some of the benefits we see in the approach we took:

    1. Fewer resources are used.  Inverted indices are global.  An inverted index maps from a word found in a property to a list of every document that contains that word.  Indexing a single file requires updating an index for every single unique word found in the file.   A single document might then update a very large number of individual indices.  Making these changes and committing them with the same robustness found on individual files would be very expensive.  The design of the indexer allows scheduling and aggregating these changes so that much less work is done overall—that means less CPU and less disk I/O.  The system can be more robust because indexing doesn’t only happen when a file is closed—and it can even be retried if necessary.
    2. File system operations are prioritized over indexing.  Getting files robustly updated and available is necessary for applications to use them.  We don’t want to delay that availability by forcing the cost of indexing into file close operations.   Searching over files is important, but is less important than actually working with files.  We wouldn’t want applications to decide individually if the indexer should be turned on or off just because they were seeking the best performance with respect to the file system.
    3. There are lots of file types.  Microsoft supplies extractors (IFilter/IPropertyHandler) for many common file types as part of Windows.  There are many other file types as well so it is important to allow non-Microsoft developers to write their own extractors.  In Vista (and Windows 7), these extractors run in a locked down process that ensures that they are secure and do not affect the performance of the whole system.  If indexing had to happen before a file was available, then an extractor could impact (intentionally or not) all file system operations.  
    4. Some files are more valuable to index than others.  If indexing happened when a file is closed, then there is no control over the order files are indexed.  Decoupling allows prioritizing indexing some files over others.  For example, searching for music is much more likely than searching for binary files.  If both music files and binary files have changed, then the indexer ensures it indexes the music files first.  Some files are not worth indexing at all for most people.  Several comments suggested that we should index the whole drive.  We can do that—and for those who would find it valuable it is easy to add folders to be indexed.  (You can also remove them, but that is much less common so that is controlled through the control panel “Indexing Options.”)  For most people indexing system files is just a cost—they would never search for them and would be confused if they showed up as the result of a search. 
    5. Not everything is a file in a single file system.  Windows is all about supporting diversity.  There are many different file systems like FAT32 and CDFS and we would like to be able to search over those as well.  If we integrated with only NTFS, then we would still have to have a loosely coupled system for other file systems.  Many applications also have databases optimized for their own needs.  For example, Outlook has a database of email.  If only files were indexed, then the email in the database could not be indexed unless Outlook either compromised their experience by using files only, or complicated their implementation by duplicating everything in both the file system and the database.
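
    The “notify now, index later” decoupling described above can be sketched with public APIs. The example below is only an illustration, not how the indexer is actually built (Windows Search listens to the NTFS change journal and runs as a service); it uses ReadDirectoryChangesW to receive change notifications for a folder, the point being that noticing a change and updating the index are separate steps. The watched path is an assumption.

    ```cpp
    #include <windows.h>
    #include <stdio.h>

    // Illustration only: print change notifications for C:\Users (assumed to exist).
    // A real service would queue these paths and index them later at low priority.
    int wmain()
    {
        HANDLE dir = CreateFileW(L"C:\\Users", FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (dir == INVALID_HANDLE_VALUE)
            return 1;

        alignas(DWORD) BYTE buffer[64 * 1024];
        DWORD bytesReturned = 0;
        while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytesReturned, nullptr, nullptr))
        {
            FILE_NOTIFY_INFORMATION* fni = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer);
            for (;;)
            {
                wprintf(L"Changed: %.*s\n",
                        static_cast<int>(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
                if (fni->NextEntryOffset == 0)
                    break;
                fni = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                    reinterpret_cast<BYTE*>(fni) + fni->NextEntryOffset);
            }
        }

        CloseHandle(dir);
        return 0;
    }
    ```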

    Advanced Queries

    A number of people expressed frustration with the lack of an advanced query UI.  Microsoft has many advanced query user-interfaces in many products, but these are generally focused on well-defined query languages (SQL) or on specific domains (like the Advanced Find in Outlook).  With Vista we wanted to address the query problem in a manner more familiar to people today—a single edit control.  Our implementation supports a rich query language within that edit control.  This is the same approach people are familiar with for web searching for both standard and advanced queries.

    We had two observations that led to this approach:

    1. The most important part of a search is the search terms.  Usually a single term is enough (and as we know from web searching, the majority of searches are one or two words).  And for refinement the file system tools of thumbnails, sorting, and/or type ahead can be used to narrow the search.  
    2. It is reasonable to consider a design for an advanced query UI covering property based search, but it will generally be unwieldy for all but the bravest people.  As we mentioned, Windows Search covers over 300 properties by default, so if you show every property then the UI is unusable.  If we only show the most commonly used properties then how do you handle all of the other properties?  Would properties be grouped by the common application or by attributes such as times, names, file attributes, etc.?  Some of you might value the Outlook Advanced Find… interface, but there you see some of the challenges, and that is within a specific domain where the grouping of related properties probably can be understood. 

    In designing Vista we incorporated the feedback that it is desirable to do precise queries.  The approach taken in Vista was to support a rich query language which allows all properties and a fairly natural syntax.  For example typing “from:gerald sent:today” will find all email from “Gerald” sent today!  The big issue is that people do not know about the query language.  In Windows 7, we have focused on helping people see how to use the query language in context. For now, you can see the following for some information on Vista’s query syntax.  Much of this syntax and experience is similar to web search that we all use today.

    A number of comments were about substring matches in filenames, which we do not currently support.  This is part of the overall discussion about advanced queries.  In order to efficiently execute queries, the indexer builds indices that are based on individual words.  In Vista we introduced “searching as you type” to our search UI.  Under the hood this is implemented as prefix matches on the indexed words.  So when you type ‘foo’, we look for all terms that start with those letters, including ‘food’ and ‘football’.  Even more interesting, if you type ‘foo net’ we will match on items that have the words ‘food’ and ‘network’ in them.  (If what you really want is to match the phrase “foo net” then typing those words inside quotes will do that—another example of advanced query syntax.)  We have focused primarily on searching for terms found in any property, but there is no question that filenames are special.  In recognition of that we support suffix queries on filenames.  If you type ‘*food’ then we will return files that end in ‘food’ like “GoodFood”.  We do this by reversing the filename and then indexing it as a word.  For example, the reverse filename of “GoodFood” would be “dooFdooG”, which we index as a word.  The suffix query ‘*food’ is transformed into a prefix query ‘doof*’ over the reverse filename index—clever, no?  So we support prefix matches for all properties and suffix matches for filenames, but we do not support substring matches. 
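
    Here is a small self-contained sketch of that reverse-filename trick, intended only to illustrate the idea rather than the indexer’s actual data structures. A sorted set stands in for the word index; a suffix query is answered by reversing the pattern and doing a prefix scan over reversed filenames. The filenames are invented examples.

    ```cpp
    #include <algorithm>
    #include <set>
    #include <string>
    #include <iostream>

    // Sketch: a sorted set stands in for the indexer's word index.
    using Index = std::set<std::wstring>;

    std::wstring Reversed(std::wstring s) { std::reverse(s.begin(), s.end()); return s; }

    // Return every entry in 'index' that starts with 'prefix' (a prefix scan).
    std::set<std::wstring> PrefixMatch(const Index& index, const std::wstring& prefix)
    {
        std::set<std::wstring> hits;
        for (auto it = index.lower_bound(prefix);
             it != index.end() && it->compare(0, prefix.size(), prefix) == 0; ++it)
            hits.insert(*it);
        return hits;
    }

    int main()
    {
        Index names = { L"goodfood", L"football", L"foam" };
        Index reversedNames;
        for (const auto& n : names) reversedNames.insert(Reversed(n));

        // Prefix query "foo*" matches "football" and "foam".
        for (const auto& hit : PrefixMatch(names, L"foo"))
            std::wcout << L"prefix hit: " << hit << L"\n";

        // Suffix query "*food" becomes prefix query "doof*" over the reversed index.
        for (const auto& hit : PrefixMatch(reversedNames, Reversed(L"food")))
            std::wcout << L"suffix hit: " << Reversed(hit) << L"\n";
    }
    ```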

    Performance and Citizenship

    A number of comments focused on improving performance and citizenship—and we definitely agree on this input.  We are always striving to make Windows do more with fewer resources.  For those who have turned off indexing altogether, we hope that our continued improvements will make you reconsider.  Even if you organize all of your files and don’t find search useful for files, perhaps you will find start menu search, email search or Internet Explorer 8 address bar search useful.  We have worked hard at improving performance and citizenship across Windows.  Some of this progress is visible in WS4 and soon in Windows 7.  We have improved along all of our dimensions including indexing cost, battery life, citizenship, query speed and scrolling speed.  We have some tremendous tools that help us track down performance problems.  If you want to help, please contact idx-help@microsoft.com and we will tell you how to collect performance traces we can analyze so that we can continue to make improvements.

    Chris McConnell

    Find and Organize

  • Engineering Windows 7

    From Idea to Feature: A view from Design

    • 67 Comments

    Larry is very appreciative of the reception and comments his post received. Thank you!  It is worth noting that we’ve surpassed 2,000 comments and I’ve received an equal amount of email. I am trying to reply as often as I can! 

    We’re 10 days from the PDC and so we might take a short break from the blog while we practice our demos of Windows 7…we’ll keep an eye on comments for sure and maybe a post or two on the way.  :-)

    Let's move "up" in the dev process and look at how we come up with what is in a release and how we think about taking something from an idea to a feature. 

    As we’ve posted on various engineering challenges we’ve often distilled the discussion down to a few decisions, often between two options (make a feature optional or not, add a window management feature one of two ways, etc.) Yet this doesn’t quite get to the challenge of where product definition begins and how we take an idea and turn it into a feature. Most choices in engineering Windows are not between two options, but among the myriad considerations, variables, and possibilities we have before we even get down to a couple of options. This post looks a bit at the path from an idea to a feature.

    A common thread we’ve seen in the feedback is to make “everything customizable and everything optional” (not a direct quote of course). Of course, by virtue of providing a platform we aim to offer the utmost in extensibility and customization by writing to the APIs we provide. There is an engineering reality that customization and extensibility have their cost—performance, complexity, and forward compatibility come to mind. One way to consider this is that if a feature has two “modes” (often enable the new feature or enable the old feature) in one release, then in a follow-up release if the feature is changed it potentially has four modes (old+old, old+new, new+old, new+new), and then down the road eight modes, and so on. The complexity of providing a stable and consistent platform comes with the cost that we aren’t always able to “hook” everything and do have to make practical choices about how a feature should work, in an effort to plan for the future. Designing a feature is also about making choices, tough choices. At the same time we also want to provide a great experience at the core operating system functions of launching programs, managing windows, working with files, and using a variety of peripherals--to name just a few things Windows does. This experience should be one that meets the needs of the broadest set of people across different skill levels and different uses of PCs, while also providing mechanisms to personalize the user interface and to customize with code. Every release we plan is a blending of fixing things that just don’t work like we all had hoped and developing new solutions to both old and new problems, a blending of features and extensibility, and a blending of better support for existing hardware and support for new hardware.

    This post is jointly written by Samuel Moreau, the manager of the user experience design team for the Windows Experience; Brad Weed, Director of User Experience Design and Research for Windows and Windows Live; and Julie Larson-Green, the VP of Program Management for the Windows Experience. With the number of comments that describe a specific feature idea, we thought it would be good to give you an overview of how we approach the overall design process and how ideas such as the ones you mention flow into our process. Also for those of you attending the PDC, Sam will be leading a session on the design principles of Windows 7. –Steven

    Designing Windows – from idea to feature

    In general, we follow a reasonably well-understood approach to product design, but that doesn’t make it easy or “automatic”. Often this is referred to as a "design funnel" where ideas go from concept to prototype to implementation and then refinement.  By reading the various design ideas in the comments of Chaitanya’s post on “Starting, Launching and Switching”, you can see how difficult it can be to arrive at a refined feature design. In those comments you can find equally valid, yet somewhat opposite points of view. Additionally, you can also find comments that I would paraphrase as saying “do it all”. It is the design process that allows us to work through the problem to get from idea to feature in the context of an overall product that is Windows.

    From a product design perspective, the challenge of building Windows is the breadth of unique usage of just a single product. In a sense, one of the magic elements of software is that it is “soft” and so you can provide all the functionality to all customers with little incremental cost and little difference in “raw materials” (many comments have consistently suggested we have everything available along with options to choose components in use and we have talked about minimizing the cost when components and features are not used even if they are available). And at the same time, there is a broad benefit to developers when they can know a priori that a given PC has a common set of functions and can take advantage of specific APIs that are known to be there and known to behave a specific way--the platform. This benefit of course accrues to individuals too as you can walk up to any PC and not only have a familiar user experience, but if you want to do your own work, use a specific device, or run a certain program on the PC you can also do that. This breadth of functionality is a key part of the value of a Windows PC. Yet it also poses a pretty good design challenge. Learning, understanding, and acting on the broad set of inputs into designing Windows is an incredibly fun and challenging part of building Windows.

    As Larry pointed out, the design and feature selection takes place in his part of the organization (not way up here!). There’s another discussion we will have in a future post about arriving at the themes of the overall release and how we develop the overall approach to a release so that the features fit together and form a coherent whole and we address customer needs in an end-to-end fashion.

    We have a group of product designers that are responsible for the overall interaction design of Windows, the sequence and visualization of Windows. Program Managers across the team work with product designers as they write the specifications. Along with designers we have UX Researchers who own the testing and validation of the designs as we’ve talked about before. The key thing is that we apply a full range of skills to develop a feature while making sure that ownership is clear and end-to-end design is clear. The one thing we are not is a product where there is “one person” in charge of everything. Some might find that to be a source of potential problems and others might say that a product that serves so many people with such a breadth of features could not be represented by a single point of view (whether that is development, testing, design, marketing, etc.). We work to make sure engineers are in charge of engineering, that the product has a clear definition that we are all working towards implementing and that product definition represents the goals across all the disciplines it takes to deliver Windows to customers around the world.  And most importantly, with Windows 7 we are making renewed effort at "end to end" design.

    Let’s look at the major phases of product design in Engineering Windows. What we’ll talk about is of course generalized and doesn’t apply to each specific incident. One thing we always say internally is that we’re a learning organization—so no process is perfect or done and we are always looking to make it better as we move through each and every iteration of building Windows.

    Throughout this post when we say “we” what this really means is the individuals of each discipline (dev, test, pm, design) working together—there’s no big feature or design committee.

    Pick the question or get an idea

    We get an idea from somewhere of something to do—it could be big (build UX to support a new input method such as touch), wild (change the entire UI paradigm to use 3D), or an improvement / refinement of an existing feature (multi-monitor support), as some examples. There is no shortage of creative ideas, because frankly, that is the easy part. Ideas flow in from all corners of the ecosystem, ourselves included. We’ve talked a lot about comments and feedback from this blog and that is certainly one form of input. Product reviews, enterprise customers, customer support lines, PC makers, hardware and software developers, blogs, newsgroups, MVPs, and many others have similar input streams into the team.

    The key is that working on Windows is really a constant stream of inputs. We start with a framework for the release that says what goals and scenarios we wish to make easier, better, faster. And with that, program management builds up candidate ideas—that is, ideas that could make their way into features. The job of getting a feature “baked” enough falls to program management, and they do this work by working across the product and with design, development, and testing (as Larry described).

    With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.

    Gather information and data

    Given any idea, the first step is to understand what data we have “surrounding” the idea. Sometimes the idea itself comes to us in a data-centric manner (customer support incidents); other times it is anecdotal (a blog).

    The first place we look is to see what data we have based on real world usage that would support the development of a hypothesis, refute or support the conventional wisdom, or just shed some light on the problem.  The point is that the feature starts its journey by adding more perspectives to the input.

    Essentially, we need an objective view that illuminates the hypothesis. We gather this data from multiple sources including end users, customers, partners, and in various forms such as instrumentation, research, usability studies, competitive products, direct customer feedback, and product support.

    As many (including us) have pointed out, telemetry data has limitations. First, it can never tell you what a person might have been trying to do—it only tells you what they did. Through usability, research, and observation, we are able to get more at the intent.  For example, consider the way we talked about high dpi and how the telemetry showed one thing but the intent was different (and the impact of those choices was unintended). The best way to see this is to remember that a person using a PC is not interested in “learning to use a PC” but is trying to get their own work done (or their own playtime). And when faced with a “problem” the only solutions available are the buttons and menu commands right there—the full solution set is the existing software. Our job is to get to the root of the problem and then either expand the solution set or just make the problem go away altogether.

    What about unarticulated needs?  The data plus intent shows the “known world” and “known solution space”, but one role we have is to be forward thinking and consider needs or desires that are not clearly articulated by those who do not have the full time job to consider all the potential solution spaces. The solution space could potentially be much broader than readily apparent from the existing and running product—it might involve a rearchitecture, new hardware, or an invention of a new user interface.

    A great example of this was mentioned in one of the comments on the taskbar post. The comment (paraphrasing) indicated that the order of icons on the taskbar matters so sometimes he/she would simply close all the open programs and then restart them just so the programs were in the preferred order on the taskbar. Here the data would look like an odd sequence of launch/exit/launch/exit/launch/launch. And only through other means would we learn why someone would be doing that, and for the most part if you just walked up without any context and said “how can we make Windows easier” it isn’t likely this would bubble up to the top of the list of “requests”. Thus we see a number of neat things in this one example—we see how the data would not show the intent, the “request” would not be at the top of any list, and the solution might take any number of forms, and yet if solved correctly could be a pretty useful feature. Above all, in hindsight this is one of those “problems” that seems extraordinarily obvious to solve (and I am sure many of you are saying—“you should have just asked me!”) So we also learn the lesson that no matter what data and information we gather or what design we’re talking about, someone always noticed or suggested it :-).

    Hypothesize

    The next step is where we propose a clear hypothesis – “people would benefit from rearranging icons on the taskbar because positional memory across different sessions will reduce the time to switch applications and provide a stronger sense of control and mastery of Windows”.

    What is our hypothesis (in a scientific sort of way) about what opportunity exists or what problem we would solve, what the solution would look like, and why the problem exists in the first place?  Part of designing the feature is to think through the problem—why does it exist—and then propose the benefit that would come from solving it. It is important that we have a view of the benefit in the context of the proposed solution. It is always easy to motivate a change because it feels better, or because something is broken so a new thing must be better, but it is very important that we have a strong motivation for why something will benefit customers.

    Another key part of the hypothesis is to understand the conventional wisdom around the area, especially as it relates to the target customer segment (end-user, enthusiast, PC maker, etc.). The conventional wisdom covers both the understanding of how/why a feature is a specific way today and whether there is a community view of how something should be solved. There are many historic examples where the conventional wisdom was very strong and either had to be factored into the design or had to be considered knowing the design was not going to follow it—a famous example is the role of keyboard shortcuts in menus, which the “DOS” world felt were required (because not every PC had a mouse) but which on the Mac were “unnecessary” because there was always a mouse. Conventional wisdom in the DOS world was that a mouse was optional.

    Experiment

    For any hypothesis, there are numerous design alternatives. It is at this stage where we cast a broad net to explore various options. We sketch, write scenarios, story board, do wireframes and generate prototypes in varying fidelity. Along the way we are working to identify not just the “best answer” but to tease out the heart and soul of the problem and use the divergent perspectives to feed into the next step of validation.

    This is a really fun part of the design process. If you walk our hallways you might see all sorts of alternatives in posters on the walls, or you might catch a program manager or designer with a variety of functional prototypes (PowerPoint is a great UI design tool for scenarios and click-thrus that balance time to create with fidelity, and Visio is pretty cool for this as well) or our designers often mock up very realistic prototypes we can thoroughly test in the lab.

    Interpret and Validate

    With a pile of options in front of us we then take the next step of interpreting our own opinions, usability test data and external (to the team) feedback. This is the area where we end up in conversations that, as an example, could go something like this… “Option ‘A’ is better at elevating the discovery of a new feature, but option ‘B’ has a stronger sense of integration into the overall user experience”.

    As we all know, at the micro level you can often find a perfect solution to a specific problem. But when you consider the macro level you start to see the pros and cons of any given solution. This is why we have to be very careful not to fall into the trap of relying on a single “test”. The trap here is that it is usually not possible to test a feature within the full context of usage, but only within the context of a specific set of scenarios. You can’t test how a feature relates to all potential scenarios or usages while also getting rich feedback on intent. This is why designing tests and interpreting the results is such a key part of the overall UX effort led by our researchers.

    A mathematical way of looking at this is the “local min” versus the “global min”. A local min is one you find if you happen to start optimizing at the wrong spot on the curve. A good example of this in software is when, faced with a usability challenge, you develop a new control or new UI widget. It seems perfectly rational and often will test very well, especially if the task asked of the subject is to wiggle the widget appropriately. However, what we’re after is a global optimization, where one can see that the potential costs (in code, quality, and usability) of yet another widget can erase any benefit gained by introducing it. Much has been written about the role of decision theory as it relates to choosing between options, but our challenge with design is the preponderance of qualitative elements.
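    To make the local/global analogy concrete, here is a tiny, purely illustrative numeric sketch (the function and numbers are invented for this example and have nothing to do with any Windows code): a plain gradient descent that settles in a shallow local minimum or in the deeper global one depending only on where it starts.

        def f(x):
            # A curve with two valleys: a shallow local minimum near x = 1.9
            # and a deeper global minimum near x = -2.1.
            return x**4 - 8*x**2 + 3*x

        def df(x):
            return 4*x**3 - 16*x + 3

        def descend(x, lr=0.01, steps=2000):
            # Plain gradient descent; it can only roll downhill from wherever it starts.
            for _ in range(steps):
                x -= lr * df(x)
            return x

        for start in (3.0, -3.0):
            x = descend(start)
            print(f"start={start:+.1f}  ends at x={x:+.2f}, f(x)={f(x):.2f}")
        # Starting at +3 settles in the shallow valley (a "local min");
        # starting at -3 finds the deeper valley (the "global min").

    The design analogy: evaluating one widget against one narrow task is like starting the descent at +3. You can converge on something that looks optimal locally while missing a better answer for the product as a whole.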

    Choosing

    Ultimately we must pick a design and that choice is informed by the full spectrum of data, qualitative and quantitative.

    Given a choice for a design and an understanding of how to implement it and the cost, there is still one more choice—should we do the feature at all?  It sounds strange that we would go through all this work and then still maybe not build a specific feature. But like a movie director who shoots a scene that ends up on the cutting-room floor, sometimes the design didn’t pan out as we had hoped, sometimes we were not able to develop an implementation plan within reason, or sometimes there were other ideas that just seemed better. And this is all before we get to the implementation, which as Larry pointed out has challenges as well.

    We have two tools we use to assist us in prioritizing features and designs. First is the product plan—the plan says at a high level what we “require” the product to achieve in terms of scenarios, business goals, schedule, and so on. Most of the time, when features don’t make it all the way through prototyping and testing it is because they just aren’t going to be consistent with the overall goals of the release. These goals are important; otherwise a product doesn’t “hang together” and runs the risk of feeling like a “bunch of features”. These high-level goals inform us quite a bit in terms of what code we touch and what scenarios we consider for a release.

    And second we have the “principles of design” for the release. These principles represent the language or vocabulary we use and the core values of the release—we often think of the design principles as anthropomorphizing the product: “if Windows were a person then it would be…”. This is the topic of Sam’s talk at the PDC.

    As mentioned in the introduction, it isn’t possible to do everything twice. We do have to decide. This could be a whole series of posts—customization, compatibility modes, and so on. We definitely hear folks on these topics and always do tons of work to enable both “tweaking” and “staying put”; at the same time we need to balance these goals with the goals of providing a robust and performant platform and moving the OS forward. Some of us were involved in Office 2007, and there is a fun case study done by Harvard Business School [note: a fee is associated with retrieving the full text] about the decision to (or not to) provide a “compatibility mode” for Office 2007. This was a difficult choice at the time, and a few folks have even mentioned it in the comments.

    Implement and Integrate

    Finally, we build and iterate to refine the chosen solution. Inevitably there are new discoveries in the implementation phase and we adjust accordingly. As we integrate the solution into its place in Windows, that discovery continues. The beta period is a good example of how we continue to expand and learn from usage and feedback. In a Windows beta we are particularly interested in compatibility and real-world performance as those are two aspects of the design that are difficult to validate without the breadth of usage we can get if we do a great beta.

    It is important to keep in mind that we intensely follow all the feedback we receive in all its forms—reviews, blogs, and of course all the telemetry about how the product is used (realizing that the beta is a select group of people).

    One of the things we hope to do with the blog, as you might have seen on the IE 8 Blog, is to discuss the evolution of the product in real-time. We’re getting close to this transition and are looking forward to talking more about the design choices we made!

    -- Sam, Brad, and Julie

  • Engineering Windows 7

    Engineering 7: A view from the bottom

    • 63 Comments

    Aka: A developers view of the Windows 7 Engineering process

    This post is by Larry Osterman.  Larry is one of the most “experienced” developers on the Windows team and has been at Microsoft since the mid-1980s.  There are only three other folks on the entire Windows team who have worked at Microsoft longer!  Personally, I remember knowing about Larry when I started at Microsoft back in 1989—I remember he worked on “multimedia” (back when we used to host the Microsoft CD-ROM Conference) and he was one of those people who stood up and received a “5 Year” award from Bill Gates at the first company meeting I went to—that seemed amazing back then!  For Windows 7, Larry is a developer on the Devices and Media team, which is where we work on audio, video, Bluetooth, and all sorts of cool features for connecting devices to Windows. 

    Larry wrote this post without any prodding and given his experience on so many Windows releases these thoughts seemed really worthwhile in terms of sharing with folks.  This post goes into “how” we work as a team, which for anyone part of a software team might prove pretty interesting.  While this is compared and contrasted with Vista, everyone knows that there is no perfect way to do things and this is just a little well-informed perspective.

    So thank you Larry!  --Steven

    Thanks to Steven and Jon for letting me borrow their soapbox :-).

    I wanted to discuss my experiences working on building Windows 7 (as opposed to the other technical stuff that you’ve read on this blog so far), and to contrast that with my experiences building Windows Vista. Please note that these are MY experiences. Others will have had different experiences; hopefully they will also share their stories here.

    The experience of building Windows 7 is dramatically different from the experience of building Vista. The rough outlines of the product development process haven’t changed, but organizationally, the Windows 7 process is dramatically better.

    For Windows Vista, I was a part of the WAVE (Windows Audio Video Excellence) group. The group was led by a general manager who was ultimately responsible for the deliverables. There was a test lead, a development lead and a program management lead who reported to the general manager. The process of building a feature roughly worked like this: the lead program managers decided (based on criteria which aren’t relevant to the post) which features would be built for Windows and which program managers would be responsible for which feature. The development leads decided which developers on the team would be responsible for the feature. The program manager for the feature wrote a functional specification (which described the feature and how it should work) in conjunction with development. Note that the testers weren’t necessarily involved in this part of the process. The developer(s) responsible for the feature wrote the design specification (which described how the feature was going to be implemented). The testers associated with the feature then wrote a test plan which described how to test the feature. The program manager or the developer also wrote the threat model for the feature.

    The developer then went off to code the feature, the PM spent their time making sure that the feature was on track, and when the developer was done, the tester started writing test cases.

    Once the feature was coded and checked into the source tree, it moved its way up to the “winmain” branch. Aside: the Windows source code is arranged into “branches” – the root is “winmain”, which is the code base that would ultimately become Windows Vista. Each developer works in what are called “feature branches”, which merge changes into “aggregation branches”; the aggregation branches in turn merge into winmain.

    After the feature was coded, the testers tested, the developers fixed bugs, and the program managers managed the program :-). As the product moved further along, it got harder and harder to get bug fixes checked into winmain (every bug fix carries a chance of introducing a regression, so the risk associated with each fix had to be measured against a tolerance for risk that steadily decreased as the release approached). The team responsible for managing this process met in the “ship room” where they made decisions every single day about which changes went into the product and which ones were left out. There could be a huge amount of acrimony associated with that – oftentimes there were debates that lasted for hours as the various teams responsible for quality discussed the merits of a particular fix.

    All-in-all, this wasn’t too different from the way that features have been developed at Microsoft for decades (and is basically consistent with what I was taught back in my software engineering class back in college).

    For Windows 7, management decided to alter the engineering structure of the Windows organization, especially in the WEX [Windows Experience] division where I work. Instead of a fairly hierarchical structure, Steven has 3 direct reports, each representing a particular discipline: Development, Test and Program Management. Under each of the discipline leads, there are 6 development/test/program management managers, one for each of the major groups in WEX. Those 2nd-level managers in turn have half a dozen or so leads, each one with between 5 and 15 direct reports. This reporting structure has been somewhat controversial, but so far IMHO it’s been remarkably successful.

    The other major change is the introduction of the concept of a “triad”. A “triad” is a collection of representatives from each of the disciplines – Dev, Test and PM. Essentially all work is now organized by triads. If there’s ever a need for a group to concentrate on a particular area, a triad is spun off to manage that process. That means that all three disciplines provide input into the process. Every level of management is represented by a triad – there’s a triad at the top of each of the major groups in WEX, each of the second level leads forms a triad, etc. So in my group (Devices and Media) there’s a triad at the top (known as DKCW for the initials of the various managers). Within the sound team (where I work), there’s another triad (known as SNN for the initials of the various leads). There are also triads for security, performance, appcompat, etc.

    Similar to Windows Vista, the leads of all three disciplines get together and decide on a set of features that go in each release. They then create “feature crews” to implement each of the features. Typically a feature crew consists of one or two developers, a program manager and one or two testers.

    This is where one of the big differences between Vista and Windows 7 occurs: In Windows 7, the feature crew is responsible for the entire feature. The crew together works on the design, the program manager(s) then writes down the functional specification, the developer(s) write the design specification and the tester(s) write the test specification. The feature crew collaborates together on the threat model and other random documents. Unlike Windows Vista where senior management continually gave “input” to the feature crew, for Windows 7, management has pretty much kept their hands off of the development process. When the feature crew decided that it was ready to start coding (and had signed off on the 3 main documents), the feature crew met with the second level triad (in my case with DKCW) to sanity check the feature – this part of the process is critical because the second level triad gets an opportunity to provide detailed feedback to the feature crew about the viability of their plans.

    And then the crew finally gets to start coding. Sort-of. There are still additional reviews that need to be done before the crew can be considered “ready”. For instance, the feature’s threat model needs to be reviewed by one of the members of the security triad. There are other parts of the document that need to be reviewed by other triads as well.

    A feature is not permitted to be checked into the winmain branch until it is complete. And I do mean complete: the feature has to be capable of being shipped before it hits winmain – the UI has to be finished, the feature has to be fully functional, etc. In addition, when a feature team takes a dependency on another Windows 7 feature, the feature teams for the two features MUST sign a service level agreement to ensure that each team knows about the inter-dependencies. This SLA is especially critical because it ensures that teams know about their dependents – that way when they change the design or have to cut parts of the feature, the dependent teams aren’t surprised (they may be disappointed but they’re not surprised). It also helps to ensure tighter integration between the components – because one team knows the other team, they can ensure that both teams are more closely in alignment.

    Back in the Vista days, it was not uncommon for feature development to be spread over multiple milestones – stuff was checked into the tree that really didn’t work completely. During Win7, the feature crews were forced to produce coherent features that were functionally complete – we were told to operate under the assumption that each milestone was the last milestone in the product and not to schedule work to be done later on. That meant that teams had to focus on ensuring that their features could actually be implemented within the milestone as opposed to pushing them out.

    For the nuts and bolts: the Windows 7 development process is scheduled over several 3-month-long milestones. Each milestone allows for 6 weeks of development and 6 weeks of integration – essentially time to fine-tune the feature and ensure that most of the interoperability problems are shaken out.

    Ok, that’s enough background (it’s bad when over half a post on Windows 7 is actually about Windows Vista, but a baseline needed to be established). As I said at the beginning, this post is intended to describe my experiences as a developer on Windows 7. During Windows 7, I worked on three separate feature crews. The first crew delivered two features, the second crew delivered about 8 different features, all relatively minor, and the third crew delivered three major features and a couple of minor features. I also worked as the development part of the WEX Devices and Media security team (which is where my series of posts on Threat Modeling came from – I wrote them while I was working with the members of D&M on threat modeling). And I worked as the development part of an end-to-end scenario triad that was charged with ensuring that the scenarios the Sound team defined at the start of the Windows 7 planning process were actually delivered in a coherent and discoverable way.

    In addition, because the test team was brought into the planning process very early on, the test team provided valuable input and we were able to ensure that we built features that were not only code complete but also test complete by the end of the milestone (something that didn’t always happen in Vista). And it ensured that the features we built were actually testable (it sounds stupid I know, but you’d be surprised at how hard it can be to test some features).

    As a concrete example, we realized during the planning process that some aspect of one of the features we were working on in M2 couldn’t be completed during the milestone. So before the milestone was completed, we ripped the feature out (to be more accurate, we changed the system so that the new code was no longer being built as a part of the product). During the next milestone, after the test team had finished writing their tests, we re-enabled the feature. But we remained true to the design philosophy – at the end of the milestone everything that was checked into the “main” branch was complete – it was code AND test complete, so that even if we had to ship Windows 7 without M3 there was no test work that was not complete. This is a massive change from Vista – in Vista, since the code was complete we’d have simply checked in the code and let the test team deal with the fallout. By integrating the test teams into the planning process at the beginning we were able to ensure that we never put the test organization into that bind. This in turn helped to ensure that the development process never spiraled out of control.

    Please note that features can and do stretch across multiple milestones. In fact one of the features on the Sound team is scheduled to be delivered across three milestones – the feature crews involved in that feature carefully scheduled the work to ensure that they would have something worth delivering whenever Windows 7 development was complete.

    Each of the feature crews I’ve worked on so far has had dramatically different focuses – some of the features I worked on were focused on core audio infrastructure, some were focused almost entirely on UX (user experience) changes, and some features involved much higher level components. Because each of the milestones was separate, I was able to work on a series of dramatically different pieces of the system, something I’ve really never had a chance to do before.

    In Windows 7, senior management has been extremely supportive of the various development teams that have had to make the hard decisions to scale back features that were not going to be able to make the quality bar associated with a Windows release – and there absolutely are major features that have gone all the way through planning only to discover that there was too much work associated with the feature to complete it in the time available. In Vista it would have been much harder to convince senior management to abandon features. In Win7 senior management has stood behind the feature teams when they’ve had to make the tough decisions. One of the messages that management has consistently driven home to the teams is “cutting is shipping”, and they’re right. If a feature isn’t coming together, it’s usually far better to decide NOT to deliver a particular feature than to have that feature jeopardize the ability to ship the whole system. In a typical Windows release there are thousands of features and it would be a real shame if one or two of those features ended up delaying the entire system because they really weren’t ready.

    The process of building 7 has also been dramatically more transparent – even sitting at the bottom of the stack, I feel that I’ve got a good idea about how decisions are being made. And that increased transparency in turn means that as an individual contributor I’m able to make better decisions about scheduling. This transparency is actually a direct fallout of management’s decision to let the various feature teams make their own decisions – by letting the feature teams deeper inside the planning process, the teams naturally make better decisions.

    Of course that transparency works both ways. Not only were teams allowed to see more about what was happening in the planning process, but because management introduced standardized reporting mechanisms across the product, the leads at every level of the hierarchy were able to track progress against plan at a level that we’ve never had before. From an individual developer’s standpoint, the overhead wasn’t too onerous – basically once a week, you were asked to update your progress against plan on each of your work items. That status was then rolled up into a series of spreadsheets and web pages that allowed each manager to track all the teams’ progress against plan. This allowed management to easily and quickly identify which teams were having issues and take appropriate action to ensure that the schedules were met (either by simplifying designs, assigning more developers, or whatever).
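    To give a feel for what that kind of roll-up amounts to, here is a minimal, purely illustrative sketch (the team names, work items, and numbers are invented, and the real tracking was done in internal spreadsheets and tools) that aggregates per-work-item status into a per-team view of progress against plan:

        from collections import defaultdict

        # Hypothetical weekly status rows: (team, work_item, planned_days, days_remaining)
        status = [
            ("Sound",   "endpoint UI",    10, 2),
            ("Sound",   "API cleanup",     5, 5),
            ("Devices", "pairing wizard",  8, 0),
        ]

        rollup = defaultdict(lambda: [0, 0])   # team -> [planned, remaining]
        for team, _, planned, remaining in status:
            rollup[team][0] += planned
            rollup[team][1] += remaining

        for team, (planned, remaining) in sorted(rollup.items()):
            pct_done = 100 * (planned - remaining) / planned
            print(f"{team:10s} {pct_done:5.1f}% of planned work complete")

    The point is only that the raw inputs were tiny and cheap for each developer to provide, while the aggregated view gave every level of management the same picture of progress against plan.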

    In general, it’s been a total blast building 7. We’ve built some truly awesome features into the operating system and we’ve managed to keep the system remarkably stable during that entire process.

    --Larry Osterman

  • Engineering Windows 7

    Windows Desktop Search

    • 138 Comments

    One of the points of feedback has been about disabling services and optionally installing components—we’ve talked about our goals in this area in previous posts.  A key driver around wanting this type of control (but not the only driver) is a perception around performance and resource consumption of various platform components.  A goal of Windows is to provide a reliable and consistent platform for developers—one where they can count on system services as being available, as well as a set of OS features that all customers have the potential to benefit from.  At the same time we must do so in a way that is efficient in system resource usage—efficient enough so the benefit outweighs the cost.  We recognize that some percentage of customers believe solving this equation can only be done manually—much like some believe that the best car performance can only come from manual transmission.  For this post we’re going to look into the desktop search functionality from the perspective of the work we’re doing as both a broadly available platform component and to provide the rich end-user functionality, and also look at the engineering tradeoffs involved and techniques we use to build a great solution for everyone.  Chris McConnell, a principal SDE on the Find and Organize team, contributed this post.  --Steven

    Are you one of those folks who believes that search indexing is the cause of your drive light flashing like mad? Do you believe this is the reason you’re getting skooled when playing first person shooters with friends? If so, this blog post is for you! The Find and Organize team owns the ‘Windows Search’ service, which we simply refer to as the ‘indexer’. A refrain that we hear from some Vista power-users is they want to disable the indexer because they believe it is eating up precious system resources on their PC, offering little in return. Per our telemetry data, at most about 1.5% of Vista users disable the indexing service, and we believe that this perception is one motivator for doing so.

    The goal of this blog post is to clarify the role of the indexer and highlight some of the work that has been done to make sure the indexer uses system resources responsibly. Let’s start by talking about the function of the indexing service – what is it for? why should you leave it running?

    Why Index?

    Today’s PCs are filled with many rich types of files, such as documents, photos, music, videos, and so on. The number of files people have on their PC is growing at a rapid pace, making it harder and harder for them to find what they’re looking for, no matter how organized their files may (or may not) be. Increasingly, these files contain a good deal of structure, with metadata properties which describe their contents. A typical music file contains properties which describe the artist, album name, year of release, genre, duration of the song, and others which can be very useful when searching for music.

    Although search indexing technologies date back to the early days of Windows, it was Windows Vista that brought this functionality to mainstream users far more prominently in a consumer operating system. Prior to Vista, searching was pretty rudimentary – often a brute-force crawl through the files on your machine, looking only at simple file properties such as file name, date modified, and size, or an application-specific index of application-specific data. Within Windows, a more comprehensive search option allowed you to also examine the contents of the files, but this wasn’t widely used. It was fairly basic functionality – it treated all files just the same, without tapping into the rich metadata properties available in the files.

    In Windows Vista, the indexing service is on by default and includes expanded support in terms of the number of file formats and properties which are indexed. The indexer watches specific folders on your PC and catalogues their contents to facilitate fast searching of those files. When Windows indexes your music files, it also knows how to extract the music-specific properties which you’re most likely to search for. This enables support for more powerful searches and richer views over your files which wasn’t possible before. But this indexing doesn’t come free, and this is where engineering gets interesting. There’s a non-zero cost (in terms of system resources) that has to be paid to enable this functionality, and there are trade-offs involved in when and how you pay that price. There is nothing unique about indexing in this regard—all features have this cost-benefit tradeoff. 

    Trade-Offs

    Many search solutions follow(ed) the traditional “grep” model, which means every search reads all of the files you want to search. In this case, you paid with your time as you waited for the search to execute. The more files you searched, the longer you waited each time you searched. If you wanted to perform the same search again, you would “pay” again. And the value you were getting in return wasn’t very good, since the search functionality wasn’t particularly powerful. With Windows Vista, the indexer tries to read all of your files before you search so that when you search, it’s generally quicker and more responsive. This requires the indexer to scan all of your files just once initially, and not each and every time you perform a search. If a file were to change, the indexer would receive a notification (a “push” event) so that it could read that file again. When the indexer reads a file, it extracts the pertinent information about the file to enable more powerful searches and views. The challenge is to do this quickly enough that the index is always up to date and ready for you to search, while doing so in such a way that it doesn’t negatively impact the performance of your system. This is always a balancing act requiring trade-offs, and there are a number of things the indexer does to maintain its standing as a good Windows citizen while working to make sure that the index is always up-to-date.
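    To illustrate the trade-off (this is a toy sketch, not the Windows indexer; the real service also extracts typed metadata properties through filter and property handlers, persists its catalog, and removes stale entries), here is a minimal inverted index: build it once, re-read only the files that change, and answer queries without re-reading everything the way a “grep”-style search would.

        import os
        import re
        from collections import defaultdict

        class TinyIndex:
            """A toy inverted index: word -> set of file paths (illustration only)."""

            def __init__(self):
                self.postings = defaultdict(set)

            def index_file(self, path):
                # A real indexer would also drop the file's old postings here and
                # extract typed properties (artist, date taken, etc.), not just words.
                try:
                    with open(path, "r", errors="ignore") as f:
                        words = set(re.findall(r"\w+", f.read().lower()))
                except OSError:
                    return
                for word in words:
                    self.postings[word].add(path)

            def search(self, word):
                # A cheap dictionary lookup instead of re-reading every file per query.
                return self.postings.get(word.lower(), set())

        idx = TinyIndex()
        # Build the catalog once up front (the folder below is just an example path).
        for root, _, files in os.walk(os.path.expanduser("~/Documents")):
            for name in files:
                if name.endswith(".txt"):
                    idx.index_file(os.path.join(root, name))

        # When a change notification arrives, re-index only that one file:
        #   idx.index_file(changed_path)

        print(sorted(idx.search("invoice")))

    You pay the scanning cost once, up front and in the background, instead of paying it again on every query.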

    A Model Citizen

    A lot of work has gone into making the indexer a model Windows citizen. We’ve written an extensive whitepaper on the issue, but it’s worth covering some of the highlights here. First and foremost, the indexer only monitors certain folders, which limits the amount of work it needs to do to just those files that you’re most likely to search. The indexer also “backs off” when you are actively using your PC: it indexes files more slowly, or stops entirely, depending on the level of activity on the PC. When the indexer is reading files it uses low-priority I/O and CPU and immediately releases a file if another application needs access.
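    The back-off idea itself is simple to sketch. The snippet below is only a conceptual illustration (it is not how the Windows Search service is actually implemented): a Windows-only loop that checks how long the user has been idle via the Win32 GetLastInputInfo call and throttles or pauses its work accordingly. The two-second and five-second thresholds are arbitrary values chosen for the example.

        import ctypes
        import ctypes.wintypes as wintypes
        import time

        class LASTINPUTINFO(ctypes.Structure):
            _fields_ = [("cbSize", wintypes.UINT), ("dwTime", wintypes.DWORD)]

        def idle_seconds():
            # Seconds since the last keyboard/mouse input, per GetLastInputInfo.
            info = LASTINPUTINFO()
            info.cbSize = ctypes.sizeof(LASTINPUTINFO)
            ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
            return (ctypes.windll.kernel32.GetTickCount() - info.dwTime) / 1000.0

        def index_next_batch():
            # Placeholder for reading and cataloguing a small batch of files.
            pass

        def run_backoff_loop():
            while True:
                if idle_seconds() < 2.0:
                    # The user is actively typing or clicking: back off entirely.
                    time.sleep(5.0)
                else:
                    # The machine looks idle: do a small unit of work, then yield.
                    index_next_batch()
                    time.sleep(0.1)

    The real service layers this kind of behavior on top of low-priority I/O and CPU, so the operating system itself, not just the loop, keeps background work out of the foreground’s way.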

    It’s critical that we get all of these issues right for the indexer, because it’s not only important for the features that our team builds (like Windows Search), it’s important to the Windows platform as a whole. There are a host of applications which require the ability to search file contents on the PC. Imagine if each one of those applications built its own version of the indexer! Even if all of these applications did a great job, there would be a lot of unnecessary and redundant activity happening on your PC. Every time you saved one of your documents there would be a flurry of activity as these different indexers rushed to read the new version. To combat that, the indexer is designed to do this work for any application which chooses to use it, and to provide an open platform and API with the flexibility and extensibility to meet needs across the Windows ecosystem. Out of the box, the indexer has knowledge of about 200 common file types, cataloging nearly 400 different properties by default. And there is support for applications to add new file types and properties at any time. Applications can also add support for indexing of data types that aren’t file-based at all, like your e-mail. Just a few of the applications that are leveraging the indexer today are Microsoft Office Outlook and OneNote, Lotus Notes, Windows Live Photo Gallery, Internet Explorer 8, and Google Desktop Search. As with all extensible systems, developers often find creative uses for these system services. One example of this is the way the Tablet PC components leverage the index contents to improve handwriting accuracy.
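    As a small illustration of the indexer as a platform, applications can query the catalog through the Windows Search OLE DB provider rather than crawling the disk themselves. The sketch below is illustrative rather than production code: it assumes the third-party pywin32 package is installed, uses the documented Search.CollatorDSO provider, and the search term “vacation” is just a placeholder.

        import win32com.client  # from the third-party pywin32 package (assumed installed)

        # Connect to the Windows Search index through its OLE DB provider.
        conn = win32com.client.Dispatch("ADODB.Connection")
        conn.Open("Provider=Search.CollatorDSO;Extended Properties='Application=Windows';")

        rs = win32com.client.Dispatch("ADODB.Recordset")
        query = (
            "SELECT TOP 10 System.ItemName, System.ItemPathDisplay "
            "FROM SYSTEMINDEX "
            "WHERE CONTAINS('\"vacation\"')"
        )
        rs.Open(query, conn)

        while not rs.EOF:
            print(rs.Fields.Item("System.ItemPathDisplay").Value)
            rs.MoveNext()

        rs.Close()
        conn.Close()

    The application gets results in milliseconds from the shared catalog instead of spinning up its own crawler and paying the I/O cost all over again.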

    Constantly Improving

    We’re constantly working to improve the indexer’s performance and reliability. Version 3 shipped in Windows Vista.  Major improvements in this version included:

    • The indexer runs as a system service rather than as a per-user process.  This minimizes the impact on multi-user scenarios: e.g., a single catalog per system reduces catalog size and prevents re-indexing of the same content over and over.  An additional benefit comes from the robust nature of services.
    • The indexer employs low-priority I/O to minimize the impact of indexing on the responsiveness of the PC.  Before Windows Vista, all I/O was treated equally.

    We’ve already released Windows Search version 4 as an enhancement to either Windows XP or Vista which goes even further in terms of performance and stability improvements, such as:

    • Significant improvements across the board for queries which involve sorting, filtering or grouping. Example improvements on Vista include:
      1. Getting all results while sorting or grouping has been improved; typical queries are up to 38% faster.
      2. CPU time has been reduced by 80%
      3. Memory usage has been reduced by 20%
    • Load on Exchange servers is reduced by over 95% when Outlook is running in online mode.  With previous versions of Windows Search, large numbers of Outlook clients running in online mode could easily overwhelm the Exchange server.
    • Reliability improvements including:
      1. We made a number of fixes to address user-reported situations that previously caused indexing to stop working.
      2. We improved the indexer’s ability to both prevent and recover from index corruptions.  Now, when catalog corruption is detected it is always rebuilt automatically – previously this only happened in certain cases.
      3. We added new logging and events to help track down and fix reliability issues.

    And we’ve done even more to improve performance and reliability for the indexer in Windows 7 which you’ll soon see at the PDC. If you still believe that the indexer is giving you trouble, we’ve got a few things for you to try:

    • Download and install Windows Search 4 (on Vista or XP).
    • Download and install the Indexer Gadget from the Windows Live Gadget Gallery (Vista only). This gadget was written by one of our team members, and gives you a quick way to view the number of items indexed. It also allows you to pause indexing, or to make it run full-speed (without backing off).
    • If you’re one of those people who like to get under the hood of the car and poke around the engine, you can use Windows Task Manager and/or Resource Monitor to monitor the following processes: SearchIndexer, SearchFilterHost, SearchProtocolHost.

    If you feel as though your system is slow, and you suspect the indexer is the culprit, watch the gadget as you work with your PC. Is the number of indexed items changing significantly when you’re experiencing problems? If you pause the indexer, does your system recover? We’re always looking to make our search experience better, so if you are still running into issues, we want to hear about them. Send your feedback to idx-help@microsoft.com.
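    If you would rather script that than watch Task Manager, here is a small optional sketch (it assumes the third-party psutil package is installed and is just one way to do it) that samples CPU usage for the three indexer processes every few seconds:

        import time
        import psutil  # third-party package (pip install psutil); assumed available

        WATCHED = {"SearchIndexer.exe", "SearchFilterHost.exe", "SearchProtocolHost.exe"}

        def sample():
            for proc in psutil.process_iter(["name"]):
                if proc.info["name"] in WATCHED:
                    try:
                        # cpu_percent(interval=0.1) measures this process over a short window.
                        cpu = proc.cpu_percent(interval=0.1)
                        print(f"{proc.info['name']:24s} pid={proc.pid:6d} cpu={cpu:5.1f}%")
                    except psutil.Error:
                        pass  # the process may have exited, or access may be denied

        if __name__ == "__main__":
            while True:
                sample()
                print("-" * 48)
                time.sleep(5)

    If the numbers stay near zero while your machine feels slow, the indexer probably isn’t the culprit.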

    Chris McConnell

    Find and Organize

  • Engineering Windows 7

    User Account Control

    • 188 Comments

    We promised that this blog would provide a view of Engineering Windows 7 and that means that we would cover the full range of topics—from performance to user interface, technical and non-technical topics, and of course easy topics and controversial topics.  This post is about User Account Control.  Our author is Ben Fathi, vice president for core OS development.  UAC is a feature that crosses many aspects of the Windows architecture—security, accounts, user interface, design, and so on—we had several other members of the team contribute to the post. 

    We continue to value the discussion that the posts seem to inspire—we are betting (not literally of course) that this post will bring out comments from even the most reserved of our readers.  Let’s keep the comments constructive and on-topic for this one.

    FWIW, the blogs.msdn.com server employs some throttles on comments that aim to reduce spam.  We don’t control this and have all the “unmoderated” options checked.  I can’t publish the spam protection rules since that sort of defeats the purpose (and I don’t know them).  However, I apologize if your comment doesn’t make it through.  --Steven

    User Account Control (UAC) is, arguably, one of the most controversial features in Windows Vista. Why did Microsoft add all those popups to Windows? Does it actually improve security? Doesn’t everyone just click “continue”? Has anyone in Redmond heard the feedback from users and reviewers? Has anyone seen a TV commercial about this feature? 

    In the course of working on Windows 7 we have taken a hard look at UAC – examining customer feedback, volumes of data, the software ecosystem, and Windows itself. Let’s start by looking at why UAC came to be and our approach in Vista.

    The Why of UAC

    Technical details aside, UAC is really about informing you before any “system-level” change is made to your computer, thus enabling you to be in control of your system. An “unwanted change” can be malicious, such as a virus turning off the firewall or a rootkit stealthily taking over the machine. However an “unwanted change” can also be actions from people who have limited privileges, such as a child trying to bypass Parental Controls on the family computer or an employee installing prohibited software on a work computer. Windows NT has always supported multiple user account types – one of which is the “standard user,” which does not have the administrative privileges necessary to make changes like these. Enterprises can (and commonly do) supply most employees with a standard user account while providing a few IT pros administrative privileges. A standard user can’t make system level changes, even accidentally, by going to a malicious website or installing the wrong program. Controlling the changes most people can make to the computer reduces help desk calls and the overall Total Cost of Ownership (TCO) to the company. At home, a parent can create a standard user account for the children and use Parental Controls to protect them.

    However, outside the enterprise and the Parental Controls case, most machines (75%) have a single account with full admin privileges. This is partly due to the first user account defaulting to administrator, since an administrator on the machine is required, and partly due to the fact that people want and expect to be in control of their computer. Since most users have an Administrator account, this has historically created an environment where most applications, as well as some Windows components, always assumed they could make system-level changes. Software written this way would not work for standard users, such as the enterprise user and Parental Controls cases mentioned above. Additionally, giving every application full access to the computer left the door open for damaging changes to the system, either intentionally (by malware) or unintentionally (by poorly written software).

    Figure 1. Percentage of machines (server excluded) with one or more user accounts from January 2008 to June 2008.

    User Account Control was implemented in Vista to address two key issues: one, incompatibility of software across user types and two, the lack of user knowledge of system-level changes. We expanded the account types by adding the Protected Admin (PA), which became the default type for the first account on the system. When a PA user logs into the system, she is given two security tokens – one identical to the Standard User token that is sufficient for most basic privileges and a second with full Administrator privileges. Standard users receive only the basic token, but can bring in an Administrator token from another account if needed.

    When the system detects that the user wants to perform an operation which requires administrative privileges, the display is switched to “secure desktop” mode, and the user is presented with a prompt asking for approval. The reason the display is transitioned to “secure desktop” is to avoid malicious software attacks that attempt to get you to click yes to the UAC prompt by mimicking the UAC interface (spoofing the UI.) They are not able to do this when the desktop is in its “secure” state. Protected Admin users are thus informed of any system changes, and only need to click yes to approve the action. A standard user sees a similar dialog, but one that enables her to enter Administrative credentials (via password, smart card PIN, fingerprint, etc) from another account to bring in the Administrator privileges needed to complete the action. In the case of a home system utilizing Parental Controls, the parent would enter his or her login name and password to install the software, thus enabling the parent to be in control of software added to the system or changes made to the system. In the enterprise case, the IT administrator can control the prompts through group policy such that the standard user just gets a message informing her that she cannot change system state.
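    For application developers, the visible surface of this token split is whether a process is running with the standard token or the full administrator token, and how to ask for elevation when it is genuinely needed. As a minimal, hedged sketch (standard Win32 calls made through Python’s ctypes; this is not how Windows itself implements UAC, and the deprecated IsUserAnAdmin check is used only for brevity):

        import ctypes
        import sys

        def running_elevated():
            """True if this process holds the full administrator token."""
            try:
                return bool(ctypes.windll.shell32.IsUserAnAdmin())
            except AttributeError:
                return False  # not on Windows

        def relaunch_elevated():
            # The "runas" verb asks the shell to start the program elevated,
            # which is what triggers the UAC consent or credential prompt.
            ctypes.windll.shell32.ShellExecuteW(
                None, "runas", sys.executable, f'"{sys.argv[0]}"', None, 1)

        if __name__ == "__main__":
            if running_elevated():
                print("Running with the full administrator token.")
            else:
                print("Running with the standard-user token; requesting elevation...")
                relaunch_elevated()

    A well-behaved application isolates the few operations that genuinely need the full token and runs everything else with the standard token, which is the direction the data below shows the ecosystem moving.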

    What we have learned so far

    We are always trying to improve Windows, especially in the areas that affect our customers the most. This section will look at the data around the ecosystem, Windows, and end-users—recognizing that the data itself does not tell the story of annoyance or frustration that many reading this post might feel. 

    UAC has had a significant impact on the software ecosystem, Vista users, and Windows itself. As mentioned in previous posts, there are ways for our customers to voluntarily and anonymously send us data on how they use our features (Customer Experience Improvement Program, Windows Feedback Panel, user surveys, user in field testing, blog posts, and in house usability testing). The data and feedback we collect help inform and prioritize the decisions we make about our feature designs. From this data, we’ve learned a lot about UAC’s impact.

    Impact on the software ecosystem

    UAC has resulted in a radical reduction in the number of applications that unnecessarily require admin privileges, which is something we think improves the overall quality of software and reduces the risks inherent in running software that requires full administrative access to the system.

    In the first several months after Vista was available for use, people were experiencing a UAC prompt in 50% of their “sessions” – a session is everything that happens from logon to logoff or within 24 hours. Furthermore, there were 775,312 unique applications (note: this shows the volume of unique software that Windows supports!) producing prompts (note that an installer and the application itself are not counted as the same program). This seems large, and it is, since much of the software ecosystem unnecessarily required admin privileges to run. As the ecosystem has updated its software, far fewer applications require admin privileges. Customer Experience Improvement Program data from August 2008 indicates the number of applications and tasks generating a prompt has declined from 775,312 to 168,149.

    Figure 2. Number of unique applications and tasks creating UAC prompts.

    This reduction means more programs work well for Standard Users without prompting every time they run or accidentally changing an administrative or system setting. In addition, we expect that as people use their machines longer they install new software or configure Windows settings less frequently, which results in fewer prompts; conversely, when a machine is new there is unusually high activity with respect to administrative needs. Customer Experience Improvement Program data indicates that the number of sessions with one or more UAC prompts has declined from 50% to 33% of sessions with Vista SP1.

    Figure 3. Percentage of sessions with prompts over time.

    Impact on Windows

    An immediate result of UAC was an increase in the engineering quality of Windows. There are now far fewer Windows components with full access to the system. Additionally, all the components that still need to access the full system must ask the user for permission to do so. We know from our data that Windows itself accounts for about 40% of all UAC prompts. This is even more dramatic when you look at the most frequent prompts: Windows components accounted for 17 of the top 50 UAC prompts in Vista and 29 of the top 50 in Vista SP1. Some targeted improvements in Vista SP1 reduced Windows prompts from frequently used components such as the copy engine, but clearly we have more we can (and will) do. The ecosystem also worked hard to reduce its prompts, which is why the number of Windows components on the top-50 list increased. Windows has more of an opportunity to make deeper architectural changes in Windows 7, so you can expect fewer prompts from Windows components. Reducing prompts in the software ecosystem and in Windows is a win-win proposition. It enables people to feel confident they have a greater choice of software that does not make potentially destabilizing changes to the system, and it enables people to more readily identify critical prompts, thus providing a more confident sense of control.

    One important area of feedback we’ve heard a lot about is the number of prompts encountered during a download from Internet Explorer. This is a specific example of a more common situation - where an application’s security dialogs overlap with User Account Control. Since XP Service Pack 2, IE has used a security dialog to warn users before running programs from the internet. In Vista, this often results in a double prompt – IE’s security dialog, followed immediately by a UAC dialog. This is an area that should be properly addressed.

    Figure 4. Number of Microsoft prompters in the top 50 over time.

    Impact on Customers

    One extra click to do normal things like open Device Manager, install software, or turn off your firewall is sometimes confusing and frustrating for our users. Here is a representative sample of the feedback we’ve received from the Windows Feedback Panel:

    • “I do not like to be continuously asked if I want to do what I just told the computer to do.”
    • “I feel like I am asked by Vista to approve every little thing that I do on my PC and I find it very aggravating.”
    • “The constant asking for input to make any changes is annoying. But it is good that it makes kids ask me for password for stuff they are trying to change.”
    • “Please work on simplifying the User Account control.....highly perplexing and bothersome at times”

    We understand adding an extra click can be annoying, especially for users who are highly knowledgeable about what is happening with their system (or for people just trying to get work done). However, for most users, the potential benefit is that UAC forces malware or poorly written software to show itself and get your approval before it can potentially harm the system.

    Does this make the system more secure? If every user of Windows were an expert who understood the cause/effect of all operations, the UAC prompt would make perfect sense and nothing malicious would slip through. The reality is that some people don’t read the prompts, and thus gain no benefit from them (and are just annoyed). In Vista, some power users have chosen to disable UAC – a setting that is admittedly hard to find. We don’t recommend you do this, but we understand you find value in the ability to turn UAC off. For the rest of you who try to figure out what is going on by reading the UAC prompt, there is the potential for a definite security benefit if you take the time to analyze each prompt and decide if it’s something you want to happen. However, we haven’t made things easy on you – the dialogs in Vista aren’t easy to decipher and are often not memorable. In one lab study we conducted, only 13% of participants could provide specific details about why they were seeing a UAC dialog in Vista.  Some didn’t remember they had seen a dialog at all when asked about it. Additionally, we are seeing consumer administrators approving 89% of prompts in Vista and 91% in SP1. We are obviously concerned that users are responding out of habit due to the large number of prompts rather than focusing on the critical prompts and making confident decisions. Many would say this is entirely predictable.

    Figure 5. Percentage of prompts over time per prompt type.

    Figure 6. Percentage of UAC prompts allowed over time.

    Looking ahead…

    Now that we have the data and feedback, we can look ahead at how UAC will evolve—we continue to feel the goal we have for UAC is a good one and so it is our job to find a solution that does not abandon this goal. UAC was created with the intention of putting you in control of your system, reducing cost of ownership over time, and improving the software ecosystem. What we’ve learned is that we only got part of the way there in Vista and some folks think we accomplished the opposite.

    Based on what we’ve learned from our data and feedback we need to address several key issues in Windows 7:

    • Reduce unnecessary or duplicated prompts in Windows and the ecosystem, such that critical prompts can be more easily identified.
    • Enable our customers to be more confident that they are in control of their systems.
    • Make prompts informative such that people can make more confident choices.
    • Provide better and more obvious control over the mechanism.

    The benefits UAC has provided to the ecosystem and Windows are clear; we need to continue that work. By successfully enabling standard users, UAC has achieved its goal of giving IT administrators and parents greater control to lock down their systems for certain users. As shown in our data above, we’ve seen the number of external applications and Windows components that unnecessarily require Admin privileges drop dramatically. This also has the direct benefit of reducing the total number of prompts users see, a frequent complaint. Moving forward we will look at the scenarios we think are most important for our users so we can ensure none of those scenarios include prompts that can be avoided. Additionally, we will look at the “top prompters” and continue to engage with third-party software vendors and internal Microsoft teams to further reduce unnecessary prompts.

    More importantly, as we evolve UAC for Windows 7 we will address the customer feedback and satisfaction issues with the prompts themselves. We’ve heard loud and clear that you are frustrated. You find the prompts too frequent, annoying, and confusing. We still want to provide you control over what changes can happen to your system, but we want to provide you a better overall experience. We believe this can be achieved by focusing on two key principles. 1) Broaden the control you have over the UAC notifications. We will continue to give you control over the changes made to your system, but in Windows 7, we will also provide options such that when you use the system as an administrator you can determine the range of notifications that you receive. 2) Provide additional and more relevant information in the user interface. We will improve the dialog UI so that you can better understand and make more informed choices. We’ve already run new design concepts based on this principle through our in-house usability testing and we’ve seen very positive results. 83% of participants could provide specific details about why they were seeing the dialog. Participants preferred the new concepts because they are “simple”, “highlight verified publishers,” “provide the file origin,” and “ask a meaningful question.” 

    In summary, yes, we’ve heard the responses to the UAC feature – both positive and negative. We plan to continue to build on the benefits UAC provides as an agent for standard users, making systems more secure. In doing so, we will also address the overwhelming feedback that the user experience must improve.

    Ben Fathi
