Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

September 2008

  • Engineering Windows 7

    User Interface: Starting, Launching, and Switching

    • 131 Comments

    Where to Start?  In this post, Chaitanya Sareen, a senior program manager on the Core User Experience team, sets the engineering context for one of the most frequently used user-interface elements in Windows – the Windows Taskbar.  -- Steven

    It should come as no surprise that we receive lots of feedback about the taskbar and its functionality in general. It should also come as no surprise that we are constantly trying to raise the bar and improve the taskbar experience for our customers, while making sure we bring forward the familiarity and benefits (and compatibility) of the existing implementation and design. In this post, we would like to provide some insight into that unassuming bar most likely at the bottom of your Windows desktop. Let’s take a closer look at its various parts, the data we’ve collected and how this learning will inform the engineering of Windows 7.

    Taskbar Basics

    Our taskbar made its debut way back in Windows 95 and its core functionality remains the same to this day. In short, it provides launching, switching and “whispering” functionality. Figure 1 shows the Vista taskbar and calls out its basic anatomy. Notable pieces are the taskband, Quick Launch, the Start Menu, Desktop Toolbars (aka Deskbands) and the Notification Area. Collectively, these components afford some of the most fundamental controls for customers to start, manage and monitor their tasks.

    Image of Windows taskbar pointing out names of various regions.

    Fig. 1: Windows Taskbar Anatomy

    Taskband: The faithful window switcher

    The taskband is one of the most important parts of the taskbar. It hosts buttons which represent most of the windows open on the desktop. Think of the taskband as a remote control for your computer—you can switch windows just like switching channels on a TV. The idea of switching windows is the most fundamental aspect of the Windows taskbar. Other operating systems also have bars at the bottom of their screen, although theirs may have different goals. For example, Mac OS X has a Dock which is primarily a program launcher and a program switcher. Clicking on an icon on the Dock usually brings up all the windows of a running program. In 2003 Apple introduced a window switcher known as Exposé which provides a different visual approach to our long-standing Alt-tab interface (Vista’s Flip 3D is yet another visual approach). These dedicated window switchers all aim to provide customers with a broad view of their open windows, but they each require the customer to first invoke them. The taskband, on the other hand, is designed to always be visible so that windows remain within quick access of the mouse. This makes the taskbar the most prominent window switcher of the Windows operating system.

    Two noteworthy taskbar changes were introduced in the last eight years. Windows XP ushered in grouping, which allows taskbar buttons to collapse into a single button to save space and organize windows by their process. Vista presented taskbar thumbnails. These visual representations give customers more information about the window they are looking for. While valuable, interfaces like the taskbar, Alt-tab and even Apple’s own Exposé reveal that thumbnails are not always large enough to guarantee recognition of a window. Their value further degrades when they have to shrink to accommodate many open windows, which is feedback we often receive from those who run lots of programs and keep lots of windows open.

    The Start Menu: the Windows launch pad

    The Start Menu has always been anchored off the taskbar as a starting point for the customer’s key tasks such as launching or accessing system functionality. Microsoft of course used the term “Start” and prominently labeled the Start Menu’s button as such. You may even recall the huge marketing campaign for Windows 95 which featured the Rolling Stones’ “Start Me Up”. In all seriousness though, our research showed that many customers didn’t always know where to go on their computer to start a task. When a customer was placed in front of a Windows 95 machine she now had a clearly labeled place to start. And yes, we’ve heard the joke that you click Start to shut down your machine. Speaking of shutdown, we did encounter some challenges with the power options in Vista’s Start Menu. The goal was to bubble up and advertise the sleep option so that customers enjoy a faster resume. However, we now know that despite our good intentions, customers are opening that fly-out menu and selecting other options. We’re looking into improving this experience.

    The Start Menu has undergone many changes over the years. One notable change was the appearance of an MFU (most frequently used) section in Windows XP that suggests commonly (well, frequently) used programs. The goal here was to save the customer time by not having to always go to All Programs. Since these items appear automatically based on usage, no manual customization was even required. All Programs itself has undergone several iterations. Customer feedback revealed that people encountered difficulty in traversing the original All Programs fly-out menu. It wasn’t uncommon to have your mouse “fall off” the menu, at which point you’d have to restart the task all over again. This was particularly the case for laptop customers using a trackpad. It also didn’t help that expanding this menu suddenly filled the entire desktop, which looked visually noisy and also required lots of mouse movement. And of course, for machines with a large number of items and/or groups it was especially complex, and even more so on small screens.  Vista introduced a single menu that requires less mouse acrobatics.

    Search was another important addition to the Start Menu that makes launching even easier. This new feature in Vista provides fast access to programs and files without the need to use a mouse at all. Typing in a phrase quickly surfaces programs, files and even e-mails. We’ve received many positive comments from enthusiasts who feel this is a key performance win in terms of “time to launch”.  It may be interesting to note that the Start Menu’s search is optimized to first return program results, as this was viewed as the most common scenario among our customers (using some of the Desktop Search technology). Search even permits customers to use parameters to further scope their queries. For instance, one can use “to:john” or “from:jane” to find a specific mail directly from the Start Menu. Our advanced customers also enjoy the benefit of using the Start Menu’s search as a replacement for the Run dialog. Just as they would type the name of an executable along with some switches in the dialog, they can now type this directly into the search field. We could (and will) dedicate an entire blog post to search alone, but hopefully you get a sense of how search certainly provides a powerful launch alternative to mouse navigation.

    Quick Launch: Launching at your fingertips

    Quick Launch provides a way for customers to launch commonly used programs, files, folders and websites directly off the taskbar. It was introduced to Windows 95 by Internet Explorer 4.0 with the Windows Desktop Update. Customizing Quick Launch is as simple as dragging shortcuts into this area. It saves you a trip to the Start Menu, the desktop or a folder when you want to launch something. An interesting feature of Quick Launch that you may not be aware of is that it has always supported large icons (unlock the taskbar, right-click on Quick Launch and click on large icons under “View”), as seen in figure 2. Of course, growing the icons begins to intrude on the real estate of the taskband, which is one of the reasons we have not enabled this configuration by default. As an aside, Windows XP had Quick Launch turned off by default in an attempt to reduce the number of different launching surfaces throughout Windows. Based on your feedback, we quickly rectified this faux pas and Quick Launch was turned on by default again. Don’t mess with quick access to things people use every day! We heard you loud and clear.

    Image of the taskbar showing Quick Launch with large icons.

    Fig. 2: Large Icons in Quick Launch. Large icons on the taskbar have been supported since Windows 95 with IE 4

    Desktop Toolbars (aka Deskbands): Gadgets for your taskbar

    Desktop Toolbars offer extensible and specialized functionality at the top-level of the taskbar. This functionality also came to the taskbar via Internet Explorer 4.0 back in the ‘90s. You can access toolbars by right-clicking on your taskbar and expanding “Toolbars”. Personally, I like to think of Desktop Toolbars as an early type of gadgets for the Windows platform. Over the years developers have written various toolbars including controls for background music (e.g. Windows Media Player’s mini-mode shown in figure 1), search fields, richer views of laptop batteries, weather forecasts and many more.

    One of the original scenarios of Desktop Toolbars was to allow customers to launch items directly off the taskbar. In fact, Quick Launch itself is a special type of toolbar that surfaces shortcuts in the Quick Launch folder. Did you know you can even create your own toolbar for any folder on your computer so that you have quick access to its contents (from the Toolbar menu, select “New Toolbar” and just choose the folder you’d like to access)? Apple’s latest OS introduced similar functionality to the Dock called Stacks. While I think their implementation of this feature is generally more visually appealing, it is interesting to note they recently released a new list representation that matches our original functionality. Seems like we both agree a simple list is usually the most efficient way to parse and navigate lots of items.

    After extolling all the greatness of Desktop Toolbars, we must also admit they introduce several challenges. For starters, they aren’t the easiest thing to discover. They also take up valuable space on an already busy taskbar. Most importantly though, they don’t always solve the customer’s goal. Sure, you can have a folder’s contents accessible off your taskbar, but what if the files you want quick access to aren’t located in a single place? These are design challenges we intend to tackle.

    Notification Area: The whisperer

    The Notification Area is pretty much what you expect—an area for notifications. It was an original part of the taskbar and it was designed to whisper information to the customer. Here you can easily monitor the system, be alerted to the state of a program or even check the time. Icons were the predominant way to convey information until later versions of Windows introduced notification balloons that provide descriptive alerts with text. Also added was a collapsible UI that hid inactive icons so the taskbar would appear cleaner.
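
    To make the “whispering” mechanics concrete, here is a minimal, hypothetical sketch (ours, not code from Windows itself) of how a Win32 program adds a notification icon and then raises a balloon via Shell_NotifyIcon. The window handle, icon id and message strings are placeholders an application would supply.

        #include <windows.h>
        #include <shellapi.h>

        // Sketch: add a notification icon, then "speak up" with a balloon.
        // hwnd is a window the app already owns; the id and strings are made up.
        void ShowExampleBalloon(HWND hwnd)
        {
            NOTIFYICONDATA nid = { sizeof(nid) };
            nid.hWnd             = hwnd;
            nid.uID              = 1;               // app-chosen icon identifier
            nid.uFlags           = NIF_ICON | NIF_TIP | NIF_MESSAGE;
            nid.uCallbackMessage = WM_APP + 1;      // where mouse events arrive
            nid.hIcon            = LoadIcon(NULL, IDI_INFORMATION);
            lstrcpyn(nid.szTip, TEXT("Example background task"), ARRAYSIZE(nid.szTip));
            Shell_NotifyIcon(NIM_ADD, &nid);        // the icon whispers by its presence

            // A balloon is the louder channel: descriptive text, best used sparingly.
            nid.uFlags      = NIF_INFO;
            nid.dwInfoFlags = NIIF_INFO;
            lstrcpyn(nid.szInfo, TEXT("Sync finished."), ARRAYSIZE(nid.szInfo));
            lstrcpyn(nid.szInfoTitle, TEXT("Example"), ARRAYSIZE(nid.szInfoTitle));
            Shell_NotifyIcon(NIM_MODIFY, &nid);
        }

    The design tension described above is visible even in this tiny sketch: the icon occupies taskbar space indefinitely, while the balloon interrupts; neither is free for the customer.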

    With more developers leveraging its functionality, the Notification Area has grown in popularity over the years. Some may observe that it has changed from a subtle whisperer to something louder. Based upon the feedback we’ve collected from customers, we recognize the Notification Area could benefit from being less noisy and more controllable by the end user.

    Show Me the Data

    Earlier posts to this blog discussed how customers can voluntarily and anonymously send us data on how they use our features. We use these findings to help guide our designs. Please note that data do not design features for us, but they certainly help us prioritize our investments as well as validate our approach.  All too often we’re all guilty of saying something like “we know everyone does <x>” or “all users do <y>”.  Given the reliability and statistical accuracy of this data, we can speak with more real-world authority about how things are used in practice.  Let’s look at some interesting information we have collected about how our customers use the taskbar.

    Figure 3 provides some of the most important data about the taskbar—window count. On average, we know that a vast majority of our customers encounter up to 6-9 simultaneous windows during a session (a session is defined as a log in / log out or 24 hours—whichever occurs first). It goes without saying that the taskbar should work for the entire distribution of this graph, but identifying the “sweet spot” helps focus our efforts on the area that matters most to the greatest number of customers. So, we know that if we nail the 6-9 case and we work well for the 0-5 as well as the 10-14 scenarios, we’ve addressed almost 90% of typical sessions.

    Histogram indicating peak number of open windows in sessions.

    Fig. 3: What’s the maximum number of windows opened at a time?

    Figures 4 and 5 help us understand how customers customize their taskbars. We could probably spend an entire post focused solely on how we determine the options we expose. Perhaps another time we’ll tackle the paradox of choice and how options stress our engineering process yet also make the product more fun for a set of customers. Until then, let’s see what conclusions we can draw from these findings. The most obvious takeaway is that most customers do not change the default settings, which are a simple right-click Properties away. For example, it may be interesting to note how often end-users relocate the taskbar to other regions of the screen—less than 2% of sessions have a taskbar that’s not at the bottom of the screen. We also know that some small percentage of machines accidentally relocate the taskbar and more often than not end-users have difficulty undoing such a state—though our data does not differentiate this situation.  This data does not necessarily mean we would remove relocation functionality, but rather that we could prioritize investments in a default horizontal taskbar over other configurations.

    Image of Taskbar properties dialog indicating percentage of sessions where a particular option is customized.

    Fig. 4: How do people customize their taskbar? The red number indicates percentage of sessions in which the corresponding checkbox is enabled.

    LOCATION            SESSION PERCENT
    Bottom (default)    98.4%
    Top                 1.02%
    Left                0.36%
    Right               0.21%

    Fig. 5: Where do people put their taskbar?

    Figure 6 provides some insight into the Windows Media Player Desktop Toolbar. The Windows UX Guidelines prescribe that to create a toolbar on the customer’s taskbar, you must call a Windows Shell API that asks the customer for permission. Looking at Windows Media Player usage, we found that only 10% of sessions show that the customer consented. Even more surprising is that only 3% of sessions see the toolbar at all (you still need to minimize Media Player to see the controls). In other words, 97% of sessions aren’t even enjoying this functionality at all! Since we do believe the scenario has value, we know to look into alternative designs. We’d like to surface this functionality to a larger set of customers while making sure the customer remains in control of her experience.

    STATE                          SESSION PERCENT
    Toolbar enabled                10%
    Toolbar enabled and visible    3%

    Fig. 6: How many people use the Windows Media toolbar? Enabled means user consented to the toolbar, visible means the toolbar actually appeared on the taskbar.
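
    For context on the mechanics behind this consent model: a Desktop Toolbar is a COM object (a “deskband”), and Explorer discovers it through the CATID_DeskBand component category. The sketch below is illustrative only, with a made-up placeholder CLSID; it shows just the category-registration step that makes a band show up in the taskbar’s Toolbars menu. A real band must also implement and register the IDeskBand interface, and the permission prompt described above is a separate part of the experience.

        #include <windows.h>
        #include <comcat.h>    // ICatRegister, CLSID_StdComponentCategoriesMgr
        #include <shlguid.h>   // CATID_DeskBand

        // Placeholder CLSID for the example band; a real band generates its own GUID.
        static const CLSID CLSID_ExampleBand =
            { 0x0, 0x0, 0x0, { 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1 } };

        // Assumes COM is already initialized on this thread.
        HRESULT RegisterExampleDeskband()
        {
            ICatRegister* pcr = NULL;
            HRESULT hr = CoCreateInstance(CLSID_StdComponentCategoriesMgr, NULL,
                                          CLSCTX_INPROC_SERVER,
                                          IID_ICatRegister, (void**)&pcr);
            if (SUCCEEDED(hr))
            {
                // Tag the class as a deskband so Explorer lists it under Toolbars.
                CATID catid = CATID_DeskBand;
                hr = pcr->RegisterClassImplCategories(CLSID_ExampleBand, 1, &catid);
                pcr->Release();
            }
            return hr;
        }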

    Evolving the Taskbar

    Before the team even sat down to brainstorm ideas about improving the taskbar, we all took time to first respect the UI. The taskbar has been with us since Windows 95, everyone uses it, people are used to it and many consider it good enough. We also recognized that if we were to improve it, we could not afford to introduce usability failures where none existed. This automatically sets a very high bar. We proceeded carefully by first looking into areas for improvement.

    Here’s a small sample of some things we’ve learned from our data, heard from our customers and observed ourselves. One of our favorite ways of gaining verbatim comments is in a lab setting, where we can validate the instrumented data but also gain in-depth context via interviews and questionnaires.  In engineering Windows 7 we have hundreds of hours of studies like these.  Please remember this is just a glimpse of some feedback—this is not an exhaustive list, nor is it implied that we will, or should, act upon all of these concepts.

    • Please let me rearrange taskbar buttons! Pretty please?
    • I sometimes accidentally click on the wrong taskbar button and get the wrong window.
    • It would be great if the taskbar spanned multiple monitors so there’s more room to show windows I want to switch to.
    • There isn’t always enough text on the taskbar to identify the window I’m looking for.
    • There’s too much text on the taskbar. (Yes, this is the exact opposite of the previous item—we’ve seen this quite a bit in the blog comments as well.)
    • It may take several clicks to get to some programs or files that I use regularly.
    • Icons of pinned files sometimes look too much alike—I wish I could tell them apart better.
    • The bottom right side of my screen is too noisy sometimes. There are lots of little icons and balloons competing for my attention.
    • How do I add/remove “X” from the taskbar?
    • I would like Windows to tuck away its features cleverly and simplify its interface.

    In the abstract, we can summarize this feedback with a few principles:

    • Customers can switch windows with increased confidence and ease.
    • Commonly used items and tasks should be at the customer’s fingertips.
    • Customers should always feel in control.
    • The taskbar should have a cleaner look and feel.

    We hope this post provides a little more insight into the taskbar as well as our process of collecting and reacting to customer feedback. Stay tuned for more details in the future.

    - Chaitanya

  • Engineering Windows 7

    Reflecting on a few recent threads…

    • 126 Comments

    When we kicked off this blog, the premise was a dialog – a two-way conversation about the engineering of Windows 7.  We couldn’t be happier with the way things have been going in this short time.  As we said we intended to do, we’ve started a discussion about how we build the product and have had a chance to have some back and forth in comments and in posts about topics that are clearly important to you.  To put some numbers on things, I’ve personally received about 400 email messages (and answered quite a few) and in total we have had about 900 English language comments from about 500 different readers (with a few of you > 10 comments).  Early numbers show we have about 10x that latter number in readers+page views.

    A number of folks on the blog have asked for more details about how we build Windows—what’s the feature selection process, the daily build process, globalization, and so on.  And in keeping with our new tradition of seeing the other “side” of an issue, many folks have also said they feel like they have enough of that information and want to know the features.  So in this post I want to offer a perspective on a couple of features that have been talked about a bunch, and also a perspective on talking about features and feature selection.

    We love the response.  We have seen that some topics have created a forum for folks to do a lot of asking for features, and we will do our best to respond in the context of what we set out to do, which is to have a discussion about how Windows 7 is engineered, including how we make choices about what goes in the product.  I admit that it might be tempting (for me) to blog a big long list of features and then say “give us feedback”.  It is tempting because I have seen this in the past and it is certainly an easy thing to do that might make people feel happier and more involved.  However, there are some challenges with this technique that make these sorts of forums less than satisfying for all of us.  First, it is “reactive” in that it asks you to just react to what you see.  Absent a shared context we won’t be remotely on the same page in terms of motivations, priorities, and so on.  This is especially the case when a feature is early and we aren’t really capable of “marketing” it effectively and telling the story of the feature.  Second, a broad set of anecdotal feedback (that is, free text) is not really actionable data and doesn’t capture the dialog and discussion we are having.  Making decisions this way is almost certain to not go well with the “half” of the folks who don’t agree with the decision or prioritization.  And third, there's a tendency to feel that feedback given yields action in that direction.  These are some of the reasons why we have taken the approach of talking about how we are making Windows 7.

    Some have suggested that we publish a list of features and then have a ranking/voting process.  In fact some have gone as far as doing that for us on their own web sites.  Thank you--these are interesting sites and we do look at them.  But I think we can all agree there is a challenge many folks are familiar with: a self-selected group provides one type of feedback, which is likely to be different from that of a group selected intentionally to be representative.  I was recalling an old episode of Saturday Night Live, “Larry the Lobster”, where for a toll call you could vote to save Larry from the stove or not.  We all know that is a non-scientific poll, but we also don’t even know if it is a non-scientific poll of views on animal rights or of food preferences.  I think the value of voting on specific features goes beyond just entertainment, but we also have to spend the energy making sure we are thinking about the issues within the same context.  We also want any sampling of customers we do to be representative of either the broad base of customers or the specific target customer “segment”.

    Thus a big part of this blog is about creating a forum where we hear from each other about what is important and what relative contexts we bring to the discussion.  That’s why we think about this as a dialog—it is not a question and answer, request and response, point and counter-point, or announcement and comment.  Personally, I am genuinely benefiting from the dynamic way what we blog about is shaped by those participating in the blog.  So this is much more like a social where we all come to meet and talk, than a business meeting where we each have specific goals or a training class where one party does all the talking. 

    In that spirit, it seems good to continue a conversation about a few points that have come up quite a bit, and I think folks have been asking for a point of view on these.  Each is worthy of a post on its own, but I also wanted to offer a point of view about some specific feature requests.  Let’s look at some topics that have come up as we have talked about performance or the overall Windows experience.  Because this is “responding” to comments and input, there is a potential to delve into point/counter-point, so I am hoping we can look back at the “context” discussions we have been having before we get too deep in debate.

    Profile-based Setup

    In terms of feature ideas, a number of you have suggested that we offer a way at setup time to configure Windows for a specific scenario.  Some have suggested scenarios such as gaming, casual use, business productivity, web browsing, email, "lightweight usage", and so on.  There is an implication in there that Windows could perform (speed, space, etc.) better if we tune it for a specific scenario along these lines, but in reality this assumption probably won’t pan out in a consistent or general way.  There are many ways to consider this feature—it could be one where we tweak the contents of the Start Menu (something admins do in corporations all the time), or the performance metrics for some low-level components (disk block size, TCP/IP frame sizes, etc.), or the level of user interface polish (aka “eye candy” as some have called it), and so on.  We’ve seen scenario or role-based setup as a very popular feature for Windows Server 2008.  In the server environment, however, each of these roles represents a different piece of hardware (likely with different configurations) or perhaps a specific VM on a very beefy machine, and these roles also represent very clearly understood "workloads" (file server, print server, web server). 

    The desktop PC (or laptop) is different because there is only a single PC and the roles are not as well defined.  Only in the rarest cases is that PC dedicated to a single purpose.  And as Mike in product planning blogged, the reality is that we see very few PCs that run only a specific piece of software, and in nearly every study we have ever done, just about every PC runs at least one piece of software that other people do not run.  So we should take away from this the difficulty in even labeling a PC as being role specific.  Now there are role-specific times when using a PC, and for that the goal of an OS is to adapt well in the face of changing workloads.  As just one example of this in Windows Vista, consider the work on making the indexer a low-priority activity using the new low-priority I/O APIs.  I know some have mentioned that this is “something I always turn off” but the reality is that there is an upfront cost and then the ongoing cost of indexing is indeed very low.  And this is something we have made significant improvements in for Desktop Search 4.0 (released as a download) and in Windows 7.  The reality is that a general purpose OS should adjust to the workloads asked of it.  We know things are not perfect, and we know many of you (particularly gamers) are looking for every single potential ounce of performance.  But we also know that the complexity and fragility introduced by trying to “outsmart” core system services often overshadows the performance improvements we see across the broadest sampling of customers.  There’s a little bit of “mythbusters” we could probably embark on, so how about sharing the systematic results you have achieved and we can address those in comments?
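
    To ground the low-priority I/O point, here is a small illustrative sketch (ours, not the indexer’s actual code) of the Vista-era APIs a background task can use to yield to foreground I/O, either for a whole thread or for a single file handle. The file path is a placeholder and error handling is trimmed.

        #include <windows.h>

        // Sketch: a background scanner asks the system to deprioritize it.
        void BackgroundScan(const wchar_t* path)
        {
            // Option 1: lower this thread's CPU, page, and I/O priority together.
            SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN);

            HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
            if (h != INVALID_HANDLE_VALUE)
            {
                // Option 2: hint very-low priority for just this handle's I/O.
                FILE_IO_PRIORITY_HINT_INFO hint = { IoPriorityHintVeryLow };
                SetFileInformationByHandle(h, FileIoPriorityHintInfo,
                                           &hint, sizeof(hint));

                char buf[64 * 1024];
                DWORD read = 0;
                while (ReadFile(h, buf, sizeof(buf), &read, NULL) && read > 0)
                {
                    // ... index the bytes ...
                }
                CloseHandle(h);
            }

            SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
        }

    The point of APIs like these is exactly the one above: the OS adapts to a changing workload, so foreground work is not taxed while background work still makes progress.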

    Another challenge would be in developing this very taxonomy.  This is something I personally tried hard to do for Office 95 and Office 97.  We thought we could have a setup “wizard” ask you how much you used Word, Excel, PowerPoint, and Access, or a taxonomy that asked you a profession (lawyer, accountant, teacher).  From that we were going to pick not just which applications but which features of the applications we would install.  We consistently ran into two problems.  First, just arriving at descriptors or questions to “categorize” people failed consistently in usability tests—the classic problem where, given a spectrum of choices, people would peg all of them in the middle or would just “freeze up” feeling that none fit them (people don't generally like labels).  Second, we always had the problem of either multiple users of the same PC or people who would change roles or usage patterns.  It turns out our corporate customers learned this same lesson, and it became routine to “install everything”; thus began an era of installing the full suite of products and then using training to narrow the usage scenarios. 

    The final challenge has been just how, and when, do you present this to customers.  This sequence of steps, the out of box experience, or OOBE, is what you go through when you unbox a PC (the overwhelming majority of Windows customers get it this way) or run setup from a DVD (the retail “packaged product” customer).  This leads to the next item, which is looking to the OOBE as a place to do performance optimizations.  Trying to solve performance at this step is definitely a challenge and leads to our “context” for the out of box experience.

    Out of Box Experience - “OOBE”

    The OOBE is really the place that customers first experience Windows on a new PC.  As many have read in reviews of products competitive to Windows PCs, the experience goals most people have relate to “how fast can I get from packing knife to the web”.  For Windows 7 we are working closely with our OEM partners to make sure it is possible to deliver the most streamlined experience possible.  Of course OEMs have a ton of flexibility and differentiation opportunities in what they offer as part of setting up a new PC, and what we want to do is make sure that the “core OS” portion of this is the absolute minimum required to get to the fun of using your PC. 

    By itself, this goal would run counter to introducing a “profiling” wizard to help gauge the intended (at time of purchase) usage of a PC.  That doesn’t mean that an OEM could not offer such a profiled, differentiated OOBE, but it isn’t one we would ask all customers to go through as part of the “core OS” installation. 

    I recognize many of you as PC enthusiasts have gone through the experience of setting up a Linux PC using one of the varieties of package managers—probably many times just to get one installation working right.  As you’ve seen with these installs (especially as things have recently converged on one particular end-user focused distro), the number of ways you can produce a poorly running system exceeds the number of ways you can produce a fully functional (for your needs) setup.  In practice, we know that many components end up depending on many others and ultimately this dependency graph is a challenge to manage and get right, even with a software dependency manager (like Windows Installer).  As a result, we generally see customers benefitting from a broad base of software on the machine so long as that does not have a high cost—developing that install is a part of developing the product, balancing footprint, architectural connections, system reliability, etc.

    So our context for the out of box experience would be that we don’t want to introduce complexity there, where customers are least interested in dealing with it as they want to get to the excitement of using their new PC.  I think of it a bit like the car dealers who won’t hand you the keys to your car until you sit and watch a DVD about the car and then get a guided tour of the car—if you’re like me you’re screaming “give me the keys and let me out of here”.  We think PC buyers are pretty much like that and our research confirms that around the world.

    We also recognize that there are expert users who might want to adjust the running system for any variety of reasons (performance, footprint, surface area, etc.).  We call this “turning Windows features on or off,” which is the next item we’ve heard from you about.

    Windows Features

    If we install the typical installation of Windows as one that is basically all the features in the particular SKU a customer purchased, then what about the customer that wants to tweak what is installed and remove things?  Customers might want to remove some features because they just never use them and don’t want to accidentally use them or carry with them the “code” that might run.  Customers might be defining a role for the PC (cash register) and so making sure that specific features are never there.  There are many reasons for this.  For many releases Windows has had the ability to install or uninstall various features that are part of Windows.  In Windows Vista this was made more robust, as the features are removed from the running system but also remain available for reuse without the original DVD.  We also made the list of features longer in Windows Vista.

    For Windows 7, many have asked for us to make this list longer and have more features in it.  This is something we are strongly considering for Windows 7 as we think it is consistent with the design goals of “choice and control” that you have seen us talk about here and quite a bit with Internet Explorer 8.0 beta 2.

    Of course we have the same challenge that Linux distributions have, which is that quickly removing things can break other features, and then you have to have all the complexity of informing the customer of these “dependencies”, and ultimately you end up feeling like everything is connected to everything else.  On some OS installations this packaging works reasonably well because there is duplication of features (you pick from several file browsers, several web browsers, several office suites, several GUIs even).  The core Windows OS, while not free from some duplication, does not have this type of configuration.  Rather we ship a platform where customers can add many components as they desire.

    For customers that wish to remove, replace, or just prevent access to Windows components we have several available tools:

    • Set Your Default Programs (or Set Program Access and Defaults).  These features allow you to set the default programs/handlers by file type or protocol.  The mechanism was introduced in Windows XP SP1; in Vista the SYDP was expanded and we expect all Microsoft software to properly register and employ it.  So if you want to have a default email program, a default handler for GIF, or your choice of web browser, this is the user interface to use.  Windows itself respects these defaults for all the file types it manages.  (A small sketch of querying these defaults programmatically follows this list.)
    • Customizing the start menu or group policy.  For quite some time, corporate admins have been creating “role-based” PCs by customizing the start menu (or even going way back to progman) to only show a specific set of programs.  We see this a lot in internet cafes these days as well.  The SPAD functionality takes this a step further and provides an end-user tool for removing access to installed email programs, web browsers, media players, instant messengers, and virtual machine runtimes. 
    • Removing code.  Sometimes customers just want to remove code.  With small footprint disks many folks have looked to remove more and more of Windows just to fit on SSDs.   I’ve certainly seen some of the tiny Windows installations.  The supported tool for removing code from Windows is to use the “Turn Windows Features on and off” (in Vista) user interface.   There are over 80 features in this tool in premium Vista packages today.
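
    As promised in the first bullet, Vista exposes the per-user association store programmatically through the IApplicationAssociationRegistration interface. This hypothetical sketch only queries the effective defaults for a file extension and a protocol; it changes nothing on the system.

        #include <windows.h>
        #include <shobjidl.h>
        #include <stdio.h>

        // Sketch: ask which application is the current default for .gif and http.
        int wmain()
        {
            CoInitialize(NULL);
            IApplicationAssociationRegistration* reg = NULL;
            HRESULT hr = CoCreateInstance(CLSID_ApplicationAssociationRegistration,
                                          NULL, CLSCTX_INPROC_SERVER,
                                          IID_PPV_ARGS(&reg));
            if (SUCCEEDED(hr))
            {
                LPWSTR progId = NULL;
                if (SUCCEEDED(reg->QueryCurrentDefault(L".gif", AT_FILEEXTENSION,
                                                       AL_EFFECTIVE, &progId)))
                {
                    wprintf(L"Default handler for .gif: %s\n", progId);
                    CoTaskMemFree(progId);
                }
                if (SUCCEEDED(reg->QueryCurrentDefault(L"http", AT_URLPROTOCOL,
                                                       AL_EFFECTIVE, &progId)))
                {
                    wprintf(L"Default browser ProgID: %s\n", progId);
                    CoTaskMemFree(progId);
                }
                reg->Release();
            }
            CoUninitialize();
            return 0;
        }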

    Many folks want the list of Windows features that can be turned on / off to be longer and there have been many suggestions on the site for things to make available this way.  This is more complex because of the Windows platform—that is, many developers rely on various parts of the Windows platform and just “assume” those parts are there.  Whether it is a media player that uses the Windows address book, a personal finance package that uses advanced print spooling, or even a brand new browser that relies on advanced networking features.  These are real-world examples of common uses of system APIs that don’t seem readily apparent from the end-user view of the software. 

    Some examples are quite easy to see and you should expect us to do more along these lines, such as the TabletPC components.  I have a PC that is a very small laptop and while it has full tablet functionality it isn’t the best size for doing good ink work for me (I prefer a 12.1” or greater and this PC is a 10” screen).  The tablet code does have a footprint in memory, and on that 1GB machine, if I go and remove the tablet components, the machine does perform better.  This is something I can do today.  Folks have asked about Photo Gallery, Movie Maker, Windows Mail, Windows Calendar…this is good feedback and these are good things for us to consider for Windows 7. 

    An important point is that a vast majority of things you remove this way consume little or no resources if you are not using them.  So while you can reduce the surface area of the PC, you probably don’t make it perform better.  As one example, Windows Mail doesn’t slow you down at all if you don’t have any mail (or news) accounts configured.  And to be certain, you could hide access with SPAD or just change the default protocol handler to your favorite mail program.  Another example: you can just change the association and never see Photo Gallery launched for images if that is your preference.  That means no memory is taken by these features.

    This was a chance to continue our dialog around what we are learning from the discussion and some specifics that have come up quite a bit.  I hope we are gaining a shared view of how we look at some of the topics folks have brought up. 

    So this turned into a record long post.  Please don’t expect this too often :-)

    --Steven

  • Engineering Windows 7

    The "Ecosystem"

    • 86 Comments

    In the emails and comments, there are many topics that are raised, and more often than not we see several facets or positions of each issue. One theme that comes through is a desire expressed by folks to choose what is best for them. I wanted to pick up on the theme of choice since that is such an incredibly important part of how we approach building Windows—choice in all of its forms. This choice exists because Windows is part of an ecosystem, where many people are involved in making many choices about what types of computers, configuration of operating system, and applications/services they create, offer, or use. Windows is about being a great component of the ecosystem and what we are endeavoring to do with Windows 7 is to make sure we do a great job on the ecosystem aspects of building Windows 7.

    Ecosystem and choice go hand in hand. When we build Windows we think of a number of key representatives within the ecosystem beyond Windows:

    • PC makers
    • Hardware components
    • Developers
    • Enthusiasts

    Each of these parties has a key role to play in delivering on the PC experience and also in providing an environment where many people can take a PC and provide a tailored and differentiated experience, and where companies can profit by providing unique and differentiated products and services (and choice to consumers). For Windows 7 our goals have been to be clearer in our plans and stronger in our execution such that each can make the most of these opportunities building on Windows.

    PC Makers (OEMs) are a key integration point for many aspects of the ecosystem. They buy and integrate hardware components and pre-install software applications. They work with retailers on delivering PCs and so on. The choices they provide in form factors for PCs and industrial design are something we all value tremendously as individuals. We have recently seen an explosion in the arrival of lower cost laptops and laptops that are ultra thin. Each has unique combinations of features and benefits. The choice to consumers, while sometimes almost overwhelming, allows for an unrivaled richness. For Windows 7 we have been working with OEMs very closely since the earliest days of the project to develop a much more shared view of how to deliver a great experience to customers. Together we have been sharing views on ways to provide differentiated PC experiences, customer feedback on pre-loaded software, and partnering on the end-to-end measurement of the performance of new PCs on key metrics such as boot and shutdown.

    Hardware components include everything from the CPU through the “core” peripherals of I/O to add-on components. The array of hardware devices supported by Windows through the great work of independent hardware vendors (IHVs) is unmatched. Since Windows 95 and the introduction of plug-and-play we have continued to work to improve the experience of obtaining a new device and having it work by just plugging it in—something that also makes it possible to experience OS enhancements independent of releases of Windows. This is an area where some express that we should just support fewer devices that are guaranteed to work. Yet the very presence of choice and ever-improving hardware depends on the ability of IHVs to provide what they consider differentiated experiences on Windows, often independent of a specific release of Windows. The device driver model is the core technology that Microsoft delivers in Windows to enable this work. For Windows 7 we have committed to further stabilization of the driver model and to pull forward the work done for Windows Vista so it seamlessly applies to Windows 7. Drivers are a place where IHVs express their differentiated experience so the breadth of choice and opportunity is super important. I think it is fair to say that most of us desire the experience where a “clean install” of Windows 7 will “just work” and seamlessly obtain drivers from Windows Update when needed. Today with most modern PCs this is something that does “just work” and it is a far cry from even a few years ago. As with OEMs we have also been working with our IHV partners for quite some time. At WinHEC we have a chance to show the advances in Windows 7 around devices and the hardware ecosystem.

    Developers write the software for Windows. Just as with the hardware ecosystem, the software ecosystem supports a vast array of folks building for the Windows platform. Developers have always occupied a special place in the collective heart of Microsoft given our company roots in providing programming languages. Each release of Windows offers new APIs and system services for developers to use to build the software they want to build. There are two key challenges we face in building Windows 7. First, we want to make sure that programs that run on Windows Vista continue to run on Windows 7. That’s a commitment we have made from the start of the project. As we all know this is perhaps the most critical aspect of delivering a new operating system in terms of compatibility. Sometimes we don’t do everything we can do, and each release we look at how we can test and verify a broader set of software before we release. Beta tests help for sure but lack the systematic rigor we require. The telemetry we have improved in each release of Windows is a key aspect. But sometimes we aren’t compatible, and then this telemetry allows us to diagnose and address the issue post-release. If you’ve seen an application failure and were connected to the internet there’s a good chance you got a message suggesting that an update is available. We know we need to close the loop more here. We also have to get better at the tools and practices Windows developers have available to them to avoid getting into these situations—at the other end of all this is one customer, and bouncing between the ISV and Microsoft is not the best solution.

    Our second challenge is in providing new APIs for developers that help them deliver new functionality for their applications, while at the same time providing enough value that there is a desire to spend schedule time using these APIs. Internally we often talk about “big” advances in the GUI overall (such as the clipboard or the ability to easily print without developing an application-specific driver model). Today functionality such as networking and graphics play vital roles in application development. We’ve talked about a new capability, which is the delivery of touch capabilities in Windows 7. We’ve been very clear about our view that 64-bit is a place for developers to spend their energy, as that is a transition well underway and a place where we are clearly focused.

    Enthusiasts represent a key enabler of the ecosystem, and almost always the one that works for the joy of contributing. As a reader of this blog there’s a good chance you represent this part of the ecosystem—even if we work in the industry, we also are “fans” of the industry. There are many aspects to a Windows release that need to appeal to enthusiasts. For example, many of us are the first line of configuration and integration for our family, friends, and neighbors. I know I spent part of Saturday setting up a new wireless network for a school teacher/friend of mine and I’m sure many of you do the same. Enthusiasts are also the most hardcore about wanting choice and control of their PCs. It is enthusiast sites/magazines that have started to review new PCs based on the pre-installed software load and how “clean” that load is. It is enthusiasts that push the limits on new hardware such as gaming graphics. It is enthusiasts who are embracing 64-bit Windows and pushing Microsoft to make sure the ecosystem is 64-bit ready for Windows 7 (we’re pushing, of course). I think of enthusiasts as the common thread running through the entire ecosystem, participating at each phase and with each segment. This blog is a chance to share with enthusiasts the ins and outs of all the choices we have to make to build Windows 7.

    There are several other participants in the ecosystem that are equally important as integration points. The system builders and VARs provide PCs, software, and service for small and medium businesses around the world. Many of the readers of this blog, based on the email I have received, represent this part of the ecosystem. In many countries the retailers serve as this integration point for the individual consumer. For large enterprise customers the IT professionals require the most customization and management of a large number of PCs. Their needs are very demanding and unique across organizations.

    Some have said that the ecosystem is not the best approach—that we could do a much better job for customers if we reduced the “surface area” of Windows and supported fewer devices, fewer PCs, fewer applications, and less of Windows’ past or legacy. Judging by the variety of views we've seen, I think folks desire a lot of choice (just in terms of DPI and monitor size).  Some might say that from an engineering view less surface area is an easier engineering problem (it is by definition), but in reality such a view would result in a radical and ever-shrinking reduction in the choices available for consumers. The reality is engineering is about putting constraints in place, and those constraints can also be viewed as assets, which is how we view the breadth of devices, applications, and “history” of Windows. The ecosystem for PCs depends on opportunities for many people to try out many ideas and to explore ideas that might seem a bit crazy early on and then become mainstream down the road. With Windows 7 we are renewing our efforts at readying the ecosystem while also building upon the work done by everyone for Windows Vista.

    The ecosystem is pretty significant in both the depth and breadth of the parties involved. I thought for the purposes of our dialog on this blog it is worth highlighting this up front. There are always engineering impacts to balancing the needs of each of the aspects of the ecosystem. Optimizing entirely along one dimension sometimes seems right in the short term, but over any period of time it is a risky practice, as a stable platform that allows for differentiation is something that seems to benefit many.

    With Windows 7 we committed up front to doing a better job as part of the PC ecosystem.

    Does this post reflect your view of the ecosystem? How could we better describe all those involved in helping to make the PC experience amazing for everyone?

    --Steven

  • Engineering Windows 7

    Follow-up: Starting, Launching, and Switching

    • 83 Comments

    Lots of discussion on the taskbar and associated user interface.  Chaitanya said he thought it would be a good idea to summarize some of the feedback and thoughts.  --Steven

    We’d like to follow up on some themes raised in comments and email.  This post looks at some observations on consistently (though not universally) expressed feedback and also provides some more engineering / design context for some of the challenges raised.

    First it is worth just reinforcing a few points that came up that were consistently expressed:

    • Many of you agree that the Notification Area needs to be more manageable and customizable. 
    • We received several comments about rearranging taskbar buttons.  This speaks to the need for a predictable place where taskbar buttons appear as well as your desire for more control over the taskbar.
    • There were comments that talked about Quick Launch being valuable, but that it could stand to be an even better launching surface (e.g. larger by default or more room).
    • Thumbnails are valuable to many of you, but their size doesn’t always help you find the window you are looking for.  There is interest in a better method of identifying windows that consistently provides the right amount of information.
    • Better scaling of supported windows was discussed.  This includes optimizing the taskbar for more windows and spanning multiple displays. 

    Data

    Several of you asked about the conclusions we are drawing from the data we collect and how we will proceed.

    @Computermensch writes “The problem with this "analysis" (show me the data) is that you're only managing current activities surrounding the taskbar. So with respect "to evolving the taskbar" you're only developing it within its current operational framework while developing or evolution of really should refer to developing the taskbars concept.” 

    @Bluvg posts “What if the UI itself was a reason that people didn't run more than 6-9 windows?  In other words, what if the UI has a window number upper bound of effectiveness?  Prioritizing around that 6-9 scenario would be taking away the wrong conclusion from the data, if that were the case.  The UI itself would be dictating the data, rather than being driven by user demand.”

    As we’ve said in all our posts around the data we collect and how we use it, data do not translate directly into our features, but they inform the decisions.  Information we collect from instrumentation as well as from customer interviews merely provides us with real-world accuracy of how a product is currently used.  The goal is not necessarily to just design for the status quo.  However, we must recognize that if a new design emerges that does not satisfy the goals and behavior of our customers today, we risk resistance.  This is not to say one should never innovate and change the game—just that to do so must be respectful of the ultimate goal of the customer.  Offering a new solution to a problem is great; just make sure you’re solving the right problem and that there is a path from where people are today to where you think the better solution resides.  With that said, rest assured that our design process recognizes the need for the taskbar to scale more efficiently for larger sets of windows.  This would allow those who possibly feel “trapped” in the 6-9 window case to more comfortably venture to additional windows, if they really require it.  Also, the improvements we make to the 90% case should still hold benefits for the current outliers. 

    Notification Area

    With so much feedback, it is always valuable to recognize when customer comments converge.  The original post called out the problems with the Notification Area and these issues were further emphasized with your thoughts.

    @Jalf writes “Having 20 icons and a balloon notification every 30th second taking up space at the taskbar where it's *always* taking up space is just not cool. By all means, the information should be there if I need it, but can't we just assume that if I don't actively look for the information, it's probably because I don't want it.”

    Jalf’s comment is particularly interesting because it speaks to both the pros and cons of notifications.  They certainly can be valuable, but they can also very easily overwhelm the customer as many of you note.  A careful balance therefore must be reached such that the customer is kept informed of information that is relevant while she continues to remain in control.  Since relevant is relative, the need for control is fundamental.  Rest assured we are aware of the issues and we are taking them very seriously.

    Multi-mon Support

    It comes as no surprise that many of you wrote to discuss multi-monitor support for the taskbar. This is a popular request from our enthusiasts (and our own developers) and was called out as an area of investigation in the original post. 

    @Justausr is very direct with this comment: “The lack of multi-monitor support is just about a crime.  We've seen pictures of Bill Gate's office and his use of 3 monitors.  Most developers have 2 monitors these days.  Why was multi-monitor support for the taskbar missing?  Once again, this is an example of the compartmentalization of the Windows team and the lack of a user orientation in defining and implementing features.  The fact that this is even a "possible" and not an "of course we're going to..." shows that you folks STILL don't get it.”

    At least in this particular case we tend to think we “get it”, but we also tend to think that the design of a multi-mon taskbar is not as simple as it may seem.  As with many features, there is more than one way to implement this one.  For example, some might suggest a unique taskbar that exists on each display and others suggest a taskbar that spans multiple displays.  Let’s look at both of these approaches.  While doing so also keep in mind the complexities of having monitors of different sizes, orientations, and alignments. 

    If one was to implement a taskbar for each display where each bar only contained windows for its respective portion of the desktop, some issues arise.  Some customers will cite advantages of less mouse travel since there is always a bar at the bottom of their screen.  However, such a design would now put the onus on the customer to track where windows are.  Imagine looking for a browser window and instead of going to a single place, you now had to look across multiple taskbars to find the item you want.  Worse yet, when you move a window from one display to another, you would have to know to look in a new place to find it.  This might seem at odds with the request to rearrange taskbar buttons because customers want muscle memory of their buttons.  It would be like having two remotes with dynamically different functionality for your TV. This is one of the reasons that almost every virtual desktop implementation keeps a consistent taskbar regardless of the desktop you are working on.  

    Another popular approach is a taskbar that spans multiple displays.  There are a few third-party tools that attempt to emulate this functionality for the Windows taskbar.  The most obvious advantage of this approach (as well as the dual taskbar) is that there is more room offered for launching, switching and whispering.  It is fairly obvious that those customers with multiple displays have more room to have more windows open simultaneously and hence require even more room on their taskbar.  Some of our advanced customers address this issue by increasing the height of the taskbar to reveal multiple rows.  Others ask for a spanning taskbar.  The key thing to recognize is that the problem is not necessarily that the taskbar doesn’t span, but that more room is required to show more information about windows.  So, it stands to reason that we should come up with the best solution to this problem, independent of how many displays the customer has. 

    We thought it would be good to offer a brief discussion on the specifics of solving this design problem, as it is one we have spent considerable time on.  In general, one approach we are working to apply more often is to change things only when we know the change will be a substantial improvement and will not introduce complexities that outweigh the benefits we are trying to achieve.

    Once again, many thanks for your comments.  We look forward to talking soon.

    - Chaitanya

  • Engineering Windows 7

    Product Planning for Windows...where does your feedback really go?

    • 75 Comments

    Ed. Note:  Allow me to introduce Mike Angiulo who leads the Windows PC Ecosystem and Planning team.  Mike’s team works closely with all of our hardware and software partners and leads the engineering team's product planning and research efforts for each new version of Windows.  --Steven 

In Windows we have a wide variety of mechanisms to learn about our customers and the marketplace, all of which play a role in helping us decide what to build.  From the individual questions that our engineers answer at WinHEC and PDC to the millions of records in our telemetry systems, we have tools for answering almost every kind of question about what you want us to build in Windows and how well it’s all working.  Listening to all of these voices together and building a coherent plan for an entire operating system release is quite a challenge – it can feel like taking a pizza order for a billion of your closest friends!

It should come as no surprise that in order to have a learning organization we need to have an organization that is dedicated to learning.  This is led by our Product Planning team, a group of a couple dozen engineers that is organized as a single team but sits with the program managers, developers and testers in the feature teams.  They work throughout the product cycle to ensure that our vision is compelling, grounded in a deep understanding of our customer environment, and balanced with the business realities and competitive pressures that are in constant flux.  Over the last two years we’ve had a team of dozens of professional researchers fielding surveys, listening to focus groups, and analyzing telemetry and product usage data leading up to the vision and during the development of Windows 7 – and we’re not done yet.  From our independently run marketing research to reading your feedback on this blog, we will continue to refine our product and the way we talk about it to customers and partners alike.  That doesn’t mean that every wish gets answered!  One of the hardest jobs of planning is turning all of this data into actionable plans for development.  Here are three tough tradeoffs that we have been making recently.

First there is what I think of as the ‘taste test challenge.’ Over thirty years ago this meme was introduced in a famous war between two colas.  Remember New Coke?  It was the result of overemphasizing the very first response to a product versus longer-term customer satisfaction.  We face this kind of challenge all the time with Windows – how do we balance the need for the product to be approachable with the need for the product to perform throughout its lifecycle?  Do you want something that just boots as fast as it can, or something that helps you get started?  Of course we can take this to either extreme, and you could say we have – we went from c:\ to Microsoft Bob in only about a decade.  Finding the balance between a product that is fresh and clean out of the box and one that continues to perform over time is a continual challenge.  We have ethnographers who gather research that in some cases starts even before the point of purchase and continues for months, with periodic visits to learn how initial impressions morph into usage patterns over the entire lifecycle of our products.

Second, we’re always looking out for missing the ‘trees for the forest.’ By this I mean finding the appropriate balance between aggregate and individual user data.  A classic argument around PCs has always been that a limited subset of actions makes up a large percentage of the use case, and therefore that a limited-function device would be a simpler and more satisfying experience for a large percentage of customers!  Of course this can be shown to be flawed in both the short term and the long term.  Over the long term this ‘common use case’ has changed from typing & printing, to consuming and burning CDs and gaming, to browsing, and it will continue to evolve.  Even in the short term we have studied the usage of thousands of machines (from users who opt in, of course) and know that while many of the common usage patterns are in fact common, nearly every single machine we’ve ever studied had one or more unique applications in use that other machines didn’t share!  This long-tail phenomenon is very important, because if we designed for only the “general case” we’d end up satisfying nobody.  This tradeoff between choice and complexity is one that benefits directly from a rigorous approach to studying the usage of both the collective and the individual, and not losing sight of either.

Third is all about timing.  Timing is everything.  We have an ongoing process for learning in a very dynamic market – one that is directly influenced by what we build.  The goal is to deliver the ultimate in software and hardware experiences to customers – the right products at the right time.  We’ve seen what happens if we wait too long to release software support for a new category (we should have done a better job with an earlier Bluetooth pairing standard experience), and also what happens when we ship software that the rest of the ecosystem isn’t ready for yet.  This problem has the added dimension of working to evangelize technologies that we know are coming, tracking competing standards, watching user scenarios evolve, and trying to coordinate our software support all at the same time.  To call it a moving target isn’t saying enough!  It does, though, explain why we’re constantly taking feedback, even after any given version of Windows is done.

These three explicit tradeoffs always make for lively conversation – just look at the comments on this blog to date!  Of course being responsive to these articulated needs is a must in a market as dynamic and challenging as ours.  At the same time we have to make the biggest tradeoff of them all – balancing what you’re asking for today with what we think you’ll be asking for tomorrow.  That’s the challenge of defining unarticulated needs.  All technology industries face this tradeoff, whether you call it the need to innovate vs. fix or subscribe to the S-curve notion of discontinuities.  Why would two successful auto companies, both listening to the same market input, release the first commercial Hummer and the first hybrid Prius in the same year?  It wasn’t that 1998 was that confusing; it was that the short-term market demands and the long-term market needs weren’t obviously aligned.  Both forces were visible but easily dismissed – the need for increased off-road capacity to negotiate the crowded suburban mall parking lots, and the impending environmental implosion being predicted on college campuses throughout the world.  We face balancing acts like this all the time.  How do we deliver backwards compatibility and future capability one release at a time?  Will the trend towards 64-bit be driven by application scenarios or by 4GB machines selling at retail?

We have input on key tradeoffs.  We have a position on future trends.  That’s usually enough to get started on the next version of the product, and we stay connected with customers and partners throughout development to keep our planning consistent with our initial direction, but it isn’t enough to know we’re ready to ship.  Really being done has always required some post-engineering feedback phase, whether it’s a Community Technology Preview, a Technology Adoption Program or a traditional public beta.  The origin of beta testing and even the current definition of the term aren’t clear; some products now seem to be in beta forever!  We work to find the best possible timing for sharing the product and gathering final feedback.  If we release it too early it’s usually not in any shape to evaluate, especially with respect to performance, security, compatibility and other critical fundamentals.  If we release too late we can’t actually take any of the feedback you give us, and I can’t think of a worse recipe for customer satisfaction than to ask for feedback which gets systematically ignored.  I was just looking at another software “feedback” site where a bunch of the comments simply asked the company to “please read this site!”  For Windows 7 we’re going to deliver a beta that is good enough to experience and leaves us enough time to address areas where we need more refinement.  This blog will be an important part of the process: it will provide enough explanation, content and guidance to help you understand the remaining degrees of freedom and some of the core assumptions that went into each area, and it will structure our dialogue so that we can listen and respond to as much feedback as you’re willing to give.  Some of this will result in bugs that get fixed; some will result in bugs in drivers or applications that we help our partners fix.  And of course sometimes we’ll just end up with healthy debate – but even in this case we will be talking, we will respond to constructive comments, bugs and ideas, and we will both be starting that conversation with more context than ever.  So please do keep your comments coming.  Please participate in the Customer Experience Improvement Program.  Give us feedback at WinHEC and PDC and in the newsgroups and forums – we’re listening!

    Thanks,

- Mike

  • Engineering Windows 7

    Follow-up on High DPI resolution

    • 73 Comments

One of the cool results of this dialog is how much interest there is in diving into the details and data behind some of the topics, as expressed in the comments and emails.  We’re having fun talking in more depth about these questions and observations.  This post is a follow-up to the comments about high DPI resolution, application compatibility, and the general problems with readability in many situations.  Allow me to introduce a program manager lead on our Desktop Graphics team, Ryan Haveson, who will expand on our discussion of graphics and Windows 7.  –Steven

When we started Windows 7 planning, we looked at customer data for display hardware and found something very interesting (and surprising): roughly half of users were not configuring their PC to use the full native screen resolution. Here is a table representing data we obtained from the Windows Feedback Program, which Christina talked about in an earlier post.

    Table showing that 55% of those with higher definition monitors lower their resolution.

We don't have a way of knowing for sure why users adjust their screen resolution down, but many of the comments we’ve seen match our hypothesis that a lot of people do this because they have difficulty reading default text on high-resolution displays.  That said, some users probably stumble into this configuration by accident, for example due to a mismatched display driver or an application that changed the resolution and did not change it back. Regardless of why the screen resolution is lower, the result is blurry text that can significantly increase eye fatigue when reading on a PC screen for a long period of time. For LCD displays, much of the blurriness is caused by the fact that they are made up of fixed pixels. In a non-native resolution setting, this means that the system must render logical pixels across fractional physical pixels, causing a blurred effect. Another reason for the relative blurriness is that when the display is not set to its native resolution, we can’t properly take advantage of our ClearType text rendering technology, which most people (though not all) prefer. It is interesting to note that the loss of fidelity due to changing screen resolution is less pronounced on a CRT display than on an LCD display, largely because CRTs don’t have fixed pixels the way LCDs do. However, because of the advantages in cost and size, and the popularity of the laptop PC, LCD displays are fast gaining share in the installed base. Another problem with running in a non-native screen resolution is that many users inadvertently configure the display to a non-native aspect ratio as well. This results in an image that is both blurry and skewed! As you can imagine, this further exacerbates the issues with eye strain.
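To make the fixed-pixel arithmetic concrete, here is a tiny sketch, with hypothetical panel and mode numbers (not data from the feedback program), of why a non-native mode forces fractional pixels:

```cpp
#include <cstdio>

int main()
{
    // Hypothetical example: an LCD with 1600 physical pixels per row,
    // driven at a desktop resolution of 1024 logical pixels per row.
    const int panelWidth = 1600;   // fixed physical pixels
    const int modeWidth  = 1024;   // configured logical resolution

    // Each logical pixel must cover 1600/1024 = 1.5625 physical pixels,
    // so logical pixels straddle physical pixel boundaries and must be
    // interpolated -- which is what reads as blur.
    double ratio = static_cast<double>(panelWidth) / modeWidth;
    std::printf("each logical pixel covers %.4f physical pixels\n", ratio);
    return 0;
}
```

Only integer ratios map logical pixels cleanly onto the fixed grid; everything else has to be smeared across neighboring physical pixels.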

Looking beyond text, in these scenarios the resulting fidelity for media is significantly reduced as well. With the configuration that many users have, even if their hardware is capable, they are not able to see native “high def” 720p or 1080p TV content, which correspond to 1280x720 and 1920x1080 screen resolutions respectively. The PC monitor has traditionally been the “high definition” display device, but without addressing this problem we would be at risk of trailing the TV industry in this distinction. While it is true that only about 10% of users have a truly 1080p-capable PC screen today, as these displays continue to come down in price the installed base is likely to keep growing. And you can bet that there will be another wave of even higher-fidelity content in the future which users will want to take advantage of. As an example, when displays get to 400 DPI they will be almost indistinguishable from printed text on paper. Even the current generation of eBook readers, with a DPI of ~170, look very much like a piece of paper behind a piece of glass.

From this we see that there is a real end-user benefit to tap into here. It turns out that there is existing infrastructure in Windows called “high DPI” which can be used to address this. High DPI is not a new feature for Windows 7, but it was not until Vista that the OS user interface made significant investments in support for it (beyond the infrastructure present earlier). To try this out in Vista, right-click the desktop, choose Personalize, and select “Adjust font size (DPI)” from the left-hand column. Our thinking for Windows 7 was that if we enable high DPI out of the box on capable displays, we will give users a full-fidelity experience and also significantly reduce eye strain for on-screen reading. There is even infrastructure available to us to detect a display’s native DPI so we can do a better job of configuring default settings out of the box. However, doing this also opens the door to issues with applications which may not be fully compatible with high DPI configurations.

One of the issues is that for GDI applications to be DPI aware, the developer must write code to scale the window frame, text size, graphical buttons, and layout to match the scaling factor specified by the DPI setting. Applications which do not do this may have some issues. Most of these issues are minor, such as mismatched font sizes or minor layout artifacts, but some applications have major issues when run at high DPI settings.
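As a rough illustration, here is a minimal sketch of the kind of Win32/GDI code involved (our example, not code from Windows itself): a DPI-aware application queries the effective DPI and scales its 96-DPI layout metrics accordingly.

```cpp
#include <windows.h>
#include <cstdio>

// Scale a length that was designed against the 96-DPI baseline.
int ScaleForDpi(int length96, int dpi)
{
    // MulDiv rounds to the nearest integer, avoiding truncation drift
    // when many small metrics are scaled.
    return MulDiv(length96, dpi, 96);
}

int main()
{
    HDC hdc = GetDC(NULL);                      // screen device context
    int dpiX = GetDeviceCaps(hdc, LOGPIXELSX);  // e.g. 96, 120, or 144
    ReleaseDC(NULL, hdc);

    // A button designed 200px wide at 96 DPI becomes 250px at 120 DPI.
    std::printf("button width at %d DPI: %d px\n", dpiX, ScaleForDpi(200, dpiX));
    return 0;
}
```

Font sizes, bitmaps and dialog layout all need the same treatment, which is exactly the work that applications with hard-coded pixel values skip.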

There are some mitigations that we can do in Windows, such as automatic scaling for applications which are not declared DPI aware (see Greg Schechter’s blog on the subject), but even these mitigations have problems. In the case of automatic scaling, applications which are not DPI aware are automatically scaled by the window manager. The text size matches the user preference, but this also introduces a blurry effect for that application’s window. For people who can’t read the small text without the scaling, this is a necessary feature to make the high DPI configuration useful. However, other customers may only be using applications that scale well at high DPI, or may be less bothered by mismatched text sizes, and may find the blurry effect caused by automatic scaling to be the worse option. Without a way for the OS to detect whether an application is DPI aware or not, we have to pick a default. It always comes back to the question of weighing the benefits and looking at the tradeoffs. In the long term, the solution is to make sure that applications know how to be resolution independent and are able to scale to fit the desired user preference, which requires support in both our tools and documentation. The challenge for a platform is to figure out how to get there over time and how to produce the best possible experience during the transition.
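As a concrete aside, here is a sketch of the programmatic route by which an application can declare itself DPI aware on Vista and later, which removes it from the automatic-scaling path described above (shipping applications more commonly declare this via an application manifest entry instead):

```cpp
#include <windows.h>

int main()
{
    // Must be called before any windows are created; afterwards the window
    // manager will not apply automatic bitmap scaling to this process, and
    // the application is responsible for sizing its own UI.
    SetProcessDPIAware();

    // ... create windows here, scaling layout from the runtime DPI as in
    // the earlier sketch ...
    return 0;
}
```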

    Short term vs. long term customer satisfaction

Using the model of high-definition TV, we can see that in the long term it is desirable to have a high-fidelity experience. The only problem is that even though the high DPI infrastructure has been around for several Windows releases (in fact, there is an MSDN article dated 2001 on making applications DPI aware), we were not sure how many applications are actually tested in these configurations. So we were faced with an unquantified potential short-term negative customer impact from enabling this feature more broadly. The first thing we did was quantify the exposure, by running a test pass of over 1,000 applications in our app compat lab to see how they behave at high DPI settings. The results, shown below, give the distribution of issues across those 1,000 applications.

One quick note: when we say “bug” we mean any time software behaves in a manner inconsistent with expectations, so it can be anything from cosmetic to a crash. We categorize the severity of these bugs on a scale from 1 to 4, where Sev 1 is a really bad issue (such as a crash and/or loss of data or functionality) and Sev 4 is an issue which is quite subtle and/or very difficult to reproduce.

    It turns out that most applications perform well at high DPI, and very few applications have major loss of functionality. Of course, it is not the ones that work well which we need to worry about. And if 1% of applications have major issues at high DPI, that could be a significant number. So we took a look at the bugs and put them into categories corresponding to the issue types found. Here is what we came up with:

Of 1000 applications tested for high DPI compatibility, 1% had severity 1 issues, 1% severity 2, 5% severity 3, and 2% severity 4, with 91% having no issue at all.

What we found was that one of the most significant issues was clipped UI. Looking into this more deeply, it became apparent that most of these cases occurred in configurations where the effective screen resolution would be quite low (800x600 or lower). Based on this, we were able to design the configuration UI in such a way that we minimized the number of cases where users would end up with such a low effective resolution. One by one we looked at the categories of issues and, where possible, came up with mitigations for each bucket. Of course, the best mitigation is prevention, and so high DPI is a major focus of our developer engagement for PDC, WinHEC, and other venues coming up.

    Aggregate vs. individual user data

One thing for us to look at is how many users are taking advantage of high DPI today (Vista/XP). Based on the data we have, only a very small percentage of users are currently enabling the high DPI feature. This could easily be interpreted as a clear message that end users don’t care about or need this feature. An alternate explanation could be that the lack of adoption is largely because XP and Vista had only limited shell support for high DPI, and the version of IE which shipped on those platforms had significant issues with mismatched font sizes and poorly scaled web pages. Also, we know anecdotally that there are users who love this feature and have used it since before Vista. Once again, we have to interpret the data, and it is not always crystal clear.

Timing: is this the right feature for the market at this point in time?

Fortunately, we don’t have a “chicken and egg” problem. The hardware is already out in the field and in the market, so it is just a matter of the OS taking advantage of it. From a software perspective, most of the top software applications are DPI aware (including browsers with improved zooming, such as IE 8), but a number of applications remain which may not behave well at high DPI. Another key piece of data is that display resolution for LCD panels is reaching its maximum at standard DPI. For these displays, there is no reason to go beyond 1920x1200 without OS support for high DPI, because the text would be too small for anyone to read. Furthermore, this resolution is already capable of playing the highest-fidelity video (1080p) as well as displaying 2-megapixel photos. The combination of existing hardware in the field, the future opportunity to unlock better experiences, and the fact that the hardware is now blocked on the OS and the software speaks to this being the right timing.

    Conclusion

Looking at customer data helps us identify ways to improve the Windows experience. In this case, we saw clearly that we had an opportunity to help users easily configure their display so that they would enjoy a high-fidelity experience for media as well as crisp text rendered at an appropriate size. With that said, anytime we invest in a feature that can potentially impact the ecosystem of Windows applications, we want to be careful about bringing forward your investments in software. We also want to make sure that we engage our community of ISVs early and deeply, so they can take advantage of the platform work we have done and seamlessly deliver those benefits to their customers. In the meantime, the internal testing we did and the data that we gathered were critically important to helping us make informed decisions along the way. High DPI is a good example of the need for the whole ecosystem to participate in a solution, and of how we can use customer data from the field, along with internal testing, to determine the issues people are seeing and to help us select the best course of action.

    --Ryan

  • Engineering Windows 7

    The Windows Feedback Program

    • 72 Comments

Introducing Christina Storm, who is a program manager on the Windows Customer Engineering feature team working on telemetry.

In a previous article Steven introduced the Windows 7 feature teams. I am a program manager working on telemetry on the Windows Customer Engineering team. Our team delivers the Windows Feedback Program, one of several feedback programs in place today that allow us to work directly with customers and make them part of our engineering process.

The Windows Feedback Program (WFP) has been active for several years, through the Windows XP and Windows Vista product cycles, and we are currently ramping up to get all aspects of the program ready for Windows 7. At the core of this program is a large research panel of customers who sign up via our website http://wfp.microsoft.com during open enrollment. Customers choose to be part of a survey program, an automated feedback program or both. They then complete a 20-minute profiling survey, which later allows us to analyze their feedback against their profile. We have customers spanning a wide spectrum of computer knowledge in our program, and we are constantly working to balance the panel and staff up underrepresented groups. The majority of customers who are spontaneously willing to participate in a feedback program like ours are enthusiastic about technology: they are early adopters of consumer electronics, digital devices and new versions of software. In contrast, customers who see the PC as a tool to get a job done tend to be a bit more reluctant to join. And we also need more female participants!

    Customers who participate in the automated feedback program install a data collection tool developed by the Windows Telemetry Team. The privacy agreement of this program describes the data collections our tool performs. Here are a few examples:

• Windows usage behavior, including installed and used applications.
• File and folder structures on your computer, including the number and types of files in folders, such as the number of .jpg files in the Pictures folder.
• System-specific information, such as the hardware, devices, drivers, and settings installed on your computer.
• Windows Customer Experience Improvement Program (CEIP) data.

From the collected data we create reports that are consumed by a large number of Windows feature teams as well as planners and user researchers. The chart below answers the following question: what are the most common file types that customers who participate in our program store on their PCs, and what are the most popular storage locations?

Graph showing common file types and locations.  The most common file type is .jpg across all typical locations.

The results help us both with planning for the volumes of data customers store on their PCs and with mimicking real-life scenarios in our test labs, by setting up PCs with similar file counts, file sizes and distributions of files.

These data collections furthermore allow us to create reports based on profiled panelists. The chart above might look different if we created it from data delivered only by developers and compared that to data delivered only by gamers, to name just a couple of the profiles that participate in our program. The Windows knowledge level sometimes makes a difference, too. It is therefore very important to us that the program include customers who consider themselves Windows experts as well as customers who don’t enjoy spending much time with the PC and just see it as a tool to get things done. Based on the data, we may decide to optimize certain functionality for a particular profile.

In general, we utilize this data to better understand what to improve in the next version of Windows.  Let’s take a look at how this representative sample has their monitors configured.  First, what resolutions do customers run on their PCs?  The following table lists typical resolutions and their distribution based on the Windows Customer Experience Improvement Program, which samples all opt-in PCs (note the numbers do not add up to 100% because not every resolution is included):

    Distribution of common screen resolutions.  Approximately 46% of customers run with 1600x1200 and 1280x1024.  Approximately 10% of customers run with HD resolution.

One thing you might notice is that about 10% of consumers are running at HD or greater resolution.  In some of the comments, people were asking whether our data represents the “top” or “power users”.  Given this sample size and the number of folks with industry-leading resolutions, I think it is reasonable to conclude that we adequately represent this and all segments.  This is a large sample (those that opt in) of an extremely large dataset (all Windows customers), so it is statistically relevant for segmented studies.

We have found that a large percentage of our program participants lower their display resolution from the highest usable for their display. We compared this with the data coming from the Windows Customer Experience Improvement Program and noticed a similar trend: over 50% of customers with 1600x1200-capable displays are adjusting their resolution down to 1024x768, likely because they find it uncomfortable to read the tiny text on high-resolution displays. The negative effect of this change is a loss of fidelity, to the point where reading text in editors and web browsers is difficult. High-definition video content also won’t be able to render properly.

    Here is the data just for customers with displays capable of 1600x1200:

    Actual running resolution for customers with 1600x1200 capable displays shows that 68% of customers reduce their actual screen resolution.

    In a future blog post, one of the program managers from the Windows Desktop Graphics team is going to describe what we have done with that information to improve display quality and reading comfort in Windows 7.

    We also frequently use our data to select appropriate participants for a survey. A researcher may be interested in sending out an online survey only to active users of virtual machine applications. We would first determine that group of users by looking at our “application usage” data and then create the list of participants for the researcher. Sometimes we combine automatically collected data with survey responses to analyze the relationship between a customer’s overall satisfaction and their PC configuration.

At the current point in time, 50% of our panel participants are using Windows XP and 50% are using Windows Vista. We are not currently offering open enrollment. If you are interested in being invited to this program, please send an email to winpanel@microsoft.com with “Notify me for enrollment” in the subject line. If you’d like to add a bit of information about yourself, including your Windows knowledge level, that would be much appreciated! We will add you to our request queue and make our best effort to invite you when we have capacity.

    When we release the Windows 7 beta we will also be collecting feedback from this panel and asking for participation from a set of Windows 7 beta users. Our current plans call for signing up for the beta to happen in the standard Microsoft manner on http://connect.microsoft.com. Stay tuned!

    -- Christina Storm

  • Engineering Windows 7

    More Follow up to discussion about High DPI

    • 48 Comments


    Excellent!  What a fun discussion we’ve been having on High DPI.  It has been so enriching that Ryan wrote up a summary of even more of the discussion.  Thanks so much!  --Steven

There have been quite a few comments posted regarding high DPI, along with some lively discussion. Most of what has been said consists of good anecdotal examples which are consistent with the data we have collected. For the areas where we didn’t have data, the comments have helped validate many of our assumptions for this group. It is also clear that there are some areas of this feature which are confusing, and in some cases there is a bit of “myth” around what is ideal, what is possible, and what is there. This follow-up post is mostly to summarize what we have heard and to provide some details around the areas where there has been a bit of confusion.

    Here is a list of our top “assumptions” which have been echoed by the comments posted:

• Most people adjust the screen resolution either to get larger text or by accident
    • There is a core of people who know about high DPI and who use it
• Some people prefer more screen real estate, while others prefer larger UI
    • Discoverability of the DPI configuration is a concern for some
    • App compat is a common issue, even a “deal breaker” in some cases
    • IE Scaling is one of the top issues listed (see IE8 which fixes many of these!)
    • Lots of complexities/subtleties and it is pretty hard to explain this feature to most people

    There have also been a number of areas where there has been a bit of confusion:

    • Is it true that if everything were vector-based, these problems would all go away?
    • Shouldn’t this just work without developers having to do anything?
    • How is this different from per-application scaling like IE zooming?
    • Should DPI be for calibration or for scaling?

Most of these topics have been covered to some degree in the comments, but since there has been so much interest, we decided to go into a bit more detail on a few of the top issues and concerns.

    Vector Graphics vs. Raster Graphics

With PCs, there is always the hope of a “silver bullet” technology which solves all problems, making life easy for users, developers, and designers across the board. As an example, some of the comments on the original post suggested that if we just made the OS fully vector based, these scaling problems would go away. It turns out that there are several issues with using vector graphics which are worth explaining.

The first issue is that oftentimes when an icon gets to be small in size, it is better to use an alternate representation so that the meaning is clearer. Notice the icons below. In this case, the designer has chosen a non-perspective view for the icon when it is rendered at its smallest size.

    Example of the same icon treated differently depending on size.

    This is because the designer felt that the information expressed by the icon was clearer with a straight-on view at the smallest size. Here is another example illustrating this point:

    Additional example of icons treated differently as the size changes.

    Of course, this means that the designer must create multiple versions of the source image design, so there is additional complexity. The point here is that there is a tradeoff that must be made and the tradeoff is not always clear.

Even when one vector source is used, it is common to have size-dependent tweaking to make sure that the result is true to what the designer had in mind. Imagine a vector graphic which has a 1-pixel line at 128x128 that gets scaled down by 1/2 to 64x64. The display has no way of rendering a perfect 1/2-pixel line! In many cases the answer is that the renderer will “round” to a nearby pixel line. However, doing this inherently changes the layout of the sub-elements of the image. And there is the question of which pixel line to round to. If the designer does not hand-tune the source material, it will be up to the rendering engine to make this decision, and that can result in undesirable effects. One could say that this should therefore define rules about what elements should be used in a vector, but that only further restricts what concepts can be represented.
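Here is a small sketch of that rounding problem with made-up numbers; the snapping policy shown is just one possibility, which is precisely why designers hand-tune the results:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // A 1px-thick line at row 3 of a 128x128 source, scaled by half.
    const double scale = 64.0 / 128.0;

    double y         = 3.0 * scale;  // 1.5: lands between pixel rows
    double thickness = 1.0 * scale;  // 0.5: no display can draw half a pixel

    // One possible policy: round the position to the nearest pixel row and
    // clamp the thickness to a full device pixel. Rounding to row 1 would
    // be just as defensible as row 2 -- that ambiguity is the problem.
    long snappedY = std::lround(y);
    double snapT  = (thickness < 1.0) ? 1.0 : thickness;

    std::printf("scaled y=%.1f t=%.1f -> snapped y=%ld t=%.0f\n",
                y, thickness, snappedY, snapT);
    return 0;
}
```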

It turns out that even the TrueType fonts which we use in Windows are hand-tuned with size-dependent information in order to make the result as high quality as possible. Most of the TrueType rendering pipeline is based on algorithmic scaling of a vector source, but there are size-dependent, hand-coded “hints” in TrueType which the designer specifies to direct the system in handling edge cases, such as lines falling between pixel boundaries. Here is a link describing this in more detail: http://blogs.msdn.com/fontblog/archive/2005/10/26/485416.aspx

    It is not even true that vector graphics are necessarily smaller in size (especially for small images). Most designers create graphics using an editor which builds up an image using many layers of drawings and effects. With bitmap based graphics, designers will “flatten” the layers before adding it to a piece of software. Most designers today pay little attention to the size of the layers because the flattening process essentially converts it to a fixed size based on the image resolution. With vector graphics, there is no such flattening technique so designers need to carefully consider the tools that they use and the effects that they add to make sure that their icon is not extremely large. I spent some time with one of our designers who had a vector graphic source for one of our bitmaps in Windows and the file was 622k! Of course that file size is fixed regardless of the resulting resolution, but that huge file flattens into this 23k PNG bitmap.

Of course, a hand-tuned vector-based representation of this could probably be made smaller if the size constraints were part of the design-time process. But that would be an additional constraint on the designer, and one which they would need to learn how to satisfy well.

    How do we help developers?

For applications that need to carefully control their layout and graphics, or scale the fidelity of their images based on the available resolution, having a way to specify exact pixel locations for items is important to get the best result. This is as true on the Mac as it is on the PC (see http://developer.apple.com/releasenotes/GraphicsImaging/RN-ResolutionIndependentUI/). There is often a belief that if we just provided the right tools or the right framework, then all these problems would go away. We all know that each set of tools and each framework has its own set of tradeoffs (whether that is Win32, .NET, or HTML). While there is no silver bullet, there are things we can do to make writing DPI-aware applications easier for developers. As an example, there are two important upcoming ecosystem events at which we will be talking in detail about high DPI. We will also be making materials available to developers to help educate them on how to convert existing applications to be DPI aware. The first event is the Microsoft Professional Developer Conference, where we will talk about high DPI as part of the talk “Writing your Application to Shine on Modern Graphics Hardware (link)”. The second is the Windows Hardware Engineering Conference, where we will discuss high DPI as part of the “High Fidelity Graphics and Media” track (link).

    Help with App Compat Issues

There have been several posts on app compat and high DPI (for example, bluvg’s comment). There have also been comments about the complexity and understandability of the high DPI configuration. In some cases the app compat issues can be mitigated by enabling or disabling the automatic scaling feature. This can be changed globally by going to the DPI UI, clicking the button labeled “Custom DPI” and changing the checkbox labeled “Use Windows XP style DPI scaling”. When this checkbox is unchecked, applications which are not declared to be DPI aware are automatically scaled by the window manager. When it is checked, automatic scaling is disabled globally. It is interesting to note that for DPI settings below 144 DPI this box is checked by default, and for DPI settings of 144 and above it is unchecked by default. In some cases, changing the default settings can result in a better experience, depending on the applications that you use and your DPI setting. It is also interesting to note that automatic scaling can be turned off on a per-application basis using the Vista Program Compatibility properties. Here is a link with more info on how to do that: http://windowshelp.microsoft.com/Windows/en-US/help/bf416877-c83f-4476-a3da-8ec98dcf5f101033.mspx. (Look at the section “Disable display scaling on high DPI settings”.)

How is DPI scaling different from per-application scaling (like IE zoom)?

A typical application UI is made up of a content window and a frame UI. The frame UI is where the menu items and toolbar buttons are. The content window is the “document view”; for example, in IE the content window is the actual webpage. It turns out that the code required to support high DPI scaling for the content window is the same code required to implement the zooming feature. In fact, for the content window, IE8 simply uses the high DPI setting to configure the default zoom factor (see DPI Scaling and Internet Explorer 8 for more details). However, high DPI has the additional effect of scaling the size of the frame UI. Since most people use the scaling feature to make text larger and more readable, it makes sense to scale the frame UI as well, so that the text in the menu items and toolbar tooltips also scales. In a sense, per-application scaling is really about the content area, and applications will support it as developers see the customer need; DPI scaling makes the UI elements of all applications render similarly.
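In other words, the default content-zoom factor can fall out of the DPI setting directly. Here is a sketch of that arithmetic (our illustration of the idea, not IE8’s actual code):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    HDC hdc = GetDC(NULL);
    int dpi = GetDeviceCaps(hdc, LOGPIXELSY);   // effective DPI setting
    ReleaseDC(NULL, hdc);

    // 96 DPI is the baseline: 96 -> 100%, 120 -> 125%, 144 -> 150%.
    double zoom = dpi / 96.0;
    std::printf("DPI %d -> default content zoom %.0f%%\n", dpi, zoom * 100.0);
    return 0;
}
```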

    Shouldn’t DPI really be used for calibrating the screen (so “an inch is an inch”)?

Some have suggested that we should just use high DPI as a way to calibrate the screen so that the physical size of an object is the same regardless of the display. This makes a ton of sense from a logical perspective. The idea would be to calibrate the display so that “an inch is an inch”. We thought about doing this, but the problem is that it does not solve the end-user need of having a way to configure the size of the text and the UI. If we then had a separate “global scale” variable, application developers would need to pay attention to both metrics, which would add complexity to the developer story. Furthermore, if a user feels that the UI is too small, should it be up to the developer or the user to set the preference for how big the UI should be? In other words, if the designer wants the button to be an inch but the user wants the button to be 1.5 inches to make it easier to use, who should decide? The way the feature works today, it is up to the user to set their preference, but it is up to the application developer to make sure that the user preference is honored.

Once again, it is really great to see so much interest in high DPI. We certainly have some challenges ahead of us, but in many ways it seems like the ecosystem is ripe for this change. Hopefully this follow-up post helped to consolidate some of the feedback we have heard on the previous post and to explain some of the complexities of this feature in more detail.

    --Ryan Haveson

  • Engineering Windows 7

    Organizing the Windows 7 Project

    • 26 Comments

    Hi Jon DeVaan here.

Steven wrote about how we organize the engineering team on Windows, which is a very important element of how work gets done. Another important part is how we organize the engineering project itself.

I’d like to start with a couple of quick notes. First, Steven reads and writes about ten times faster than I do, so don’t be too surprised if you see about that distribution of words between the two of us here. (Be assured that between us I am the deep thinker :-). Or maybe I am just jealous.) Second, we do want to keep sharing the “how we build Windows 7” topics, since that gives us a shared context for when we dive into feature discussions as we get closer to the PDC and WinHEC. We want to discuss how we are engineering Windows 7, including the lessons learned from Longhorn/Vista. All of these realities go into our decision making on Windows 7.

    OK, on to the tawdry bits.

Steven linked last time to the book Microsoft Secrets, which is an excellent analysis of what I like to call version two of the Microsoft engineering system. (Version one involved index cards and “floppy net,” and you really don’t want to hear about it.) Version two served Microsoft very well, for far longer than anyone anticipated, but learning from Windows XP and the truly different security environment that emerged at that time, and then from Longhorn/Vista, it became clear that it was time for another generational transformation in how we approach engineering our products.

    The lessons from XP revolve around the changed security landscape in our industry. You can learn about how we put our learning into action by looking at the Security Development Lifecycle, which is the set of engineering practices recommended by Microsoft to develop more secure software. We use these practices internally to engineer Windows.

The comments on this blog show that the quality of a complete system comprises many different attributes, each of varying importance to different people, and that people have a wide range of opinions about Vista’s overall quality. I spend a lot of time on core reliability of the OS, and from studying the telemetry we collect from real users (only those who opt in to the Customer Experience Improvement Program) I know that Vista SP1 is just as reliable as XP overall and more reliable in some important ways. The telemetry guided us on what to address in SP1; I was glad to see one such way pointed out by people commenting that sleep and resume work better in Vista. I am also excited by the prospect of continuing our efforts (we are) to use the telemetry to drive Vista to be the most reliable version of Windows ever. I would add to the list of Vista’s qualities that it cut security vulnerabilities by just under half compared to XP. This blog is about Windows 7, but you should know that we are working on Windows 7 with a deep understanding of the performance of Windows Vista in the real world.

    In the most important ways, people who have emailed and commented have highlighted opportunities for us to improve the Windows engineering system. Performance, reliability, compatibility, and failing to deliver on new technology promises are popular themes in the comments. One of the best ways we can address these is by better day-to-day management of the engineering of the Windows 7 code base—or the daily build quality. We have taken many concrete steps to improve how we manage the project so that we do much better on this dimension.

    I hope you are reading this and going, “Well, duh!” but my experience with software projects of all sizes and in many organizations tells me this is not as obvious or easily attainable as we wish.

    Daily Build Quality

Daily quality matters a great deal in a software project because every day you make decisions based on your best understanding of how much work is left. When the average daily build has low quality, it is impossible to know how much work is left, and you make a lot of bad engineering decisions. As the number of contributing engineers increases (because we want to do more), the importance of daily quality rises rapidly, because the integration burden increases with the probability of any single programmer’s error. This problem is more than just not knowing the number of bugs in the product. If that were all the trouble it caused, then at least each developer would have their fate in their own hands. The much more insidious side effect is when developers lack the confidence to integrate all of the daily changes into their personal work. When this happens there are many bugs, incompatibilities, and other issues that we can’t know about, because the code changes have never been brought together on any machine.

I’ve prepared a graph to illustrate the phenomenon, using a simple formula that predicts the build breaks caused by a 1-in-100 error rate on the part of individual programmers, over a spectrum of group sizes (blue line). A one percent error rate is good; a typical rate would be a little worse than that. I’ve included two other lines showing the build break probability if we cut the average individual error rate by half (red line) and by a tenth (green line). You can see that mechanisms which improve the daily quality of each engineer improve the overall daily build quality by quite a large amount.

[Chart: probability of a daily build break vs. number of contributing engineers, for individual error rates of 1%, 0.5%, and 0.1%]
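Assuming the chart plots the chance that at least one of n independent daily check-ins breaks the build (our reading of the description above, since the exact formula isn’t given), the arithmetic is 1 - (1 - p)^n. A sketch:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Individual daily error rates: 1% (blue), 0.5% (red), 0.1% (green).
    const double rates[] = { 0.01, 0.005, 0.001 };
    const int teamSizes[] = { 10, 40, 200, 1000, 2000 };

    for (double p : rates) {
        for (int n : teamSizes) {
            // Probability that at least one of n check-ins is bad,
            // assuming errors are independent.
            double pBreak = 1.0 - std::pow(1.0 - p, n);
            std::printf("p=%.3f  n=%4d  P(break)=%.3f\n", p, n, pBreak);
        }
    }
    return 0;
}
```

At a 1% individual rate, a 40-person feature team sees roughly a one-in-three chance of a daily break, while thousands of engineers integrating unverified changes would break virtually every build, which is why per-team verification before merging matters so much.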

    For a team the size of Windows, it is quite a feat for the daily builds to be reliable.

    Our improvement in Windows 7 leveraged a big improvement in the Vista engineering system, an investment in a common test automation infrastructure across all the feature teams of Windows. (You will see here that there is an inevitable link between the engineering processes themselves and the organization of the team, a link many people don’t recognize.) Using this infrastructure, we can verify the code changes supplied by every feature team before they are merged into the daily build. Inside of the feature team this infrastructure can be used to verify the code changes of all of the programmers every day. You can see in the chart how the average of 40 programmers per feature team balances the build break probability so that inside of a feature team the build breaks relatively infrequently.

For Windows 7 we have largely succeeded at keeping the build at a high level of quality every day. While we have occasional breaks as we integrate the work of all the developers, the automation allows us to find and repair any issues and produce a high-quality build virtually every day. I have been using Windows 7 for my daily life since the start of the project with relatively few difficulties. (I know many folks are anxious to join me in using Windows 7 builds every day—hang in there!)

    For fun I’ve included a couple pictures from our build lab where builds and verification tests for servers and clients are running 24x7:

[Photos: build and verification machines in the Windows build lab]

    Conclusion

Whew! That was a wind sprint through a deep topic I spend a lot of time on, but I hope you found it interesting. I also hope this example starts to show how holistic we have been in thinking through new ways of working and improvements to how we engineer Windows. The ultimate test of our thinking will be the quality of the product itself. What is your point of view on this important software engineering issue?
