Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

  • Engineering Windows 7

    Follow-up: Managing Windows windows

    • 121 Comments

    There’s a lot of great discussion from the window arranging post.  This really shows how important these details are to people.  Being able to arrange how apps are shown on screen is key for productivity because it impacts almost every task.  It’s also very personal – people want to be in control of their work environment and have it set up the way that feels right. 

One thing that should be clear is that it would not be possible for us to provide solutions to all the different ways people would like to work and all of the different tools and affordances people have suggested--I think everyone can see how overloaded we would be with options and UI absorbing all the suggestions!  At first this might seem to be a bit of a bummer, but one thing we loved was hearing about all the tools and utilities you use (and you write!) to make a Windows PC your PC.  Our goal is not to provide the solution to every conceivable way of potentially managing your desktop, but rather to provide an amazing way to manage your desktop along with customizations and personalizations plus a platform where people can develop tools that further enhance the desktop in unique and innovative ways.  And as we have talked about, even that is a huge challenge as we cannot provide infinite customization and hooks—that really isn’t technically possible.  But with this approach Windows provides a high degree of (but not infinite) flexibility, developers provide additional tools, computer makers can differentiate their PCs, and you can tune the UI to be highly personalized and productive for the way you want to work using a combination of those elements and your own preferences. 

One other thing worth noting is that a lot of the comments referred to oft-discussed elements in Windows, such as stealing the focus of windows, the registry, or managing the z-order of windows—a great source of history and witticisms about Windows APIs is Raymond Chen’s blog.  Raymond is a long-time developer on the Windows team and author of The Old New Thing: Practical Development Throughout the Evolution of Windows.  This is also a good source for understanding where the boundaries are between what Windows does and what developers of applications can choose to be responsible for doing (and what they are capable of customizing).

    With that intro, Dave wanted to follow up with some additional insights the team has taken away from the discussion.  --Steven

    We saw several pieces of feedback popping up consistently throughout the comments.  Paraphrasing the feedback (more details below), it sounds like there’s strong sentiment on these points:

    • The size of windows matters, but wasting time resizing windows is annoying.
    • Just let me decide where the windows go – I know best where my windows belong.
    • Dragging files around is cumbersome because the target window (or desktop) is often buried.
    • Desire for better ways to peek at the running windows in order to find what we’re trying to switch to.
    • Want a predictable way to make the window fit the content (not necessarily maximized).
    • Want to keep my personalized glass color, even when a window is maximized.

For each of these needs, there’s a lot of great discussion around possible solutions – both features from other products, and totally novel approaches.  It’s clear from these comments that there’s a desire for improvement, and that you’ve been thinking about this area long enough to have come up with some fairly detailed recommendations!  Below are excerpts from some of the ongoing conversations in the comments.

    Put the windows where I want them

    It’s super interesting to see people discussing the existing features, and where they work or don’t work.

    For example, @d_e is a fan of the existing tiling options in the taskbar:

    Arranging windows in a split-window fashion is actually quite easy: While pressing CTRL select multiple windows in the taskbar. Then right-click them and select one of the tiling options...

    But that approach doesn’t quite meet the goal for @Xepol:

    As for the window reorder buttons on the taskbar -> I've known they were there since Win95, but I never use them.  They never do what I want.  If they even get close to the right layout, its the wrong window order.  Since I have to drag stuff around anyways, its just easier to get exactly what I want the first time.

    @Aengeln suggests taking the basic idea of tiled windows to the next level in order to make them really useful:

    A very useful feature would be the ability to split the deskotop into separate portions, especially on larger screens.  For example, I might want to maximize my Messenger window to a small part on the right hand side of the desktop and still have the ability to maximize other windows into the remaing space. Non-maximized windows would be able to float across both (all) parts of the desktop.

    It sounds like there’s agreement that optimizing the screen space for more than one window would be super useful, if it would only let you stay in control of where windows ended up, and was easy and quick to use every day.  The current tiling features in the taskbar give hints at how this could be valuable, but aren’t quite fast and easy enough to be habit forming.
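At its core, tiling like this is just a rectangle calculation: divide the available work area evenly among the selected windows. Here’s a minimal sketch of that math in Python (purely illustrative; the function name and shapes are ours, not anything from the Windows API):

```python
def tile_side_by_side(work_area, n):
    """Split a work area (x, y, width, height) into n equal
    vertical strips, one per window, left to right."""
    x, y, w, h = work_area
    strip = w // n
    rects = []
    for i in range(n):
        # The last strip absorbs any pixels left over from integer division.
        width = strip if i < n - 1 else w - strip * (n - 1)
        rects.append((x + i * strip, y, width, h))
    return rects

# Two windows side by side on a 1280x1024 work area:
print(tile_side_by_side((0, 0, 1280, 1024), 2))
```

The hard part, of course, isn’t the arithmetic but making the gesture to invoke it fast enough to become a habit.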

    Open at the right size

    We saw a lot of comments on the “default size” of windows, and questions about how that’s decided.  Applications get to choose what size they open at, and generally use whichever size they were at the last time they were closed (or they can choose not to honor those settings).  One of the cases that can trip people up is when IE opens a small window (websites will do this sometimes), because once you close it that will be the new “last size”. 

    @magicalclick suggested a solution:

    I wish I have one more caption button, FIXED SIZE. Actually it is a checkbox. When I check the box, it will save the window state for this application. After that, I can resize/move around. When I close window, it will not save the later changes.

    @steven_sinofsky offered this advanced user tip that you can use to start being more click-efficient right away:

    @magicalclick I dislike when that one happens!  Rather than add another button or space to click, I do the same thing in one click with a "power user" trick which is when you see the small window open don't close it until you first open up another copy of the application with the "normal" window size.  Then close the small one and then the normal one. 

    Of course this is a pain and close to impossible for anyone to find, but likely a better solution than adding a fourth UI affordance on the title bar.
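@magicalclick’s suggestion amounts to a small change in the save-on-close logic: persist the closing size as usual, unless the user has pinned a preferred size. A rough sketch of that behavior (hypothetical names and structure, not actual Windows code):

```python
class WindowSettings:
    """Remembers the size a window should open at next time."""

    def __init__(self, default_size=(800, 600)):
        self.saved_size = default_size
        self.pinned = False  # the proposed FIXED SIZE checkbox

    def pin_current_size(self, size):
        # User checks the box: lock in the current size.
        self.saved_size = size
        self.pinned = True

    def on_close(self, closing_size):
        # Normally the closing size becomes the next default,
        # but a pinned size ignores later resizes.
        if not self.pinned:
            self.saved_size = closing_size

s = WindowSettings()
s.on_close((300, 200))           # a small popup closes last...
print(s.saved_size)              # ...and becomes the next default
s.pin_current_size((1024, 768))  # pin a comfortable size
s.on_close((300, 200))           # later small popups no longer stick
print(s.saved_size)
```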

    –steven

    Finding the right window

    The word being used is “Expose”:

    @Joey_j: Windows needs an Exposé-like feature. I want to see all of my windows at once.

    @Dan.F: one word - expose.  copy it.

    @GRiNSER : Expose has its own set of drawbacks: Like having 30 windows on a macbook pro 1400x1050 screen is really not that helpful. Though its way more helpful than Crap Flip 3D. Expose would be even more useful with keyboard window search...

Regardless of the name, there’s a desire to visually find the window you’re looking for.  Something more random-access than the timeline approach of Alt-Tab or Flip 3D, and something that lets you pick the window visually from a set of thumbnails.  This is very useful for switching when there are a lot of windows open – but some current approaches don’t scale well and it is likely scaling will become an even more difficult problem as people run even more programs.

    Dragging files

    There were several comments (and several different suggestions) on making it easier to drag between windows:

    @Manicmarc:  I would love to see something like Mac OS's Springloaded folders. Drag something over a folder and hover, it pops up, drag over to the next folder, drop it.

    @Juan Antonio: It would be useful that when I´m dragging an object I could to open a list or thumbnail of the windows ( maybe a right- click )to select what window use to drop the object.

    On this topic, I loved @Kosher’s comment on the difference between being able to do something, and it feeling right.

    The UI could be enhanced quite a bit to make it much easier to do things. It's not just about how easy it is but it's also about how smoothly the user transitions between common UI workflows and tasks.  This is a bit like explaining the difference between a Ferrari and a Toyota to someone that has never driven a Ferrari though, so I don't know if it will ever happen.

    In designing Windows 7, we’ve really been taking the spirit of this comment to heart.  I can’t wait to hear what car Windows 7 is compared to once it’s available for a test drive.

    - Dave

  • Engineering Windows 7

    User Interface: Managing Windows windows

    • 135 Comments

    We’ve booted the machine, displayed stuff on the screen, launched programs, so next up we’re going to look at a pretty complex topic that sort of gets to the core role of the graphical user interface—managing windows.  Dave Matthews is program manager on the core user experience team who will provide some of the data and insights that are going into engineering Windows 7.  --Steven

The namesake of the Windows product line is the simple “window” – the UI concept that keeps related pieces of information and controls organized on screen.  We’ll use this post to share some of the background thinking and “pm philosophy” behind planning an update to this well established UI feature.

The basic idea of using windows to organize UI isn’t new – it dates back (so I hear) to the first experiments with graphical user interfaces at Stanford over 40 years ago.  It’s still used after all this time because it’s a useful way to present content, and people like having control over how their screen space is used.  The “moveable windows” feature isn’t absolutely needed in an operating system – most cell phones and media center type devices just show one page of UI at a time – but it’s useful when multi-tasking or working with more than one app at a time.  Windows 2.0 was the first Windows release that allowed moveable overlapping windows (in Windows 1.0 they could only be tiled, not overlapping.  This “tiled v. overlapping” debate had famous proponents on each side—on one side was Bill Gates and on the other side was Charles Simonyi).  In addition, Windows also has the unique notion of “the multiple document interface” or MDI, which allows one frame window to itself organize multiple windows within it.  This is somewhat of a precursor to the tabbed interfaces prevalent in web browsers. 

    As a side note, one of the earlier debates that accompanied the “tiled v. overlapping” conversations in the early Windows project was over having one menu bar at the top of the screen or a copy of the menu bar for each window (or document or application).  Early on this was a big debate because there was such limited screen resolution (VGA, 640x480) that the redundancy of the menu bar was a real-estate problem.  In today’s large scale monitors this redundancy is more of an asset as getting to the UI elements with a mouse or just visually identifying elements requires much less movement.  Go figure!

    Screenshot of Windows 2.0 Screenshot of Windows Vista
    From Windows 2.0 to Vista.

    An area I’ve been focusing on is in the “window management” part of the system – specifically the features involved in moving and arranging windows on screen (these are different than the window switching controls like the taskbar and alt-tab, but closely related).  In general, people expect windows to be moveable, resizable, maximizable, minimizable, closeable; and expect them to be freely arranged and overlapping, with the currently used window sitting on top.  These transformations and the supporting tools (caption buttons, resize bars, etc) make up the basic capabilities that let people arrange and organize their workspace to their liking. 

    In order to improve on a feature area like this we look closely at the current system - what have we got, and what works?  This means looking at the way it’s being used in the marketplace by ISVs, and the way it’s used and understood by customers.

Standard caption buttons in the upper right corner of a window in Vista.

    Caption buttons give a simple way to minimize, maximize, and close.  Resizable windows can be adjusted from any of their 4 edges.

    Data on Real-World Usage 

As pointed out in the previous Taskbar post, on average people have 6 – 9 windows open during a session.  But from looking at customer data, we find that most time is spent with only one or two windows actually visible on screen at any given time.  It’s common to switch around between the various open windows, but for the most part only a few are visible at once.

Typical number of visible windows (one window 60%, two windows 29%, three or more windows 11%).
    Windows Feedback Panel data

    As part of our planning, we looked at how people spend their time and energy in moving and sizing their windows. This lets us understand what’s working well in the current system, and what could be improved.

For example, we know that maximize is a widely used feature because it optimizes the work space for one window, while still being easy to switch to others.  Users respond to that concept and understand it.  Since most of the time users just focus on one window, this ends up being very commonly used.  We know that for many applications people ask for every single pixel (for example spreadsheets, where a few pixels gain a whole extra row or column) and thus the beyond-maximize features for “full screen” become common, even for everyday productivity.

    An issue we've heard (as recently as the comments on the taskbar post!) with maximize in Vista is that the customized glass color isn’t very visible, because the windows and taskbar become dark when a window is maximized. (In Vista you can customize the glass window color – and in 29% of sessions a custom color has been set).  The darker look was used to help make it clear that the window is in the special maximized state.  This was important because if you don’t notice that a window is maximized and then try to move it, nothing will happen - and that can be frustrating or confusing.  For Windows 7 we’re looking at a different approach so that the customized color can be shown even when a window is maximized.

    Pie chart showing custom window color selection.  29% of sessions have a custom color of "glass".

Interestingly, people don’t always maximize their windows even when they’re only using one window at a time.  We believe one important reason is that it’s often more comfortable to read a text document when the window is not too wide.  The idea of maximizing is less useful on a wide monitor when it makes the sentences in an email run 20+ inches across the screen; 4 or 5 inches tends to be a more pleasant way to read text.  This is important because large desktop monitors are becoming more common, and wide-aspect monitors are gaining popularity even on laptops.  Since Windows doesn’t have a maximize mode designed for reading like this, people end up manually resizing their windows to make them as tall as possible, but only somewhat wide.  This is one of the areas where a common task like reading a document involves excessive fiddling with window sizes, because the system wasn’t optimized for that scenario on current hardware.
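A “reading” layout like the one described is easy to express numerically: full work-area height, with the width derived from a target physical measurement and the display’s DPI. A hypothetical sketch (our own names; not a description of any actual Windows mode):

```python
def reading_rect(work_area, dpi=96, target_inches=5.0):
    """Return a full-height rectangle about target_inches wide,
    centered horizontally in the work area (x, y, w, h)."""
    x, y, w, h = work_area
    # 5 inches at 96 DPI is 480 pixels; clamp to the work-area width.
    width = min(w, round(target_inches * dpi))
    return (x + (w - width) // 2, y, width, h)

# On a 1920x1200 work area, a 5-inch reading column at 96 DPI:
print(reading_rect((0, 0, 1920, 1200)))
```

The point is simply that the shape people fiddle toward by hand is entirely computable from the scenario.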

Worldwide LCD monitor shipments by form factor. Distribution of native monitor resolution 2005-2010 est.
    Resolution data suggests wide aspect-ratio monitors will become the norm.

    Being able to see two windows side by side is also a fairly common need.  There are a variety of reasons why someone may need to do this – comparing documents, referring from one document into another, copying from one document or folder into another, etc.  It takes a number of mouse movements to set up two windows side by side – positioning and adjusting the two windows until they are sized to roughly half the screen.  We often see this with two applications, such as comparing a document in a word processor with the same document in a portable reader format. 

    Users with multiple monitors get a general increase in task efficiency because that setup is optimized for the case of using more than one window at once.  For example, it’s easy to maximize a window on each of the monitors in order to efficiently use the screen space.  In a Microsoft Research study on multi-tasking, it was found that participants who had multiple monitors were able to switch windows more often by directly clicking on a window rather than using the taskbar, implying that the window they want to switch to was already visible.  And interestingly, the total number of switches between windows was lower.  In terms of task efficiency, the best click is an avoided click.  

    Multimonitor users rely less on the taskbar and more on window interactions to switch among windows.
    MSR research report

Single-monitor machines are more common than multi-mon machines, but the window managing features aren’t optimized for viewing multiple windows at once on one monitor.  The taskbar does have context menu options for cascade, stack, or side-by-side, but we don't believe they're well understood or widely used, so most people end up manually resizing and moving their windows whenever they want to view two windows side by side.

An interesting multiple window scenario occurs when one of the windows is actually the desktop.  The desktop is still commonly used as a storage folder for important or recent files, and we believe people fairly often need to drag and drop between the desktop and an explorer window, email, or document.  The “Show Desktop” feature gives quick access to the desktop, but also hides the window you're trying to use.  This means you either have to find and switch back to the original window, or avoid the Show Desktop feature and minimize everything manually.  It’s very interesting to see scenarios like this where people end up spending a lot of time or effort managing windows in order to complete a simple task.  This kind of experience comes across in our telemetry when we see complex sequences repeated.  It takes further work to see if these are common errors or if people are trying to accomplish a multi-step task.

    Evolving the design

    To find successful designs for the window management system, we explore a number of directions to see which will best help people be productive.  From extremes of multi-tasking to focusing on a single item, we look for solutions that scale but that are still optimized for the most common usage.  We look at existing approaches such as virtual desktops which can help when using a large number of different windows (especially when they are clustered into related sets), or docking palettes that help efficiently arrange space (as seen in advanced applications such as Visual Studio).  And we look at novel solutions tailored to the scenarios we're trying to enable.

    We also have to think about the variety of applications that the system needs to support.  SDI apps (single document interface) rely heavily on the operating system to provide window management features, while MDI apps (multiple document interface)  provide some of the window management controls for themselves (tabbed UI is an increasingly popular approach to MDI applications).  And some applications provide their own window sizing and caption controls in order to get a custom appearance or behavior.  Each of these approaches is valuable, and the different application styles need to be taken into account in making any changes to the system.

For Windows 7 our goal is to reduce the number of clicks and precise movements needed to perform common activities.  Based on data and feedback we've gotten from customers,  a number of scenarios have been called out as important considerations for the design.  As with all the designs we’re talking about—it is important to bring forward the common usage scenarios, make clear decisions on the most widely used usage patterns, address new and “unarticulated needs”, and to also be sure to maintain our philosophy of “in control”.  Some of the scenarios that are rising to the top include:

    • Can efficiently view two windows at once, with a minimal amount of set up.
    • Simple to view a document at full height and a comfortable reading width.
    • Quick and easy to view a window on the desktop.
    • The most common actions should require the least effort - quicker to maximize or restore windows with minimal mouse precision required.
    • Keyboard shortcuts to replace mouse motions whenever possible for advanced users.
    • Useful, predictable, and efficient window options for a range of displays: from small laptops to 30” or larger screens; with single or multiple monitors.
    • Easy to use different input methods: mouse, keyboard, trackpad, pen, or touch screens.
    • Customized window glass color visible even when maximized.
    • Overall - customers feel in control, and that the system makes it faster and easier to get things done.

    This last point is important because the feeling of responsiveness and control is a key test for whether the design matches the way people really work.  We put designs and mockups in the usability lab to watch how people respond, and once we see people smiling and succeeding easily at their task we know we are on the right track. The ultimate success in a design such as this is when it feels so natural that it becomes a muscle memory.  This is when people can get the feeling that they’ve mastered a familiar tool, and that the computer is behaving as it should.

    This is some of the background on how we think about window management and doing evolutionary design in a very basic piece of UI.  We can’t wait to hear feedback and reactions, especially once folks start getting their hands on Windows 7 builds.

    - Dave

  • Engineering Windows 7

    Follow-up: Starting, Launching, and Switching

    • 83 Comments

    Lots of discussion on the taskbar and associated user interface.  Chaitanya said he thought it would be a good idea to summarize some of the feedback and thoughts.  --Steven

    We’d like to follow up on some themes raised in comments and email.  This post looks at some observations on consistent feedback expressed (though not universal) and also provides some more engineering / design context for some of the challenges expressed.

    First it is worth just reinforcing a few points that came up that were consistently expressed:

    • Many of you agree that the Notification Area needs to be more manageable and customizable. 
    • We received several comments about rearranging taskbar buttons.  This speaks to the need for a predictable place where taskbar buttons appear as well as your desire for more control over the taskbar.
    • There were comments that talked about Quick Launch being valuable, but that it could stand to be an even better launching surface (e.g. larger by default or more room).
    • Thumbnails are valuable to many of you, but their size doesn’t always help you find the window you are looking for.  There is interest in a better identification method of windows that consistently provided the right amount of information.
    • Better scaling of supported windows was discussed.  This includes optimizing the taskbar for more windows and spanning multiple displays. 

    Data

    Several of you asked about the conclusions we are drawing from the data we collect and how we will proceed.

    @Computermensch writes “The problem with this "analysis" (show me the data) is that you're only managing current activities surrounding the taskbar. So with respect "to evolving the taskbar" you're only developing it within its current operational framework while developing or evolution of really should refer to developing the taskbars concept.” 

    @Bluvg posts “What if the UI itself was a reason that people didn't run more than 6-9 windows?  In other words, what if the UI has a window number upper bound of effectiveness?  Prioritizing around that 6-9 scenario would be taking away the wrong conclusion from the data, if that were the case.  The UI itself would be dictating the data, rather than being driven by user demand.”

As we’ve said in all our posts around the data we collect and how we use it, data do not translate directly into our features, but inform the decisions.  Information we collect from instrumentation as well as from customer interviews merely provides us with real-world accuracy of how a product is currently used.  The goal is not necessarily to just design for the status quo.  However, we must recognize that if a new design emerges that does not satisfy the goals and behavior of our customers today, we risk resistance.  This is not to say one should never innovate and change the game—just that to do so must be respectful of the ultimate goal of the customer.  Offering a new solution to a problem is great; just make sure you’re solving the right problem and that there is a path from where people are today to where you think the better solution resides.  With that said, rest assured that our design process recognizes the need for the taskbar to scale more efficiently for larger sets of windows.  This would allow those who possibly feel “trapped” in the 6-9 window case to more comfortably venture to additional windows, if they really require it.  Also, the improvements we make to the 90% case should still hold benefits to the current outliers. 

    Notification Area

    With so much feedback, it is always valuable to recognize when customer comments converge.  The original post called out the problems with the Notification Area and these issues were further emphasized with your thoughts.

@Jalf writes “Having 20 icons and a balloon notification every 30th second taking up space at the taskbar where it's *always* taking up space is just not cool. By all means, the information should be there if I need it, but can't we just assume that if I don't actively look for the information, it's probably because I don't want it.”

    Jalf’s comment is particularly interesting because it speaks to both the pros and cons of notifications.  They certainly can be valuable, but they can also very easily overwhelm the customer as many of you note.  A careful balance therefore must be reached such that the customer is kept informed of information that is relevant while she continues to remain in control.  Since relevant is relative, the need for control is fundamental.  Rest assured we are aware of the issues and we are taking them very seriously.

    Multi-mon Support

    It comes as no surprise that many of you wrote to discuss multi-monitor support for the taskbar. This is a popular request from our enthusiasts (and our own developers) and was called out as an area of investigation in the original post. 

    @Justausr is very direct with this comment: “The lack of multi-monitor support is just about a crime.  We've seen pictures of Bill Gate's office and his use of 3 monitors.  Most developers have 2 monitors these days.  Why was multi-monitor support for the taskbar missing?  Once again, this is an example of the compartmentalization of the Windows team and the lack of a user orientation in defining and implementing features.  The fact that this is even a "possible" and not an "of course we're going to..." shows that you folks STILL don't get it.”

    At least in this particular case we tend to think we “get it”, but we also tend to think that the design of a multi-mon taskbar is not as simple as it may seem.  As with many features, there is more than one way to implement this one.  For example, some might suggest a unique taskbar that exists on each display and others suggest a taskbar that spans multiple displays.  Let’s look at both of these approaches.  While doing so also keep in mind the complexities of having monitors of different sizes, orientations, and alignments. 

If one was to implement a taskbar for each display where each bar only contained windows for its respective portion of the desktop, some issues arise.  Some customers will cite advantages of less mouse travel since there is always a bar at the bottom of their screen.  However, such a design would now put the onus on the customer to track where windows are.  Imagine looking for a browser window and instead of going to a single place, you now had to look across multiple taskbars to find the item you want.  Worse yet, when you move a window from one display to another, you would have to know to look in a new place to find it.  This might seem at odds with the request to rearrange taskbar buttons because customers want muscle memory of their buttons.  It would be like having two remotes with dynamically different functionality for your TV. This is one of the reasons that almost every virtual desktop implementation keeps a consistent taskbar regardless of the desktop you are working on.  

Another popular approach is a taskbar that spans multiple displays.  There are a few third-party tools that attempt to emulate this functionality for the Windows taskbar.  The most obvious advantage of this approach (as well as the dual taskbar) is that there is more room offered for launching, switching and whispering.  It is fairly obvious that those customers with multiple displays have more room to have more windows open simultaneously and hence, require even more room on their taskbar.  Some of our advanced customers address this issue by increasing the height of the taskbar to reveal multiple rows.  Others ask for a spanning taskbar.  The key thing to recognize is that the problem is not necessarily that the taskbar doesn’t span, but that more room is required to show more information about windows.  So, it stands to reason that we should come up with the best solution to this problem, independent of how many displays the customer has. 

    We thought it would be good to just offer a brief discussion on the specifics of solving this design problem as it is one we have spent considerable time on.  One of the approaches in general we are working to do more of, is to change things when we know it will be a substantial improvement and not also introduce complexities that outweigh the benefits we are trying to achieve.

    Once again, many thanks for your comments.  We look forward to talking soon.

    - Chaitanya

  • Engineering Windows 7

    User Interface: Starting, Launching, and Switching

    • 131 Comments

    Where to Start?  In this post, Chaitanya Sareen, a senior program manager on the Core User Experience team, sets the engineering context for the most frequently used user-interface elements in Windows – the Windows Taskbar.  -- Steven

It should come as no surprise that we receive lots of feedback about the taskbar and its functionality in general. It should also come as no surprise that we are constantly trying to raise the bar and improve the taskbar experience for our customers, while making sure we bring forward the familiarity and benefits (and compatibility) of the existing implementation and design. In this post, we would like to provide some insight into that unassuming bar most likely at the bottom of your Windows desktop. Let’s take a closer look at its various parts, data we’ve collected and how this learning will inform the engineering of Windows 7.

    Taskbar Basics

    Our taskbar made its debut way back in Windows 95 and its core functionality remains the same to this day. In short, it provides launching, switching and “whispering” functionality. Figure 1 shows the Vista taskbar and calls out its basic anatomy. Notable pieces are the taskband, Quick Launch, the Start Menu, Desktop Toolbars (aka Deskbands) and the Notification Area. Collectively, these components afford some of the most fundamental controls for customers to start, manage and monitor their tasks.

    Image of Windows taskbar pointing out names of various regions.

    Fig. 1: Windows Taskbar Anatomy

    Taskband: The faithful window switcher

    The taskband is one of the most important parts of the taskbar. It hosts buttons which represent most of the windows open on the desktop. Think of the taskband as a remote control for your computer—you can switch windows just like switching channels on a TV. The idea of switching windows is the most fundamental aspect of the Windows taskbar. Other operating systems also have bars at the bottom of their screen, although theirs may have different goals. For example, Mac OS X has a Dock which is primarily a program launcher and a program switcher. Clicking on an icon on the Dock usually brings up all the windows of a running program. In 2003 Apple introduced a window switcher known as Exposé which provides a different visual approach to our long-standing Alt-tab interface (Vista’s Flip 3D is yet another visual approach). These dedicated window switchers all aim to provide customers with a broad view of their open windows, but they each require the customer to first invoke them. The taskband on the other hand, is designed to always be visible so that windows remain within quick access of the mouse. This makes the taskbar the most prominent window switcher of the Windows operating system.

    Two noteworthy taskbar changes were introduced in the last eight years. Windows XP ushered in grouping which allows taskbar buttons to collapse into a single button to save space and organize windows by their process. Vista presented taskbar thumbnails. These visual representations give customers more information about the window they are looking for. While valuable, interfaces like the taskbar, Alt-tab and even Apple’s own Exposé reveal that thumbnails are not always large enough to guarantee recognition of a window. Their value further degrades when they have to shrink to accommodate many open windows, which is feedback we receive from those who often have lots of running programs and lots of open windows.

    The Start Menu: the Windows launch pad

    The Start Menu has always been anchored off the taskbar as a starting point for the customer’s key tasks such as launching or accessing system functionality. Microsoft of course used the term “Start” and prominently labeled the Start Menu’s button as such. You may even recall the huge marketing campaign for Windows 95 which featured the Rolling Stones’ “Start Me Up”. In all seriousness though, our research showed that many customers didn’t always know where to go on their computer to start a task. When a customer was placed in front of a Windows 95 machine she now had a clearly labeled place to start. And yes, we’ve heard the joke that you click Start to shut down your machine. Speaking of shutdown, we did encounter some challenges with the power options in Vista’s Start Menu. The goal was to bubble-up and advertise the sleep option so that customers enjoy a faster resume. However, we now know that, despite our good intentions, customers are opening that fly-out menu and selecting other options. We’re looking into improving this experience.

    The Start Menu has undergone many changes over the years. One notable change was the appearance of an MFU (most frequently used) section in Windows XP that suggests commonly (well, frequently) used programs. The goal here was to save the customer time by not having to always go to All Programs. Since these items appear automatically based on usage, no manual customization was even required. All Programs itself has undergone several iterations. Customer feedback revealed that people encountered difficulty in traversing the original All Programs fly-out menu. It wasn’t uncommon to have your mouse “fall off” the menu and then you’d have to restart the task all over again. This was particularly the case for laptop customers using a trackpad. It also didn’t help that expanding this menu suddenly filled the entire desktop which looked visually noisy and it also required lots of mouse movement. And of course, for machines with a large number of items and/or groups it was especially complex, and even more so on small screens.  Vista introduced a single menu that requires less mouse acrobatics.

    Search was another important addition to the Start Menu that makes launching even easier. This new feature in Vista provides fast access to programs and files without the need to use a mouse at all. Typing in a phrase quickly surfaces programs, files and even e-mails. We’ve received many positive comments from enthusiasts who feel this is a key performance win in terms of “time to launch”.  It may be interesting to note that Start Menu’s search is optimized to first return program results as this was viewed as the most common scenario among our customers (using some of the Desktop Search technology). Search even permits customers to use parameters to further scope their queries. For instance, one can use “to:john” or “from:jane” to find a specific mail directly from the Start Menu. Our advanced customers also enjoy the benefit of using the Start Menu’s search as a replacement of the Run Dialog. Just as they would type the name of an executable along with some switches in the dialog, they can now just type this directly into the search field. We could (and will) dedicate an entire blog post to search alone, but hopefully you get a sense of how search certainly provides a powerful launch alternative to mouse navigation.
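    To make the scoped-query idea above concrete, here is a toy sketch of how a query like “from:jane budget review” can be split into property filters plus free text. This is purely illustrative and is not the Windows Desktop Search implementation; the function name and parsing rules are assumptions for the example.

```python
# Illustrative sketch only: NOT the Windows Desktop Search implementation.
# Splits "from:jane budget review" into scoped filters plus free text.
def parse_query(query):
    filters = {}
    free_text = []
    for token in query.split():
        if ":" in token:
            key, _, value = token.partition(":")
            if key and value:
                filters[key.lower()] = value
                continue
        free_text.append(token)
    return filters, " ".join(free_text)

filters, text = parse_query("from:jane budget review")
print(filters)  # {'from': 'jane'}
print(text)     # budget review
```

    A real implementation maps each recognized prefix onto an indexed property (sender, recipient, file type, and so on) before running the search, which is why these filters are so much faster than scanning free text.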

    Quick Launch: Launching at your fingertips

    Quick Launch provides a way for customers to launch commonly used programs, files, folders and websites directly off the taskbar. It was introduced to Windows 95 by Internet Explorer 4.0 with the Windows Desktop Update. Customizing Quick Launch is as simple as dragging shortcuts into this area. It saves you a trip to the Start Menu, the desktop or a folder when you want to launch something. An interesting feature of Quick Launch that you may not be aware of is that it has always supported large icons (unlock the taskbar, right-click on Quick Launch and click on large icons under “View”) as seen in figure 2. Of course growing the icons begins to intrude on the real-estate of the taskband which is one of the reasons we have not enabled this configuration by default. As an aside, Windows XP had Quick Launch turned off by default in an attempt to reduce the number of different launching surfaces throughout Windows. Based on your feedback, we quickly rectified this faux pas and Quick Launch was turned on by default again. Don’t mess with quick access to things people use every day! We heard you loud and clear.

    Image of the Quick Launch toolbar displaying large icons.

    Fig. 2: Large Icons in Quick Launch. Large icons on the taskbar have been supported since Windows 95 with IE 4

    Desktop Toolbars (aka Deskbands): Gadgets for your taskbar

    Desktop Toolbars offer extensible and specialized functionality at the top-level of the taskbar. This functionality also came to the taskbar via Internet Explorer 4.0 back in the ‘90s. You can access toolbars by right-clicking on your taskbar and expanding “Toolbars”. Personally, I like to think of Desktop Toolbars as an early type of gadgets for the Windows platform. Over the years developers have written various toolbars including controls for background music (e.g. Windows Media Player’s mini-mode shown in figure 1), search fields, richer views of laptop batteries, weather forecasts and many more.

    One of the original scenarios of Desktop Toolbars was to allow customers to launch items directly off the taskbar. In fact, Quick Launch itself is a special type of toolbar that surfaces shortcuts in the Quick Launch folder. Did you know you can even create your own toolbar for any folder on your computer so that you have quick access to its contents (from the Toolbar menu, select “New Toolbar” and just choose the folder you’d like to access)? Apple’s latest OS introduced similar functionality to the Dock called Stacks. While I think their implementation of this feature is generally more visually appealing, it is interesting to note they recently released a new list representation that matches our original functionality. Seems like we both agree a simple list is usually the most efficient way to parse and navigate lots of items.

    After extolling all the greatness of Desktop Toolbars, we must also admit they introduce several challenges. For starters, they aren’t the easiest thing to discover. They also take up valuable space on an already busy taskbar. Most importantly though, they don’t always address the customer’s goal. Sure you can have a folder’s contents accessible off your taskbar, but what if the files you want quick access to aren’t located in a single place? These are design challenges we intend to tackle.

    Notification Area: The whisperer

    The Notification Area is pretty much what you expect—an area for notifications. It was an original part of the taskbar and it was designed to whisper information to the customer. Here you can easily monitor the system, be alerted to the state of a program or even check the time. Icons were the predominant way to convey information until later versions of Windows introduced notification balloons that provide descriptive alerts with text. Also added was a collapsible UI that hid inactive icons so the taskbar would appear cleaner.

    With more developers leveraging its functionality, the Notification Area has grown in popularity over the years. Some may observe that it has changed from a subtle whisperer to something louder. Based upon the feedback we’ve collected from customers, we recognize the Notification Area could benefit from being less noisy and something more controllable by the end-user.

    Show Me the Data

    Earlier posts to this blog discussed how customers can voluntarily and anonymously send us data on how they use our features. We use these findings to help guide our designs. Please note that data do not design features for us, but they certainly help us prioritize our investments as well as validate our approach.  All too often we’re all guilty of saying something like “we know everyone does <x>” or “all users do <y>”.  Given the reliability and statistical accuracy of this data, we can speak with more real-world accuracy about how things are used in practice.  Let’s look at some interesting information we have collected about how our customers use the taskbar.

    Figure 3 provides some of the most important data about the taskbar—window count. On average, we know that a vast majority of our customers encounter up to 6-9 simultaneous windows during a session (a session is defined as a log in / log out or 24 hours—whichever occurs first). It goes without saying that the taskbar should work for the entire distribution of this graph, but identifying the “sweet spot” helps focus our efforts on the area that matters most to the greatest number of customers. So, we know that if we nail the 6-9 case and we work well for the 0-5 as well as the 10-14 scenarios, we’ve addressed almost 90% of typical sessions.

    Histogram indicating peak number of open windows in sessions.

    Fig. 3: What’s the maximum number of windows opened at a time?
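    The “sweet spot” reasoning above is just cumulative coverage over the histogram buckets. The sketch below shows the arithmetic; the bucket shares here are made-up placeholder numbers (the real distribution is in Fig. 3), chosen only so that the 0-14 buckets sum to the ~90% figure mentioned in the text.

```python
# Hypothetical bucket shares -- the real histogram is Fig. 3; these numbers
# are invented for illustration so that 0-5 + 6-9 + 10-14 covers ~90%.
buckets = [("0-5", 0.28), ("6-9", 0.42), ("10-14", 0.20), ("15+", 0.10)]

def cumulative_coverage(buckets, upto):
    """Sum bucket shares from the first bucket through the named one."""
    total = 0.0
    for name, share in buckets:
        total += share
        if name == upto:
            break
    return total

print(round(cumulative_coverage(buckets, "10-14"), 2))  # 0.9
```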

    Figures 4 and 5 help us understand how customers customize their taskbars. We could probably spend an entire post focused solely on how we determine the options we expose. Perhaps another time we’ll tackle the paradox of choice and how options stress our engineering process yet also make the product more fun for a set of customers. Until then, let’s see what conclusions we can draw from these findings. The most obvious takeaway is that most customers do not change the default settings, which are a simple right-click Properties away. For example, it may be interesting to note how often end-users relocate the taskbar to other regions of the screen—less than 2% of sessions have a taskbar that’s not at the bottom of the screen. We also know that some small percentage of machines accidentally relocate the taskbar and more often than not end-users have difficulty undoing such a state—though our data does not differentiate this situation.  This data does not necessarily mean we would remove relocation functionality, but rather we could prioritize investments in a default horizontal taskbar over other configurations.

    Image of Taskbar properties dialog indicating percentage of sessions where a particular option is customized.

    Fig. 4: How do people customize their taskbar? The red number indicates percentage of sessions in which the corresponding checkbox is enabled.

    LOCATION            SESSION PERCENT
    Bottom (default)    98.4%
    Top                 1.02%
    Left                0.36%
    Right               0.21%

    Fig. 5: Where do people put their taskbar?

    Figure 6 provides some insight into the Windows Media Player Desktop Toolbar. The Windows UX Guidelines prescribe that to create a toolbar on the customer’s taskbar, you must call a Windows Shell API that asks the customer for permission. Looking at the Windows Media Player usage we found that only 10% of sessions show that the customer consented. Even more surprising is that only 3% of sessions see the toolbar at all (you still need to minimize Media Player to see the controls). In other words, 97% of sessions aren’t enjoying this functionality at all! Since we do believe the scenario has value, we know to look into alternative designs. We’d like to surface this functionality to a larger set of customers while making sure the customer remains in control of her experience.

    STATE                          SESSION PERCENT
    Toolbar enabled                10%
    Toolbar enabled and visible    3%

    Fig. 6: How many people use the Windows Media toolbar? Enabled means user consented to the toolbar, visible means the toolbar actually appeared on the taskbar.

    Evolving the Taskbar

    Before the team even sat down to brainstorm ideas about improving the taskbar, we all took time to first respect the UI. The taskbar is almost 15 years old, everyone uses it, people are used to it and many consider it good enough. We also recognized that if we were to improve it, we could not afford to introduce usability failures where none existed. This automatically sets a very high bar. We proceeded carefully by first looking into areas for improvement.

    Here’s a small sample of some things we’ve learned from our data, heard from our customers and observed ourselves. One of our favorite ways of gaining verbatim comments is in a lab setting, where we can validate the instrumented data and also gain in-depth context via interviews and questionnaires.  In engineering Windows 7 we have hundreds of hours of studies like these.  Please remember this is just a glimpse of some feedback—this is not an exhaustive list, nor is it implied that we will, or should, act upon all of these concepts.

    • Please let me rearrange taskbar buttons! Pretty please?
    • I sometimes accidentally click on the wrong taskbar button and get the wrong window.
    • It would be great if the taskbar spanned multiple monitors so there’s more room to show windows I want to switch to.
    • There isn’t always enough text on the taskbar to identify the window I’m looking for.
    • There’s too much text on the taskbar. (Yes, this is the exact opposite of the previous item—we’ve seen this quite a bit in the blog comments as well.)
    • It may take several clicks to get to some programs or files that I use regularly.
    • Icons of pinned files sometimes look too much alike—I wish I could tell them apart better.
    • The bottom right side of my screen is too noisy sometimes. There are lots of little icons and balloons competing for my attention.
    • How do I add/remove “X” from the taskbar?
    • I would like Windows to tuck away its features cleverly and simplify its interface.

    In the abstract, we can summarize this feedback with a few principles:

    • Customers can switch windows with increased confidence and ease.
    • Commonly used items and tasks should be at the customer’s fingertips.
    • Customers should always feel in control.
    • The taskbar should have a cleaner look and feel.

    We hope this post provides a little more insight into the taskbar as well as our process of collecting and reacting to customer feedback. Stay tuned for more details in the future.

    - Chaitanya

  • Engineering Windows 7

    The "Ecosystem"

    • 86 Comments

    In the emails and comments, many topics are raised, and more often than not we see several facets or positions of each issue. One theme that comes through is a desire expressed by folks to choose what is best for them. I wanted to pick up on the theme of choice since that is such an incredibly important part of how we approach building Windows—choice in all of its forms. This choice exists because Windows is part of an ecosystem, where many people are involved in making many choices about what types of computers, configurations of operating system, and applications/services they create, offer, or use. Windows is about being a great component of the ecosystem, and what we are endeavoring to do with Windows 7 is to make sure we do a great job on the ecosystem aspects of building Windows 7.

    Ecosystem and choice go hand in hand. When we build Windows we think of a number of key representatives within the ecosystem beyond Windows:

    • PC makers
    • Hardware components
    • Developers
    • Enthusiasts

    Each of these parties has a key role to play in delivering on the PC experience and also in providing an environment where many people can take a PC and provide a tailored and differentiated experience, and where companies can profit by providing unique and differentiated products and services (and choice to consumers). For Windows 7 our goals have been to be clearer in our plans and stronger in our execution such that each can make the most of these opportunities building on Windows.

    PC Makers (OEMs) are a key integration point for many aspects of the ecosystem. They buy and integrate hardware components and pre-install software applications. They work with retailers on delivering PCs and so on. The choices they provide in form factors for PCs and industrial design are something we all value tremendously as individuals. We have recently seen an explosion in the arrival of lower cost laptops and laptops that are ultra thin. Each has unique combinations of features and benefits. The choice to consumers, while sometimes almost overwhelming, allows for an unrivaled richness. For Windows 7 we have been working with OEMs very closely since the earliest days of the project to develop a much more shared view of how to deliver a great experience to customers. Together we have been sharing views on ways to provide differentiated PC experiences, customer feedback on pre-loaded software, and partnering on the end-to-end measurement of the performance of new PCs on key metrics such as boot and shutdown.

    Hardware components include everything from the CPU through the “core” peripherals of i/o to add-on components. The array of hardware devices supported by Windows through the great work of independent hardware vendors (IHVs) is unmatched. Since Windows 95 and the introduction of plug-and-play we have continued to work to improve the experience of obtaining a new device and having it work by just plugging it in—something that also makes it possible to experience OS enhancements independent of releases of Windows. This is an area where some express that we should just support fewer devices that are guaranteed to work. Yet the very presence of choice and ever-improving hardware depends on the ability of IHVs to provide what they consider differentiated experiences on Windows, often independent of a specific release of Windows. The device driver model is the core technology that Microsoft delivers in Windows to enable this work. For Windows 7 we have committed to further stabilization of the driver model and to pull forward the work done for Windows Vista so it seamlessly applies to Windows 7. Drivers are a place where IHVs express their differentiated experience so the breadth of choice and opportunity is super important. I think it is fair to say that most of us desire the experience where a “clean install” of Windows 7 will “just work” and seamlessly obtain drivers from Windows Update when needed. Today with most modern PCs this is something that does “just work” and it is a far cry from even a few years ago. As with OEMs we have also been working with our IHV partners for quite some time. At WinHEC we have a chance to show the advances in Windows 7 around devices and the hardware ecosystem.

    Developers write the software for Windows. Just as with the hardware ecosystem, the software ecosystem supports a vast array of folks building for the Windows platform. Developers have always occupied a special place in the collective heart of Microsoft given our company roots in providing programming languages. Each release of Windows offers new APIs and system services for developers to use to build the software they want to build. There are two key challenges we face in building Windows 7. First, we want to make sure that programs that run on Windows Vista continue to run on Windows 7. That’s a commitment we have made from the start of the project. As we all know this is perhaps the most critical aspect of delivering a new operating system in terms of compatibility. Sometimes we don’t do everything we could, and each release we look at how we can test and verify a broader set of software before we release. Beta tests help for sure but lack the systematic rigor we require. The telemetry we have improved in each release of Windows is a key aspect. But sometimes we aren’t compatible, and then this telemetry allows us to diagnose and address the issue post-release. If you’ve seen an application failure and were connected to the internet there’s a good chance you got a message suggesting that an update is available. We know we need to close the loop more here. We also have to get better at the tools and practices Windows developers have available to them to avoid getting into these situations—at the other end of all this is one customer, and bouncing between the ISV and Microsoft is not the best solution.

    Our second challenge is in providing new APIs for developers that help them deliver new functionality for their applications, while at the same time providing enough value that there is a desire to spend schedule time using these APIs. Internally we often talk about “big” advances in the GUI overall (such as the clipboard, or the ability to easily print without developing an application-specific driver model). Today functionality such as networking and graphics play vital roles in application development. We’ve talked about a new capability which is the delivery of touch capabilities in Windows 7. We’ve been very clear about our view that 64-bit is a place for developers to spend their energy as that is a transition well underway and a place where we are clearly focused.

    Enthusiasts represent a key enabler of the ecosystem, and almost always the one that works for the joy of contributing. As a reader of this blog there’s a good chance you represent this part of the ecosystem—even those of us who work in the industry are also “fans” of the industry. There are many aspects of a Windows release that need to appeal to enthusiasts. For example, many of us are the first line of configuration and integration for our family, friends, and neighbors. I know I spent part of Saturday setting up a new wireless network for a school teacher/friend of mine and I’m sure many of you do the same. Enthusiasts are also the most hardcore about wanting choice and control of their PCs. It is enthusiast sites/magazines that have started to review new PCs based on the pre-installed software load and how “clean” that load is. It is enthusiasts that push the limits on new hardware such as gaming graphics. It is enthusiasts who are embracing 64-bit Windows and pushing Microsoft to make sure the ecosystem is 64-bit ready for Windows 7 (we’re pushing of course). I think of enthusiasts as the common thread running through the entire ecosystem, participating at each phase and with each segment. This blog is a chance to share with enthusiasts the ins and outs of all the choices we have to make to build Windows 7.

    There are several other participants in the ecosystem that are equally important as integration points. The system builders and VARs provide PCs, software, and service for small and medium businesses around the world. Many of the readers of this blog, based on the email I have received, represent this part of the ecosystem. In many countries the retailers serve as this integration point for the individual consumer. For large enterprise customers the IT professionals require the most customization and management of a large number of PCs. Their needs are very demanding and unique across organizations.

    Some have said that an ecosystem is not the best approach, and that we could do a much better job for customers if we reduced the “surface area” of Windows and supported fewer devices, fewer PCs, fewer applications, and less of Windows’ past or legacy. Judging by the variety of views we've seen, I think folks desire a lot of choice (just consider the range of views on DPI and monitor size alone).  Some might say that from an engineering view less surface area is an easier engineering problem (it is by definition), but in reality such a view would result in a radical and ever-shrinking reduction in the choices available to consumers. The reality is that engineering is about putting constraints in place, and those constraints can also be viewed as assets, which is how we view the breadth of devices, applications, and “history” of Windows. The ecosystem for PCs depends on opportunities for many people to try out many ideas and to explore ideas that might seem a bit crazy early on and then become mainstream down the road. With Windows 7 we are renewing our efforts at readying the ecosystem while also building upon the work done by everyone for Windows Vista.

    The ecosystem is pretty significant in both the depth and breadth of the parties involved. I thought for the purposes of our dialog on this blog it was worth highlighting this up front. There are always engineering impacts to balancing the needs of each aspect of the ecosystem. Optimizing entirely along one dimension sometimes seems right in the short term, but over any period of time it is a risky practice, as a stable platform that allows for differentiation is something that seems to benefit many.

    With Windows 7 we committed up front to doing a better job as part of the PC ecosystem.

    Does this post reflect your view of the ecosystem? How could we better describe all those involved in helping to make the PC experience amazing for everyone?

    --Steven

  • Engineering Windows 7

    More Follow up to discussion about High DPI

    • 48 Comments

     

    Excellent!  What a fun discussion we’ve been having on High DPI.  It has been so enriching that Ryan wrote up a summary of even more of the discussion.  Thanks so much!  --Steven

    There have been quite a few comments posted regarding high DPI, along with some lively discussion. Most of what has been said has been good anecdotal examples which are consistent with the data we have collected. For the areas where we didn’t have data, the comments have helped to validate many of our assumptions for this group. It is also clear that there are some areas of this feature which are confusing, and in some cases there is a bit of “myth” around what is ideal, what is possible, and what is there. This follow up post is mostly to summarize what we have heard, and to provide some details around the areas where there has been a bit of confusion.

    Here is a list of our top “assumptions” which have been echoed by the comments posted:

    • Most people adjust the screen resolution either to get larger text, or because it was an accident
    • There is a core of people who know about high DPI and who use it
    • Some people prefer more screen real-estate while other people prefer larger UI
    • Discoverability of the DPI configuration is a concern for some
    • App compat is a common issue, even a “deal breaker” in some cases
    • IE Scaling is one of the top issues listed (see IE8 which fixes many of these!)
    • Lots of complexities/subtleties and it is pretty hard to explain this feature to most people

    There have also been a number of areas where there has been a bit of confusion:

    • Is it true that if everything were vector-based, these problems would all go away?
    • Shouldn’t this just work without developers having to do anything?
    • How is this different from per-application scaling like IE zooming?
    • Should DPI be for calibration or for scaling?

    Most of these topics have been covered to some degree in the comments, but since there has been so much interest, we decided to go into a bit more details around a few of the top issues/concerns.

    Vector Graphics vs. Raster Graphics

    With PCs, there is always the hope of a “silver bullet” technology which solves all problems making life easy for users, developers, and designers across the board. As an example, some of the comments to the original posting suggested that if we just made the OS fully vector based, these scaling problems would go away. It turns out that there are several issues with using vector graphics which are worth explaining.

    The first issue is that oftentimes when an icon gets to be small in size, it is better to use an alternate representation so that the meaning is clearer. Notice the icons below. In this case, the designer has chosen a non-perspective view for the icon when it is rendered at its smallest size.

    Example of the same icon treated differently depending on size.

    This is because the designer felt that the information expressed by the icon was clearer with a straight-on view at the smallest size. Here is another example illustrating this point:

    Additional example of icons treated differently as the size changes.

    Of course, this means that the designer must create multiple versions of the source image design, so there is additional complexity. The point here is that there is a tradeoff that must be made and the tradeoff is not always clear.

    Even when one vector source is used, it is common to have size-dependent tweaking to make sure that the result is true to what the designer had in mind. Imagine a vector graphic which has a 1-pixel line at 128x128 that gets scaled down by 1/2 to 64x64. The display has no way of rendering a perfect 1/2 pixel line! In many cases the answer is that the renderer will “round” to a nearby pixel line. However, doing this inherently changes the layout of the sub-elements of the image. And there is the question of, “which pixel line to round to?” If the designer does not hand tune the source material, it will be up to the rendering engine to make this decision, and that can result in undesirable effects. One could say that this should therefore define rules about what elements should be used in a vector, but that only further restricts what concepts can be represented.
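    The half-pixel problem above is easy to see with a little arithmetic. The sketch below (names and the snapping policy are illustrative, not any actual rendering engine) scales a coordinate from a 128-pixel design down to 64 pixels and shows that some coordinates land exactly between pixel columns, forcing the renderer to pick a side.

```python
# Scale a coordinate from a source design size to a destination size.
def scale_coord(x, src_size, dst_size):
    return x * dst_size / src_size

# One possible snapping policy: round to the nearest pixel boundary.
# (Python's round() uses banker's rounding, so 5.5 snaps to the even 6.)
def snap(x):
    return round(x)

scaled = scale_coord(11, 128, 64)  # a line at x = 11 in the 128px design
print(scaled)        # 5.5 -- falls exactly between two pixel columns
print(snap(scaled))  # 6 -- the renderer had to choose a side
```

    Whichever side the renderer picks, the line shifts relative to its neighbors, which is exactly why designers hand-tune small sizes rather than trust automatic scaling.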

    It turns out that even the TrueType fonts which we use in Windows are hand-tuned with size-dependent information in order to make the result as high quality as possible. Most of the TrueType rendering pipeline is based on algorithmic scaling of a vector source, but there are size-dependent, hand-coded “hints” in TrueType which the designer specifies to direct the system how to handle edge cases, such as lines falling between pixel boundaries. Here is a link describing this in more detail: http://blogs.msdn.com/fontblog/archive/2005/10/26/485416.aspx

    It is not even true that vector graphics are necessarily smaller in size (especially for small images). Most designers create graphics using an editor which builds up an image using many layers of drawings and effects. With bitmap-based graphics, designers will “flatten” the layers before adding the image to a piece of software. Most designers today pay little attention to the size of the layers because the flattening process essentially converts the image to a fixed size based on its resolution. With vector graphics, there is no such flattening technique, so designers need to carefully consider the tools that they use and the effects that they add to make sure that their icon is not extremely large. I spent some time with one of our designers who had a vector graphic source for one of our bitmaps in Windows and the file was 622k! Of course that file size is fixed regardless of the resulting resolution, but that huge file flattens into this 23k PNG bitmap.

    Of course, a hand-tuned vector-based representation of this could probably be made smaller if the size constraints were part of the design-time process. But that would be an additional constraint on the designer, and one which they would need to learn how to handle well.

    How do we help developers?

    For applications that need to carefully control the layout and graphics, or scale the fidelity of the images based on the available resolution, having a way of specifying specific pixel locations for items is important to get the best result. This is as true on the Mac as it is on the PC (see http://developer.apple.com/releasenotes/GraphicsImaging/RN-ResolutionIndependentUI/). There is often a belief that if we just provided the right tools or the right framework then all these problems would go away. We all know that each set of tools and each framework has its own set of tradeoffs (whether that is Win32, .NET, or HTML). While there is no silver bullet, there are things we can do to make writing DPI-aware applications easier for developers. As an example, there are two important upcoming ecosystem events in which we will be talking in detail about High DPI. We will also be making materials available to developers to help educate them on how to convert existing applications to be DPI aware. The first event is the Microsoft Professional Developers Conference, where we will talk about High DPI as part of the talk “Writing your Application to Shine on Modern Graphics Hardware (link)”. The second is the Windows Hardware Engineering Conference, in which we will be discussing high DPI as part of the “High Fidelity Graphics and Media” track (link).
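    As a rough illustration of what “DPI aware” means in code, here is a portable sketch of the arithmetic such an application typically does. The helper mimics Win32’s MulDiv (multiply, then divide, rounding to nearest); in a real GDI application the DPI value would come from an API such as GetDeviceCaps rather than being hard-coded:

    ```c
    #include <stdio.h>

    /* Portable stand-in for Win32's MulDiv(): n * num / den, rounded to
       nearest. DPI-aware GDI code scales layout metrics this way. */
    static int mul_div(int n, int num, int den) {
        long long t = (long long)n * num;
        return (int)((t + den / 2) / den);
    }

    /* Scale a layout metric designed at the 96 DPI baseline to the user's
       DPI setting. (A real app reads the DPI from the device context.) */
    static int scale_for_dpi(int pixels_at_96, int dpi) {
        return mul_div(pixels_at_96, dpi, 96);
    }

    int main(void) {
        int dpi = 144;  /* a common "large" setting: 150% of baseline */
        printf("20px margin at 96 DPI -> %dpx at %d DPI\n",
               scale_for_dpi(20, dpi), dpi);
        return 0;
    }
    ```

    The same scale factor is applied to fonts, window frames, and button positions so the whole layout grows together instead of just the text.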

    Help with App Compat Issues

    There have been several posts on app compat and high DPI (for example bluvg’s comment). There have also been comments talking about the complexity and understandability of the High DPI configuration. In some cases the app compat issues can be mitigated by enabling or disabling the automatic scaling feature. This can be changed globally by going to the DPI UI, clicking the button labeled “Custom DPI” and changing the checkbox labeled “Use Windows XP style DPI scaling”. When this checkbox is unchecked, applications which are not declared to be DPI aware are automatically scaled by the window manager. When it is checked, automatic scaling is disabled globally. It is interesting to note that for DPI settings < 144 DPI, this box is checked by default, and for DPI settings >= 144 it is unchecked by default. In some cases, changing the default settings can result in a better experience depending on the applications that you use and your DPI setting. Note also that automatic scaling can be turned off on a per-application basis using the Vista Program Compatibility properties. Here is a link for more info on how to do that: http://windowshelp.microsoft.com/Windows/en-US/help/bf416877-c83f-4476-a3da-8ec98dcf5f101033.mspx. (Look at the section for “Disable Display Scaling on high DPI settings”.)
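    For reference, the way an application declares itself DPI aware (and thereby opts out of automatic scaling) is through its application manifest. The fragment below is a sketch of the Vista-era declaration; check the SDK documentation for the exact schema before relying on it:

    ```xml
    <!-- Manifest fragment: declares the process DPI aware so the window
         manager does not apply automatic (blurry) bitmap scaling to it. -->
    <asmv3:application xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
      <asmv3:windowsSettings
          xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
        <dpiAware>true</dpiAware>
      </asmv3:windowsSettings>
    </asmv3:application>
    ```

    An application that makes this declaration is promising to do its own scaling; if it declares awareness but does not actually scale, it will render too small at high DPI settings.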

    How is DPI scaling different from per-application scaling (like IE Zoom?)

    A typical application UI is made up of a content window and a frame UI. The frame UI is where the menu items and toolbar buttons are. The content window is the “document view”. For example, in IE the content window is the actual webpage. It turns out the code required to support high DPI scaling for the content window is the same code required to do the zooming feature. In fact, for the content window, IE8 simply uses the high DPI setting to configure the default zoom factor (see DPI Scaling and Internet Explorer 8 for more details). However, high DPI has the additional side effect of scaling the size of the frame UI. Since most people use the scaling feature to make text larger to be more readable, it makes sense to scale the frame UI as well, since the text in the menu items and toolbar tooltips will also scale. In a sense, per-application scaling is really about the content area, and applications will support it as developers see the customer need. DPI scaling, by contrast, makes the UI elements of all applications render similarly.
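    The relationship described here is easy to state as arithmetic. The sketch below (illustrative, not IE’s actual code) shows how a DPI setting maps to a default content zoom percentage against the 96 DPI baseline:

    ```c
    #include <stdio.h>

    /* Default content zoom derived from the system DPI setting, as a
       straight ratio against the 96 DPI baseline, in percent. */
    static int default_zoom_percent(int dpi) {
        return dpi * 100 / 96;
    }

    int main(void) {
        /* The two most common "large" DPI settings. */
        printf("120 DPI -> %d%% default zoom\n", default_zoom_percent(120));
        printf("144 DPI -> %d%% default zoom\n", default_zoom_percent(144));
        return 0;
    }
    ```

    So a user who sets 144 DPI gets content zoomed to 150% by default, and the frame UI is scaled by the same factor.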

    Shouldn’t DPI really be used for calibrating the screen (so “an inch is an inch”)?

    Some have suggested that we should just use high DPI as a way to calibrate the screen so that the physical size of an object is the same regardless of the display. This makes a ton of sense from a logical perspective. The idea would be to calibrate the display so “an inch is an inch”. We thought about doing this, but the problem is that it does not solve the end user need of wanting to have a way to configure the size of the text and the UI. If we then had a separate “global scale” variable, this would mean that application developers would need to pay attention to both metrics, which would add complexity to the developer story. Furthermore, if a user feels that the UI is too small, should it be up to the developer or the user to set the preference of how big the UI should be? In other words, if the designer wants the button to be an inch, but the user wants the button to be 1.5 inches to make it easier to use, who should decide? The way the feature works today, it is up to the user to set their preference, but it is up to the application developer to make sure that the user preference is honored.

    Once again, it is really great to see so much interest in high DPI. We certainly have some challenges ahead of us, but in many ways it seems like the ecosystem is ripe for this change. Hopefully this follow-up post helped to consolidate some of the feedback which we have heard on the previous post and explain some of the complexities of this feature in more detail.

    --Ryan Haveson

  • Engineering Windows 7

    Follow-up on High DPI resolution

    • 73 Comments

    One of the cool results of this dialog is how much interest there is in diving into the details and data behind some of the topics as expressed in the comment and emails.  We’re having fun talking in more depth about these questions and observations.  This post is a follow-up to the comments about high DPI resolution, application compatibility, and the general problems with readability in many situations.  Allow me to introduce a program manager lead on our Desktop Graphics team, Ryan Haveson, who will expand on our discussion of graphics and Windows 7.  –Steven

    When we started Windows 7 planning, we looked at customer data for display hardware, and we found something very interesting (and surprising). We found that roughly half of users were not configuring their PC to use the full native screen resolution. Here is a table representing data we obtained from the Windows Feedback Program which Christina talked about in an earlier post.

    Table showing that 55% of those with higher definition monitors lower their resolution.

    We don't have a way of knowing for sure why users adjust their screen resolution down, but many of the comments we’ve seen match our hypothesis that a lot of people do this because they have difficulty reading default text on high-resolution displays.  With that said, some users probably stumble into this configuration by accident; for example, due to a mismatched display driver or an application that changed the resolution for some reason but did not change it back. Regardless of why the screen resolution is lower, the result is blurry text that can significantly increase eye fatigue when reading on a PC screen for a long period of time. For LCD displays, much of the blurriness is caused by the fact that they are made up of fixed pixels. In non-native resolution settings, this means that the system must render fractional pixels across fixed units, causing a blurred effect. Another reason for the relative blurriness is that when the display is not set to native resolution, we can’t properly take advantage of our ClearType text rendering technology, which most people (though not all) prefer. It is interesting to note that the loss of fidelity due to changing screen resolution is less pronounced on a CRT display than on an LCD display, largely because CRTs don’t have fixed pixels the way that LCDs do. However, because of the advantages in cost and size, and the popularity of the laptop PC, LCD displays are fast gaining market share in the installed base. Another problem with running in a non-native screen resolution is that many users inadvertently configure the display to a non-native aspect ratio as well. This results in an image that is both blurry and skewed! As you can imagine, this further exacerbates the issues with eye strain.
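    A quick back-of-the-envelope sketch shows why fixed-pixel panels blur at non-native settings (the resolutions below are illustrative):

    ```c
    #include <stdio.h>

    /* How many physical panel pixels must represent one logical pixel when
       a fixed-pixel LCD runs below its native horizontal resolution? */
    static double pixels_per_logical(int native, int configured) {
        return (double)native / (double)configured;
    }

    int main(void) {
        /* A 1600-pixel-wide panel configured for a 1024-pixel-wide mode:
           each logical pixel must cover a non-integer number of physical
           pixels, so the panel blends adjacent pixels -- the blur above. */
        printf("1024 logical px on a 1600 px panel: %.4f physical px each\n",
               pixels_per_logical(1600, 1024));
        return 0;
    }
    ```

    Only when the ratio works out to a whole number (or the panel runs at native resolution, ratio 1.0) can each logical pixel map cleanly onto physical pixels.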

    Looking beyond text, in these scenarios the resulting fidelity for media is significantly reduced as well. With the configuration that many users have, even if their hardware is capable, they are not able to see native “high def” 720p or 1080p TV content, which corresponds to 1280x720 and 1920x1080 screen resolutions respectively. The PC monitor has traditionally been the “high definition” display device, but without addressing this problem we would be at risk of trailing the TV industry in this distinction. While it is true that only about 10% of users have a truly 1080p-capable PC screen today, as these displays continue to come down in price the installed base is likely to continue to grow. And you can bet that there will be another wave of even higher fidelity content in the future which users will want to take advantage of. As an example, when displays get to 400 DPI they will be almost indistinguishable from printed text on paper. Even the current generation of eBook readers with a DPI of ~170 look very much like a piece of paper behind a piece of glass.

    From this we see that there is a real end user benefit to tap into here. It turns out that there is existing infrastructure in Windows called “High DPI” which can be used to address this. High DPI is not a new feature for Windows 7, but it was not until Vista that the OS user interface made significant investments in support for high DPI (beyond the infrastructure present earlier). To try this out in Vista, right-click the desktop, choose Personalize, and select “Adjust font size (DPI)” from the left-hand column. Our thinking for Windows 7 was that if we enable high DPI out of the box on capable displays, we will enable users to have a full-fidelity experience and also significantly reduce eye strain for on-screen reading. There is even infrastructure available to us to detect a display’s native DPI so we can do a better job of configuring default settings out of the box. However, doing this will also open the door to exposing some issues with applications which may not be fully compatible with high DPI configurations.

    One of the issues is that for GDI applications to be DPI aware, the developer must write code to scale the window frame, text size, graphical buttons, and layout to match the scaling factor specified by the DPI setting. Applications which do not do this may have some issues. Most of these issues are minor, such as mismatched font sizes, or minor layout artifacts, but some applications have major issues when run at high DPI settings.

    There are some mitigations that we can do in Windows, such as automatic scaling for applications which are not declared DPI aware (see Greg Schechter’s blog on the subject), but even these mitigations have problems. In the case of automatic scaling, applications which are not DPI aware are automatically scaled by the window manager. The text size matches the user preference, but this also introduces a blurry effect for that application’s window. For people who can’t read the small text without the scaling, this is a necessary feature to make the high DPI configuration useful. However, other customers may only be using applications that scale well at high DPI or may be less impacted by mismatched text sizes, and may find the resulting blurry effect caused by automatic scaling to be a worse option. Without a way for the OS to detect whether an application is DPI aware or not, we have to pick a default option. It always comes back to the question of weighing the benefits and looking at the tradeoffs. In the long term, the solution is to make sure that applications know how to be resolution independent and are able to scale to fit the desired user preference, which requires support in both our tools and documentation. The challenge for a platform is to figure out how to get there over time and how to produce the best possible experience during the transition.

    Short term vs. long term customer satisfaction

    Using the model of high definition TV, we can see that in the long term it is desirable to have a high fidelity experience. The only problem is that even though the high DPI infrastructure has been around for several Windows releases (in fact there is an MSDN article dated 2001 on making applications DPI aware), we were not sure how many applications are actually tested in these configurations. So we were faced with an unquantified potential short term negative customer impact caused by enabling this feature more broadly. The first thing we did was to quantify the exposure. We did this by performing a test pass with over 1,000 applications in our app compat lab to see how they behave at high DPI settings. The results are shown below as the distribution of issues across these 1,000 applications.

    One quick thing, when we say “bug” we mean any time software behaves in a manner inconsistent with expectations—so it can be anything from cosmetic to a crash. We categorize the severity of these bugs on a scale from 1 to 4, where Sev 1 is a really bad issue (such as a crash and/or loss of data or functionality) and Sev 4 is an issue which is quite subtle and/or very difficult to reproduce.

    It turns out that most applications perform well at high DPI, and very few applications have major loss of functionality. Of course, it is not the ones that work well which we need to worry about. And if 1% of applications have major issues at high DPI, that could be a significant number. So we took a look at the bugs and put them into categories corresponding to the issue types found. Here is what we came up with:

    Of 1000 applications tested for high DPI compatibility, 1% had severity 1 issues, 1% severity 2, 5% severity 3, and 2% severity 4, with 91% having no issue at all.

    What we found was that one of the most significant issues was with clipped UI. Looking into this deeper, it became apparent that most of these cases were in configurations where the effective screen resolution would be quite low (800x600 or lower). Based on this, we were able to design the configuration UI in such a way that we minimized the number of cases where users would configure such a low effective resolution. One by one we looked at the categories of issues and when possible, we came up with mitigations for each bucket. Of course, the best mitigation is prevention and so High DPI is a major focus for our developer engagement stories for PDC, WinHEC, and other venues coming up.

    Aggregate vs. individual user data

    One thing for us to look at is how many users are taking advantage of high DPI today (Vista/XP). Based on the data we have, only a very small percentage of users are currently enabling the high DPI feature. This could easily be interpreted as a clear end user message that they don’t care about this feature or have a need for this feature. An alternate explanation could be that the lack of adoption is largely because XP and Vista had only limited shell support for high DPI, and the version of IE which shipped on those platforms had significant issues with displaying mismatched font sizes and poorly scaled web pages. Also, we do know anecdotally that there are users who love this feature and have used it even before Vista. Once again, we have to make an interpretation of the data and it is not always crystal clear.

    Timing: is this the right feature for the market at this point in time?

    Fortunately, we don’t have a “chicken and egg” problem. The hardware is already out in the field and in the market, so it is just a matter of the OS taking advantage of it. From a software perspective, most of the top software applications are DPI aware (including browsers with improved zooming, such as IE 8), but there remain a number of applications which may not behave well at high DPI. Another key piece of data is that display resolution for LCD panels is reaching the maximum at standard DPI. For these displays, there is no reason to go beyond 1920x1200 without OS support for high DPI because the text would be too small for anyone to read. Furthermore, this resolution is already capable of playing the highest fidelity video (1080p) as well as 2 megapixel photos. The combination of existing hardware in the field, the future opportunity to unlock better experiences, and the fact that the hardware is now blocked on the OS and the software speaks to this being the right timing.

    Conclusion

    Looking at customer data helps us identify ways to improve the Windows experience. In this case, we saw clearly that we had an opportunity to help users easily configure their display such that they would enjoy a high fidelity experience for media as well as crisp text rendered at an appropriate size. With that said, anytime we invest in a feature that can potentially impact the ecosystem of Windows applications we want to be careful about bringing forward your investments in software. We also want to make sure that we engage our community of ISVs early and deeply so they can take advantage of the platform work we have done to seamlessly deliver those benefits to their customers. In the meantime, the internal testing we did and the data that we gathered was critically important to helping us make informed decisions along the way. High DPI is a good example of the need for the whole ecosystem to participate in a solution and how we can use the customer data in the field, along with internal testing, to determine the issues people are seeing and to help us select the best course of action.

    --Ryan

  • Engineering Windows 7

    The Windows Feedback Program

    • 72 Comments

    Introducing Christina Storm who is a program manager on the Windows Customer Engineering feature team working on telemetry. 

    In a previous article Steven has introduced the Windows 7 Feature Teams. I am a program manager working on telemetry on the Windows Customer Engineering team. Our team delivers the Windows Feedback Program, one of several feedback programs in place today that allow us to work directly with customers and make them part of our engineering process.

    The Windows Feedback Program (WFP) has been active for several years during the Windows XP and Windows Vista product cycles, and we are currently ramping up to get all aspects of this program ready for Windows 7. At the core of this program is a large research panel of customers who sign up via our website http://wfp.microsoft.com during open enrollment. Customers choose to be part of a survey program, an automated feedback program or both. They then complete a 20-minute profiling survey, which later allows us to look at their feedback based on their profile. We have customers spanning a wide spectrum of computer knowledge in our program, and we are constantly working on balancing the panel to staff up underrepresented groups. The majority of customers who are spontaneously willing to participate in a feedback program like ours are generally enthusiastic about technology. They are early adopters of consumer electronics, digital devices and new versions of software. In contrast, customers who see the PC as a tool to get a job done tend to be a bit more reluctant to join. And we also need more female participants!

    Customers who participate in the automated feedback program install a data collection tool developed by the Windows Telemetry Team. The privacy agreement of this program describes the data collections our tool performs. Here are a few examples:

    • Windows usage behavior including installed and used applications.
    • File and folder structures on your computer, including number of types of files in folders, such as number of jpg files in the Pictures folder.
    • System-specific information, such as hardware, devices, drivers, and settings installed on your computer.
    • Windows Customer Experience Improvement Program (CEIP) data.

    From the collected data we create reports that are consumed by a large number of Windows feature teams as well as planners and user researchers. The chart below answers the following question: what is the most common file type that customers who participate in our program store on their PCs, and what are the most popular storage locations?

    Graph showing common file types and locations.  The most common file type is .jpg across all typical locations.

    The results help us both with planning for handling the volumes of data customers store on their PCs and with mimicking real-life scenarios in our test labs by setting up PCs with similar file numbers, file sizes, and distribution of files.

    These data collections furthermore allow us to create reports based on profiled panelists. The above chart may look different if we created it based on data delivered only by developers and then compared it to data delivered only by gamers, to name a couple of the different profiles that participate in our program. The Windows knowledge level sometimes makes a difference, too. Therefore it is very important to us that this program include customers who consider themselves Windows experts as well as customers who don’t enjoy spending too much time with the PC, who just see the PC as a tool to get things done. Based on the data, we may decide to optimize certain functionality for a particular profile.

    In general, we utilize this data to better understand what to improve in the next version of Windows.  Let’s take a look at how the representative sample has their monitors configured.  First, what resolutions do customers run with on their PCs?  The following table lists typical resolutions and the distribution based on the Windows Customer Experience Improvement Program, which samples all opt-in PCs (note the numbers do not add to 100% because not every single resolution is included):

    Distribution of common screen resolutions.  Approximately 46% of customers run with 1600x1200 and 1280x1024.  Approximately 10% of customers run with HD resolution.

    One thing you might notice is that about 10% of consumers are running with HD or greater resolution.  In some of the comments, people were asking if our data represented the “top” or “power users”.  Given this sample size and the number of folks with industry-leading resolution, I think it is reasonable to conclude that we adequately represent this and all segments.  This sample is a large sample (those that opt in) of an extremely large dataset (all Windows customers), so it is statistically relevant for segmented studies.

    We have found that a large percentage of our program participants lower their display resolution from the highest usable for their display. We looked at the data coming from the Windows Customer Experience Improvement Program to compare against and noticed a similar trend: over 50% of customers with 1600x1200 screen resolution displays are adjusting their resolution down to 1024x768, likely because they find it uncomfortable to read the tiny text on high resolution displays. The negative effect of this resolution change is a loss of fidelity to the point where reading text in editors and web browsers is difficult. High definition video content also won’t render properly.

    Here is the data just for customers with displays capable of 1600x1200:

    Actual running resolution for customers with 1600x1200 capable displays shows that 68% of customers reduce their actual screen resolution.

    In a future blog post, one of the program managers from the Windows Desktop Graphics team is going to describe what we have done with that information to improve display quality and reading comfort in Windows 7.

    We also frequently use our data to select appropriate participants for a survey. A researcher may be interested in sending out an online survey only to active users of virtual machine applications. We would first determine that group of users by looking at our “application usage” data and then create the list of participants for the researcher. Sometimes we combine automatically collected data with survey responses to analyze the relationship between a customer’s overall satisfaction and their PC configuration.

    At the current point in time, 50% of our panel participants are using Windows XP and 50% are using Windows Vista. We are currently not offering open enrollment. If you are interested in being invited to this program, please send an email to winpanel@microsoft.com indicating “Notify me for enrollment” in the subject line. If you’d like to add a bit of information about yourself, including your Windows knowledge level, that would be much appreciated! We will add you to our request queue and make our best effort to invite you when we have capacity.

    When we release the Windows 7 beta we will also be collecting feedback from this panel and asking for participation from a set of Windows 7 beta users. Our current plans call for signing up for the beta to happen in the standard Microsoft manner on http://connect.microsoft.com. Stay tuned!

    -- Christina Storm

  • Engineering Windows 7

    Reflecting on a few recent threads…

    • 126 Comments

    When we kicked off this blog, the premise was a dialog – a two-way conversation about the engineering of Windows 7.  We couldn’t be happier with the way things have been going in this short time.  As we said we intended to do, we’ve started a discussion about how we build the product and have had a chance to have some back and forth in comments and in posts about topics that are clearly important to you.  To put some numbers on things, I’ve personally received about 400 email messages (and answered quite a few), and in total we have had about 900 English-language comments from about 500 different readers (with a few of you contributing more than 10 comments each).  Early numbers show we have about 10x that latter number in readers and page views.

    A number of folks on the blog have asked for more details about how we build Windows—what’s the feature selection process, the daily build process, globalization, and so on.  And in keeping with our new tradition of seeing the other “side” of an issue, many folks have also said they feel like they have enough of that information and want to know the features.  So in this post I want to offer a perspective on a couple of features that have been talked about a bunch, and also a perspective on talking about features and feature selection.

    We love the response.  We have seen that some topics have created a forum for folks to do a lot of asking for features, and we will do our best to respond in the context of what we set out to do, which is to have a discussion about how Windows 7 is engineered, including how we make choices about what goes in the product.  I admit that it might be tempting (for me) to blog a big long list of features and then say “give us feedback”.  It is tempting because I have seen this in the past and it is certainly an easy thing to do that might make people feel happier and more involved.  However, there are some challenges with this technique that make these sorts of forums less than satisfying for all of us.  First, it is “reactive” in that it asks you to just react to what you see.  Absent a shared context we won’t be remotely on the same page in terms of motivations, priorities, and so on.  This is especially the case when a feature is early and we aren’t really capable of “marketing” it effectively and telling the story of the feature.  Second, a broad set of anecdotal feedback (that is, free text) is not really actionable data and doesn’t capture the dialog and discussion we are having.  Making decisions this way is almost certain to not go well with the “half” of the folks who don’t agree with the decision or prioritization.  And third, there's a tendency to feel that feedback given yields action in that direction.  These are some of the reasons why we have taken the approach of talking about how we are making Windows 7.

    Some have suggested that we publish a list of features and then have a ranking/voting process.  In fact some have gone as far as doing that for us on their own web sites.  Thank you--these are interesting sites and we do look at them.  But I think we can all agree that there is also a challenge many folks are familiar with: a self-selected group provides one type of feedback, which is likely to be different from a group that is selected intentionally to be representative.  I was recalling an old episode of Saturday Night Live, “Larry the Lobster”, where for a toll call you could vote to save Larry from the stove or not.  We all know that is a non-scientific poll, but we also don’t even know whether it is a non-scientific poll of views on animal rights or of food preferences.  I think the value of voting on specific features goes beyond just entertainment, but we also have to spend the energy making sure we are thinking about the issues within the same context.  We also want any sampling of customers we do to be representative of either the broad base of customers or the specific target customer “segment”.

    Thus a big part of this blog is about creating a forum where we hear from each other about what is important and what our relative contexts are that we bring to the discussion.  That’s why we think about this as a dialog—it is not a question and answer, request and response, point and counter-point, or announcement and comment.  Personally, I am genuinely benefiting from the dynamic nature of what we are going to blog about based on those participating in the blog.  So this is much more like a social where we all come to meet and talk, than a business meeting where we each have specific goals or a training class where one party does all the talking. 

    In that spirit, it seems good to continue a conversation about a few points that have come up quite a bit and that I think folks have been asking for a point of view on.  Each is worthy of a post on its own, but I also wanted to offer a point of view about some specific feature requests.  Let’s look at some topics that have come up as we have talked about performance or the overall Windows experience.  Because this is “responding” to comments and input, there is a potential to delve into point/counter-point, so I am hoping we can look back at the “context” discussions we have been having before we get too deep in debate.

    Profile-based Setup

    In terms of feature ideas, a number of you have suggested that we offer a way at setup time to configure Windows for a specific scenario.  Some have suggested scenarios such as gaming, casual use, business productivity, web browsing, email, "lightweight usage", and so on.  There is an implication in there that Windows could perform (speed, space, etc.) better if we tune it for a specific scenario along these lines, but in reality this assumption probably won’t pan out in a consistent or general way.  There are many ways to consider this feature—it could be one where we tweak the contents of the Start Menu (something admins do in corporations all the time), or the performance metrics for some low level components (disk block size, tcp/ip frame sizes, etc.), or the level of user interface polish (aka “eye candy” as some have called it), and so on.  We’ve seen scenario or role-based setup as a very popular feature for Windows Server 2008.  In the server environment, however, each of these roles represents a different piece of hardware (likely with different configurations) or perhaps a specific VM on a very beefy machine, and also represents a very clearly understood "workload" (file server, print server, web server). 

    The desktop PC (or laptop) is different because there is only a single PC and the roles are not as well defined.  Only in the rarest cases is that PC dedicated to a single purpose.  And as Mike in product planning blogged, the reality is that we see very few PCs that run only a specific piece of software, and in nearly every study we have ever done, just about every PC runs at least one piece of software that other people do not run.  So we should take away from this the difficulty in even labeling a PC as being role specific.  Now there are role-specific times when using a PC, and for that the goal of an OS is to adapt well in the face of changing workloads.  As just one example of this in Windows Vista, consider the work on making the indexer a low priority activity using the new low-priority I/O APIs.  I know some have mentioned that this is “something I always turn off” but the reality is that there is an upfront cost and then the ongoing cost of indexing is indeed very low.  And this is something we have made significant improvements in for Desktop Search 4.0 (released as a download) and in Windows 7.  The reality is that a general purpose OS should adjust to the workloads asked of it.  We know things are not perfect, and we know many of you (particularly gamers) are looking for every single potential ounce of performance.  But we also know that the complexity and fragility introduced by trying to “outsmart” core system services often overshadows the performance improvements we see across the broadest sampling of customers.  There’s a little bit of “mythbusters” we could probably embark on here, so how about sharing the systematic results you have achieved, and we can address those in comments?

    Another challenge would be in developing this very taxonomy.  This is something I personally tried hard to do for Office 95 and Office 97.  We thought we could have a setup “wizard” ask you how much you used Word, Excel, PowerPoint, and Access, or a taxonomy that asked your profession (lawyer, accountant, teacher).  From that we were going to pick not just which applications but which features of the applications we would install.  We consistently ran into two problems.  First, just arriving at descriptors or questions to “categorize” people failed consistently in usability tests—the classic problem that, when given a spectrum of choices, people would peg all of them in the middle or would just “freeze up” feeling that none fit them (people don't generally like labels).  Second, we always had the problem of either multiple users of the same PC or people who would change roles or usage patterns.  It turns out our corporate customers learned this same lesson; it became routine to “install everything”, and thus began an era of installing the full suite of products and then using training to narrow the usage scenarios. 

    The final challenge is just how and when you present this to customers.  This sequence of steps, the out of box experience, or OOBE, is what you go through when you unbox a PC (the overwhelming majority of Windows customers get it this way) or run setup from a DVD (the retail “packaged product” customer).  This leads to the next item, which is looking to the OOBE as a place to do performance optimizations.  Trying to solve performance at this step is definitely a challenge and leads to our “context” for the out of box experience.

    Out of Box Experience - “OOBE”

    The OOBE is really the place that customers first experience Windows on a new PC.  As many have read in reviews of competitive (to Windows PCs) products, the experience goals most people have relate to “how fast can I get from packing knife to the web”.  For Windows 7 we are working closely with our OEM partners to make sure it is possible to deliver the most streamlined experience possible.  Of course OEMs have a ton of flexibility and differentiation opportunities in what they offer as part of setting up a new PC, and what we want to do is make sure that the “core OS” portion of this is the absolute minimum required to get to the fun of using your PC. 

    By itself, this goal would run counter to introducing a “profiling” wizard to help gauge the intended (at time of purchase) uses/usage of a PC.  That doesn’t mean that an OEM could not offer such a profiled experience as a differentiated OOBE, but it isn’t one we would ask all customers to go through as part of the “core OS” installation. 

    I recognize many of you as PC enthusiasts have gone through the experience of setting up a Linux PC using one of the varieties of package managers—probably many times just to get one installation working right.  As you’ve seen with these installs (especially as things have recently converged on one particular end-user focused distro), the number of ways you can produce a poorly running system exceeds the number of ways you can produce a fully functional (for your needs) setup.  In practice, we know that many components end up depending on many others, and ultimately this dependency graph is a challenge to manage and get right, even with a software dependency manager (like Windows Installer).  As a result, we generally see customers benefiting from a broad base of software on the machine so long as that does not have a high cost—developing that install is a part of developing the product, balancing footprint, architectural connections, system reliability, etc.

    So our context for the out of box experience would be that we don’t want to introduce complexity there, where customers are least interested in dealing with it as they want to get to the excitement of using their new PC.  I think of it a bit like the car dealers who won’t hand you the keys to your car until you sit and watch a DVD about the car and then get a guided tour of the car—if you’re like me you’re screaming “give me the keys and let me out of here”.  We think PC buyers are pretty much like that and our research confirms that around the world.

    We also recognize that there are expert users who might want to adjust the running system for any variety of reasons (performance, footprint, surface area, etc.).  We call this “turning Windows features on or off,” which is the next item we’ve heard from you about.

    Windows Features

    If the typical installation of Windows includes basically all the features in the particular SKU a customer purchased, then what about the customer who wants to tweak what is installed and remove things?  Customers might want to remove some features because they just never use them and don’t want to accidentally use them or carry with them the “code” that might run.  Customers might be defining a role for the PC (cash register) and so making sure that specific features are never there.  There are many reasons for this.  For many releases Windows has had the ability to install or uninstall various features that are part of Windows.  In Windows Vista this was made more robust, as the features are removed from the running system but remain available for reuse without the original DVD.  We also made the list of features longer in Windows Vista.

    For Windows 7, many have asked for us to make this list longer and have more features in it.  This is something we are strongly considering for Windows 7 as we think it is consistent with the design goals of “choice and control” that you have seen us talk about here and quite a bit with Internet Explorer 8.0 beta 2.

    Of course we have the same challenge that Linux distributions have, which is that removing one thing can quickly break other features, and then you have to have all the complexity of informing the customer of these “dependencies”, and ultimately you end up feeling like everything is connected to everything else.  On some OS installations this packaging works reasonably well because there is duplication of features (you pick from several file browsers, several web browsers, several office suites, several GUIs even).  The core Windows OS, while not free from some duplication, does not have this type of configuration.  Rather we ship a platform where customers can add many components as they desire.

    For customers that wish to remove, replace, or just prevent access to Windows components we have several available tools:

    • Set Your Default Programs (or Set Program Access and Defaults).  These features allow you to set the default programs/handlers by file type or protocol.  SPAD was introduced in Windows XP SP1; in Vista, SYDP expanded on this, and we expect all Microsoft software to properly register with and employ this mechanism.  So if you want to have a default email program, a default handler for GIF, or your choice of web browser, this is the user interface to use.  Windows itself respects these defaults for all the file types it manages. 
    • Customizing the start menu or group policy.  For quite some time, corporate admins have been creating “role-based” PCs by customizing the start menu (or even going way back to progman) to only show a specific set of programs.  We see this a lot in internet cafes these days as well.  The SPAD functionality takes this a step further and provides an end-user tool for removing access to installed email programs, web browsers, media players, instant messengers, and virtual machine runtimes. 
    • Removing code.  Sometimes customers just want to remove code.  With small-footprint disks such as SSDs, many folks have looked to remove more and more of Windows just to fit.  I’ve certainly seen some of the tiny Windows installations.  The supported tool for removing code from Windows is the “Turn Windows Features on and off” user interface (in Vista).  There are over 80 features in this tool in premium Vista packages today.

    Many folks want the list of Windows features that can be turned on/off to be longer, and there have been many suggestions on the site for things to make available this way.  This is more complex because of the Windows platform—many developers rely on various parts of the Windows platform and just “assume” those parts are there, whether it is a media player that uses the Windows address book, a personal finance package that uses advanced print spooling, or even a brand new browser that relies on advanced networking features.  These are real-world examples of common uses of system APIs that don’t seem readily apparent from the end-user view of the software. 

    Some examples are quite easy to see and you should expect us to do more along these lines, such as the Tablet PC components.  I have a PC that is a very small laptop, and while it has full tablet functionality it isn’t the best size for me to do good ink work (I prefer a 12.1” or greater screen and this PC has a 10” screen).  The tablet code does have a footprint in memory, and on this 1GB machine, if I go and remove the tablet components the machine does perform better.  This is something I can do today.  Folks have asked about Photo Gallery, Movie Maker, Windows Mail, Windows Calendar…this is good feedback and good things for us to consider for Windows 7. 

    An important point is that the vast majority of things you remove this way consume little or no resources if you are not using them.  So while you can reduce the surface area of the PC, you probably don’t make it perform better.  As one example, Windows Mail doesn’t slow you down at all if you don’t have any mail (or news) accounts configured.  And to be certain, you could hide access with SPAD or just change the default protocol handler to your favorite mail program.  Another example: you can just change the association and never see Photo Gallery launched for images, if that is your preference.  That means no memory is taken by these features.

    This was a chance to continue our dialog around what we are learning from the discussion and some specifics that have come up quite a bit.  I hope we are gaining a shared view of how we look at some of the topics folks have brought up. 

    So this turned into a record-long post.  Please don’t expect this too often :-)

    --Steven

  • Engineering Windows 7

    Organizing the Windows 7 Project

    • 26 Comments

    Hi Jon DeVaan here.

    Steven wrote about how we organize the engineering team on Windows which is a very important element of how work is done. Another important part is how we organize the engineering project itself.

    I’d like to start with a couple of quick notes. First is that Steven reads and writes about ten times faster than I do, so don’t be too surprised if you see about that distribution of words between the two of us here. (Be assured that between us I am the deep thinker :-). Or maybe I am just jealous.) Second is that we do want to keep sharing the “how we build Windows 7” topics, since that gives us a shared context for when we dive into feature discussions as we get closer to the PDC and WinHEC. We want to discuss how we are engineering Windows 7, including the lessons learned from Longhorn/Vista. All of these realities go into our decision making on Windows 7.

    OK, on to the tawdry bits.

    Steven linked last time to the book Microsoft Secrets, which is an excellent analysis of what I like to call version two of the Microsoft Engineering System. (Version one involved index cards and “floppy net” and you really don’t want to hear about it.) Version two served Microsoft very well for far longer than anyone anticipated, but learning from Windows XP, the truly different security environment that emerged at that time and from Longhorn/Vista, it became clear that it was time for another generational transformation in how we approach engineering our products.

    The lessons from XP revolve around the changed security landscape in our industry. You can learn about how we put our learning into action by looking at the Security Development Lifecycle, which is the set of engineering practices recommended by Microsoft to develop more secure software. We use these practices internally to engineer Windows.

    The comments on this blog show that the quality of a complete system contains many different attributes, each of varying importance to different people, and that people have a wide range of opinions about Vista’s overall quality. I spend a lot of time on core reliability of the OS, and in studying the telemetry we collect from real users (only if they opt in to the Customer Experience Improvement Program) I know that Vista SP1 is just as reliable as XP overall and more reliable in some important ways. The telemetry guided us on what to address in SP1. I was glad to see one of those improvements pointed out by people commenting that sleep and resume work better in Vista. I am also excited by the prospect of continuing our efforts (we are) to use the telemetry to drive Vista to be the most reliable version of Windows ever. I add to the list of Vista’s qualities successfully cutting security vulnerabilities by just under half compared to XP. This blog is about Windows 7, but you should know that we are working on Windows 7 with a deep understanding of the performance of Windows Vista in the real world.

    In the most important ways, people who have emailed and commented have highlighted opportunities for us to improve the Windows engineering system. Performance, reliability, compatibility, and failing to deliver on new technology promises are popular themes in the comments. One of the best ways we can address these is by better day-to-day management of the engineering of the Windows 7 code base—or the daily build quality. We have taken many concrete steps to improve how we manage the project so that we do much better on this dimension.

    I hope you are reading this and going, “Well, duh!” but my experience with software projects of all sizes and in many organizations tells me this is not as obvious or easily attainable as we wish.

    Daily Build Quality

    Daily quality matters a great deal in a software project because every day you make decisions based on your best understanding of how much work is left. When the average daily build has low quality, it is impossible to know how much work is left, and you make a lot of bad engineering decisions. As the number of contributing engineers increases (because we want to do more), the importance of daily quality rises rapidly because the integration burden increases according to the probability of any single programmer’s error. This problem is more than just not knowing what the number of bugs in the product is. If that were all the trouble caused then at least each developer would have their fate in their own hands. The much more insidious side-effect is when developers lack the confidence to integrate all of the daily changes into their personal work. When this happens there are many bugs, incompatibilities, and other issues that we can’t know because the code changes have never been brought together on any machine.

    I’ve prepared a graph to illustrate the phenomenon using a simple formula predicting the build breaks caused by a 1 in 100 error rate on the part of individual programmers over a spectrum of group sizes (blue line). A one percent error rate is good; if one used a typical rate it would be a little worse than that. I’ve included two other lines showing the build break probability if we cut the average individual error rate by half (red line) and by a tenth (green line). You can see that mechanisms that improve the daily quality of each engineer improve the overall daily build quality by quite a large amount.
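    The formula behind that graph is just the probability of at least one failure among independent check-ins. A minimal sketch of the model (the function name and sample team sizes are mine, not from the post):

```python
def p_build_break(n_programmers: int, p_individual: float) -> float:
    """Chance that at least one of n independent daily check-ins breaks
    the build, given each programmer's per-day error rate."""
    return 1.0 - (1.0 - p_individual) ** n_programmers

# At a 1-in-100 individual error rate, a 40-person feature team sees
# roughly a one-in-three chance of a break on any given day:
print(round(p_build_break(40, 0.01), 2))   # → 0.33
# Cutting the individual rate to 1-in-1000 drops that dramatically:
print(round(p_build_break(40, 0.001), 2))  # → 0.04
```

    This also shows why the break probability climbs so quickly with team size: the per-person rate compounds across every contributor integrated into the same build.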

     [Chart: probability of a daily build break vs. number of programmers, for individual error rates of 1% (blue), 0.5% (red), and 0.1% (green)]

    For a team the size of Windows, it is quite a feat for the daily builds to be reliable.

    Our improvement in Windows 7 leveraged a big improvement in the Vista engineering system, an investment in a common test automation infrastructure across all the feature teams of Windows. (You will see here that there is an inevitable link between the engineering processes themselves and the organization of the team, a link many people don’t recognize.) Using this infrastructure, we can verify the code changes supplied by every feature team before they are merged into the daily build. Inside of the feature team this infrastructure can be used to verify the code changes of all of the programmers every day. You can see in the chart how the average of 40 programmers per feature team balances the build break probability so that inside of a feature team the build breaks relatively infrequently.

    For Windows 7 we have largely succeeded at keeping the build at a high level of quality every day. While we have occasional breaks as we integrate the work of all the developers, the automation allows us to find and repair any issues and issue a high quality build virtually every day. I have been using Windows 7 for my daily life since the start of the project with relatively few difficulties. (I know many folks are anxious to join me in using Windows 7 builds every day—hang in there!)

    For fun I’ve included a couple pictures from our build lab where builds and verification tests for servers and clients are running 24x7:

    [Photos: the Windows build lab]

    Conclusion

    Whew! That seems like a wind sprint through a deep topic that I spend a lot of time on, but I hope you found it interesting. I hope you start to get the idea that we have been very holistic in thinking through new ways of working and improvements to how we engineer Windows through this example. The ultimate test of our thinking will be the quality of the product itself. What is your point of view on this important software engineering issue?

  • Engineering Windows 7

    Product Planning for Windows...where does your feedback really go?

    • 75 Comments

    Ed. Note:  Allow me to introduce Mike Angiulo who leads the Windows PC Ecosystem and Planning team.  Mike’s team works closely with all of our hardware and software partners and leads the engineering team's product planning and research efforts for each new version of Windows.  --Steven 

    In Windows we have a wide variety of mechanisms to learn about our customers and marketplace, all of which play a role in helping us decide what we build.  From the individual questions that our engineers will answer at WinHEC and PDC to the millions of records in our telemetry systems, we have tools for answering almost every kind of question around what you want us to build in Windows and how well it’s all working.  Listening to all of these voices together and building a coherent plan for an entire operating system release is quite a challenge – it can feel like taking a pizza order for a billion of your closest friends!

     

    It should come as no surprise that in order to have a learning organization we need to have an organization that is dedicated to learning.  This is led by our Product Planning team, a group of a couple dozen engineers that is organized alongside, and sits with, the program managers, developers, and testers in the feature teams.  They work throughout the product cycle to ensure that our vision is compelling, based on a deep understanding of our customer environment, and balanced with the business realities and competitive pressures that are in constant flux.  Over the last two years we’ve had a team of dozens of professional researchers fielding surveys, listening to focus groups, and analyzing telemetry and product usage data leading up to the vision and during the development of Windows 7 – and we’re not done yet.  From our independently run marketing research to reading your feedback on this blog, we will continue to refine our product and the way we talk about it to customers and partners alike.  That doesn’t mean that every wish goes answered!  One of the hardest jobs of planning is turning all of this data into actionable plans for development.  There are three tough tradeoffs that we have been making recently.

     

    First there is what I think of as the ‘taste test challenge.’ Over thirty years ago this meme was introduced in a famous war between two colas.  Remember New Coke?  It was the result of overemphasizing the very initial response to a product versus longer term customer satisfaction.  We face this kind of challenge all the time with Windows – how do we balance the need for the product to be approachable with the need for the product to perform throughout its lifecycle?  Do you want something that just boots as fast as it can or something that helps you get started?  Of course we can take this to either extreme and you can say we have – we went from c:\ to Microsoft Bob in only a matter of a decade.  Finding the balance between a product that is fresh and clean out of the box and one that continues to perform over time is a continual challenge.  We have ethnographers who gather research that in some cases starts even before the point of purchase and continues for months with periodic visits to learn how initial impressions morph into usage patterns over the entire lifecycle of our products.

     

    Second we’re always looking out for missing the ‘trees for the forest.’ By this I mean finding the appropriate balance between aggregate and individual user data.  A classic argument around PCs has always been that a limited subset of actions comprises a large percentage of the use case.  The resulting argument is that a limited function device would be a simpler and more satisfying experience for a large percentage of customers!  Of course this can be shown to be flawed in both the short term and the long term.  Over the long term this ‘common use case’ has changed from typing & printing to consuming and burning CDs and gaming to browsing, and will continue to evolve.  Even in the short term we have studied the usage of thousands of machines (from users who opt in, of course) and know that while many of the common usage patterns are in fact common, nearly every single machine we’ve ever studied had one or more unique applications in use that other machines didn’t share!  This long tail phenomenon is very important because if we designed for the “general case” we’d end up satisfying nobody.  This tradeoff between choice and complexity is one that benefits directly from a rigorous approach to studying usage of both the collective and the individual and not losing sight of either.
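    That long-tail observation is straightforward to express as a computation over opt-in usage data. Here is a toy sketch (the data shape, function name, and sample values are invented for illustration; the real telemetry pipeline is of course far larger):

```python
from collections import Counter

def machines_with_unique_app(machine_apps):
    """Fraction of machines running at least one application that was
    seen on no other machine in the sample.

    machine_apps: dict mapping a machine id to the set of apps it runs.
    """
    # Count how many machines each app appears on.
    installs = Counter(app for apps in machine_apps.values() for app in set(apps))
    # A machine is in the "long tail" if any of its apps is unique to it.
    tail = [m for m, apps in machine_apps.items()
            if any(installs[a] == 1 for a in apps)]
    return len(tail) / len(machine_apps)

# Hypothetical sample: shared common apps plus the occasional niche app.
sample = {
    "pc1": {"browser", "mail", "cad-tool"},
    "pc2": {"browser", "mail", "genealogy-db"},
    "pc3": {"browser", "mail"},
}
print(machines_with_unique_app(sample))  # 2 of 3 machines carry a unique app
```

    Designing only for the apps that `installs` marks as common would strand every machine in that tail list, which is the point of the paragraph above.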

     

    Third is all about timing.  Timing is everything.  We have an ongoing process for learning in a very dynamic market – one that is directly influenced by what we build.  The goal is to deliver the ultimate in software & hardware experiences to customers – the right products at the right time.  We’ve seen what happens if we wait too long to release software support for a new category (we should have done a better job with an earlier Bluetooth pairing standard experience) and what also happens when we ship software that the rest of the ecosystem isn’t ready for yet.  This problem has the dimension of working to evangelize technologies that we know are coming, track competing standards, watch user scenarios evolve, and try to coordinate our software support at the same time.  To call it a moving target isn’t saying enough!  It does though explain why we’re constantly taking feedback, even after any given version of Windows is done.

     

    These three explicit tradeoffs always make for lively conversation – just look at the comments on this blog to date!  Of course being responsive to these articulated needs is a must in a market as dynamic and challenging as ours.  At the same time we have to make the biggest tradeoff of them all – balancing what you’re asking for today with what we think you’ll be asking for tomorrow.  That’s the challenge of defining unarticulated needs.  All technology industries face this tradeoff whether you call it the need to innovate vs. fix or subscribe to the S curve notion of discontinuities.  Why would two successful auto companies, both listening to the same market input, release the first commercial Hummer and first hybrid Prius in the same year?  It wasn’t that 1998 was that confusing, it was that the short term market demands and the long term market needs weren’t obviously aligned.  Both forces were visible but readily dismissed – the need for increased off road capacity to negotiate the crowded suburban mall parking lots and the impending environmental implosion being predicted on college campuses throughout the world.  We face balancing acts like this all the time.  How do we deliver backwards compatibility and future capability one release at a time?  Will the trend towards 64 bit be driven by application scenarios or by 4GB machines selling at retail?

     

    We have input on key tradeoffs.  We have a position on future trends.  That’s usually enough to get started on the next version of the product, and we stay connected with customers and partners throughout development to keep our planning consistent with our initial direction, but it isn’t enough to know we’re ready to ship.  Really being done has always required some post-engineering feedback phase, whether it’s a Community Technology Preview, a Technology Adoption Program, or a traditional public Beta.  The origin of Beta testing and even the current definition of the term aren’t clear.  Some products now seem to be in Beta forever!  We work to find the best possible timing for sharing the product and gathering final feedback.  If we release it too early it’s usually not in any shape to evaluate, especially with respect to performance, security, compatibility, and other critical fundamentals.  If we release too late we can’t actually take any of the feedback you give us, and I can’t think of a worse recipe for customer satisfaction than to ask for feedback which gets systematically ignored.  I was just looking at another software “feedback” site where a bunch of the comments just asked the company to “please read this site!”  For Windows 7 we’re going to deliver a Beta that is good enough to experience and leaves us enough time to address areas where we need more refinement.  This blog will be an important part of the process because it will provide enough explanation, content, and guidance to help you understand the remaining degrees of freedom and some of the core assumptions that went into each area, and it will structure our dialogue so that we can listen and respond to as much feedback as you’re willing to give.  Some of this will result in bugs that get fixed, some will result in bugs in drivers or applications that we help our partners fix.  
And of course sometimes we’ll just end up with healthy debate – but even in this case we will be talking, we will respond to constructive comments, bugs, and ideas, and we will both be starting that conversation with more context than ever.  So please do keep your comments coming.  Please participate in the Customer Experience Improvement Program.  Give us feedback at WinHEC and PDC and in the newsgroups and forums – we’re listening!  

     

    Thanks,

    - Mike

     

  • Engineering Windows 7

    Boot Performance

    • 173 Comments

    Ed. Note: This is our first post from a senior member of the development team. Allow me to introduce Michael Fortin who is one of Microsoft’s Distinguished Engineers and leads the Fundamentals feature team that is in our Core Operating System group. Michael leads performance and reliability efforts across the Windows platform.  --Steven  (PS: Be sure to visit www.microsoft.com/ie and try out the beta 2 release of Internet Explorer 8).

    For Windows 7, we have a dedicated team focused on startup performance, but in reality the effort extends across the entire Windows division and beyond. Our many hardware and software partners are working closely with us and can rightly be considered an extension to the team.

    Startup can be one of three experiences: boot, resume from sleep, or resume from hibernate. Although resume from sleep is the default, and often takes 2 to 5 seconds on common hardware with standard software loads, this post is primarily about boot, as that experience has been commented on frequently. For Windows 7, a top goal is to significantly increase the number of systems that experience very good boot times. In the lab, a very good system is one that boots in under 15 seconds.

    For a PC to boot fast, a number of tasks need to be performed efficiently and with a high degree of parallelism.

    • Files must be read into memory.
    • System services need to be initialized.
    • Devices need to be identified and started.
    • The user’s credentials need to be authenticated for login.
    • The desktop needs to be constructed and displayed.
    • Startup applications need to be launched.
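The payoff of running these tasks in parallel can be sketched in a few lines: when independent startup work runs concurrently, total time approaches the duration of the slowest task rather than the sum of all of them. The task names and timings below are invented for illustration; Windows’ real boot scheduler is far more sophisticated than a thread pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical startup tasks with illustrative durations in seconds
# (these names and numbers are made up for the sketch).
TASKS = {
    "read_files": 0.05,
    "init_services": 0.08,
    "start_devices": 0.10,
    "build_desktop": 0.04,
}

def run(name, duration):
    time.sleep(duration)  # stand-in for real initialization work
    return name

def boot_serial():
    """Run every task one after another: total ~= sum of durations."""
    start = time.perf_counter()
    for name, d in TASKS.items():
        run(name, d)
    return time.perf_counter() - start

def boot_parallel():
    """Run independent tasks concurrently: total ~= max of durations."""
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda item: run(*item), TASKS.items()))
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"serial={boot_serial():.2f}s parallel={boot_parallel():.2f}s")
```

With these numbers the serial path costs roughly 0.27 seconds while the parallel path costs roughly 0.10 seconds, which is why reducing serialization during boot matters so much.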

    Because systems and configurations differ, boot times can vary significantly. This is verified by many lab results, but can also be seen in independent analysis, such as that conducted by Ed Bott. Sample data from Ed’s population of systems found that only 35% of boots took less than 30 seconds to give control to the user. Though Ed’s data is from a small population, it is nicely in line with what we’re observing. Windows Vista SP1 data (below) also indicates that roughly 35% of systems boot in 30 seconds or less, and 75% of systems boot in 50 seconds or less. The Vista SP1 data is real-world telemetry data. It comes to us from the very large number of systems (millions) where users have chosen to send anonymous data to Microsoft via the Customer Experience Improvement Program.

    [Figure: histogram of boot times for Vista SP1 as reported through the Microsoft Customer Experience Improvement Program; the paragraph above summarizes the data.]

    From our perspective, too few systems consistently boot fast enough and we have to do much better. Obviously the systems that take longer than 60 seconds to boot have something we need to dramatically improve—whether these are device, networking, or software issues. As you can see there are some systems experiencing very long boot times. One of the things we see in the PC space is this variability of performance—variability arises from the variety of choices, and also the variety of quality of components of any given PC experience. There are also some system maintenance tasks that can contribute to long boot times. If a user opts to install a large software update, the actual updating of the system may occur during the next boot. Our metrics will capture these and unfortunately they can take minutes to complete. Regardless of the cause, a big part of the work we need to do as members of the PC ecosystem is to address long boot times.

    In both Ed’s sample and our telemetry data, boot time is meant to reflect when a machine is ready and responsive for the user. It includes logging in to the system and getting to a usable desktop. It is not a perfect metric, but one that does capture the vast majority of issues. On Windows 7 and Vista systems, the metric is captured automatically and stored in the system event log. Ed’s article covers this in depth.

    We realize there are other perceptions that users deem as reflecting boot time, such as when the disk stops, when their apps are fully responsive, or when the start menu and desktop can be used. Also, “Post Boot” time (when applications in the Startup group run and some delayed services execute), the period before Windows boot is initiated, and BIOS time can be significant. In our efforts, we’ve not lost sight of what users consider representative of boot.

    Before discussing some of our Windows 7 efforts, we’d like to point out there is considerable engagement with our partners underway. In scanning dozens of systems, we’ve found plenty of opportunity for improvement and have made changes. Illustrating that, please consider the following data taken from a real system. As it arrived, the off-the-shelf configuration had a ~45 second boot time. Performing a clean install of Vista SP1 on the same system produced a consistent ~23 second boot time. Of course, being a clean install, there were far fewer processes and services, and a slightly different set of drivers (mostly the versions were different). However, we were able to take the off-the-shelf configuration and optimize it to produce a consistent boot time of ~21 seconds, ~2 seconds faster than the clean install, because some driver/BIOS changes could be made in the optimized configuration.

    For this same system, it is worth noting the resume from sleep time is approximately 2 seconds, achieving a nearly instant on experience. We do encourage users to choose sleep as an alternative to boot.

    As an example Windows 7 effort, we are working very hard on system services. We aim to dramatically reduce them in number, as well as reduce their CPU, disk and memory demands. Our perspective on this is simple: if a service is not absolutely required, it shouldn’t be starting, and a trigger should exist to handle rare conditions so that the service operates only then.

    Of course, services exist to complete user experiences, even rare ones. Consider the case where a new keyboard, mouse, or tablet is added to the system while it is off. If this new hardware isn’t detected and drivers aren’t installed during startup to make it work, then the user may not be able to enter their credentials and log into the machine. For a given user, this may be a very rare or never-encountered event. For a population of hundreds of millions of users, this can happen frequently enough to warrant having mechanisms to support it. In Windows 7, we will support this scenario and many others with fewer auto-start services because more comprehensive service trigger mechanisms have been created.
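The idea of trigger-started services can be sketched abstractly: services register either for start-at-boot or against a trigger event, and triggered services consume no boot time until their condition actually fires. This is a toy model with invented names, not the Service Control Manager’s real design.

```python
# Minimal sketch of trigger-started services, loosely inspired by the
# scenario described above. All names here are hypothetical.

class ServiceManager:
    def __init__(self):
        self.autostart = []    # services started at every boot
        self.triggered = {}    # trigger event -> services waiting on it
        self.running = set()

    def register(self, name, trigger=None):
        """Register a service to auto-start, or to start on a trigger."""
        if trigger is None:
            self.autostart.append(name)
        else:
            self.triggered.setdefault(trigger, []).append(name)

    def boot(self):
        # Only auto-start services add to boot time; triggered ones wait.
        for name in self.autostart:
            self.running.add(name)

    def fire(self, trigger):
        # A rare condition occurred: start only the services that need it.
        for name in self.triggered.get(trigger, []):
            self.running.add(name)

mgr = ServiceManager()
mgr.register("event_log")                                # always needed
mgr.register("input_install", trigger="device_arrival")  # rare condition
mgr.boot()
assert "input_install" not in mgr.running  # not started at boot
mgr.fire("device_arrival")                 # new keyboard plugged in
assert "input_install" in mgr.running      # started only on demand
```

The point of the sketch is the accounting: the fewer services in the auto-start set, the less work the boot path has to do, while rare scenarios still get handled when their trigger fires.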

    As noted above, device and driver initialization can be a significant contributor as well. In Windows 7, we’ve focused very hard on increasing parallelism of driver initialization. This increased parallelism decreases the likelihood that a few slower devices/drivers will impact the overall boot time.

    In terms of reading files from the disk, Windows 7 has improvements in the “prefetching” logic and mechanisms. Prefetching was introduced way back in Windows XP. Since today’s disks have differing performance characteristics, the scheduling logic has undergone some changes to keep pace and stay efficient. As an example, we are evaluating the prefetcher on today’s solid state storage devices, going so far as to question whether it is required at all. Ultimately, analysis and performance metrics captured on an individual system will dynamically determine the extent to which we utilize the prefetcher.
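The kind of per-device decision described above can be illustrated with a simple heuristic: prefetching pays off when random reads are much more expensive than sequential reads (as on rotating disks with seek penalties), and pays off little when they cost about the same (as on solid state devices). The 3x threshold and the function itself are invented for illustration, not Windows’ actual logic.

```python
def should_prefetch(random_read_ms, sequential_read_ms, threshold=3.0):
    """Enable prefetching only when random reads are much slower than
    sequential reads. The threshold is a made-up illustration."""
    return random_read_ms / sequential_read_ms >= threshold

# Rotating disk: random reads are dominated by seek time.
assert should_prefetch(random_read_ms=12.0, sequential_read_ms=1.0)

# SSD-like device: random and sequential reads cost about the same,
# so prefetching buys little.
assert not should_prefetch(random_read_ms=0.2, sequential_read_ms=0.15)
```

In practice, measured latencies like these would come from the performance metrics the post mentions being captured on each individual system.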

    There are improved diagnostic experiences in Windows 7 as well. We aim to quickly identify specific issues on individual systems, and provide help to assist in resolving the issues. We believe this is an appropriate way to inform users about some problems, such as having too many startup applications or the presence of lengthy domain-oriented logon scripts. As many users know, having too many startup applications is often the cause of long boot times. Few users, however, are familiar with the implications of having problematic boot or logon scripts. In Windows XP, Vista and in Windows 7, the default behavior for Windows is to log the user into the desktop without waiting for potentially lengthy networking initialization or scripts to run. In corporate environments, however, it is possible for IT organizations to change the default and configure client systems to contact servers across the network. Unfortunately, when configuring clients to run scripts, domain administrators typically do so in a synchronous and blocking fashion. As a result, boot and logon can take minutes if networking timeouts or server authentication issues exist. Additionally, those scripts can run very expensive programs that consume CPU, disk and memory resources.
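The difference between synchronous and asynchronous logon script handling comes down to whether the desktop waits on the scripts. A minimal sketch using threads (an analogy, not how Windows implements logon):

```python
import threading
import time

def run_logon_scripts(events, delay=0.05):
    time.sleep(delay)  # stand-in for network timeouts, server auth, etc.
    events.append("scripts_done")

def logon(events, synchronous):
    """Toy logon: either block on scripts or run them in the background."""
    if synchronous:
        run_logon_scripts(events)   # desktop waits on the network
        events.append("desktop_ready")
        return None
    t = threading.Thread(target=run_logon_scripts, args=(events,))
    t.start()                       # scripts continue in the background
    events.append("desktop_ready")  # user gets the desktop immediately
    return t

# Synchronous (blocking) configuration: scripts finish first.
sync_events = []
logon(sync_events, synchronous=True)
assert sync_events == ["scripts_done", "desktop_ready"]

# Default (asynchronous) behavior: desktop first, scripts later.
async_events = []
t = logon(async_events, synchronous=False)
assert async_events[0] == "desktop_ready"
t.join()
assert "scripts_done" in async_events
```

In the synchronous configuration every network delay in the scripts is added directly to perceived logon time, which is exactly the multi-minute effect the post describes.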

    In addition to working on Windows 7 specific features and services, we are sharing tools, tests and data with our partners. The tools are available to enthusiasts as well. The tools we use internally to detect and correct boot issues are freely available today here as a part of the Windows Performance Toolkit. While not appropriate for most users, the tools are proving to be very helpful for some.

    One of the topics we want to talk about in the future, which we know has been written about a great deal and is the subject of many comments, is the role that additional software beyond the initial Windows installation plays in overall system performance. The sheer breadth and depth of software for Windows means that some software will not have the high quality one would hope for, while the vast majority is quite good. Microsoft must continue to provide the tools for developers to write high performance software and the tools for end-users to identify the software on their system that might contribute to performance that isn’t meeting expectations. Windows itself must also continue to improve the defensive tactics it uses to isolate and inform the end-user about software that might contribute to poor performance.

    Another potential future topic pertains to configuration changes a user can make on their own system. Many recommended changes aren’t helpful at all. For instance, we’ve found the vast majority of “registry tweak” recommendations to be bogus. Here’s one of my favorites. If you perform a Live search for “Enable Superfetch on XP”, you’ll get a large set of results. I can assure you, on Windows XP there is no Superfetch functionality and no value in setting the registry key noted on these sites. As with that myth, there are many recommendations pertaining to CPU scheduling, memory management and other configuration changes that aren’t helpful to system performance.

    Startup is just one performance topic. As described in the previous post, we want to continue the discussion around this topic. What are some of the elements you’d like to discuss more?

    Michael Fortin

  • Engineering Windows 7

    Windows 7 -- Approach to System Performance

    • 113 Comments

    Many folks have commented and written email about the topic of performance of Windows. The dialog has been wide ranging—folks consistently want performance to improve (of course). As with many topics we will discuss, performance, as absolute and measurable as it might seem, also has a lot of subtlety. There are many elements and many tradeoffs involved in achieving performance that meets everyone’s expectations. We know that even when we meet expectations, folks will want even more out of their Windows PCs (and that’s expected). We’ve re-dedicated ourselves to work in this area in Windows 7 (and IE 8). This is a major initiative across each of our feature teams as well as the primary mission of one of our feature teams (Fundamentals). For this post, I just wanted to frame the discussion as we dig into the topic of performance in subsequent posts.  Folks might find this post on IE8 performance relevant along with the beta 2 release of IE 8. 

    Performance is made up of many different elements. We could be talking about response time to a specific request. It might mean how much RAM is “typical” or what CPU customers need. We could be talking about the clock time to launch a program. It could mean boot or standby/resume. It could mean watching CPU activity or disk I/O activity (or lack of disk activity). It could mean battery life. It might even mean something as mundane as typical disk footprint after installation. All of these are measures of performance. All of these are systematically tracked during the course of development. We track performance by running a known set of scenarios (there are thousands of these) and developers can run specific scenarios based on exercising more depth or breadth. The following represent some (this is just a partial list) of the metrics we track while developing Windows 7:

    • Memory usage – How much memory a given scenario allocates during a run. As you know, there is a classic tradeoff in time v. space in computer science and we’re not exempt. We see this tradeoff quite a bit in caches where you can use more memory (or disk space) in order to improve performance or to avoid re-computing something.
    • CPU utilization – Clearly, modern microprocessors offer enormous processing power and with the advent of multiple cores we see the opportunity for more parallelism than ever before. Of course these resources are not free so we measure the CPU utilization across benchmark runs as well. In general, the goal should be to keep the CPU utilization low as that improves multi-user scenarios as well as reduces power consumption.
    • Disk I/O – While hard drives have improved substantially in performance we still must do everything we can to minimize the amount that Windows itself does in terms of reading and writing to disk (including paging of course). This is an area receiving special attention for Windows 7 with the advent of solid state storage devices that have dramatically different “characteristics”.
    • Boot, Shutdown, Standby/Resume – All of these are the source of a great deal of focus for Windows 7. We recognize these can never be fast enough. For these topics the collaboration with the PC manufacturers and hardware makers plays a vital role in making sure that the times we see in a lab (or the performance you might see in a “clean install”) are reflected when you buy a new PC.
    • Base system – We do a great deal to measure and tune the base system. By this we mean the resource utilization of the base system before additional software is loaded. This system forms the “platform” that defines what all developers can count on and defines the system requirements for a reasonable experience. A common request here is to kick something out of the base system and then use it “on demand”. This tradeoff is one we work on quite a bit, but we want to be careful to avoid the situation where the vast majority of customers face the “on demand” loading of something which might reduce perceived performance of common scenarios.
    • Disk footprint – While not directly related to runtime performance, many folks see the footprint of the OS as indicative of the perceived performance. We have some specific goals around this metric and will dive into the details soon as well. We’ll also take some time to explain \Windows\WinSxS as it is often the subject of much discussion on technet and msdn! Here rather than runtime tradeoffs we see convenience tradeoffs for things like on disk device drivers, assistance content, optional Windows components, as well as diagnostics and logging information.
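Several of the metrics above can be captured with a small harness pattern: run a scenario, record wall-clock time and peak memory, and compare runs against a baseline. This sketch uses Python’s standard `tracemalloc` purely as an illustration of the idea; the workload and function names are invented, and a real lab harness would also track CPU utilization, disk I/O, and more.

```python
import time
import tracemalloc

def measure(scenario, *args):
    """Run one scenario and capture two of the metric kinds discussed
    above: wall-clock time and peak memory allocated during the run."""
    tracemalloc.start()
    start = time.perf_counter()
    scenario(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) bytes
    tracemalloc.stop()
    return {"elapsed_s": elapsed, "peak_bytes": peak}

def sample_scenario():
    # Invented workload standing in for "launch a program", etc.
    data = [str(i) * 10 for i in range(50_000)]
    return sorted(data)

metrics = measure(sample_scenario)
print(metrics)
```

Running the same scenarios on every build is what makes the time-versus-space tradeoffs mentioned above visible: a cache that speeds a scenario up shows as a lower `elapsed_s` and a higher `peak_bytes`.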

    We have criteria that we apply at the end of our milestones and before we go to beta and we won’t ship without broadly meeting these criteria. Sometimes these criteria are micro-benchmarks (page faults, processor utilization, working set, gamer frame rates) and other times they are more scenario based and measure time to complete a task (clock time, mouse clicks). We do these measurements on a variety of hardware platforms (32-bit or 64-bit; 1, 2, 4GB of RAM; 5400 to 7200 RPM or solid-state disks; a variety of processors, etc.). Because of the inherent tradeoffs in some architectural approaches, we often introduce conditional code that depends on the type of hardware on which Windows is running.

    On the one hand, performance should be straightforward—use less, do less, have less. As long as you have less of everything performance should improve. At the extreme that is certainly the case. But as we have seen from the comments, one person’s must-have is another person’s must-not-have. We see this a lot with what some have called “eye candy”—we get many requests to make the base user interface “more fun” with animations and graphics (“like those found on competing products”) while at the same time some say “get rid of graphics and go back to Windows 2000”. Windows is enormously flexible and provides many ways to tune the experience. We heard lots on this forum about providing specific versions of Windows customized for different audiences, while we also heard quite a bit about the need to reduce the number of versions of Windows. However, there are limits to what we can provide and at the same time provide a reliable “platform” that customers and developers can count on and is robust and manageable for a broad set of customers. But of course within a known context (within your home or within a business running a known set of software) it will always be possible to take advantage of the customization and management tools Windows has to offer to tune the experience. The ability to have choice and control what goes on in your PC is of paramount importance to us and you will see us continue to focus on these attributes with Windows 7.

    By far the biggest challenge in delivering a great PC experience relative to performance is that customers keep using their PCs to do more and more things and rightfully expect to do these things on the PC they own by just adding more and more software. While it is definitely the case that Windows itself adds functionality, we work hard to pick features that we believe benefit the broadest set of customers. At the same time, a big part of Windows 7 will be to continue to support choice and control over what takes place in Windows with respect to the software that is provided, what the default handlers are for file types and protocols, and providing a platform that makes it easy for end-users to personalize their computing experience.

    Finally, it is worth considering real world versus idealized settings. In order to develop Windows we run our benchmarks in a lab setting that allows us to track specifically the code we add and the impact that has. We also work closely with the PC Manufacturers and assist them in benchmarking their systems as they leave the factory. And for true real-world performance, the Microsoft Customer Experience Improvement Program provides us (anonymous, private, opt-in) data on how machines are really doing. We will refer to this data quite a bit over the next months as it forms a basis for us to talk about how things are really working, rather than using anecdotes or less reliable forms of information.

    In our next post we will look at startup and boot performance, and given the interest we will certainly have more to say about the topic of performance.

    --Steven

  • Engineering Windows 7

    Measuring the scale of a release

    • 108 Comments

    Thanks for all the feedback that we have been getting. That much of it is positive is certainly appreciated. I’ve been answering mails as best I can and along with members of the team we’ve been having the discussion in the comments. Everyone has done a great job sharing their views on specifics, wishes, and requests. I love getting these mails and reading the comments. It is fantastic. I just want to make sure folks know I can’t answer each one! What we are going to do is look to the emails and comments as a way of suggesting posts we should write.  The team overall appreciates the warm reception from all those that have joined us--we know we have lots of energetic discussions ahead of us and we're genuinely happy to start.

    With this post, I am hoping to continue the dialog on the way we think “inside the Win7 team” so to speak—in a sense this is about expanding the team a bit and bringing you into some more of the discussions we have about planning a release. This conversation about major or minor releases is very much like the one I have with my boss as we start planning :-)

    When we started planning the release, the first thing some might think we have to decide is if Windows 7 (client) would be a “major release” or not. I put that in quotes because it turns out this isn’t really something you decide nor is it something with a single answer. The magnitude of a release is as much about your perspective on the features as it is about the features themselves. One could even ask if being declared a major release is a compliment or not. As engineers planning a product we decide up front the percentage of our development team that will work on the release and the extent of our schedule—with the result in hand customers each decide for themselves if the release is “major”, though of course we like to have an opinion. On the server blog we talked about the schedule and we shared our opinion of the scale of the releases of Windows 7 client and server.

    Our goal is about building an awesome release of Windows 7.

    Across all customers, there is always a view that a major release is one that has features that are really the ones for me. A minor release is one that doesn’t have anything for me. It should then be pretty easy to plan a major release—just make sure it has exactly the right features for everyone (and given the focus on performance, it can’t have any extra features, even if other people want them)! As engineers we all know such a design process is really impossible, especially because more often than not any two customers can be found to want exactly opposite features. In fact as I type this I received sequential emails one saying “[N]obody cares about touch screen nonsense” and the other saying “[Win7 needs] more advanced/robust ‘touch’ features”. When you just get unstructured and unsolicited input you see these opposites quite a bit. I’m sure folks are noticing this on the blog comments as well.

    Let’s explore the spectrum of release magnitude across a couple of (but not all) different types of customers: end-users, developers, partners, IT professionals, and influentials.

    End-users are generally the most straight-forward in terms of deciding how big a release is going to be. For an end-user a release is a big deal if they want to go out and buy an upgrade or buy a new PC. We could call that a major release. Seems simple enough and a major release is good for everyone. On the other hand, one could also imagine that a release is really cool and people want to buy it, but they also want to use their existing PC and the release requires more memory, updated drivers that might not be available, or maybe some specific hardware to be fully realized. Then it seems that a major release goes from a positive to a bit of an undertaking and thus loses some of its luster. Of course we all know that what folks really want is all the things they want that runs on the hardware they want—then that is a great product to get (whether it is major or not).

    Developers look at a release through a different lens. Obviously for developers a release is a major one if there are new APIs and capabilities to take advantage of in their software—again straight-forward enough. It could also be the case that a previous release had a lot of new APIs and folks are just getting familiar with using them and so what they really want is to round out the APIs and maybe improve performance. So one might suspect that the first release is a major release and the second type is a minor release. But if you look at the history of software products, it is often these “minor” releases that themselves become the major releases – Windows 3.1, Office 4.2, or even Windows XP SP2. In each of these cases, the target for developers became the “minor” release but in the eyes of the market that was the “major” release. The reason developers want to use new APIs is to differentiate their products or focus their energies on domain expertise they bring to the table, not just call new APIs for the sake of calling them. In that sense, a release might be a major one if it just happens to free up enough time for an ISV that they bet on the new APIs because they can focus on some things that are a major deal to them.

    Partners represent the broad set of folks who create PCs, hardware, and the infrastructure we think of as the ecosystem that Windows is part of. Partners tend to think about a major release in terms of the opportunity it creates and thus a major release might be one with a lot of change and thus it affords the opportunity to provide new hardware and infrastructure to customers. On the other hand, incompatibilities with the past might be viewed in a less than positive light if it means a partner needs to stop moving forward and revisit past work to bring it up to the required compatibility with a new release of Windows. If they choose, for any number of reasons, not to do that work then the release might be viewed as a minor one because of the lack of ecosystem support. So again we see that a big change can be viewed through the lens of a major or a minor release.

    IT professionals are often characterized as conservative by nature and thus take a conservative view of change. Due to the business focused nature of the role, the evaluation of any software product is going to take place in the context of a return on investment. So for an IT professional a major release would be one that delivers significant business value. This business value could come from major investments in deployment and management of the software, for example. Yet for end-users or developers, these very same features might not even be interesting let alone worthy of being a major or minor release.

    Influentials are all the folks who are in the business of providing advice, analysis, and viewpoints on the software we make. These folks often look at releases through the metric of “change”. Big changes equal major release. A big change can be a “re-architecture” as we saw in the transition from Windows 9x to Windows 2000—even though these products looked the same there was tons of change to talk about under the hood. So for reviewers and analysts it was definitely a major release. Big changes can also be big changes in the user-interface because that drives lots of discussion and it is easy to show all the change. Yet for each of these, this definition of major can also be viewed as a less than positive attribute. Re-architecture means potential incompatibilities. New user-interface can mean learning and moving from the familiar.

    We’ve seen a lot of comments and I have gotten a lot of email talking about re-architecting Windows as a symbol of a major release. We’ve also gotten a lot of feedback about how a major release is one that breaks with supporting the past. If I could generalize, folks are usually implying that if we do things like that then a number of other major benefits will follow—re-architecting leads to better performance, breaking with the past leads to using less memory. It is always tricky to debate those points because we are comparing a known state to a state where we fix all the things we know to fix, but we don’t yet know what we might introduce, break, or otherwise not fix. So rather than define a major release relative to the implementation, I think it makes sense to define the success of the release relative to the benefits of whatever implementation is chosen.  We will definitely continue to pick up on this part of the discussion--there's a lot of dialog to have.

    The key is always a balance. We can have big changes for all customers if we prepare all the necessary folks to work through the change. Small changes can have a big impact if they are the right changes at the right time, and those will get recorded over time as a major release.

    We’ve talked about the timing and the way we structure the team, so you have a sense for the “inputs” into the project. If we listened well and focused our efforts correctly, then each type of customer will find things that make the product worthwhile. And if we do our job at effectively communicating the product, then even the things that could be “problems” are seen in the broader context of an ecosystem where everyone collectively benefits when a few people benefit significantly.

    From our perspective, we dedicated our full engineering team and a significant schedule to building the Windows 7 client OS. That makes it a major undertaking by any definition. We intend for Windows 7 to be an awesome release.

    I hope this helped show that perspective is everything when it comes to deciding how big a release is for each type of customer.

    --Steven

  • Engineering Windows 7

    The Windows 7 Team

    • 165 Comments

    Thanks to everyone who provided comments and sent me mail. I definitely appreciate the discussion we have kicked off. There’s also a ton of energy in our hallways since this blog started. A good way to start off seems to be an introduction to the Windows development team. This post provides an overview of the team that is represented by this blog.

    Before diving into the main topic, let’s talk a bit more about what to expect from this blog. First a few words on the comments and emails I’ve received. I’ve received a ton—most of the weekend was spent reading emails and comments. There are definitely some themes. I would say by and large the reception has been very warm and we definitely appreciate that. The most frequent request was to discuss Windows performance and/or just “make Windows faster”. There’s a lot to this topic so we expect to talk about this quite a bit over the next months. There are many specific requests—often representing all possible sides of an issue such as some folks saying “please get rid of (or don’t do) <x>” and then other folks saying “whatever you do it is really important to keep (or do) <x>”. A big part of this blog for me personally is having the discussion about the multiple facets of any given issue. Even something that sounds as binary as performance proves to have many subtle elements. For example, some folks suggested that the best thing for boot performance is to not start anything until idle time and others suggested that the delay loading feels like it slows them down and still others have suggested that the best approach is to provide a startup manager that pushes everyone to choose what to start up. All of these have merit worth discussing and also demonstrate the subtlety and complexity of even the most straightforward request.

    Second, much to the surprise of both Jon and me, a number of folks questioned the “authenticity” of the post. A few even suggested that the posts are being “ghost written” or that this blog is some sort of ploy. I am typing this directly in Windows Live Writer and hitting publish. This blog is the real deal—typos, mistakes, and all. There’s no intermediary or vetting of the posts. We have folks on the team who will be contributing, but we’re not having any posts written by anyone other than who signs it. We will use one user name for all the posts since that keeps the blog security and ownership clear, but posts will be signed by the person that hit publish. (If I participate in the comments I will use my msdn name, steven_sinofsky.)

    And third, what frequency should folks expect and when do we get to the “features of Windows 7”. When we wrote that we would post “regularly” we meant that we don’t have a schedule or calendar of posts and we don’t want to commit to an artificial frequency which generally seems inconsistent with blogging. We do expect to follow a pattern similar to what you have become familiar with on the IEBlog. FWIW, on my internal blog no one has yet accused me of not contributing enough. :-)

    As we said in the introductory post we think it will be good to talk about the engineering of Windows 7 (the “how”) and the first step is establishing who the engineers are that do the engineering before we dive into the product itself (the “why” and “what”).

    So let’s meet the team...

    It is pretty easy to think of the Windows team as one group or one entity, and then occasionally one specific person comes to represent the team—perhaps she gave a talk at a conference, wrote a book or article folks become familiar with, or maybe he has a blog. Within Microsoft, the Windows product is really a product of the whole company with people across all the development groups contributing in some form or another. The Windows engineering team “proper” is jointly managed by Jon and me. Jon manages the core operating system, which is, among many things, the kernel, device infrastructure, networking, and the engineering tools and system (all of which are both client and server). I am part of the Windows client experience team which develops, among many things, the shell and desktop, graphics, and media support. One other significant part of the Windows product is the Windows Media Center which is a key contribution managed along with all of Microsoft’s TV support (IPTV, extenders, etc.).

    There’s a lot to building an org structure for a large team, but the most important part is planning the work of the team. This planning is integral to realizing our goal of improving the overall consistency and “togetherness” for Windows 7. So rather than think of one big org, or two teams, we say that the Windows 7 engineering team is made up of about 25 different feature teams.

    A feature team represents those that own a specific part of Windows 7—the code, features, quality, and overall development. The feature teams represent the locus of work and coordination across the team. This also provides a much more manageable size—feature teams fit in meeting spaces, can go to movies, and so on. There are two parts to a feature team: what the team works on and who makes up a team.

    Windows 7’s feature teams sound a lot like parts of Windows with which you are familiar. Because of the platform elements of Windows we have many teams that have remained fairly constant over several releases, whereas some teams are brand new or represent relatively new areas composed of some new code and the code that formed the basis of the team. Some teams do lots of work for Server (such as the VM work) and some might have big deliverables outside of Windows 7 (such as Internet Explorer).

    In general a feature team encompasses ownership of a combination of architectural components and scenarios across Windows. “Feature” is always a tricky word since some folks think of a feature as one element in the user interface and others think of a feature as a traditional architectural component (say TCP/IP). Our approach is to balance across scenarios and architecture such that we have the right level of end-to-end coverage and the right parts of the architecture. One thing we do try to avoid is separating the “plumbing” from the “user interface” so that teams do have end-to-end ownership of work (as an example of that, “Find and Organize” builds both the indexer and the user interface for search). Some of the main feature teams for Windows 7 include (alphabetically):

    • Applets and Gadgets
    • Assistance and Support Technologies
    • Core User Experience
    • Customer Engineering and Telemetry
    • Deployment and Component Platform
    • Desktop Graphics
    • Devices and Media
    • Devices and Storage
    • Documents and Printing
    • Engineering System and Tools
    • File System
    • Find and Organize
    • Fundamentals
    • Internet Explorer (including IE 8 down-level)
    • International
    • Kernel & VM
    • Media Center
    • Networking - Core
    • Networking - Enterprise
    • Networking - Wireless
    • Security
    • User Interface Platform
    • Windows App Platform

    I think most of these names are intuitive enough for the purposes of this post—as we post more the members of the team will identify which feature team they are on. This gives you an idea of the subsystems of Windows and how we break down a significant project into meaningful teams. Of course throughout the project we are coordinating and building features across teams. This is a matter of practice because you often want to engineer the code in one set of layers for efficiency and performance (say bottom up), but end-users might experience it across layers, and IT pros might want to manage a desktop from the top-down. I admit sometimes this is a little bit too much of an insider view as you can’t see where some interesting things “live”. For example, the tablet and inking functionality is in our User Interface Platform team along with speech recognition, multi-touch and accessibility. The reason for this is the architectural need to share the infrastructure for all mechanisms of “input” even if any one person might not cross all those layers. This way when we design the APIs for managing input, developers will see the benefits of all the modes of user interaction through one set of APIs.

    The other aspect of our feature teams is the exact composition. A feature team represents three core engineering disciplines of software development engineers (sde or dev), software development engineers in test (sdet or test, sorry but I haven’t written a job description externally), and program managers (pm). Having all three of these engineering disciplines is a unique aspect of Microsoft that has even caught the attention of some researchers. In my old blog I described dev and pm which I linked to above (I still owe a similar post on SDET!).

    The shortest version of these roles is: dev is responsible for the architecture and code, pm is responsible for the feature set and specification, and test is responsible for validation and is the ultimate advocate for the customer experience. Everyone is responsible for quality and performance, each bringing their perspective to the work. For any given feature, dev, test, and pm work as a team of peers (both literally and conceptually). This is a key “balance of power” in terms of how we work and makes sure that we take a balanced approach to developing Windows 7. Organizationally, we are structured such that devs work for devs, sdets work for sdets, and pms work for pms. That is, we are organized by these “engineering functions”. This allows for two optimizations—the focus on expertise in both domain and discipline, and also the ability to make sure we are not building the product in silos, but are focused on the product as a whole.

    We talk about these three disciplines together because we create feature teams with n developers, n testers, and 1/2n program managers. This ratio is pretty constant across the team. On average a feature team is about 40 developers across the Windows 7 project.

    We also have core members of our engineering team that work across the entire product:

    • Content Development – the writers and editors that create the online assistance, web site, SDK documents, and deployment documents.
    • Product Planning – responsible for the customer research and learning that informs the selection of features. Product Planning also coordinates the work we do with partners across the ecosystem in terms of partnering through the design and development of the release.
    • Product Design – develops the overall interaction model, graphical language, and design language for Windows 7
    • Research and Usability – creates field and lab studies that show how existing products and proposed features perform with customers

    Some have said that the Windows team is just too big and that it has reached a size that causes engineering problems. At the same time, I might point out that just looking at the comments there is a pretty significant demand for a broad set of features and changes to Windows. It takes a set of people to build Windows and it is a big project. The way that I look at this is that our job is to have the Windows team be the right size—that sounds cliché, but what I mean is that the team is neither too large nor too small, but is effectively managed so that the work of the team reflects the size of the team and you see the project as having the benefits we articulate. I’m reminded of a scene from Amadeus where the Emperor suggests that the Marriage of Figaro contains “too many notes,” to which Mozart proclaims “there are just as many notes, Majesty, as are required, neither more nor less.” Upon the Emperor suggesting that Mozart remove a few notes, Mozart simply asks “which few did you have in mind?” Of course the people on the team represent the way we get feature requests implemented and develop end-to-end scenarios, so the challenge is to have the right team and the right structure to maximize the ability to get those done—neither too many nor too few.

    I promised myself no post would be longer than 4 pages and I am getting close. The comments are great and are helping us to shape future posts. I hope this post starts to develop some additional shared context.

    --Steven

  • Engineering Windows 7

    Welcome to Engineering Windows 7

    • 347 Comments

    Welcome to our first post on a new blog from Microsoft—the Engineering Windows 7 blog, or E7 for short. E7 is hosted by the two senior engineering managers for the Windows 7 product, Jon DeVaan and Steven Sinofsky. Jon and Steven, along with members of the engineering team will post, comment, and participate in this blog.

    Beginning with this post, together we are going to start looking forward to the “Windows 7” project. We know there are tons of questions about the specifics of the project and a strong desire to know what’s in store for the next major release of Windows. Believe us, we are just as excited to start talking about the release. Over the past 18 months since Windows Vista’s broad availability, the team has been hard at work creating the next Windows product.

    The audience of enthusiasts, bloggers, and those who are most passionate about Windows represents the folks we are dedicating this blog to. With this blog we’re opening up a two-way discussion about how we are making Windows 7. Windows has all the challenges of every large-scale software project—picking features, designing them, developing them, and delivering them with high quality. Windows has the added challenge of doing so for an extraordinarily diverse set of customers. As a team and as individuals on the team we continue to be humbled by this responsibility.

    We strongly believe that success for Windows 7 includes an open and honest, and two-way, discussion about how we balance all of these interests and deliver software on the scale of Windows. We promise and will deliver such a dialog with this blog.

    Planning a product like Windows involves systematic learning from customers of all types. In terms of planning the release we’ve been working with a wide variety of customers and partners (PC makers, hardware developers, enterprise customers, developers, and more) since the start of the project. We also continue our broad consumer learning through telemetry (Customer Experience Improvement Program), usability studies, and more. One area this blog will soon explore is all the different ways we learn from customers and the marketplace that inform the release.

    We have two significant events for developers and the overall ecosystem around Windows this fall. The Professional Developers Conference (PDC) on October 27 and the Windows Hardware Engineering Conference (WinHEC) the following week both represent the first venues where we will provide in-depth technical information about Windows 7. This blog will provide context over the next 2+ months with regular posts about the behind the scenes development of the release and continue through the release of the product.

    In leading up to this blog we have seen a lot of discussion in blogs about what Microsoft might be trying to accomplish by maintaining a little bit more control over the communication around Windows 7 (some might say that this is a significant understatement). We, as a team, definitely learned some lessons about “disclosure” and how we can all too easily get ahead of ourselves in talking about features before our understanding of them is solid. Our intent with Windows 7 and the pre-release communication is to make sure that we have a reasonable degree of confidence in what we talk about when we do talk. Again, top of mind for us is the responsibility we feel to make sure we are not stressing priorities, churning resource allocations, or causing strategic confusion among the tens of thousands of partners and customers who care deeply and have much invested in the evolution of Windows.

    Related to disclosure is the idea of how we make sure not to set expectations around the release that end up disappointing you—features that don’t make it, claims that don’t stick, or support we don’t provide. Starting from the first days of developing Windows 7, we have committed as a team to “promise and deliver”. That’s our goal—share with you what we’re going to get done, why we’re doing it, and deliver it with high quality and on time.

    We’re excited about this blog. As active bloggers on Microsoft’s intranet we are both looking forward to turning our attention and blogging energies towards the community outside Microsoft. We know the ins and outs of blogging and expect to have fun, provide great information, and also make a few mistakes. We know we’ll misspeak or what we say will be heard differently than we intended. We’re not worried. All we ask is that we have a dialog based on mutual respect and the shared goal of making a great release of Windows 7.

    Our intent is to post “regularly”. We’ll watch the comments and we will definitely participate, both in comments and potentially in follow-up posts as required. We will also make sure that members of the Windows 7 development team represent themselves as such. While we want to keep the dialog out in the open, please feel free to email steven.sinofsky@microsoft.com should you wish to. In particular, email is a good way to suggest topics we might have a chance to discuss on the blog.

    With that, we conclude our welcome post and ask you to stay tuned and join us in this dialog about the engineering of Windows 7.

    Steven and Jon

    Please note the availability of this blog in several other languages via the links on the nav pane. These posts are also created by members of our development team and we welcome dialog on these sites as well. We will continue to expand the list in other languages based on feedback.


  • Engineering Windows 7

    Guidelines on Comments

    • 103 Comments

    As the community participates in the E7 blog, we want to offer some guidelines on how we are going to handle comments in general.  Our primary goal is for this to be a place for open discussion about Windows engineering, so we don’t want to have lots of overhead and process.

    We love comments.  We know everyone on the Windows team will be watching for comments and is looking forward to the dialog.  We will work to make sure that Microsoft employees represent themselves as such, especially if they work on Windows.

    Things we want to see in comments:

    • Lots of good, interesting responses on Windows and the posts on the E7 blog
    • Keep it on topic
    • Keep it respectful
    • Keep it fun

    Things that will get comments edited/deleted:

    • Offensive or abusive language or behavior
    • Misrepresentation (i.e., claiming to be somebody you're not) - if you don't want to use your real name, that's fine, as long as your "handle" isn't offensive, abusive, or misrepresentative
    • Blog-spam of any kind

    We hope these rules will keep the discussion lively and on topic. 

    Steven and Jon
