Notes on comments.
Welcome to our blog dedicated to the engineering of Microsoft Windows 7
Larry is very appreciative of the reception and comments his post received. Thank you! It is worth noting that we’ve surpassed 2,000 comments and I’ve received an equal amount of email. I am trying to reply as often as I can!
We’re 10 days from the PDC and so we might take a short break from the blog while we practice our demos of Windows 7…we’ll keep an eye on comments for sure and maybe a post or two on the way. :-)
Let's move "up" in the dev process and look at how we come up with what is in a release and how we think about taking something from an idea to a feature.
As we’ve posted on various engineering challenges we’ve often distilled the discussion down to a few decisions, often between two options (make a feature optional or not, add a window management feature one of two ways, etc.) Yet this doesn’t quite get to the challenge of where the product definition begins and how we take an idea and turn it into a feature. Most choices in engineering Windows are not between two options, but among the myriad of considerations, variables, and possibilities we have before we even get down to just a couple of options. This post looks a bit at the path from an idea to a feature.
A common thread we’ve seen in the feedback is to make “everything customizable and everything optional” (not a direct quote of course). Of course, by virtue of providing a platform we aim to offer the utmost in extensibility and customization by writing to the APIs we provide. There is an engineering reality that customization and extensibility have their cost—performance, complexity, and forward compatibility come to mind. One way to consider this is that if a feature has two “modes” (often enable the new feature or enable the old feature) in one release, then in a follow-up release if the feature is changed it potentially has four modes (old+old, old+new, new+old, new+new), and then down the road eight modes, and so on. The complexity of providing a stable and consistent platform comes with the cost that we aren’t always able to “hook” everything and do have to make practical choices about how a feature should work, in an effort to plan for the future. Designing a feature is also about making choices, tough choices. At the same time we also want to provide a great experience at the core operating system functions of launching programs, managing windows, working with files, and using a variety of peripherals—to name just a few things Windows does. This experience should be one that meets the needs of the broadest set of people across different skill levels and different uses of PCs, while also providing mechanisms to personalize the user interface and to customize with code. Every release we plan is a blending of fixing things that just don’t work like we all had hoped and developing new solutions to both old and new problems, a blending of features and extensibility, and a blending of better support for existing hardware and support for new hardware.
This post is jointly written by Samuel Moreau the manager of the user experience design team for the Windows Experience, Brad Weed, Director of User Experience Design and Research for Windows and Windows Live, and Julie Larson-Green, the VP of Program Management for the Windows Experience. With the number of comments that describe a specific feature idea, we thought it would be good to give you an overview of how we approach the overall design process and how ideas such as the ones you mention flow into our process. Also for those of you attending the PDC, Sam will be leading a session on the design principles of Windows 7. –Steven
In general, we follow a reasonably well-understood approach to product design, but that doesn’t make it easy or “automatic”. Often this is referred to as a "design funnel" where ideas go from concept to prototype to implementation and then refinement. By reading the various design ideas in the comments of Chaitanya’s post on “Starting, Launching and Switching”, you can see how difficult it can be to arrive at a refined feature design. In those comments you can find equally valid, yet somewhat opposite points of view. Additionally, you can also find comments that I would paraphrase as saying “do it all”. It is the design process that allows us to work through the problem to get from idea to feature in the context of an overall product that is Windows.
From a product design perspective, the challenge of building Windows is the breadth of unique usage of just a single product. In a sense, one of the magic elements of software is that it is “soft” and so you can provide all the functionality to all customers with little incremental cost and little difference in “raw materials” (many comments have consistently suggested we have everything available along with options to choose components in use and we have talked about minimizing the cost when components and features are not used even if they are available). And at the same time, there is a broad benefit to developers when they can know a priori that a given PC has a common set of functions and can take advantage of specific APIs that are known to be there and known to behave a specific way--the platform. This benefit of course accrues to individuals too as you can walk up to any PC and not only have a familiar user experience, but if you want to do your own work, use a specific device, or run a certain program on the PC you can also do that. This breadth of functionality is a key part of the value of a Windows PC. Yet it also poses a pretty good design challenge. Learning, understanding, and acting on the broad set of inputs into designing Windows is an incredibly fun and challenging part of building Windows.
As Larry pointed out, the design and feature selection takes place in his part of the organization (not way up here!). There’s another discussion we will have in a future post about arriving at the themes of the overall release and how we develop the overall approach to a release so that the features fit together and form a coherent whole and we address customer needs in an end-to-end fashion.
We have a group of product designers who are responsible for the overall interaction design of Windows, the sequence and visualization of Windows. Program Managers across the team work with product designers as they write the specifications. Along with designers we have UX Researchers who own the testing and validation of the designs as we’ve talked about before. The key thing is that we apply a full range of skills to develop a feature while making sure that ownership is clear and end-to-end design is clear. The one thing we are not is a product where there is “one person” in charge of everything. Some might find that to be a source of potential problems and others might say that a product that serves so many people with such a breadth of features could not be represented by a single point of view (whether that is development, testing, design, marketing, etc.). We work to make sure engineers are in charge of engineering, that the product has a clear definition that we are all working towards implementing, and that the product definition represents the goals across all the disciplines it takes to deliver Windows to customers around the world. And most importantly, with Windows 7 we are making a renewed effort at "end to end" design.
Let’s look at the major phases of product design in Engineering Windows. What we’ll talk about is of course generalized and doesn’t apply to each specific instance. One thing we always say internally is that we’re a learning organization—so no process is perfect or done and we are always looking to make it better as we move through each and every iteration of building Windows.
Throughout this post when we say “we” what this really means is the individuals of each discipline (dev, test, pm, design) working together—there’s no big feature or design committee.
We get an idea from somewhere of something to do—it could be big (build UX to support a new input method such as touch), wild (change the entire UI paradigm to use 3D), or an improvement / refinement of an existing feature (multi-monitor support), as some examples. There is no shortage of creative ideas, because frankly, that is the easy part. Ideas flow in from all corners of the ecosystem, ourselves included. We’ve talked a lot about comments and feedback from this blog and that is certainly one form of input. Product reviews, enterprise customers, customer support lines, PC makers, hardware and software developers, blogs, newsgroups, MVPs, and many others have similar input streams into the team.
The key is that working on Windows means processing a constant stream of inputs. We start with a framework for the release that says what goals and scenarios we wish to make easier, better, faster. And with that, program management builds up candidate ideas—that is, ideas that will make their way into the release as features. The job of getting a feature “baked” enough falls to program management, and they do this by working across the product with design, development, and testing (as Larry described).
With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.
Given any idea, the first step is to understand what data we have “surrounding” the idea. Sometimes the idea itself comes to us in a data-centric manner (customer support incidents) or other times it is anecdotal (a blog).
The first place we look is to see what data we have, based on real-world usage, that would support the development of a hypothesis, refute or support the conventional wisdom, or just shed some light on the problem. The point is that the feature starts its journey by adding more perspectives to the input.
Essentially, we need an objective view that illuminates the hypothesis. We gather this data from multiple sources including end users, customers, partners, and in various forms such as instrumentation, research, usability studies, competitive products, direct customer feedback, and product support.
As many (including us) have pointed out, telemetry data has limitations. First, it can never tell you what a person might have been trying to do—it only tells you what they did. Through usability, research, and observation, we are able to get more at the intent. For example, recall the way we talked about high DPI and how the telemetry showed one thing but the intent was different (and the impact of those choices was unintended). The best way to see this is to remember that a person using a PC is not interested in “learning to use a PC” but is trying to get their own work done (or their own playtime). And when faced with a “problem” the only solutions available are the buttons and menu commands right there—the full solution set is the existing software. Our job is to get to the root of the problem and then either expand the solution set or just make the problem go away altogether.
What about unarticulated needs? The data plus intent shows the “known world” and “known solution space”, but one role we have is to be forward thinking and consider needs or desires that are not clearly articulated by those who do not have the full time job to consider all the potential solution spaces. The solution space could potentially be much broader than readily apparent from the existing and running product—it might involve a rearchitecture, new hardware, or an invention of a new user interface.
A great example of this was mentioned in one of the comments on the taskbar post. The comment (paraphrasing) indicated that the order of icons on the taskbar matters, so sometimes he/she would simply close all the open programs and then restart them just so the programs were in the preferred order on the taskbar. Here the data would look like an odd sequence of launch/exit/launch/exit/launch/launch. And only through other means would we learn why someone would be doing that, and for the most part if you just walked up without any context and said “how can we make Windows easier” it isn’t likely this would bubble up to the top of the list of “requests”. Thus we see a number of neat things in this one example—we see how the data would not show the intent, the “request” would not be at the top of any list, and the solution might take any number of forms, and yet if solved correctly could be a pretty useful feature. Above all, in hindsight this is one of those “problems” that seems extraordinarily obvious to solve (and I am sure many of you are saying—“you should have just asked me!”). So we also learn the lesson that no matter what data and information we gather or what design we’re talking about, someone always noticed or suggested it :-).
The next step is where we propose a clear hypothesis – “people would benefit from rearranging icons on the taskbar because positional memory across different sessions will reduce the time to switch applications and provide a stronger sense of control and mastery of Windows”.
What is our hypothesis (in a scientific sort of way) as to what opportunity exists or what problem we would solve, and what the solution would look like, or why does the problem exist? Part of designing the feature is to think through the problem—why does it exist—and then propose the benefit that would come from solving the problem. It is important that we have a view of the benefit in the context of the proposed solution. It is always easy to motivate a change because it feels better or because something is broken so a new thing has to be better, but it is very important that we have a strong motivation for why something will benefit customers.
Another key part about the hypothesis is to understand the conventional wisdom around this area, especially as it relates to the target customer segment (end-user, enthusiast, PC maker, etc.) The conventional wisdom covers both the understanding of how/why a feature is a specific way today and also if there is a community view of how something should be solved. There are many historic examples where the conventional wisdom was very strong and that was something that had to be considered in the design or something that had to be considered knowing the design was not going to take this into account—a famous example is the role of keyboard shortcuts in menus that the “DOS” world felt would be required (because not every PC had a mouse) but on the Mac were “unnecessary” because there was always a mouse. Conventional wisdom in the DOS world was that a mouse was optional.
For any hypothesis, there are numerous design alternatives. It is at this stage where we cast a broad net to explore various options. We sketch, write scenarios, story board, do wireframes and generate prototypes in varying fidelity. Along the way we are working to identify not just the “best answer” but to tease out the heart and soul of the problem and use the divergent perspectives to feed into the next step of validation.
This is a really fun part of the design process. If you walk our hallways you might see all sorts of alternatives in posters on the walls, or you might catch a program manager or designer with a variety of functional prototypes (PowerPoint is a great UI design tool for scenarios and click-thrus that balance time to create with fidelity, and Visio is pretty cool for this as well) or our designers often mock up very realistic prototypes we can thoroughly test in the lab.
With a pile of options in front of us we then take the next step of interpreting our own opinions, usability test data and external (to the team) feedback. This is the area where we end up in conversations that, as an example, could go something like this… “Option ‘A’ is better at elevating the discovery of a new feature, but option ‘B’ has a stronger sense of integration into the overall user experience”.
As we all know, at the micro level you can often find a perfect solution to a specific problem. But when you consider the macro level you start to see the pros and cons of any given solution. It is why we have to be very careful not to fall into the trap of relying on “tests” alone. The trap here is that it is not often possible to test a feature within the full context of usage, but only within the context of a specific set of scenarios. You can’t test how a feature relates to all potential scenarios or usages while also getting rich feedback on intent. This is why designing tests and interpreting the results is such a key part of the overall UX effort led by our researchers.
A mathematical way of looking at this is the “local min” versus the “global min”. A local min is one you find if you happen to start optimizing at the wrong spot on the curve. A good example of this in software is when, faced with a usability challenge, you develop a new control or new UI widget. It seems perfectly rational and often will test very well, especially if the task asked of the subject is to wiggle the widget appropriately. However, what we’re after is a global optimization, where one can see that the potential costs (in code, quality, and usability) of another widget may erase any potential benefits gained by introducing it. Much has been written about the role of decision theory as it relates to choosing between options, but our challenge with design is the preponderance of qualitative elements.
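To make the local-versus-global-minimum analogy concrete, here is a toy sketch (not from the post; the function, step size, and starting points are invented for illustration) showing how plain gradient descent lands in whichever minimum happens to be downhill from where it starts:

```python
# Toy function with two minima: a shallow local one near x ≈ 1.13
# and a deeper, global one near x ≈ -1.30.
def f(x):
    return x**4 - 3*x**2 + x

def df(x):  # derivative of f
    return 4*x**3 - 6*x + 1

def descend(x, rate=0.01, steps=5000):
    # Plain gradient descent: always walks downhill from the start point.
    for _ in range(steps):
        x -= rate * df(x)
    return x

from_right = descend(2.0)    # starting on the right finds the local min
from_left = descend(-2.0)    # starting on the left finds the global min
print(round(from_right, 2), round(from_left, 2))  # roughly 1.13 and -1.3
print(f(from_right) > f(from_left))  # the local optimum is the worse one
```

The widget example above has the same shape: optimizing from wherever you happen to be standing (a new widget that tests well on its own task) can leave the design at a worse overall optimum than stepping back and optimizing the whole experience.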
Ultimately we must pick a design and that choice is informed by the full spectrum of data, qualitative and quantitative.
Given a choice for a design and an understanding of how to implement it and the cost, there is still one more choice—should we do the feature at all? It sounds strange that we would go through all this work and then still maybe not build a specific feature. But like a movie director who shoots a scene that ends up on the cutting room floor, sometimes the design didn’t pan out as we had hoped, sometimes we were not able to develop an implementation plan within reason, or sometimes there were other ideas that just seemed better. And this is all before we get to the implementation, which as Larry pointed out has challenges as well.
We have two tools we use to assist us in prioritizing features and designs. First is the product plan—the plan says at a high level what we “require” the product to achieve in terms of scenarios, business goals, schedule, and so on. Most of the time features don’t make it all the way through prototyping and testing because they just aren’t going to be consistent with the overall goals of the release. These goals are important otherwise a product doesn’t “hang together” and runs the risk of feeling like a “bunch of features”. These high level goals inform us quite a bit in terms of what code we touch and what scenarios we consider for a release.
And second we have the “principles of design” for the release. These principles represent the language or vocabulary we use. These represent the core values—we often think of the design principles as anthropomorphizing the product—“if Windows were a person then it would be…”. This is the topic of Sam’s talk at the PDC.
As mentioned in the introduction, it isn’t possible to do everything twice. We do have to decide. This could be a whole series of posts—customization, compatibility modes, and so on. We definitely hear folks on these topics and always do tons of work to enable both “tweaking” and “staying put” and at the same time we need to balance these goals with the goals of providing a robust and performant platform and also moving the OS forward. Some of us were involved in Office 2007 and there is a fun case study done by Harvard Business School [note fee associated with retrieving the full text] about the decision to (or not to) provide a “compatibility mode” for Office 2007. This was a choice that was difficult at the time and a few folks have even mentioned it in some comments.
Finally, we build and iterate to refine the chosen solution. Inevitably there are new discoveries in the implementation phase and we adjust accordingly. As we integrate the solution into its place in Windows, that discovery continues. The beta period is a good example of how we continue to expand and learn from usage and feedback. In a Windows beta we are particularly interested in compatibility and real-world performance as those are two aspects of the design that are difficult to validate without the breadth of usage we can get if we do a great beta.
It is important to keep in mind that we follow intensely all the feedback we receive from all forms—reviews, blogs, and of course all the telemetry about how the product is used (realizing that the beta is a select group of people).
One of the things we hope to do with the blog, as you might have seen on the IE 8 Blog, is to discuss the evolution of the product in real-time. We’re getting close to this transition and are looking forward to talking more about the design choices we made!
-- Sam, Brad, and Julie
I have been using a Windows 7 alpha for a while and I am impressed with a lot of its features.
The software kernel mode protection that is built in is very impressive; I hope that you can further improve it to help defeat reverse engineering of Microsoft products.
I would also like to see the minimum requirements lowered to pre-Vista standards. From this side, memory leaks are non-existent and I have found software compatibility greatly improved over Vista.
This is a great OS and I like the UNIX-like shell; it's going to be really interesting testing further builds, and I look forward to the final retail release.
Lorne L. Reap
I am very disappointed with this blog. I think that the disparity between what is being achieved and what I originally expected is vast. I suspect it started too late.
I thought that this was a genuine attempt to change the Windows development process and produce a product that would not suffer from the clumsiness and bad manners of the Vista versions, but it has turned into an exercise in self-justification.
I get the impression that what is happening here is a harvesting of criticisms, and an early attempt to defuse them or to try out reasonable justifications.
It is clear that what we are going to get with 7 is what we were going to get all along: Vista with knobs on. It is also clear, from at least the initial few sets of comments, that this is exactly what people did not want.
It seems clear that ideas like dropping the registry or developing the OS independently of the application bundle have simply no chance of being adopted. Nor is there any intention to provide a virtual XP box with the final version to allow simplified legacy support.
I am very disappointed by the tone of most of the recent postings, which have been far from collaborative and have had a whiff of defensiveness about them.
I followed the link to the 'fun' reference at Harvard and discovered I would have to pay $6.25 to read it.
That's a bit annoying, really, and rather bad manners.
@bobharvey -- I assure you that Microsoft receives no compensation. It is a published work that charges much like a book (and used that way in a classroom). There isn't a freely available version of this publication. I am sorry if that was not an appropriate citation. I should have made this clear in the link and will do so in a revision.
I appreciate your note regarding your view of the posts.
One thing I would mention is that we have not yet discussed the features or details of the release. As we talked about in the first post, the first purpose of the blog was to discuss the "how" of engineering Windows 7.
One thing that we have all seen is that it is tough to say "exactly what people want" as we've seen many sides of many issues. Many suggestions have been made, both in response to the specific posts and in an unsolicited form.
On just one of your specific feature suggestions: depending on your definition of "application bundle", we spoke about this in the blog, and it is also something we have said would be the case, as has been reported widely.
We will keep trying to meet your expectations on the tone.
Where did you get the M1 build? Official or P2P?
You realize that your requests are based on M1 code?
From the competition Microsoft should learn one thing: marketing and hype.
I enjoyed this post. There are a lot of people who have a much fuzzier idea of how features are conceived, developed, and implemented. I had not realized (before reading this) how much that fuzziness contributes to the many 'MS should do X' things people out there are saying.
I was one of those people with a too-fuzzy idea of how you do it. I probably still am. But I appreciate the insight and I hope for more like this.
I would appreciate a post from an MS expert on VMs, as many comments were made on the idea of virtual boxes for backward compatibility, etc. I can't even start imagining all the problems. My take on it:
Currently 100% of Windows programs are, well, 100% Windows programs, so 100% of them would need to be launched in a virtual machine. A brand new OS would just be an XP/Vista launcher. Millions of hours of software engineering have gone into the current API. Only brand new programs would be written for a brand new OS. Most companies don't have the capital to redo software for fun, and if they do, it will take more than a few weeks. My PC already has such a launcher; it's called a BIOS.
A virtual machine in a server room, booting once in a blue moon running a few specific pre-defined services is fine I suppose, but constantly as I work and open different programs etc? No thanks.
And then, I can already "hear" the complaints about disk footprint and memory usage and slow program start-up, and how the MS engineers are so-and-so stupid...
I kind of understand that one day a virtual box for backward compatibility will be a good solution, but not with today's average off-the-shelf grandma PC with maybe 2 or 4 GB of RAM.
One thing that really amazes me is that there's no Windows Vista or Server 2008 logo when either OS is booting. I have both Vista and Server 2008 installed, and sometimes I get confused about which OS is booting. There should be a logo of the OS when booting, just like in XP; there's no way to tell which OS is booting by just looking at the boot screen! Also, the common controls of Vista come nowhere close to Mac OS X. And remove that ugly Vista Basic theme and replace it with an Aero-like theme such as the one we see when installing Vista; it's much better than that ugly Vista Basic theme.
While it is difficult to make everything customizable and everything optional, it is certainly easy to preserve existing UI and features and build incrementally upon those to create a new release that is a superset of the earlier one. In that respect, Vista went a different way and tried to reinvent the wheel; it is not a superset of Windows XP which itself was a superset of all previous Windows versions. Vista changed the baseline eXPerience that we all enjoyed in Windows XP. It removed common properties by default, opting instead to place new, unfamiliar territories before the Windows user. Huge changes resulted in an experience similar to that of moving to an entirely new OS. And I believe very strongly this is the main reason why Vista has not been overwhelmingly received.
@bobharvey Do you really think that a single user's reply to a blog could make any difference to W7? (Especially when there is nothing wrong with the registry :-)) What were you expecting? I do agree, though, that a blog like this one should have appeared much earlier. If this blog had shown up as soon as it became obvious that Vista was going to become a marketing failure, some of the damage could have been averted.
To respond to the current post: there is one aspect of customization that would not come at an increased cost - to drop all the bundled applications from WordPad to Windows DVD Maker and let the user install them, or use them through a web interface, when needed.
And please stop using the word "Windows" in the title of every bundled application. It's like writing the word "coffee" in front of every item on the Starbucks menu.
One more thing - I sorely miss the ability to customize the ribbon in Office 2007 applications. Why do I have to buy a separate application to be able to do something that was integral to every previous Office suite?
"Especially when there is nothing wrong with registry"
There is plenty wrong with the registry, in the same way that there's something wrong with just writing into a random file in the C:\ root. What's more, plenty of people at MS are aware of it, but it's not easy to fix, if you want to preserve backwards compatibility. :)
Just a few problems with the registry:
It lumps user-, system- and application settings together in one big mix. What do I do if I want to back up my settings for a single application? What if I want to grab all my user settings, and apply them to another OS? I can't. Even Microsoft's Windows update installation is just a big complex script hardcoded to read specific values from the registry, and applying them to the new system.
Then there's the problem that programs can (and do) poke around at each other's settings. And the fact that it's impossible to maintain: there's no sane way to figure out what changes a specific application has made to the registry, or whether it cleaned up after itself when you uninstalled it.
The registry is one of the largest remaining WTFs in Windows. Of course, it's also virtually impossible to remove, because almost every existing application depends on it. But pretending that there's "nothing wrong with it" is just silly.
The registry is not very good, we know, but I think a registry based on a SQL database would make keeping it clean very easy. If an app is removed in "Add or Remove Programs" in Windows, just drop the table with the app's information and it's done... no old registry keys, no registry cleanup tools needed.
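A minimal sketch of that idea, with SQLite standing in for the hypothetical SQL-backed registry (the app name, schema, and helper functions are all invented for illustration): each application gets its own settings table, so uninstalling is a single DROP TABLE with nothing left behind.

```python
import sqlite3

# In-memory database standing in for the hypothetical registry store.
con = sqlite3.connect(":memory:")

def install(app):
    # One table per application; the table is the app's entire footprint.
    con.execute(f'CREATE TABLE "{app}" (key TEXT PRIMARY KEY, value TEXT)')

def set_setting(app, key, value):
    con.execute(f'INSERT OR REPLACE INTO "{app}" VALUES (?, ?)', (key, value))

def uninstall(app):
    # "Add or Remove Programs" would just drop the table:
    # every setting the app ever wrote disappears with it.
    con.execute(f'DROP TABLE "{app}"')

install("ExampleApp")
set_setting("ExampleApp", "WindowWidth", "800")
uninstall("ExampleApp")

leftovers = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(leftovers)  # → [] : no stale keys, no cleanup tools needed
```

Of course this only sketches the cleanliness argument; it says nothing about shared or system-wide settings, or about the backward-compatibility problem the previous comments raise.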