Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

From Idea to Feature: A view from Design


Larry is very appreciative of the reception and comments his post received. Thank you!  It is worth noting that we’ve surpassed 2,000 comments and I’ve received an equal amount of email. I am trying to reply as often as I can!

We’re 10 days from the PDC and so we might take a short break from the blog while we practice our demos of Windows 7…we’ll keep an eye on comments for sure and maybe write a post or two along the way.  :-)

Let's move "up" in the dev process and look at how we come up with what is in a release and how we take an idea and turn it into a feature.

As we’ve posted on various engineering challenges we’ve often distilled the discussion down to a few decisions, often between two options (make a feature optional or not, add a window management feature one of two ways, etc.) Yet this doesn’t quite get to the challenge of where the product definition begins and how we take an idea and turn it into a feature. Most choices in engineering Windows are not between two options, but among the myriad of considerations, variables, and possibilities we have before we even get down to just a couple of options. This post looks a bit at the path from an idea to a feature.

A common thread we’ve seen in the feedback is to make “everything customizable and everything optional” (not a direct quote of course). Of course, by virtue of providing a platform we aim to offer the utmost in extensibility and customization by writing to the APIs we provide. There is an engineering reality that customization and extensibility have their cost—performance, complexity, and forward compatibility come to mind. One way to consider this is that if a feature has two “modes” (often enable the new feature or enable the old feature) in one release, then in a follow-up release if the feature is changed it potentially has four modes (old+old, old+new, new+old, new+new), and then down the road 8 modes, and so on. The complexity of providing a stable and consistent platform comes with the cost that we aren’t always able to “hook” everything and do have to make practical choices about how a feature should work, in an effort to plan for the future. Designing a feature is also about making choices, tough choices. At the same time we want to provide a great experience at the core operating system functions of launching programs, managing windows, working with files, and using a variety of peripherals--to name just a few things Windows does. This experience should be one that meets the needs of the broadest set of people across different skill levels and different uses of PCs, while also providing mechanisms to personalize the user interface and to customize with code. Every release we plan is a blending of fixing things that just don’t work like we all had hoped and developing new solutions to both old and new problems, a blending of features and extensibility, and a blending of better support for existing hardware and support for new hardware.
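To make the arithmetic of that mode explosion concrete, here is a purely illustrative sketch (not Windows code, just the combinatorics): each release that keeps both an old and a new behavior behind a switch doubles the number of configurations that have to be designed, tested, and kept compatible.

```python
from itertools import product

def feature_configurations(releases: int):
    """Enumerate old/new mode combinations across a number of releases.

    Each release that preserves both behaviors adds one two-way switch,
    so the configuration matrix doubles every time.
    """
    return list(product(("old", "new"), repeat=releases))

# 1 release -> 2 configurations, 2 -> 4, 3 -> 8, and so on.
for n in (1, 2, 3):
    print(n, len(feature_configurations(n)))
```

The growth is exponential, which is exactly why every "keep both modes" decision is a long-term commitment rather than a free option.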

This post is jointly written by Samuel Moreau the manager of the user experience design team for the Windows Experience, Brad Weed, Director of User Experience Design and Research for Windows and Windows Live, and Julie Larson-Green, the VP of Program Management for the Windows Experience. With the number of comments that describe a specific feature idea, we thought it would be good to give you an overview of how we approach the overall design process and how ideas such as the ones you mention flow into our process. Also for those of you attending the PDC, Sam will be leading a session on the design principles of Windows 7. –Steven

Designing Windows – from idea to feature

In general, we follow a reasonably well-understood approach to product design, but that doesn’t make it easy or “automatic”. Often this is referred to as a "design funnel" where ideas go from concept to prototype to implementation and then refinement.  By reading the various design ideas in the comments of Chaitanya’s post on “Starting, Launching and Switching”, you can see how difficult it can be to arrive at a refined feature design. In those comments you can find equally valid, yet somewhat opposite points of view. Additionally, you can also find comments that I would paraphrase as saying “do it all”. It is the design process that allows us to work through the problem to get from idea to feature in the context of an overall product that is Windows.

From a product design perspective, the challenge of building Windows is the breadth of unique usage of just a single product. In a sense, one of the magic elements of software is that it is “soft” and so you can provide all the functionality to all customers with little incremental cost and little difference in “raw materials” (many comments have consistently suggested we have everything available along with options to choose components in use and we have talked about minimizing the cost when components and features are not used even if they are available). And at the same time, there is a broad benefit to developers when they can know a priori that a given PC has a common set of functions and can take advantage of specific APIs that are known to be there and known to behave a specific way--the platform. This benefit of course accrues to individuals too as you can walk up to any PC and not only have a familiar user experience, but if you want to do your own work, use a specific device, or run a certain program on the PC you can also do that. This breadth of functionality is a key part of the value of a Windows PC. Yet it also poses a pretty good design challenge. Learning, understanding, and acting on the broad set of inputs into designing Windows is an incredibly fun and challenging part of building Windows.

As Larry pointed out, the design and feature selection takes place in his part of the organization (not way up here!). There’s another discussion we will have in a future post about arriving at the themes of the overall release and how we develop the overall approach to a release so that the features fit together and form a coherent whole and we address customer needs in an end-to-end fashion.

We have a group of product designers that are responsible for the overall interaction design of Windows, the sequence and visualization of Windows. Program Managers across the team work with product designers as they write the specifications. Along with designers we have UX Researchers who own the testing and validation of the designs as we’ve talked about before. The key thing is that we apply a full range of skills to develop a feature while making sure that ownership is clear and end-to-end design is clear. The one thing we are not is a product where there is “one person” in charge of everything. Some might find that to be a source of potential problems and others might say that a product that serves so many people with such a breadth of features could not be represented by a single point of view (whether that is development, testing, design, marketing, etc.). We work to make sure engineers are in charge of engineering, that the product has a clear definition that we are all working towards implementing and that product definition represents the goals across all the disciplines it takes to deliver Windows to customers around the world.  And most importantly, with Windows 7 we are making renewed effort at "end to end" design.

Let’s look at the major phases of product design in Engineering Windows. What we’ll talk about is of course generalized and doesn’t apply to each specific incident. One thing we always say internally is that we’re a learning organization—so no process is perfect or done and we are always looking to make it better as we move through each and every iteration of building Windows.

Throughout this post when we say “we” what this really means is the individuals of each discipline (dev, test, pm, design) working together—there’s no big feature or design committee.

Pick the question or get an idea

We get an idea from somewhere of something to do—it could be big (build UX to support a new input method such as touch), wild (change the entire UI paradigm to use 3D), or an improvement / refinement of an existing feature (multi-monitor support), as some examples. There is no shortage of creative ideas, because frankly, that is the easy part. Ideas flow in from all corners of the ecosystem, ourselves included. We’ve talked a lot about comments and feedback from this blog and that is certainly one form of input. Product reviews, enterprise customers, customer support lines, PC makers, hardware and software developers, blogs, newsgroups, MVPs, and many others have similar input streams into the team.

The key is that working on Windows is really a constant stream of inputs. We start with a framework for the release that says what goals and scenarios we wish to make easier, better, faster. And with that, program management builds up candidate ideas—that is, ideas that will make their way through to becoming features. The job of getting a feature “baked” enough falls to program management, and they do this by working across the product with design, development, and testing (as Larry described).

With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.

Gather information and data

Given any idea, the first step is to understand what data we have “surrounding” the idea. Sometimes the idea itself comes to us in a data-centric manner (customer support incidents) or other times it is anecdotal (a blog).

The first place we look is to see what data we have based on real-world usage that would support the development of a hypothesis, refute or support the conventional wisdom, or just shed some light on the problem.  The point is that the feature starts its journey by adding more perspectives to the input.

Essentially, we need an objective view that illuminates the hypothesis. We gather this data from multiple sources including end users, customers, partners, and in various forms such as instrumentation, research, usability studies, competitive products, direct customer feedback, and product support.

As many (including us) have pointed out, telemetry data has limitations. First, it can never tell you what a person might have been trying to do—it only tells you what they did. Through usability, research, and observation, we are able to get more at the intent.  For example, the way we talked about high dpi and how the telemetry showed one thing but the intent was different (and the impact of those choices was unintended). The best way to see this is to remember that a person using a PC is not interested in “learning to use a PC” but is trying to get their own work done (or their own playtime). And when faced with a “problem” the only solutions available are the buttons and menu commands right there—the full solution set is the existing software. Our job is to get to the root of the problem and then either expand the solution set or just make the problem go away altogether.

What about unarticulated needs?  The data plus intent shows the “known world” and “known solution space”, but one role we have is to be forward thinking and consider needs or desires that are not clearly articulated by those who do not have the full time job to consider all the potential solution spaces. The solution space could potentially be much broader than readily apparent from the existing and running product—it might involve a rearchitecture, new hardware, or an invention of a new user interface.

A great example of this was mentioned in one of the comments on the taskbar post. The comment (paraphrasing) indicated that the order of icons on the taskbar matters, so sometimes he/she would simply close all the open programs and then restart them just so the programs were in the preferred order on the taskbar. Here the data would look like an odd sequence of launch/exit/launch/exit/launch/launch. And only through other means would we learn why someone would be doing that, and for the most part if you just walked up without any context and said “how can we make Windows easier” it isn’t likely this would bubble up to the top of the list of “requests”. Thus we see a number of neat things in this one example—we see how the data would not show the intent, the “request” would not be at the top of any list, and the solution might take any number of forms, and yet if solved correctly could be a pretty useful feature. Above all, in hindsight this is one of those “problems” that seems extraordinarily obvious to solve (and I am sure many of you are saying—“you should have just asked me!”) So we also learn the lesson that no matter what data and information we gather or what design we’re talking about, someone always noticed or suggested it :-).
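As a hedged sketch of the telemetry point (the event format here is hypothetical, not actual Windows instrumentation), a log analysis could flag this odd exit-then-relaunch churn, yet it still says nothing about why the user did it:

```python
def relaunch_churn(events):
    """Count how often an app is relaunched immediately after being exited.

    `events` is a hypothetical telemetry stream of ("launch"|"exit", app)
    tuples in time order. A high count reveals the odd
    launch/exit/launch pattern described above, but not the intent
    behind it (such as reordering taskbar icons).
    """
    churn = 0
    last_exited = None
    for action, app in events:
        if action == "launch" and app == last_exited:
            churn += 1
        last_exited = app if action == "exit" else None
    return churn

events = [("exit", "mail"), ("launch", "mail"),
          ("exit", "browser"), ("launch", "browser")]
print(relaunch_churn(events))  # 2 immediate relaunches; intent unknown
```

This is why the quantitative signal only starts the investigation; usability work and observation supply the missing "why".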

Propose the hypothesis

The next step is where we propose a clear hypothesis – “people would benefit from rearranging icons on the taskbar because positional memory across different sessions will reduce the time to switch applications and provide a stronger sense of control and mastery of Windows”.

What is our hypothesis (in a scientific sort of way) as to what opportunity exists or what problem we would solve, and what the solution would look like, or why does the problem exist?  Part of designing the feature is to think through the problem—why does it exist—and then propose the benefit that would come from solving the problem. It is important that we have a view of the benefit in the context of the proposed solution. It is always easy to motivate a change because it feels better or because something is broken so a new thing has to be better, but it is very important that we have a strong motivation for why something will benefit customers.

Another key part about the hypothesis is to understand the conventional wisdom around this area, especially as it relates to the target customer segment (end-user, enthusiast, PC maker, etc.) The conventional wisdom covers both the understanding of how/why a feature is a specific way today and also if there is a community view of how something should be solved. There are many historic examples where the conventional wisdom was very strong and that was something that had to be considered in the design or something that had to be considered knowing the design was not going to take this into account—a famous example is the role of keyboard shortcuts in menus that the “DOS” world felt would be required (because not every PC had a mouse) but on the Mac were “unnecessary” because there was always a mouse. Conventional wisdom in the DOS world was that a mouse was optional.

Experiment with possible solutions

For any hypothesis, there are numerous design alternatives. It is at this stage where we cast a broad net to explore various options. We sketch, write scenarios, story board, do wireframes and generate prototypes in varying fidelity. Along the way we are working to identify not just the “best answer” but to tease out the heart and soul of the problem and use the divergent perspectives to feed into the next step of validation.

This is a really fun part of the design process. If you walk our hallways you might see all sorts of alternatives in posters on the walls, or you might catch a program manager or designer with a variety of functional prototypes (PowerPoint is a great UI design tool for scenarios and click-thrus that balance time to create with fidelity, and Visio is pretty cool for this as well) or our designers often mock up very realistic prototypes we can thoroughly test in the lab.

Interpret and Validate

With a pile of options in front of us we then take the next step of interpreting our own opinions, usability test data and external (to the team) feedback. This is the area where we end up in conversations that, as an example, could go something like this… “Option ‘A’ is better at elevating the discovery of a new feature, but option ‘B’ has a stronger sense of integration into the overall user experience”.

As we all know, at the micro level you can often find a perfect solution to a specific problem. But when you consider the macro level you start to see the pros and cons of any given solution. It is why we have to be very careful not to fall into the trap of relying on a single “test”. The trap here is that it is not often possible to test a feature within the full context of usage, but only within the context of a specific set of scenarios. You can’t test how a feature relates to all potential scenarios or usages while also getting rich feedback on intent. This is why designing tests and interpreting the results is such a key part of the overall UX effort led by our researchers.

A mathematical way of looking at this is the “local min” versus the “global min”. A local min is one you find if you happen to start optimizing at the wrong spot on the curve. A good example of this in software is when faced with a usability challenge you develop a new control or new UI widget. It seems perfectly rational and often will test very well, especially if the task asked of the subject is to wiggle the widget appropriately. However, what we’re after is a global optimization, where one can see that the potential costs (in code, quality, and usability) of another widget would erase any potential benefits gained by introducing it. Much has been written about the role of decision theory as it relates to choosing between options, but our challenge with design is the preponderance of qualitative elements.
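A toy version of that trap (illustrative only, not product code): a greedy search that only compares a design to its immediate neighbors settles into the nearby dip on the curve and never discovers the better answer further away.

```python
def greedy_descent(costs, start):
    """Walk downhill from `start`, stopping at the first local minimum."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(costs)]
        best = min(neighbors, key=lambda j: costs[j])
        if costs[best] >= costs[i]:
            return i
        i = best

# A cost curve with a shallow dip early on and the true optimum later.
costs = [5, 3, 2, 4, 3, 1, 0, 2]
print(greedy_descent(costs, 0))   # stops at index 2, the local minimum
print(min(range(len(costs)), key=costs.__getitem__))  # index 6, the global minimum
```

Starting from a different point on the curve (a different initial design) does reach the global optimum, which is the argument for exploring divergent alternatives before optimizing any one of them.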

Choosing the solution

Ultimately we must pick a design, and that choice is informed by the full spectrum of data, qualitative and quantitative.

Given a choice for a design and an understanding of how to implement it and the cost, there is still one more choice—should we do the feature at all. It sounds strange that we would go through all this work and then still maybe not build a specific feature. But like a movie director that shoots a scene that ends up on the cutting room floor, sometimes the design didn’t pan out as we had hoped, sometimes we were not able to develop an implementation plan within reason, or sometimes there were other ideas that just seemed better. And this is all before we get to the implementation, which as Larry pointed out has challenges as well.

We have two tools we use to assist us in prioritizing features and designs. First is the product plan—the plan says at a high level what we “require” the product to achieve in terms of scenarios, business goals, schedule, and so on. Most of the time features don’t make it all the way through prototyping and testing because they just aren’t going to be consistent with the overall goals of the release. These goals are important otherwise a product doesn’t “hang together” and runs the risk of feeling like a “bunch of features”. These high level goals inform us quite a bit in terms of what code we touch and what scenarios we consider for a release.

And second we have the “principles of design” for the release. These principles represent the language or vocabulary we use. These represent the core values—we often think of the design principles as anthropomorphizing the product—“if Windows were a person then it would be…”. This is the topic of Sam’s talk at the PDC.

As mentioned in the introduction, it isn’t possible to do everything twice. We do have to decide. This could be a whole series of posts—customization, compatibility modes, and so on. We definitely hear folks on these topics and always do tons of work to enable both “tweaking” and “staying put” and at the same time we need to balance these goals with the goals of providing a robust and performant platform and also moving the OS forward. Some of us were involved in Office 2007 and there is a fun case study done by Harvard Business School [note fee associated with retrieving the full text] about the decision to (or not to) provide a “compatibility mode” for Office 2007. This was a choice that was difficult at the time and a few folks have even mentioned it in some comments.

Implement and Integrate

Finally, we build and iterate to refine the chosen solution. Inevitably there are new discoveries in the implementation phase and we adjust accordingly. As we integrate the solution into its place in Windows, that discovery continues. The beta period is a good example of how we continue to expand and learn from usage and feedback. In a Windows beta we are particularly interested in compatibility and real-world performance as those are two aspects of the design that are difficult to validate without the breadth of usage we can get if we do a great beta.

It is important to keep in mind that we follow intensely all the feedback we receive from all forms—reviews, blogs, and of course all the telemetry about how the product is used (realizing that the beta is a select group of people).

One of the things we hope to do with the blog, as you might have seen on the IE 8 Blog, is to discuss the evolution of the product in real-time. We’re getting close to this transition and are looking forward to talking more about the design choices we made!

-- Sam, Brad, and Julie

Leave a Comment
  • I would really like to see a post about the different product editions of Windows.

    Here's my idea for Windows 7 product editions:

    1. Scrap the Home Basic edition.

    2. Remaining product editions: Windows 7 Home Edition (sort of like Vista Home Premium), Windows 7 Small Business, Windows 7 Enterprise, Windows 7 Ultimate.

    3. Change the way features are distributed between editions. Previous Versions and Complete PC Backup are NOT business features only. They should be in ALL Windows 7 versions.

    I'll have more thoughts when you make a post on this.

    Thanks and keep up the great work!

  • Yup I agree with WindowsFanboy, just remove that Home Basic edition :)

  • Interesting,

  • @Don Reba,

    I know, what is currently implemented in Windows (yes, I'm not only newbie user) - what are policies and solutions. And what I can see:

    majority of software leaves some entries in the Registry after uninstalling. The system contains more and more useless info, it's going slower and slower. All applications can have access to settings from other ones (yes, yes, I know that it's possible to limit it, but normally it's not done). Let's think who is happy about it:

    1. antivirus software creators

    2. virus & trojan creators

    3. creators of Registry cleaners

    Many antivirus packages ask the user if he agrees to saving some data into this or another key. Normal users click "yes" without understanding.

    At the same time the main role of the Registry (a central settings database for all apps) is not visible anymore. See Firefox and others...

    Now let's say that something like what is proposed will be implemented. Some parts of the logical Registry will still be shared, but many will be separated. It will be easier to uninstall applications, the system will not be slower months after installation, etc. etc.

    Now let's say that you have d:\install.exe. Its MD5 is checked and a new virtual directory is created for it. install.exe and code from the same physical directory and physical directories below it (for example from d:\directory\) write files to the new virtual directory (not to the real Program Files and Windows directories). When install.exe is "installing" something, it's put into the virtual directory. When shortcuts are created, they're put, after converting, into the real system. When you later run files from this virtual directory (for example from a shortcut), they have access to the virtual directory, not to the real Program Files and Windows directories. In other words:

    when you run install.exe

    there is created

    c:\new program files\directory 1


    c:\new program files\directory 1\program files

    c:\new program files\directory 1\windows

    when you run something for example from

    c:\new program files\directory 1\program files

    it can't read files from

    c:\new program files\directory 2

    Simple, effective, already available in 3rd party products. It's a big shame that a company with so many programmers can't reimplement it (simply copy it, nothing more).

    I understand that we're getting into politics. Microsoft needs success fast and nobody will agree with criticism of solutions which will be sold soon.

    But please think about it. Windows is changing from version to version and breaking some Registry cleaners or similar software will be a small price for giving more real security.

    When the system has a few clearly separated parts like:

    1. core

    2. runtime for various apps (MS-DOS, win32, .net, etc.)

    3. applications (using virtual directories like proposed earlier) like IE, WMP, etc.

    it will be easier to control, upgrade and extend it.

  • 3 version PLS.

    1) Windows 7 Clouds

    2) Windows 7 Ultimate   ( All PC+Multitouch+Umpc)

    3) Windows 7 for OLD PC and UMPC (no touch)

  • few more words:

    when you run something for example from

    c:\new program files\directory 1\program files

    it thinks that it's run from the real c:\program files and that

    c:\new program files\directory 1\windows

    is the system directory.

  • Although it didn't talk about concrete things like the previous posts, I enjoyed reading these last two. What I read was much what I imagined and what I expected from Microsoft.

    To those who complain that this blog doesn't address the real problems (of Vista), I'm sure that the W7 team already undertook fixing these issues a long time ago.

    Things like "too many useless services/processes", "too many resources used/required", "noisy hard drive", "poor performance/efficiency", "mammoth hard drive footprint" etc

    These issues were too obvious to be ignored and not to be set as a TOP PRIORITY in the development of W7. These are things that everybody has known for a year, and nothing indicates that Steven and his colleagues don't know about them.

    I personally take for granted that these issues will be fixed in W7. (If not, nobody would understand why there is a new version at all.)

    Registry: I don't think that the registry is a bad idea, but it has been very badly used and mistreated for a very long time. Since its beginning it was full of bloat and that has only worsened over the years. Microsoft started this bad habit, followed by all the famous software vendors.

    It would be stupid to eliminate it, but it should be used differently.

    Registry should be used only for settings which can be changed. It should keep only the settings which are identifiable and comprehensible by an advanced user. All invariable stuff should be kept in hard-coded libraries or separate data files. It shouldn't be used by software to store data. It should be a config interface, not a database.

    Like with the Program Files folder, there should be one location where an application would be allowed to add and modify things. Applications should never be able to modify IE settings, for example (except with admin/UAC approval). Every newly installed program would be allocated one folder in Program Files and one registry key with a limited number of subkeys. That discipline would already simplify a lot of things.

    There is too much software in circulation to allow everybody to write anything anywhere in the registry or on the HDD.

  • There are several misconceptions recurring over and over again:

    1. MinWin is just a stripped-down version of Windows. This became possible because MS modularized Windows. That means MinWin = regular windows kernel with less stuff around it (GUI, ...). It's not a new system. It's the same kernel that powers Vista and Server 2008. Basically it's Vista without bells and whistles.

    2. A "bloated" registry doesn't slow down the system. If you don't believe me just create random values all over the registry and benchmark your system again. The German computer magazine c't did this some time ago with XP, 100,000 additional keys & values, and there was no measurable performance impact. And since Windows 2000 the whole registry is no longer mapped into RAM, so you don't lose valuable resources on unnecessary keys.

    3. It's not possible to create a perfectly secure system in practice. You'd need intelligent users as well, because they're part of the system. And AFAIK MS doesn't ship intelligent users with their operating systems. Like everyone else.

    4. It's not possible to uninstall an application perfectly when not every single application plays by the rules. This has to do with dependencies between applications. I can think of several scenarios where any "solution" I can come up with fails. If you don't believe me, have a look at the problems people have with package managers or when trying to uninstall OSX applications.

    5. Backwards compatibility doesn't hurt performance (that much). A processor cycle here and there, but that's it basically. The only reason why Microsoft would want to move Windows compatibility to a VM could be easier development and testing. But that'd be a huge undertaking - far more work than just adding some hacks for old misbehaving applications. And, above all, it's just not necessary: Ever had a look at .NET? This is a great way to transition away from the (sometimes clumsy) Win API to something new & shiny.

  • /yet another post that's entirely off-topic

    >>I would suggest there be a Windows Dev blog.

    For what purpose? Cannibalizing Vista sales?

    >>Here's my idea for Windows 7 product editions:

    Having two simply named and clearly defined versions, Home and Professional, was just fine back in the day. I'm not sure what possessed the people at Microsoft to ship the ridiculous number of SKUs they did. That said, Microsoft probably doesn't care for anyone else's ideas in this department and I know I'm tired of hearing suggestions for editions. Let it die people, they probably got the message.

    >>Intresting, *link to softpedia article about W7 UI*

    There's no reason to believe anything until it's officially confirmed. If you're the skeptical type, there's no reason to believe the official confirmation until you've had hands on time with W7. Depending on where you go W7 is supposed to include a VM for appcompat and WinFS... and let's not get into that. Honestly the word WinFS should never be spoken again. Ever.

    >>It is indeed ok. User settings are data and should not be removed on uninstallation.

    >>Why would you think otherwise?

    Some users might try reinstalling a program to undo a setting change they made (*gasp* I hid the menu bar in Word! :P) or just /maybe/ because they no longer want the application. I just hate the thought of applications leaving their junk on my computer. Ideally after uninstallation a program would leave no trace of itself on a user's computer, not a single file, folder, or key. Uninstallation is just terrible on Windows with all the stuff that's left behind. The other day I uninstalled a program that left 200MB of stuff in its directory. 200MB. Not even in user files, just 200MB of junk. It's no wonder people say Windows bloats up over time, they're just accumulating junk. This isn't Microsoft's fault though, you can't necessarily blame them for what other companies do. TEMP folders may add to this but I don't know what Vista's policy on purging them is as I can't help myself but run through those daily... sad, I know.

    >>I personaly takes for granted that these issues will be fixed in w7.

    >>(If not nobody would understand why there is a new version at all.)

    What's to stop them from releasing without a good reason for the vast majority of end users to upgrade? XP and Vista both ran into this issue, and I fully expect W7 to run into it as well. It'll launch, and Microsoft will parade it around as the fastest, most secure, reliable, and easiest-to-use version of Windows yet, as always. It's not entirely Microsoft's fault: XP, as well as Vista to an extent, is good enough to do most of what people want to do, like browse the web and listen to music. That, and Microsoft hasn't included anything worth getting excited about for end users in new versions like XP and Vista. Luckily for Microsoft, most people will just buy whatever is pre-installed on their PC.

    A small note: Despite what seems like Microsoft's best efforts (performance AND usability regressions?) I've been using Vista for ten months and quite like it. Well, it's been a long post with nothing positive to say... sorry.

    /even more offtopic:

    I moved my HDD's SATA cable up one port yesterday, and it seems Vista is creating duplicate thumbnails in the thumbnail cache, which grew 50% during use essentially overnight. Just thought I'd put it out there.

  • marcinw,

    > majority of software leaves some entries in Registry after uninstalling. System contains more and more useless info, it's going slower and slower.

    There is no reason why the number of records in the registry should noticeably affect performance. Indexed lookups scale very well; NTFS's B+ tree indices, for example, handle very large numbers of files without slowing down.
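    The scaling claim can be illustrated with a toy Python sketch (not actual registry or NTFS code, just a binary search over a hypothetical sorted index): multiplying the number of records by 10,000 adds only a handful of extra comparisons per lookup.

```python
def lookup_comparisons(n, target):
    """Comparisons a binary search needs to locate `target` among n sorted keys 0..n-1."""
    lo, hi = 0, n
    comps = 0
    while lo < hi:
        comps += 1
        mid = (lo + hi) // 2
        if mid < target:
            lo = mid + 1
        else:
            hi = mid
    return comps

# Lookup cost grows logarithmically with the number of records:
# going from a thousand to ten million keys costs only a few more steps.
for n in (1_000, 10_000_000):
    print(n, "keys ->", lookup_comparisons(n, n - 1), "comparisons")
```

    Real B+ trees do even better in practice, since each node holds many keys and one disk read eliminates a whole fan-out's worth of candidates, not just half.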

    > All applications can have access to settings from other one (yes, yes, I know, that it's possible to limit it, but normally it's not done).

    Every application you run has full access to everything you have access to, whether the registry is used or not. Attempts to deny this create such monsters as UAC, which is probably responsible for the majority of problems with Vista.


    > Ideally after uninstallation a program would leave no trace of itself on a user's computer, not a single file, folder, or key.

    You seem to suggest Word should delete all your documents if you uninstall it. User settings are like documents in several ways. For one, a person might carry his configuration files on a flash drive to use on different computers. For another, the person uninstalling the program is likely to not have access to other users' settings.

    Talking about the way Microsoft deals with features, there are a few annoying things that are either difficult to catch or intentionally applied.

    For example, the way we see folders is strange and inconsistent.

    The directory tree in the registry either starts from desktop or from a drive or ...

    This way there is more than one entry for the same folder, as in the Shell BagMRU key (it only holds 400 entries).

    This reflects on the way we see folders and sniffing doesn't do a good job either.

    To make things worse, the save dialog changes from program to program, and this changes the way we see folders, or corrupts it.

    One feature I miss in Vista is the ability to restore corrupted system repository files: System Restore, sfc /scannow, and even the Vista DVD can't restore certain files.

    Another annoying feature is the way some programs force their presence even if we don't want them. For example, it is hard to disable Defender and its updates when we have another antivirus.

    Then the Snipping Tool goes along with the Tablet PC functionality, and so on.

    Finally it's a matter of trust.

    There is one thing that I really miss in Vista. In XP, only the window that you are working with is lit up, not all the other windows in the background. In Vista it's hard to see which window you are working with. Also, WMP covers the whole window (except for about 1mm), and when I go to exit it, I hit the window in the background.

  • @Don Reba, d_e and Fredledingue,

    I don't blame Microsoft for all the problems with the Windows platform. But I know that a good system should not allow wrong application behaviors. Separating applications and their registries (as proposed) would resolve thousands of current problems, and it would not make the system more difficult for users.

    Let me also bring up the SMB issue. Many people were complaining about performance problems (in some cases only 2-3% of the bandwidth was used!), and everything was OK for Microsoft. Then we got the not-too-popular Vista and voila! There is a new SMB version which works up to 45 times faster.

    I don't know the details of the tests showing that adding 100 MB doesn't decrease performance. But for me as a programmer it's impossible that such an operation, done on frequently searched keys, would not change performance.

    I know that more and more programs are using their own registry replacements. Second, the registry's other role (saving settings and info about currently installed applications) is no longer relevant either. Why stay with it "as is"?

    And once again: removing an application should remove it, and of course not the documents created with it (as some of you suggested). Nobody was talking about different behavior...
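    The "separate per-application settings" idea above can be sketched in a few lines (a hypothetical design, plain Python with JSON files standing in for registry hives): each application writes only to its own isolated settings file, so uninstalling it is just deleting that one file, with no strays left behind.

```python
import json
import os

def settings_path(app_name, root):
    """Each app gets exactly one isolated settings file under `root`."""
    return os.path.join(root, app_name + ".json")

def save_setting(app_name, root, key, value):
    """Write a single setting into the app's own file, creating it if needed."""
    path = settings_path(app_name, root)
    data = {}
    if os.path.exists(path):
        with open(path) as f:
            data = json.load(f)
    data[key] = value
    with open(path, "w") as f:
        json.dump(data, f)

def uninstall(app_name, root):
    """Removing the app removes every setting it ever wrote -- no leftovers."""
    path = settings_path(app_name, root)
    if os.path.exists(path):
        os.remove(path)
```

    The trade-off Don Reba raises still applies: if settings live only in the app's own file, they vanish on uninstall, so "document-like" settings a user wants to keep would need to live elsewhere.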


  • The real process...

    Pick the question or get an idea

    Gather information and data

    Interpret and Validate

    Send it off to a 2nd/3rd world country to be implemented

    Interpret and Validate

    Chop features

    Apply Marketing Spin

    A software development process is only as good as its weakest link.  Design doesn't matter when it gets ignored or is misunderstood by those implementing it.  

    Sadly the quality of the products released by Microsoft peaked in about 2003.  I'd suggest looking back to how you structured things before then and compare that to what you're doing now.
