Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

From Idea to Feature: A view from Design


Larry is very appreciative of the reception and comments his post received. Thank you! It is worth noting that we’ve surpassed 2,000 comments and I’ve received an equal amount of email. I am trying to reply as often as I can!

We’re 10 days from the PDC and so we might take a short break from the blog while we practice our demos of Windows 7…we’ll keep an eye on comments for sure and maybe a post or two on the way.  :-)

Let's move "up" in the dev process and look at how we come up with what is in a release and how we take an idea and turn it into a feature.

As we’ve posted on various engineering challenges we’ve often distilled the discussion down to a few decisions, often between two options (make a feature optional or not, add a window management feature one of two ways, etc.) Yet this doesn’t quite get at the challenge of where product definition begins and how we take an idea and turn it into a feature. Most decisions in engineering Windows are not a choice between two options, but rather involve the myriad of considerations, variables, and possibilities we work through before we even get to just a couple of options. This post looks a bit at the path from an idea to a feature.

A common thread we’ve seen in the feedback is to make “everything customizable and everything optional” (not a direct quote of course). Of course, by virtue of providing a platform we aim to offer the utmost in extensibility and customization to those writing to the APIs we provide. There is an engineering reality, though, that customization and extensibility have their cost—performance, complexity, and forward compatibility come to mind. One way to consider this is that if a feature has two “modes” (often enable the new feature or enable the old feature) in one release, then in a follow-up release, if the feature is changed, it potentially has four modes (old+old, old+new, new+old, new+new), and then down the road 8 modes, and so on. The complexity of providing a stable and consistent platform comes with the cost that we aren’t always able to “hook” everything and do have to make practical choices about how a feature should work, in an effort to plan for the future. Designing a feature is also about making choices, tough choices. At the same time we also want to provide a great experience at the core operating system functions of launching programs, managing windows, working with files, and using a variety of peripherals, to name just a few things Windows does. This experience should be one that meets the needs of the broadest set of people across different skill levels and different uses of PCs, while also providing mechanisms to personalize through the user interface and to customize with code. Every release we plan is a blending of fixing things that just don’t work like we all had hoped and developing new solutions to both old and new problems, a blending of features and extensibility, and a blending of better support for existing hardware and support for new hardware.

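As a side note, the arithmetic behind that "modes" example is easy to see in a quick sketch. The snippet below is purely illustrative Python (nothing from the Windows code base); it simply enumerates the combinations that pile up when every change to a feature preserves both its old and its new behavior:

```python
from itertools import product

# Hypothetical illustration: each release in which the feature changes keeps
# both its "old" and "new" behavior, so the combinations that must keep
# working double every time (2, 4, 8, ... = 2**n).
releases = ["release 1", "release 2", "release 3"]
choices_per_release = ["old", "new"]

combinations = list(product(choices_per_release, repeat=len(releases)))
for combo in combinations:
    print("+".join(combo))

print(f"{len(combinations)} combinations to design, test, and keep compatible")
```
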
This post is jointly written by Samuel Moreau, the manager of the user experience design team for the Windows Experience, Brad Weed, Director of User Experience Design and Research for Windows and Windows Live, and Julie Larson-Green, the VP of Program Management for the Windows Experience. Given the number of comments that describe a specific feature idea, we thought it would be good to give you an overview of how we approach the overall design process and how ideas such as the ones you mention flow into our process. Also, for those of you attending the PDC, Sam will be leading a session on the design principles of Windows 7. –Steven

Designing Windows – from idea to feature

In general, we follow a reasonably well-understood approach to product design, but that doesn’t make it easy or “automatic”. Often this is referred to as a "design funnel", where ideas go from concept to prototype to implementation and then refinement. By reading the various design ideas in the comments of Chaitanya’s post on “Starting, Launching and Switching”, you can see how difficult it can be to arrive at a refined feature design. In those comments you can find equally valid, yet somewhat opposite points of view. You can also find comments that I would paraphrase as saying “do it all”. It is the design process that allows us to work through the problem to get from idea to feature in the context of an overall product that is Windows.

From a product design perspective, the challenge of building Windows is the breadth of unique usage of just a single product. In a sense, one of the magic elements of software is that it is “soft” and so you can provide all the functionality to all customers with little incremental cost and little difference in “raw materials” (many comments have consistently suggested we have everything available along with options to choose components in use and we have talked about minimizing the cost when components and features are not used even if they are available). And at the same time, there is a broad benefit to developers when they can know a priori that a given PC has a common set of functions and can take advantage of specific APIs that are known to be there and known to behave a specific way--the platform. This benefit of course accrues to individuals too as you can walk up to any PC and not only have a familiar user experience, but if you want to do your own work, use a specific device, or run a certain program on the PC you can also do that. This breadth of functionality is a key part of the value of a Windows PC. Yet it also poses a pretty good design challenge. Learning, understanding, and acting on the broad set of inputs into designing Windows is an incredibly fun and challenging part of building Windows.

As Larry pointed out, the design and feature selection takes place in his part of the organization (not way up here!). There’s another discussion we will have in a future post about arriving at the themes of the overall release and how we develop the overall approach to a release so that the features fit together, form a coherent whole, and address customer needs in an end-to-end fashion.

We have a group of product designers that are responsible for the overall interaction design of Windows, the sequence and visualization of Windows. Program Managers across the team work with product designers as they write the specifications. Along with designers we have UX Researchers who own the testing and validation of the designs, as we’ve talked about before. The key thing is that we apply a full range of skills to develop a feature while making sure that ownership is clear and end-to-end design is clear. The one thing we are not is a product where there is “one person” in charge of everything. Some might find that to be a source of potential problems, and others might say that a product that serves so many people with such a breadth of features could not be represented by a single point of view (whether that is development, testing, design, marketing, etc.). We work to make sure engineers are in charge of engineering, that the product has a clear definition that we are all working towards implementing, and that the product definition represents the goals across all the disciplines it takes to deliver Windows to customers around the world. And most importantly, with Windows 7 we are making a renewed effort at "end to end" design.

Let’s look at the major phases of product design in Engineering Windows. What we’ll talk about is of course generalized and doesn’t apply to every specific instance. One thing we always say internally is that we’re a learning organization—no process is perfect or done, and we are always looking to make it better as we move through each and every iteration of building Windows.

Throughout this post when we say “we” what this really means is the individuals of each discipline (dev, test, pm, design) working together—there’s no big feature or design committee.

Pick the question or get an idea

We get an idea from somewhere of something to do—it could be big (build UX to support a new input method such as touch), wild (change the entire UI paradigm to use 3D), or an improvement / refinement of an existing feature (multi-monitor support), as some examples. There is no shortage of creative ideas, because frankly, that is the easy part. Ideas flow in from all corners of the ecosystem, ourselves included. We’ve talked a lot about comments and feedback from this blog and that is certainly one form of input. Product reviews, enterprise customers, customer support lines, PC makers, hardware and software developers, blogs, newsgroups, MVPs, and many others have similar input streams into the team.

The key is that working on Windows is really a constant stream of inputs. We start with a framework for the release that says what goals and scenarios we wish to make easier, better, faster. And with that, program management builds up candidate ideas—that is, ideas that could make their way into features. The job of getting a feature “baked” enough falls to program management, and they do this by working across the product with design, development, and testing (as Larry described).

With regard to where ideas come from, what we like to say is that the job of program management is not to have all the great ideas but to make sure all the great ideas are ultimately picked. The best program managers make sure the best ideas get done, no matter where they come from.

Gather information and data

Given any idea, the first step is to understand what data we have “surrounding” the idea. Sometimes the idea itself comes to us in a data-centric manner (customer support incidents) or other times it is anecdotal (a blog).

The first place we look is at the data we already have from real-world usage that would support the development of a hypothesis, refute or support the conventional wisdom, or just shed some light on the problem. The point is that the feature starts its journey by adding more perspectives to the input.

Essentially, we need an objective view that illuminates the hypothesis. We gather this data from multiple sources including end users, customers, partners, and in various forms such as instrumentation, research, usability studies, competitive products, direct customer feedback, and product support.

As many (including us) have pointed out, telemetry data has limitations. First, it can never tell you what a person might have been trying to do—it only tells you what they did. Through usability, research, and observation, we are able to get more at the intent. For example, recall the way we talked about high DPI: the telemetry showed one thing, but the intent was different (and the impact of those choices was unintended). The best way to see this is to remember that a person using a PC is not interested in “learning to use a PC” but is trying to get their own work done (or their own playtime). And when faced with a “problem” the only solutions available are the buttons and menu commands right there—the full solution set is the existing software. Our job is to get to the root of the problem and then either expand the solution set or just make the problem go away altogether.

What about unarticulated needs? The data plus intent shows the “known world” and “known solution space”, but one role we have is to be forward thinking and consider needs or desires that are not clearly articulated by those who don’t have the full-time job of considering all the potential solution spaces. The solution space could potentially be much broader than is readily apparent from the existing and running product—it might involve a rearchitecture, new hardware, or the invention of a new user interface.

A great example of this was mentioned in one of the comments on the taskbar post. The comment (paraphrasing) indicated that the order of icons on the taskbar matters, so sometimes he/she would simply close all the open programs and then restart them just so the programs were in the preferred order on the taskbar. Here the data would look like an odd sequence of launch/exit/launch/exit/launch/launch. And only through other means would we learn why someone would be doing that, and for the most part, if you just walked up without any context and said “how can we make Windows easier” it isn’t likely this would bubble up to the top of the list of “requests”. Thus we see a number of neat things in this one example—we see how the data would not show the intent, the “request” would not be at the top of any list, and the solution might take any number of forms, and yet if solved correctly it could be a pretty useful feature. Above all, in hindsight this is one of those “problems” that seems extraordinarily obvious to solve (and I am sure many of you are saying—“you should have just asked me!”). So we also learn the lesson that no matter what data and information we gather or what design we’re talking about, someone always noticed or suggested it :-).

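To make that concrete, here is a purely hypothetical sketch (Python, with invented event names rather than anything from real Windows instrumentation) of what such a burst of activity might look like in a log, and how a simple heuristic could flag the pattern without ever explaining the intent behind it:

```python
from collections import Counter

# Hypothetical, simplified telemetry: an ordered stream of (event, app) pairs.
# The event kinds and app names are invented for illustration only.
events = [
    ("exit", "mail"), ("exit", "browser"), ("exit", "editor"),
    ("launch", "editor"), ("launch", "browser"), ("launch", "mail"),
]

def looks_like_reorder(stream, window=6):
    """Flag a burst of exits immediately followed by relaunches of the same apps."""
    recent = stream[-window:]
    exited = [app for kind, app in recent if kind == "exit"]
    launched = [app for kind, app in recent if kind == "launch"]
    return len(exited) >= 2 and Counter(exited) == Counter(launched)

print(looks_like_reorder(events))  # True -- but the data still cannot say *why*
```
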
Hypothesize

The next step is where we propose a clear hypothesis – “people would benefit from rearranging icons on the taskbar because positional memory across different sessions will reduce the time to switch applications and provide a stronger sense of control and mastery of Windows”.

What is our hypothesis (in a scientific sort of way) about what opportunity exists or what problem we would solve, what the solution would look like, and why the problem exists in the first place? Part of designing the feature is to think through the problem—why does it exist—and then propose the benefit that would come from solving it. It is important that we have a view of the benefit in the context of the proposed solution. It is always easy to motivate a change because it feels better or because something is broken so a new thing has to be better, but it is very important that we have a strong motivation for why something will benefit customers.

Another key part about the hypothesis is to understand the conventional wisdom around this area, especially as it relates to the target customer segment (end-user, enthusiast, PC maker, etc.) The conventional wisdom covers both the understanding of how/why a feature is a specific way today and also if there is a community view of how something should be solved. There are many historic examples where the conventional wisdom was very strong and that was something that had to be considered in the design or something that had to be considered knowing the design was not going to take this into account—a famous example is the role of keyboard shortcuts in menus that the “DOS” world felt would be required (because not every PC had a mouse) but on the Mac were “unnecessary” because there was always a mouse. Conventional wisdom in the DOS world was that a mouse was optional.

Experiment

For any hypothesis, there are numerous design alternatives. It is at this stage that we cast a broad net to explore various options. We sketch, write scenarios, storyboard, do wireframes, and generate prototypes of varying fidelity. Along the way we are working not just to identify the “best answer” but to tease out the heart and soul of the problem, and to use the divergent perspectives to feed into the next step of validation.

This is a really fun part of the design process. If you walk our hallways you might see all sorts of alternatives in posters on the walls, or you might catch a program manager or designer with a variety of functional prototypes (PowerPoint is a great UI design tool for scenarios and click-thrus that balances time to create with fidelity, and Visio is pretty cool for this as well). Our designers also often mock up very realistic prototypes that we can thoroughly test in the lab.

Interpret and Validate

With a pile of options in front of us we then take the next step of interpreting our own opinions, usability test data and external (to the team) feedback. This is the area where we end up in conversations that, as an example, could go something like this… “Option ‘A’ is better at elevating the discovery of a new feature, but option ‘B’ has a stronger sense of integration into the overall user experience”.

As we all know, at the micro level you can often find a perfect solution to a specific problem. But when you consider the macro level you start to see the pros and cons of any given solution. It is why we have to be very careful not to fall into the trap of relying too heavily on “tests”. The trap here is that it is not often possible to test a feature within the full context of usage, but only within the context of a specific set of scenarios. You can’t test how a feature relates to all potential scenarios or usages while also getting rich feedback on intent. This is why designing tests and interpreting the results is such a key part of the overall UX effort led by our researchers.

A mathematical way of looking at this is the “local min” versus the “global min”. A local min is one you find if you happen to start optimizing at the wrong spot on the curve. A good example of this in software is when, faced with a usability challenge, you develop a new control or new UI widget. It seems perfectly rational and will often test very well, especially if the task asked of the subject is to wiggle the widget appropriately. However, what we’re after is a global optimization, where one can see that the potential costs (in code, quality, and usability) of another widget may erase any potential benefits gained by introducing it. Much has been written about the role of decision theory as it relates to choosing between options, but our challenge with design is the preponderance of qualitative elements.

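For the curious, the local/global distinction is easy to see in miniature. The sketch below is illustrative Python only (nothing to do with Windows code): a greedy descent on a simple curve settles into either the shallow valley or the deeper one depending entirely on where it starts.

```python
# f(x) = x**4 - 3*x**2 + x has a shallow local minimum near x ~ 1.13 and a
# deeper global minimum near x ~ -1.30.
def f(x):
    return x**4 - 3 * x**2 + x

def descend(x, step=0.001, iterations=20000):
    """Greedy descent: keep stepping in whichever direction lowers f."""
    for _ in range(iterations):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            break
    return x

print(descend(2.0))   # settles near  1.13 -- the local minimum
print(descend(-2.0))  # settles near -1.30 -- the global (better) minimum
```
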
Choosing

Ultimately we must pick a design and that choice is informed by the full spectrum of data, qualitative and quantitative.

Given a choice for a design and an understanding of how to implement it and the cost, there is still one more choice—should we do the feature at all. It sounds strange that we would go through all this work and then still maybe not build a specific feature. But like a movie director that shoots a scene that ends up on the cutting room floor, sometimes the design didn’t pan out as we had hoped, sometimes we were not able to develop an implementation plan within reason, or sometimes there were other ideas that just seemed better. And this is all before we get to the implementation, which as Larry pointed out has challenges as well.

We have two tools we use to assist us in prioritizing features and designs. First is the product plan—the plan says at a high level what we “require” the product to achieve in terms of scenarios, business goals, schedule, and so on. Most of the time features don’t make it all the way through prototyping and testing because they just aren’t going to be consistent with the overall goals of the release. These goals are important otherwise a product doesn’t “hang together” and runs the risk of feeling like a “bunch of features”. These high level goals inform us quite a bit in terms of what code we touch and what scenarios we consider for a release.

And second, we have the “principles of design” for the release. These principles represent the language or vocabulary we use and the core values—we often think of the design principles as anthropomorphizing the product: “if Windows were a person then it would be…”. This is the topic of Sam’s talk at the PDC.

As mentioned in the introduction, it isn’t possible to do everything twice. We do have to decide. This could be a whole series of posts—customization, compatibility modes, and so on. We definitely hear folks on these topics and always do tons of work to enable both “tweaking” and “staying put” and at the same time we need to balance these goals with the goals of providing a robust and performant platform and also moving the OS forward. Some of us were involved in Office 2007 and there is a fun case study done by Harvard Business School [note fee associated with retrieving the full text] about the decision to (or not to) provide a “compatibility mode” for Office 2007. This was a choice that was difficult at the time and a few folks have even mentioned it in some comments.

Implement and Integrate

Finally, we build and iterate to refine the chosen solution. Inevitably there are new discoveries in the implementation phase and we adjust accordingly. As we integrate the solution into its place in Windows, that discovery continues. The beta period is a good example of how we continue to expand and learn from usage and feedback. In a Windows beta we are particularly interested in compatibility and real-world performance as those are two aspects of the design that are difficult to validate without the breadth of usage we can get if we do a great beta.

It is important to keep in mind that we intensely follow all the feedback we receive in every form—reviews, blogs, and of course all the telemetry about how the product is used (recognizing that the beta reaches a select group of people).

One of the things we hope to do with the blog, as you might have seen on the IE 8 Blog, is to discuss the evolution of the product in real-time. We’re getting close to this transition and are looking forward to talking more about the design choices we made!

-- Sam, Brad, and Julie

Comments
  • I see the registry a little like a protection against illegal copying, but I know it is wrong in some respects. Every solution has its problems. Separating the registry system from the OS makes it more visible, which is a security weakness, but easier to manage.

    I think (correct me if I'm wrong) Microsoft writes programs not only for users but also to make sure software developers feel secure about writing software that nobody can hack. If the OS has too many security gaps, nobody will write software for it because it's too dangerous.

    There must be a balance between developer security and user freedom. I would never want to develop software on an OS knowing everyone can hack it, and I'm not the only one who thinks like this.

  • One more thing: thank you for talking about some of the aspects I covered in my doc. I can recognize some of my lines.

  • > Do you really think that a single user's reply to a blog could make any difference to W7? (Especially when there is nothing wrong with registry :-)) What were you expecting?

    all users can say: our voices will not change anything.

    and what will they see at some point? that even a "hello world" application will cost a lot, will be very big, and will not work as expected...

    generally, when there is more criticism, it becomes more difficult for the company to say: hey, this system is a big success...

    the shared Registry is a big problem in Windows. Install MS Office - thousands of entries; add a few apps - many more. Try to uninstall them and many entries will still be there. Is that OK? I don't think so...

    the solution is very easy - when applications use the API for saving, reading, and changing the Registry, their keys should go into separate physical files (and this is good, because for example IE will not have access to WMP settings). Very easy to implement. Only some parts of the Registry would be shared (for example, info about devices or extension handling). When application X wants to see settings from application Y, it will need special user permission.

    the same when an application tries to save something to the system directory - it should go into a virtual directory, for example in Program Files. No more writing to the real system directory or winsxs.

    both solutions are a kind of virtualization, already implemented elsewhere (see Sandboxie for example), and they don't need a lot of memory. How many current apps would work with it? 90%?

    In my opinion, it could make people see: hey, this system gives much better security. No need to advertise it. And please don't say that it's difficult or something. Windows 7 needs to add such solutions. People don't want to see another super hyper mega prefetcher or System Restore. They need real security. We were buying new application versions during the migration from Win 3.11 to 95, and the same during the migration from 9x to NT; we can do it again. But give us such good reasons :)

  • > I will appreciate a post from a MS expert on VM's as many comments were made on the idea of virtual boxes for backward compatibility etc. I cant even start imagining all the problems

    there are many virtualization types. I agree that with some of them even running Notepad will need 200 MB of RAM. But with others it won't - if Microsoft implements, for example, something like Sandboxie, you will probably not even notice that the application is virtualized :)

  • @marcinw

    If "keys should go into separate physical files", that's mean these keys are easier to find, then easier to implant to another system or else causing hacking. But your base idea is good, the only thing left is to keep the original registry system for security use, but again, there is less keys, then less search, then security lack.

    Then what? Or all registry are separated on different files to be easier to find, or keep the same registry system for more security, or mix the two systems to have a base security system.

    And with Linux that ignore all Windows security, everything in Windows is visible.

  • Thanks for the post on the design process. A lot of work and consideration certainly goes into determining what gets implemented, and how.

    Out of curiosity, are there any numbers on where the various features come from? I.e., percentages of features that are the result of user telemetry and auto-reported data, features that were first inspired by direct user input, those that addressed a usability issue for people on the team, and those that were just some idea out of nowhere. Data like that, while not of particular import, would be fun and neat to see.

  • @simmans,

    100% of users need less mess in the Windows system: more separation of applications from each other and from the system core. And solutions like the ones proposed by me and others must finally be implemented. This is critical. Without it, Win7 will be only another NT-based system, nothing more (and people like me will start looking for an alternative to it).

    For the 10% of users who want to protect the Registry files (or whatever replaces them) against being read from Linux, for example: they can use already existing solutions like encrypted partitions. Microsoft has already invented that wheel.

  • Partition-level encryption of registry data; I like it. The only restriction is having to stop using the NTFS file system.

    What about HTFS? I just saw it in the Vista RCs. I must go see what's new with it.

  • Some ideas for the Design and Engineering teams that should be implemented in the production loop of the Windows 7 Operating System.

    First of all, I am happy with the install speed of Windows 7; fast expansion of installed features is important for the End User. My first install of Vista took three hours, whereas my experience with this Alpha version of Windows 7 took only 35 minutes from the insertion of the DVD to first boot. I am using a Dell Inspiron 8000 with a 799 MHz processor and 512 MB of memory, and I am running all the components with what I consider to be impressive performance.

    One thing that should be taken into consideration is which features the End User wants to install from the onset of installation.

    Also when dealing with Internet connectivity there could be an IPV6 type of connectivity wizard to initialize a wireless connection during the setup phase for those of us that have opted out of wired networks.

    Also, in the case of catastrophic failure, there should be a recovery facility that allows the End User to produce a recovery DVD/CD at first boot. In the instance of corruption or failure of system files, there should be an easily accessed shell to replace corrupted files before the logon screen; it should be a GUI, intuitive, and also internet-connected, to pull user files from the online storage facility that Bit-Locker was intended for. Most people do not know about selectively replacing corrupt files by using another computer to modify the disk contents. But at this point I feel that the recovery console is too convoluted for the average user.

    I know it is also easy to reverse-engineer system files and key features to work in other environments! It is evident that key features should have stronger pointers to the kernel to defeat reverse-engineering. I found that this has been implemented in Windows 7 to a stronger degree than previously, but I have made several easy hacks to make the features work elsewhere.

    The performance loop at first boot is self-defeating; let people choose if they want slower performance and more eye candy. This kernel can handle it even on my old beast of a computer… I have implemented Glass on this card with XP Pro and also Red Hat 9/Fedora Core 7.

    Now, off on a tangent: you must implement better port emulation for new computers. I use CCD cameras for imaging, and some still use serial/parallel ports to hook into the system. New computers are cheap; scientific equipment is a little different, but it still needs support from Windows as an OS.

    Driver installs for old stuff can be accomplished through a wizard that takes drivers and bundles them in the new driver installation facility.

    Thanks for letting developers who don't work for MS give some feedback in this process.

    Lorne L. Reap

    lornereap@hotmail.com

    p.s.

    @Domenico

    Not really the forum for this; be polite to all in the development loop.

    Lorne

  • First of all, as Vista's successor's name was announced, everyone's hopes for Microsoft increased again.

    The popularity of the Windows operating system (Win7) can be seen just by typing "Windows 7" into Google Search!!!

    This is proof of why people (everyone included) are discussing Windows 7 all over the web. I am watching every blog (about Win7) very closely. Each discusses some pros and cons. Many have claimed that Microsoft's popularity is decreasing. Unfortunately, 90% of the blogs are comparing Windows 7 to Vista.

    Here I will try to summarize some points from these blogs:

    Features lacking in Vista:

    ->Lack of Microsoft's ambitious project WinFS.

    ->Absence of MinWin.

    ->XP's security flaws occurred during Vista's early stages.

    ->Failed to please geeks.

    ->Marketing strategy.

    ->Virtualisation.

    ->Cutting many features that were promised for the final release.

    ->Compatibility problems.

    ->Hardware requirements.

    ->No special software was written for Vista!!!!

    ->Of course, XP's tough competition!!!!!!

    Though we all knew these problems.

    The expectations for Windows 7 are said to be:

    ->WinFS must be included.

    ->MinWin kernel.

    ->Please the geeks.

    ->Top-notch performance, security, features.

    ->A different marketing strategy.

    ->Eye-candy UI.

    ->Customization.

    ->Better UAC, multi-monitor taskbar support.

    ->The cutting of Photo Gallery and email is welcome.

    ->Better compatibility, etc.

    There are a lot of expectations for Win7.

    Most are saying that Windows 7 is make-or-break for Microsoft. Some may think Win7 is the savior of Microsoft. Though Microsoft already knows all the points above, I will point this out again.

    What remains is simply to give people a reason to upgrade from XP to Windows 7. We are all hoping Win7 will definitely please us.

    This is a great chance for Steven (and Microsoft) to show their critics that Microsoft still has wind in its sails.

    We are all with you; try to fulfill folks' expectations. Please, Steven, show your amazing work through Windows 7 to all of us.

    Thank You.

    - Mayur Prayag

  • @Mayur Prayag -- well, that's a tall order! All I can say for sure is that the team is amazing and everyone is working super hard. Next week at the PDC is the first look at the Windows 7 project, and we're all excited to reach this milestone.

    One week to go!

    --Steven

  • Digital signing is a new feature in Vista.

    Even opentype fonts have digital signatures.

    Are you going to restrict end users from using any unsigned programs and fonts in the future, even if they created them?

    Where could such decisions come from?

  • Well, the comment about registry was meant tongue-in-cheek, as the smiley attests. :-)

  • I read this blog, and I think the core problem with the Windows design lies in the area of borders and titles. It seems the creative team works too hard to make something that will be "WOW", but as a result we have some solutions that are almost, but not quite, right (for example, the Aero interface is nice, but what is it for?). First of all, Windows has borders that are too big. For me, 1pt is enough (or, in an extreme version, no borders at all, with rounded corners on each window). Second, the title bar is underused. What do I mean? Have you had any experience with the new Photoshop CS4? They take advantage of this space and make it more meaningful. In that line you can find all the functions, and it works well: you have more active space to use in the window.

    My second idea is about how active windows are shown on the desktop. The main idea is that the active window is at the center of the action, with inactive windows still there, but more like in a 3D environment. When I open, for example, Computer, this window shows up in the middle of my desktop. When I open some other window, the previous window goes back and becomes smaller (not too small), with a smooth animation (transition), and in the middle of the desktop you find your newly opened window. Thus, Flip 3D would be more useful: rotation between all open windows would be like a carousel, in front of you instead of in some "second dimension". Of course, when you look at your desktop, you would see most of the open windows or just part of them, depending on the number of windows open at the time. Windows that are minimized wouldn't be in this preview, so Flip 3D would keep its specific role as usual. This organization isn't too important, though. Redesign the title bar and borders, and it will be much better and more functional.

  • I hope (we all hope) we get a better-coded Windows: a lightweight, fast-performing OS with no stupid freeze-ups and more intelligence in the different scenarios of new program install, uninstall, and compatibility.

    Process Explorer should be turned into the Task Manager so it would have more settings and options, like restarting an application, seeing the vendor, and so on... and getting a look at what is running at system startup. Some kind of boot-time defragment should be added for optimization and correct pagefile placement.

    Programs should not put files into system folders and Documents and Settings, because during uninstall it is hard to keep track of the junk files left behind by your apps. Vista also lacked driver updating, because it doesn't offer you the latest ATI or NVIDIA or Realtek HD Audio drivers, which is really bad for people who don't know the reasons for bad game performance. Server computers should get warnings when their PCs need some kind of update, or computers in schools will stay outdated. UI and themes should have options for customization and a forced verify before install.

    I hope Vista's core doesn't put too many limits on new kinds of software and computer behavior for Windows 7 development.

    English isn't my native language; I hope these suggestions get acknowledged and considered, and that we get a friendly Windows environment for everyone.

    Will be waiting to get hands on Win7 to try what you have cooking so far :P
