Welcome to our blog dedicated to the engineering of Microsoft Windows 7
One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And judging from your blogs, a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. So many folks came to know that if your machine was slow you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is, if you turn your machine off every night then it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little more detail about the process, the perceptions and reality of performance gains, and the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. --Steven
In this blog, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is and when it matters.
Within the storage and memory hierarchy comprising the hardware pipeline between the hard disk and CPU, hard disks are relatively slower and have relatively higher latency. Read/write times from and to a hard disk are measured in milliseconds (typically, 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can compute data in less than 10 nanoseconds (on average), once the data is in the L1 memory cache of the processor.
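The gap described above is easy to put in concrete terms. The sketch below is a back-of-the-envelope calculation using the figures quoted in this post (a few milliseconds per disk access, roughly 10 nanoseconds per CPU operation on L1-cached data); the specific numbers are assumed typical values, not measurements.

```python
# Back-of-the-envelope comparison of disk latency vs. CPU speed,
# using the rough figures quoted above (assumptions, not measurements).
disk_access_s = 3e-3   # ~2-5 ms per random disk access; take 3 ms
cpu_op_s = 10e-9       # ~10 ns per CPU operation on L1-cached data

ratio = disk_access_s / cpu_op_s
print(f"One random disk access costs ~{ratio:,.0f} CPU-operation times")
```

With these numbers, the CPU could perform on the order of 300,000 operations in the time a single random disk access completes, which is why a fast CPU sits relatively idle waiting on the disk.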
This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.
In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.
Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:
Both rules have rationales that are reasonably simple to understand:
File systems such as NTFS work quite hard to try and satisfy the above rules. As an example, consider the case when I listen to the song “Hotel California” by the Eagles (one of my all time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try to find enough contiguous free space to place the 5MB of data “together” on the disk, since logically related data (e.g. contents of the same file or directory) is more likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used) which, in turn, will minimize mechanical movement of hard disk drive components and also ensure fewer issued I/Os.
Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.
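The delete-then-allocate pattern described above can be shown with a toy model. The sketch below treats a disk as 24 blocks and allocates files first-fit into contiguous runs of free space; deleting a file leaves a “hole”, and a later file that doesn’t fit in any single hole must be split into fragments. This is purely illustrative — real NTFS allocation is far more sophisticated.

```python
# Toy model of free-space fragmentation: a 24-block disk, first-fit
# allocation into contiguous free runs. (Illustrative only -- NTFS
# allocation strategies are far more sophisticated.)
DISK_SIZE = 24
disk = [None] * DISK_SIZE  # None = free block

def free_runs(disk):
    """Return (start, length) of each contiguous run of free blocks."""
    runs, start = [], None
    for i, b in enumerate(disk + ["END"]):  # sentinel closes trailing run
        if b is None and start is None:
            start = i
        elif b is not None and start is not None:
            runs.append((start, i - start))
            start = None
    return runs

def allocate(disk, name, size):
    """First-fit allocation; splits across holes when no run is big enough.
    Returns the number of fragments the file ended up in."""
    remaining, fragments = size, 0
    for start, length in free_runs(disk):
        take = min(length, remaining)
        for i in range(start, start + take):
            disk[i] = name
        remaining -= take
        fragments += 1
        if remaining == 0:
            return fragments
    raise RuntimeError("disk full")

def delete(disk, name):
    for i, b in enumerate(disk):
        if b == name:
            disk[i] = None

# Fill most of the disk with three files, then delete the middle one.
allocate(disk, "A", 8)   # blocks 0-7
allocate(disk, "B", 5)   # blocks 8-12
allocate(disk, "C", 7)   # blocks 13-19; blocks 20-23 remain free
delete(disk, "B")        # leaves a 5-block hole between A and C

# A new 7-block file no longer fits in any single free run, so it is
# split: 5 blocks into the hole, 2 blocks at the end of the disk.
frags = allocate(disk, "D", 7)
print("D was stored in", frags, "fragments")
```

Running this, file D lands in two fragments even though the disk has enough total free space, which is exactly the “fragmentation of available free space” condition described above.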
So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:
The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.
Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism, whether, as in Windows, it’s a separate, schedulable task or, whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.
Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often. Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. It got to the point that some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and % disk utilization for the average consumer is likely to be lower, causing relatively less fragmentation. Further, computers can utilize more RAM cheaply (often, enough to be able to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness. Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate fragmentation impact and deliver better responsiveness.
So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember various products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.
The incidence and extent of fragmentation in average home computers varies quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:
Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.
In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed. In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag are when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit. In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
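The seek-versus-sequential-read tradeoff behind the 64MB cutoff can be sketched with rough numbers. The values below (average seek plus rotational latency, sustained transfer rate) are assumed round figures for a circa-2009 hard disk, not data from the defrag team:

```python
# Why "large enough" fragments don't matter: once a fragment takes far
# longer to read sequentially than it takes to seek to it, the seek is
# noise. Assumed round numbers for a circa-2009 hard disk.
seek_time_s = 0.012        # ~12 ms average seek + rotational latency
transfer_mb_per_s = 70.0   # sustained sequential read rate

def seek_overhead(fragment_mb):
    """Fraction of total read time spent seeking to one fragment."""
    read_s = fragment_mb / transfer_mb_per_s
    return seek_time_s / (seek_time_s + read_s)

for size_mb in (0.5, 4, 64):
    print(f"{size_mb:>5} MB fragment: {seek_overhead(size_mb):.1%} seek overhead")
```

With these assumptions, the seek consumes over half the read time for a half-megabyte fragment but only around one percent for a 64MB fragment — at which point stitching fragments together buys essentially nothing while still costing defrag I/O.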
Note that a concept that is relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex, and its real impact requires comprehensive evaluation of the entire system to accurately address. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware & software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing towards system responsiveness that must be considered beyond a simple count of existing fragments.
The defragmentation engine and experience in Windows 7 has been revamped based on continuous and holistic analysis of impact on system responsiveness:
In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively. Further, defragmentation can be safely terminated any time during the process and on all volumes very simply (if required). The two screenshots below illustrate the ease-of-monitoring:
In Windows XP, defragmentation had to be a user-initiated (manual) activity, i.e., it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time. Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel with no more waiting for one volume to be defragmented before initiating the same operation on some other volume! The screenshot below shows how defragmentation can be concurrently scheduled on multiple volumes:
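The benefit of the Windows 7 change is easy to see with a toy scheduling sketch: serially, total time is the sum of all per-volume passes, while in parallel it approaches the longest single pass. The “work” below is a placeholder sleep standing in for the real defrag engine, and the volume list is invented for illustration.

```python
# Toy sketch: Vista-style serial defrag vs. Windows 7-style parallel
# defrag across volumes. The sleep is a stand-in for real work.
import time
from concurrent.futures import ThreadPoolExecutor

VOLUMES = {"C:": 0.3, "D:": 0.3, "E:": 0.3}  # volume -> simulated seconds

def defrag_volume(volume, duration):
    time.sleep(duration)   # placeholder for the actual defrag pass
    return volume

# Serial (one volume at a time): total time is the sum of all passes.
start = time.perf_counter()
for vol, d in VOLUMES.items():
    defrag_volume(vol, d)
serial_s = time.perf_counter() - start

# Parallel (Windows 7): total time approaches the longest single pass.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda kv: defrag_volume(*kv), VOLUMES.items()))
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s, parallel: {parallel_s:.2f}s")
```

With three equal volumes the parallel run finishes in roughly a third of the serial time, which is the intuition behind removing the one-volume-at-a-time restriction.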
Among the other changes under the hood in Windows 7 are the following:
Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch free listening to the Eagles :-).
Rajeev and Matt
Many posts start with a thank you and I want to start this post with an extra special thank you on behalf of the entire Windows team for all the installs and usage we are seeing of the Windows 7 Beta. We’ve had millions of installations of Windows 7 from which we are receiving telemetry, which is simply incredible. And from those who click on the “Send Feedback” button we are receiving detailed bug reports and of course many suggestions. There is simply no way we could move from Beta through Final Release of Windows 7 without this type of breadth of coverage and engagement from you in the development cycle. There’s been such an incredible response, with many folks even blogging about how they have moved to using Windows 7 Beta on all their machines and have been super happy. The question we get most often is “if the Beta expires in August what will I do—I don’t want to return to my old [sic] operating system.” For a Beta release, that is quite a compliment and we’re very appreciative of such a kind response.
This post is about the path from where we are today, Beta, to our RTM (Release To Manufacturing), building on the discussion of this topic that started at the PDC. This post is in no way an announcement of a ship date, change in plans, or change in our previously described process, but rather it provides additional detail and a forward looking view of the path to RTM and General Availability. The motivation for this, in addition to the high level of interest in Windows 7, is that we’re now seeing how releasing Windows is not something that Microsoft does “solo”, but rather is something that we do as one part of the overall PC ecosystem. Obviously we have a big responsibility to do our part, one we take very seriously of course. The last stages of a Windows release are a partnership across the entire ecosystem working to make sure that the incredible variety of choices you have for PCs, software, and peripherals work together to bring you a complete and satisfying Windows 7 experience.
The next milestone for the development of Windows 7 is the Release Candidate or “RC”. Historically the Release Candidate has signaled “we’re pretty close and we want people to start testing the release, especially because all the features are done.” As we have said before, with Windows 7 we chose a slightly different approach which we were clear up front about and are all now experiencing together and out in the open. The Pre-Beta from the PDC was a release where we said it was substantially API complete and even for the areas that were not in the release we detailed the APIs and experience in the sessions at the PDC. At that time we announced that the Beta test in early 2009 would be both API and feature complete, widely available, and would be the only Beta test. We continued this dialog with our hardware partners at WinHEC. We also said that many ecosystem partners including PC makers, software vendors, hardware makers will, as has been the case, continue to receive interim builds on a regular basis. This is where we stand today. We’ve released the feature complete Beta and have made it available broadly around the world (though we know folks have requested even more languages). As a development team we’re doing just what many of you do, which is choosing to run the Beta full time on many PCs at home and work (personally I have at least 9 different machines running it full time) and we’re running it on many thousands of individuals’ machines inside Microsoft, and thousands of machines in our labs as well.
All the folks running the Beta are actively contributing to fixing it. We’re getting performance telemetry, application compatibility data, usage information, and details on device requirements among other areas. This data is very structured and very actionable. We have very high-bandwidth relationships with partners and good tools to help each other to deliver a great experience. One thing you might be seeing is that hardware and software vendors might be trying out updated drivers / software enhanced for Windows 7. For example, many of the anti-virus vendors already have released compatibility packs or updates that are automatically applied to your running installation. You might notice, for example, that many GPU chipsets are being recognized and Windows 7 downloads the updated WDDM 1.1 drivers. While the Windows Vista drivers work as expected, the new 1.1 drivers provide enhanced performance and a reduced memory footprint, which can make a big difference on 1GB shared memory machines. You might insert a device and receive a recently updated version of a driver as I did for a Logitech QuickCam. Another example some of you might have seen is that the Beta requires an updated version of Skype software currently in testing. When you go to install the old version you get an error message and then the problem and solutions user interface kicks in and you are redirected to the Beta site. This type of error handling is deployed in real time as we learn more and as the ecosystem builds out support. It is only because of our partnerships across the ecosystem that such efforts are possible, even during the Beta.
Of course, it is worth reiterating that our design point is that devices and software that work on Windows Vista and are still supported by the manufacturer will work on Windows 7 with the same software. There are classes of software and devices that are Windows-version specific for a variety of reasons, as we have talked about, and we continue to work together to deliver great solutions for Windows 7. The ability to provide people the vast array of choices and the openness of the Windows platform make all of this a massive undertaking. We continue to work to improve this while also making sure we provide the opportunities for choice and differentiation that are critical to the health and variety of the overall ecosystem. This data and the work we’re doing together with partners is the critical work going on now to reach the Release Candidate phase.
We’re also looking carefully at all the quality metrics we gather during the Beta. We investigate crashes, hangs, app compat issues, and also real-world performance of key scenarios. A very significant portion of our effort from Beta to RC is focused exclusively on quality and performance. We want to fix bugs experienced by customers in real usage as well as our broad base of test suites and automation. A key part of this work is to fix the bugs that people really encounter and we do so by focusing our efforts on the data we receive to drive the ordering and priority of which bugs to fix. As Internet Explorer has moved to Release Candidate, we’ve seen this at work and also read about it on the IE Blog.
Of course the other work we’re doing is refining the final product based on all the real-world usage and feedback. We’ve received a lot of verbatim feedback regarding the user experience—whether that is default settings, keyboard shortcuts, or desired options to name a few things. Needless to say just working through, structuring, and “tallying” this feedback is a massive undertaking and we have folks dedicated to doing just that. At the peak we were receiving one “Send Feedback” note every 15 seconds! As we’ve talked about in this blog, we receive a lot of feedback where we must weigh the opinions we receive because we hear from all sides of an issue—that’s to be expected and really the core design challenge. We also receive feedback where we thought something was straightforward or would work fine, but in practice needed some tuning and refinement. Over the next weeks we’ll be blogging about some of these specific changes to the product. These changes are part of the process and part of the time we have scheduled between Beta and RC.
So right now, every day we are researching issues, resolving them, and making sure those resolutions did not cause regressions (in performance, behavior, compatibility, or reliability). The path to Release Candidate is all about getting the product to a known and shippable state both from an internal and external (Beta usage and partner ecosystem readiness) standpoint.
We will then provide the Release Candidate as a refresh for the Beta. We expect, based on our experience with the Beta, a broad set of folks to be pretty interested in trying it out.
With the RC, this process of feedback based on telemetry then repeats itself. However at this milestone we will be very selective about what changes we make between the Release Candidate and the final product, and very clear in communicating them. We will act on the most critical issues. The point of the Release Candidate is to make sure everyone is ready for the release and that there is time between the Release Candidate and our release to PC makers and manufacturing to validate all the work that has gone on since the pre-Beta. Again, we expect very few changes to the code. We often “joke” that this is the point of lowest productivity for the development team because we all come to work focused on the product but we write almost no code. That’s the way it has to be—the ship is on the launch pad and all the tools are put away in the toolbox to be used only in case of the most critical issues.
As stated up front, this is a partnership and the main thing going on during this phase of the project is really about ecosystem readiness. PC makers, software vendors, hardware makers all have their own lead times. The time to prepare new products, new configurations, software updates, and all the collateral that goes with that means that Windows 7 cannot hit the streets (so to speak) until everyone has time to be ready together. Think of all those web sites, download pages, how-to articles, training materials, and peripheral packages that need to be created—this takes time and knowing that the Release Candidate is the final code that we’re all testing out in the open is reassuring for the ecosystem. Our goal is that by being deliberate, predictable, and reliable, the full PC experience is available to customers.
We also continue to build out our compatibility lists, starting with logo products, so that our http://www.microsoft.com/windows/compatibility site is a good resource for people starting with availability. All of these come together with the PC makers creating complete “images” of Windows 7 PCs, including the full software, hardware, and driver loads. This is sort of a rehearsal for the next steps.
At that point the product is ready for release and that’s just what we will do. We might even follow that up with a bit of a celebration!
There’s one extra step which is what we call General Availability or GA. This step is really the time it takes literally to “fill the channel” with Windows PCs that are pre-loaded with Windows 7 and stock the stores (online or in-person) with software. We know many folks would like us to make the RTM software available right away for download, but this release will follow our more established pattern. GA also allows us time to complete the localization and ready Windows for a truly worldwide delivery in a relatively small window of time, a smaller window for Windows 7 than any previous release. It is worth noting that the Release Candidate will continue to function long enough so no one should worry and everyone should feel free to keep running the Release Candidate.
So to summarize briefly:
The obvious question, given that the Pre-Beta was October 28, 2008, and the Beta was January 7th, is: when are the Release Candidate and RTM? The answer is forthcoming. We are currently evaluating the feedback and telemetry and working to develop a robust schedule that gets us the right level of quality in a predictable manner. Believe me, we know many people want to know more specifics. We’re on a good path and we’re making progress. We are taking a quality-based approach to completing the product and won’t be driven by imposed deadlines. We have internal metrics and milestones and our partners continue to get builds routinely so even when we reach RC, we are doing so together as partners. And it relies, rather significantly, on all of you testing the Beta and our partners who are helping us get to the finish line together.
Shipping Windows, as we hoped this shows, is really an industry-wide partnership. As we talked about in our first post, we’re promising to deliver the best release of Windows we possibly can and that’s our goal. Together, and with a little bit more patience, we’ll achieve that goal.
We continue to be humbled by the response to Windows 7 and are heads down on delivering a product that continues to meet your needs and the needs of our whole industry.
--Steven on behalf of the Windows 7 team
Happy New Year! The following post continues our discussion of fundamentals with a focus on power management. Power Management (or energy efficiency) is something that every contributor to the PC Ecosystem must always address—the energy efficiency of a running PC is limited by the weakest component. In engineering Windows 7 we had an explicit focus on the energy usage patterns of the running system and will continue to work with hardware and software makers to realize the collective benefit of all of this work. While we talk about the balancing of needs in every area, energy consumption is probably the most easily visualized—when we test running systems we connect them to power meters and watch a very clear number change as we run tests. (If you’ve seen the film Apollo 13 then you’ve seen a similar (albeit much more mission critical) struggle with a power budget.) This post is by Dean DeWhitt of the program management team on our Kernel team. --Steven PS: Quite a few of us are at CES this week!
Energy efficiency is one of the most active topics in computing today. As evidence, consider that processor and chipset vendors are marketing products on “performance per watt”, instead of just processor clock frequency and benchmark performance. Perhaps you have seen a press release for one of the many industry consortiums focused on “Green Computing”—reducing the power consumption and environmental impact of computing. Finally, battery life continues to be a major purchasing and usability factor for mobile PCs. These related energy efficiency efforts in the PC industry result in an ever-increasing interest in how Windows manages power.
In engineering Windows 7, our goal is to deliver the capabilities and features users want from a Windows PC while reducing power consumption over previous releases. Windows already provides a rich set of energy saving features, including the ability to turn off the display and automatically put the system to sleep when the user is not interacting with the computer. For Windows 7, we are building upon the investments in these areas by extending the existing capabilities and focusing on reducing power consumption when the system is idle. Although Windows is responsible for managing the power state of many devices, including the processor, hard drive and display, the remaining devices and software running on the computer have just as much (if not more) impact on power consumption and battery life. This is a challenge for everyone contributing to the PC experience.
When we talk about energy efficiency and power consumption, we like to break down the problem area into 3 main components:
Realizing great energy efficiency from a Windows PC requires efforts in each of these areas. A problem with any single component in any area can have a significant impact on power consumption. Thus a platform-wide approach to energy efficiency, paying special attention to each component, is required.
The base hardware platform is really dictated by the system manufacturer. The customer gets the ultimate choice when they buy a system—the customer can buy a system with ultra-efficient hardware components or can buy a system with components that favor performance over power consumption. There are desktop and mobile PCs in all kinds of form factors, with varying capabilities and power consumption levels. Some mobile PCs have a normal 3 or 6-cell battery, while others have an extended 9-cell battery or another external battery that can be added to the computer. The challenge for Windows is to be energy efficient across the wide range of hardware in the Windows ecosystem. Looking at a modern laptop, here is where the power goes:
Desktops will have a similar power distribution although higher in watts. The display also accounts for a large share of the energy consumed in using your desktop PC.
The Windows operating system can have just as big an influence as any other component in the platform. In engineering Windows 7 our goal is to make sure Windows provides a great foundation and energy saving opportunities within the operating system starting with configuration of power policy settings.
The first place most users encounter Windows power management is through Power Options in control panel, or the battery meter on a mobile PC. For as long as Windows has had power management, Windows has had power schemes or power plans. The power plans allow you to easily change from one set of power settings to another, depending on your preferences.
Within a power plan, you can change a variety of Windows power-saving features, including inactivity timers for turning off the display, automatically putting the system to sleep or even creating a new custom power plan for the exact settings you want. The display and sleep idle features are very important for power savings and battery life. As above, the display can consume approximately 40% of the power budget on the typical mobile PC and anywhere from 30-100+ Watts on a desktop PC.
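The battery-life arithmetic behind those display timers is worth making concrete. The numbers below are assumed values for a typical mobile PC of this era (battery capacity, total draw, and a display share of roughly 40%) chosen to match the percentages quoted in the post, not measurements:

```python
# Rough battery-life arithmetic for the display power figures above.
# All values are assumed typical numbers, not measured data.
battery_wh = 56.0   # ~56 watt-hour (6-cell) battery
system_w = 15.0     # total system draw, display at full brightness
display_w = 6.0     # display share: ~40% of the power budget

baseline_h = battery_wh / system_w                     # no power savings
dimmed_h = battery_wh / (system_w - display_w / 2)     # backlight halved
display_off_h = battery_wh / (system_w - display_w)    # idle timeout fired

print(f"full brightness: {baseline_h:.1f} h")
print(f"dimmed:          {dimmed_h:.1f} h")
print(f"display off:     {display_off_h:.1f} h")
```

Under these assumptions, simply letting the display idle timeout fire stretches the same battery from under four hours to over six, which is why the display timers are the first knob worth tuning in a power plan.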
PC OEMs, especially makers of laptops, will often develop a custom set of power schemes that work to take advantage of differentiated hardware and unique software available on a specific model. So often you will see power schemes that carry the name of your PC OEM in the title. These have been developed by the OEM who is just as committed to energy efficiency.
Quick tips: The easiest way to save power on a desktop PC is to reduce the display idle timeout to something very aggressive, such as 2 or 5 minutes. If you have a screen saver enabled, disable it to allow the display to turn off. On a mobile PC, the easiest way to extend battery life is to reduce the brightness of the display in addition to the steps above. Also note that many of the new all-in-one machines use laptop components and thus from a power management perspective look like laptops.
Windows manages the processor performance and changes it dynamically based on the current usage to provide performance boost when required and conserve power based on the current workload. For example, when the system is mostly idle, such as when I’m typing this blog post, there is no need to be running the processor in the maximum performance mode, instead the processor voltage and frequency can be reduced to a lower value to save power. Similarly, the hard disk drive and a variety of other devices can be placed in low-power modes or turned off completely to save power when not in use.
For Windows 7, we’re refining the user experiences for power management, focusing on reducing idle power consumption and supporting new device power modes.
There are two reasons to optimize idle power consumption on the system. First, there are various times throughout the day when the PC is idle, and the more the system gets to idle and stays idle, the less power it uses. Second, idle power consumption is the ‘base’ power consumption for all other workloads. A system that consumes 15W at idle will consume additional power on top of that while in use for other workloads. By reducing the idle power consumption on the platform we will improve most other scenarios as well.
The first step in reducing idle power is optimizing the amount of processor, memory and disk utilization. Reducing processor utilization is the most important, because the processor has a wide range of power consumption. When truly idle, the processor can consume as little as 100-300mW; when fully busy, it can consume up to 35W. This large range means that even small amounts of processor activity can have a significant impact on overall power consumption and battery life. There are several areas of investment in Windows 7 that help reduce processor utilization and thereby enable longer periods in which the processor can enter low-power modes. One of these investments is in the services running on the platform: having services start only when they are required, which we refer to as “Trigger-Start”. While these services are efficient and have minimal impact individually, the additive effect of several services can add up. We are looking at smart ways to manage these services within Windows, while also making our investments in this area extensible for others who are writing services, so they can take advantage of this infrastructure. (Also note these are the same features that contribute to improvements in boot time.)
To further help reduce idle power, we are focusing on core processor power management improvements. Windows scales processor performance based on the current amount of utilization, and making sure Windows only increases processor performance when absolutely required can have a big impact on power consumption.
We have made several investments in the area of device power management including enhancements to USB device classes to enable selective suspend across a broad range of devices including audio, biometrics, scanners, and smart cards. These investments available in Windows 7 enable more energy efficient PC designs. We have also invested in improvements to power management for networking devices, both wired and wireless.
While many of our investments in the core infrastructure improve energy efficiency across several scenarios, in Windows 7 we also focused on several key customer scenarios to identify resource utilization improvements that extend battery life on mobile platforms. One of the scenarios we identified was media playback. The optimizations for DVD playback include reducing processor and graphics utilization, audio improvements, and optical disk drive enhancements. These improvements are already paying off, showing a significant increase in battery life across a broad range of mobile platforms, which we demonstrated at the WinHEC conference.
Graphics devices, USB devices, device drivers, background services and installed applications are all extensions to Windows. Large improvements in power consumption and energy efficiency can be realized by improving the efficiency of platform extensions.
For example, consider a single USB device that does not support Selective Suspend. That USB device itself may have very low power consumption (e.g., a fingerprint reader), but until that device enters the suspend state, the processor and chipset must poll the device at a very high frequency to see if there is new data. That polling prevents the processor from entering low power idle states, and on a typical business-class notebook reduces battery life by 20-25%.
Devices are not the only area that require efforts for great energy efficiency. Application and service software can also have a big impact on power consumption. Take for example an application that increases the platform timer resolution using the timeBeginPeriod API. The platform timer tick resolution will be increased and the processor will not be able to efficiently use low power idle modes. We have observed a single application that keeps the timer resolution increased to 1ms can have up to a 10% impact on battery life on a typical notebook PC.
We’re committed to helping improve the energy efficiency of Windows platform extensions by working closely with our partners. The strategy we’re employing is to provide rich tools to identify energy efficiency problems in hardware and software. For Windows 7, we’ve added a new inbox utility that provides an HTML report of energy efficiency issues—a “Top 10” checklist of power problems. If you want to try it out on Windows 7, run powercfg /energy at an elevated command prompt. Be sure to close any open applications and documents before running powercfg—this utility is designed to find energy efficiency problems when the system is idle. powercfg with the /energy parameter can detect USB devices that are not suspending and applications that have increased the platform timer resolution.
For more advanced analysis, we have provided the Windows Performance Toolkit. The Performance Toolkit http://www.microsoft.com/whdc/system/sysperf/perftools.mspx makes it very easy for software developers to observe the resource utilization of their applications, resolve performance bottlenecks and identify issues impacting energy efficiency.
So far, we have been talking about how to save power while the PC is ON. But, there are power savings to gain by entering low power modes when the PC is not in use. Many users simply Shut Down their computer when it is not in use, yet others use Sleep and sometimes Hibernate on mobile PCs. Windows features each of these power-saving modes so you can choose the right mode for how you use the system:
Using an example desktop PC, we measured power consumption for Sleep, Hibernate, Shut Down and the basic ON state, with just the desktop shown and no open programs. We also measured resume latency—the amount of time to get the system back to the ON state.
The chart makes it pretty clear why we focus on Sleep reliability and performance, and encourage most people to use Sleep when they are not using their computer. Sleep consumes nearly the same amount of power as Shut Down, but resumes the system in less than 2 seconds instead of going through the boot process. You can also see that boot takes a significant amount of power, so when considering whether to turn off your machine to save power or to put it into a low-power state, think about how long your machine will be out of use. Nevertheless, as we’ve talked about in previous posts, boot (and shutdown) are obviously very important performance scenarios as we engineer Windows 7.
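To make the trade-off concrete, here is a back-of-the-envelope sketch with illustrative, made-up numbers (real figures vary widely by machine): shutting down saves a little power relative to sleep, but a full boot costs extra energy up front, so shutdown only wins if the machine stays off long enough.

```python
# Illustrative sketch only; all numbers below are assumptions, not
# measurements from the chart above.
SLEEP_W = 1.5          # assumed power draw while asleep (watts)
SHUTDOWN_W = 1.0       # assumed power draw while shut down (watts)
BOOT_ENERGY_J = 1800   # assumed extra energy spent on a full boot (joules)

# Shutting down saves (SLEEP_W - SHUTDOWN_W) watts while the machine is
# off, but costs BOOT_ENERGY_J when it comes back. Break-even time:
break_even_hours = BOOT_ENERGY_J / (SLEEP_W - SHUTDOWN_W) / 3600
print(break_even_hours)  # 1.0 -> shutdown only pays off past ~1 hour off
```

Under these assumed numbers, if the machine will be unused for less than about an hour, Sleep is the better choice even on pure energy grounds, before considering the 2-second resume.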
We are committed to continuously improving the energy efficiency of Windows PCs, and have made significant improvements to core platform power management for Windows 7, as well as tools to identify where power is consumed. We still have more work to do, and look forward to our upcoming Beta release and monitoring incoming CEIP telemetry for energy efficiency and power management results. Of course we continue to work very closely with the other members of the ecosystem as we all have much to contribute to energy efficiency—from the manufacturing, usage, and end of life of a PC, software, and peripherals.
We’re busy going through tons of telemetry from the many people around the world who have downloaded and installed the Windows 7 beta. We’re excited to see so many of you kicking the tires. Since most folks on the beta are well-versed in the hardware they use and very tuned into the choices they make, we’ve received a few questions about the Windows Experience Index (WEI) and how it has been changed and improved in Windows 7 to take into account new hardware available for each of the major classes in the metric. In this post Michael Fortin returns to dive into the engineering details of the WEI.
The WEI was introduced in Windows Vista to provide one means across PCs to measure the relative performance of key hardware components. Like any index or benchmark, it is best used as a relative measure and should not be used to compare one measure to another. Unlike many other measures, the WEI merely measures the relative capability of components. The WEI only runs for a short time and does not measure the interactions of components under a software load, but rather the characteristics of your hardware. As such it does not (indeed cannot) measure how a system will perform under your own usage scenarios. Thus the WEI does not measure the performance of a system, but merely the relative hardware capabilities when running Windows 7.
We do want to caution folks against trying to generalize an “absolute” WEI as necessary for a given individual. We each have different tolerances or, more importantly, expectations for how a PC should perform, and the same WEI might mean very different things to different individuals. To personalize this, I do about 90% of my work on a PC with a WEI of 2.0, primarily driven by the relatively low score for the gaming graphics component on my very low cost laptop. I run Outlook (with ~2GB of email), Internet Explorer (with a dozen tabs), Excel (with long lists of people on the development team), PowerPoint, Messenger (with video), and often I am running one of several LOB applications written in .NET. I feel that with this type of workload, a PC with Windows 7, and that WEI, my own brain and fingers continue to be my “bottleneck”. At the other end of the spectrum is my holiday gift machine, a 25” all-in-one with a WEI of 5.1 (though still limited by gaming graphics, with subscores of 7.2, 7.2, 6.2, 5.1, 5.9). This machine runs Windows 7 64-bit and I definitely don’t keep it very busy even though I run MediaCenter in a window all the time, have a bunch of desktop gadgets, and run the PC as our print server (I use about 25% of available RAM and the CPU almost never gets above 10%).
The overall Windows Experience Index (WEI) is defined to be the lowest of the five top-level WEI subscores, where each subscore is computed using a set of rules and a suite of system assessment tests. The five areas scored in Windows 7 are the same as they were in Vista and include:
Though the scoring areas are the same, the ranges have changed. In Vista, the WEI scores ranged from 1.0 to 5.9. In Windows 7, the range has been extended upward to 7.9. The scoring rules for devices have also changed from Vista to reflect experience and feedback comparing closely rated devices with differing quality of actual use (i.e. to make the rating more indicative of actual use.) We know during the beta some folks have noticed that the score changed (relative to Vista) for one or more components in their system and this tuning, which we will describe here, is responsible for the change.
For a given score range, we hope our customers will be able to utilize some general guidelines to help understand the experiences a particular PC can be expected to deliver well, relatively speaking. These Vista-era general guidelines for systems in the 1.0, 2.0, 3.0, 4.0 and 5.0 ranges still apply to Windows 7. But, as noted above, Windows 7 has added levels 6.0 and 7.0; meaning 7.9 is the maximum score possible. These new levels were designed to capture the rather substantial improvements we are seeing in key technologies as they enter the mainstream, such as solid state disks, multi-core processors, and higher end graphics adapters. Additionally, the amount of memory in a system is a determining factor.
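The “lowest subscore wins” rule is simple enough to sketch, using the subscores from the all-in-one example earlier in the post (the mapping of subscores to categories below is our guess at the usual ordering, not something stated in the post):

```python
# Sketch of the overall WEI rule: the overall score is simply the
# lowest of the five subscores. The category labels are assumed.
subscores = {
    "processor": 7.2,
    "memory": 7.2,
    "graphics": 6.2,
    "gaming graphics": 5.1,
    "primary hard disk": 5.9,
}
overall_wei = min(subscores.values())
print(overall_wei)  # 5.1 -- the machine is limited by gaming graphics
```

This is why a single weak component, such as a low-end mobile GPU, determines the headline number no matter how capable the rest of the system is.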
For these new levels, we’re working to add guidelines for each level. As an example for gaming users, we expect systems with gaming graphics scores in the 6.0 to 6.9 range to support DX10 graphics and deliver good frame rates at typical screen resolutions (like 40-50 frames per second at 1280x1024). In the range of 7.0 to 7.9, we would expect higher frame rates at even higher screen resolutions. Obviously, the specifics of each game have much to do with this, and the WEI scores are also meant to help game developers decide how best to scale their experience on a given system. Graphics is an area with both the widest variety of scores readily available in hardware and the widest breadth of expectations. The extremes at which CAD, HD video, photography, and gamers push graphics compared to the average business user or a consumer (doing many of these same things as an avocation rather than vocation) is significant.
Of course, adding new levels doesn’t explain why a Vista system or component that used to score 4.0 or higher is now obtaining a score of 2.9. In most cases, large score drops will be due to the addition of some new disk tests in Windows 7 as that is where we’ve seen both interesting real world learning and substantial changes in the hardware landscape.
With respect to disk scores, as discussed in our recent post on Windows Performance, we’ve been developing a comprehensive performance feedback loop for quite some time. With that loop, we’ve been able to capture thousands of detailed traces covering periods of time where the computer’s current user indicated an application, or Windows, was experiencing severe responsiveness problems. In analyzing these traces we saw a connection to disk I/O, and we often found typical 4KB disk reads to take longer than expected, much, much longer in fact (10x to 30x). Instead of taking tens of milliseconds to complete, we’d often find sequences where individual disk reads took many hundreds of milliseconds to finish. When sequences of these accumulate, higher level application responsiveness can suffer dramatically.
With the problem recognized, we synthesized many of the I/O sequences and undertook a large study on many, many disk drives, including solid state drives. While we did find a good number of drives to be excellent, we unfortunately also found many to have significant challenges under this type of load, which based on telemetry is rather common. In particular, we found the first generation of solid state drives to be broadly challenged when confronted with these commonly seen client I/O sequences.
An example problematic sequence consists of a series of sequential and random I/Os intermixed with one or more flushes. During these sequences, many of the random writes complete in unrealistically short periods of time (say 500 microseconds). Very short I/O completion times indicate caching; the actual work of moving the bits to spinning media, or to flash cells, is postponed. After a period of returning success very quickly, a backlog of deferred work builds up. What happens next differs from drive to drive. Some drives continue to respond to reads consistently as expected, no matter the earlier issued and postponed writes/flushes, which yields good performance and no perceived problems for the person using the PC. On other drives, however, reads are held off for very lengthy periods while the drive apparently attempts to clear its backlog of work, and this results in a perceived “blocking” state, almost a “locked” system. To validate this, on some systems we replaced poor performing disks with known good disks and observed dramatically improved performance. In a few cases, updating the drive’s firmware was sufficient to very noticeably improve responsiveness.
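The difference between the two behaviors can be illustrated with a toy simulation. All numbers here are made up for demonstration; this is not a model of any particular drive or of the actual assessment:

```python
# Toy model of the write-caching behavior described above. A drive
# acknowledges cached writes almost instantly, deferring the real work;
# the two drive behaviors differ in whether a later read must wait for
# that deferred backlog to drain.
def read_latency_ms(reads_wait_for_backlog, cached_writes=50):
    ACK_MS = 0.5        # near-instant acknowledgment of a cached write
    COMMIT_MS = 10.0    # assumed real cost of committing one write
    backlog_ms = cached_writes * COMMIT_MS  # deferred work built up

    if reads_wait_for_backlog:
        # Problematic drives hold reads until the backlog clears, so a
        # single 4KB read can take hundreds of milliseconds.
        return backlog_ms + ACK_MS
    # Well-behaved drives service reads promptly despite pending writes.
    return ACK_MS

good_drive = read_latency_ms(reads_wait_for_backlog=False)  # 0.5 ms
slow_drive = read_latency_ms(reads_wait_for_backlog=True)   # 500.5 ms
```

Even though both drives acknowledge writes equally fast, the second shows exactly the hundreds-of-milliseconds read stalls seen in the traces.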
To reflect this real world learning, in the Windows 7 Beta code, we have capped scores for drives which appear to exhibit the problematic behavior (during the scoring) and are using our feedback system to send back information to us to further evaluate these results. Scores of 1.9, 2.0, 2.9 and 3.0 for the system disk are possible because of our current capping rules. Internally, we feel confident in the beta disk assessment and these caps based on the data we have observed so far. Of course, we expect to learn from data coming from the broader beta population and from feedback and conversations we have with drive manufacturers.
For those obtaining low disk scores but are otherwise satisfied with the performance, we aren’t recommending any action (Of course the WEI is not a tool to recommend hardware changes of any kind). It is entirely possible that the sequence of I/Os being issued for your common workload and applications isn’t encountering the issues we are noting. As we’ve said, the WEI is a metric but only you can apply that metric to your computing needs.
Earlier, I made note of the fact that our new levels, 6 and 7, were added to recognize the improved experiences one might have with newer hardware, particularly SSDs, graphics adapters, and multi-core processors. With respect to SSDs, the focus of the newer tests is on random I/O rates and their avoidance of the long latency issues noted above. As a note, the tests don’t specifically check to see if the underlying storage device is an SSD or not. We run them no matter the device type and any device capable of sustaining very high random I/O rates will score well.
For graphics adapters, both DX9 and DX10 assessments can now be run. In Vista, the tests were specific to DX9. To obtain scores in the 6 or 7 ranges, a graphics adapter must obtain very good performance scores, support DX10, and have a WDDM 1.1 driver (which you might have noticed being downloaded during the Windows 7 beta). For WDDM 1.0 drivers, only the DX9 assessments will be run, thus capping the overall score at 5.9.
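That capping rule can be sketched as follows (a hypothetical helper for illustration, not actual Windows code):

```python
# Hypothetical sketch of the rule described above: with a WDDM 1.0
# driver only the DX9 assessment runs, so the graphics subscore is
# capped at 5.9 regardless of the hardware's raw capability.
WDDM_1_0_CAP = 5.9

def graphics_subscore(raw_score, wddm_version):
    if wddm_version < 1.1:
        return min(raw_score, WDDM_1_0_CAP)
    return raw_score

print(graphics_subscore(7.2, wddm_version=1.0))  # 5.9 (capped)
print(graphics_subscore(7.2, wddm_version=1.1))  # 7.2
```

This is one reason the same adapter can report a higher score after a driver update: the hardware did not change, but the WDDM 1.1 driver unlocks the DX10 assessments.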
For multi-core processors, both single-threaded and multi-threaded scenarios are run. With levels 6 and 7, we aim to indicate that these systems will be rarely CPU bound for typical use and quite suitable for demanding processing tasks and multi-tasking. As examples, we anticipate many quad core processors will be able to score in the high 6 to low 7 ranges, and 8 core systems to be able to approach 7.9. The scoring has taken into account the very latest microprocessors available.
For many key hardware partners, we’ve of course made available additional details on the changes and why they were made. We continue to actively work with them to incorporate appropriate feedback.
There’s been a ton of interest in how we have improved user account control (UAC) and so we thought we’d offer a quick update for folks. We know most of you have discovered this and picked a setting that works for you, and we're happy with the feedback we've seen. This just goes into the details on the choice of defaults. --Steven
In an earlier blog post we discussed the why of UAC and its implications for Windows, the ecosystem, and our customers. We also talked about what we needed to do moving forward to address the data and feedback we’ve received. This blog post will provide additional detail on our response and what you can expect to see in the upcoming beta build in early 2009.
As mentioned in our previous post, and as your comments supported, the goals for UAC are good and important ones. User Account Control was created with the intention of putting you in control of your system, reducing cost of ownership over time, and improving the software ecosystem. It is important not to abandon these goals. Instead, we want to address the feedback we’ve received and build on the telemetry we have, using both to improve the overall experience without losing sight of the goals with which we agree.
For those of you using build 6801, you have started to see the benefits of prompt reduction and our new and improved dialog designs. You have also seen our efforts to give the user greater control of their system – the new UAC Control Panel. The administrator now has more control over the level of notification received from UAC. Look for the UAC Control Panel to appear in Start Search, Action Center, Getting Started, and even directly from the UAC prompt itself. Of course, the familiar ways to access it from Vista are still present.
Figure 1: UAC Control Panel
The UAC Control Panel enables you to choose between four different settings:
We know from the feedback we’ve received that our customers are looking for a better balance between control and the number of notifications they see. As we mentioned in our last post, we have a large number of admin (aka developer) customers looking for this balance, and our data shows us that most machines (75%) run with a single account with full admin privileges.
Figure 2. Percentage of machines (server excluded) with one or more user accounts from January 2008 to June 2008.
For the in-box default, we are focusing on these customers, and we have chosen number 2, “Notify me only when programs try to make changes to my computer”. This setting does not prompt when you change Windows settings (control panels, etc.), but instead enables you to focus on administrative changes being requested by non-Windows applications (like installing new software). For people who want greater control in changing Windows settings frequently, without the additional notifications, this setting results in fewer overall prompts and enables customers to zero in on the key remaining notifications that they do see.
This default setting provides the degree of change notification that a broad range of customers desire. At the same time, we’ve made it easy and readily discoverable for the administrator to adjust the setting to provide more or fewer notifications via the new control panel (and policy). As with all of our default choices, we will continue to closely monitor the feedback and data that come in through beta before finalizing for ship.
--UAC, Kernel, and Security program managers
As most folks (finally) get the beta and start to set aside some time to install and try out Windows 7, we thought it would be a good idea to start to talk about how we support devices through testing and work across the PC ecosystem. This is a big undertaking and one that we take very seriously. As we talked about at the PDC, this is also an area where we learned some things which we want to apply to Engineering Windows 7. While this is a massive effort across the entire Windows organization, Grant George, the VP of Test for the Windows Experience, is taking the lead in authoring this post. We think this is a deep topic and I know folks want to know more so consider this a kick-off for more to come down the road. –Steven
One of the most important responsibilities in a release of Windows is our support of, and compatibility with, all of the devices and their associated drivers that our users have. The abstraction layer in Windows to connect software and hardware is a crucial part of the operating system. That layer is surfaced through our driver model, which provides the interface for all of our partners in the multi-faceted hardware ecosystem. Windows supports a vast range of devices today – audio devices (speakers, headsets…), display devices (monitors…), print, fax and scan devices, connectivity to digital cameras, portable media devices of all shapes, sizes and functions, and more. Windows is an open platform for companies across the globe who develop and deliver these devices to the marketplace and our users – and our job is to make sure we understand that ecosystem and those choices and verify those devices and drivers work well for our customers – which includes partnering with those device providers throughout the engineering of Windows 7.
Drivers provide the interface between a device and the Windows operating system – and are citizens of the Windows Driver Model (WDM). WDM was initially created as an intermediary layer of kernel-mode drivers to ease the authoring of drivers for Windows. There are different types of drivers. The two most common are class drivers (hardware device drivers that support an array of devices of a similar hardware class, where hardware manufacturers make their products compatible with standard protocols for interaction with the operating system) and device-specific drivers (provided by the device manufacturer for a specific device, and sometimes a specific version of that device).
Support for our hardware partners comes in the form of the Windows Driver Kit (WDK) and, for certification, the Windows Logo Kit (WLK). The WDK enables the development of device drivers and as of Vista replaced the previous Windows Driver Development Kit (DDK). The WDK contains all of the DDK components plus the Windows Driver Foundation (WDF) and the Installable File System kit (IFS). The Driver Test Manager (DTM) is another component here, but is separate from the WDK. The Windows Logo Kit (WLK) aids in certifying devices for Windows (it contains automated tests as well as a run-time framework for those tests). These tests are run and passed by our hardware vendor partners in order to use the Microsoft “Designed for Windows™” logo on devices. This certification process helps us and our hardware partners ensure a specific level of quality and compatibility for devices interacting with the Windows operating system. Hardware devices and drivers that pass the logo kit’s tests qualify for the Windows logo, driver distribution on Windows Update, and can be referenced in the online Windows Marketplace.
With Windows 7 we have modified driver model validation, new and legacy device testing, and driver testing. Compared to Vista, we now place much more emphasis on validating the driver platform and verifying legacy devices and their associated drivers throughout our product engineering cycle. Data based on installed base for each device represents an integral part of testing, and we gather this data from a variety of sources including the voluntary, opt-in, anonymous telemetry in addition to sources such as sales data and IHV roadmaps. We have centralized and standardized the testing mechanics of the lab approach to this area of the product in a way that yields much earlier issue/bug discovery than in past releases. We have also ramped up our efforts to communicate platform or interface changes earlier with our external hardware partners to help them ensure their test cycles align with our schedule. In addition, we draw a more robust correlation between the real-world usage data, including recent trends, and prominence of each device and the prioritization it is given in our test labs. This is especially important for new and emerging devices that will come to market right before and just after we release Windows 7 to our customers.
Another important element in bringing a high quality experience to our Windows 7 users in device and driver connectivity and capability is the staging of our overall engineering process in Windows 7. For this release all of our engineering teams have followed a well structured and staged development process. The development/coding of new features and capabilities in Windows 7 was broken out into three distinct phases (milestones) with dedicated integration and stabilization time at the end of each of these coding phases. This included ensuring our code base remained highly stable throughout the development of Windows 7 and that our device and driver test validation was a constant part of those milestones. Larry discussed this in his post, as some might recall. Program Managers, Developers and Testers all worked in close partnership throughout the coding phases. Our work with external partners – especially our device manufacturer partners – was also enhanced through early forums we provided for them to learn about the changes in Windows 7 and to work closely with us on validation. Much more focus has been put on planning and then executing - planning the work and then working the plan. Our belief is that this yields much more predictability in developing and delivering our new features in Windows 7, both from a feature content and overall schedule standpoint. We recognize that this raised the bar on how our external partners see us execute and deliver on that plan when we say we will, but we also hope it increases their confidence in how they engage with us in validating the device experience during our development and delivery of Windows 7.
Our program management team helps us drive device market share analysis. Most of their data comes from our Customer Experience Improvement Program. This gives us data on the actual hardware in use across our customer base. For example there are over 16,000 unique 4-part hardware IDs for display devices alone. Like many things, we understand that it only takes a single device not functioning well to degrade an overall Windows experience or upgrade—we definitely want to re-enforce this shared understanding.
New devices typically have a small initial user base, but the driver will often be mostly new code (or the first time a code-base has seen a new device). As the device enters the mainstream, market share grows and most manufacturers continue to develop and improve their drivers. This is where for our customers, and our own testing, it’s important to always have the latest drivers for a given device.
Over a device’s lifetime, we work closely with our external device partners and represent, as faithfully as possible in our test labs, a prioritized way of ensuring old and new devices continue to work well with Windows. By paying very close attention to trends in the marketplace across our device classes, we can make guided decisions in the context of these areas:
Another benefit of close market tracking is creating an equivalence-based view of a device family.
We use the notion of equivalence classes to help us define and prioritize our hardware (device) test matrix. Creating equivalence classes involves grouping things into sets based on equivalent properties across related devices. For example, imagine if we worked for a chemical company and it was our job to test a car polish additive on actual automobiles. Given a fixed test budget, we would want to maximize the number of makes and models we test our product on. We begin by analyzing the current market space so we can make the best choices for our test matrix.
Let’s say the first test car we analyze is a blue 2003 Ford Mustang. We also know that the same blue paint is used on all of Ford’s 2003 and 2004 models and is also used on all of Mazda’s 2005 models. This means our first automobile represents several entries in our table based on equivalence:
Now let’s look at a silver 2001 Mercedes C240. We know that Mercedes and Chrysler have a relationship and upon further investigation we find Chrysler used the same silver paint on their 2006 through 2009 models. Now our equivalence class based test matrix looks like this:
By carefully analyzing each actual automobile, we have established an equivalence relationship that we can leverage to maximize implicit test coverage. Testing one make and model is theoretically equivalent to testing many. Of course we recognize that in the real world different companies might use different techniques for applying paint, as one variable, so there are subtleties that require additional information to properly classify attributes for testing purposes.
Testing computer devices is very similar. Even though there are thousands of different devices on the market, many of them share major components, are die-shrinks of a previous revision, or differ only in terms of memory, clock-rate, pixel count, connector, or even the type of heat sink. Take for example display devices. There are over 16,000 display devices on the market. But the equivalence view reveals that 90% of the market is represented by about 60 different GPUs. By adding a few more to a carefully constructed test matrix based on equivalence it is possible to represent over 99% of all GPUs. Driver writers also leverage equivalence by targeting drivers at a range of hardware. Driver install packages indicate devices they support via hardware IDs.
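The equivalence-class idea can be sketched in code. DEV_0611 (the GeForce 8800 GT, built on the G92 GPU) appears in the INF example later in this post; the other device IDs below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical device-to-GPU mapping. In practice this data would come
# from hardware IDs, driver INF files, and market telemetry.
devices = {
    "DEV_0611": "G92",   # GeForce 8800 GT (from the INF example)
    "DEV_0612": "G92",   # invented: another G92-based board
    "DEV_0400": "G84",   # invented
    "DEV_0402": "G84",   # invented
}

# Group devices into equivalence classes by the GPU they share.
classes = defaultdict(list)
for device_id, gpu in devices.items():
    classes[gpu].append(device_id)

# Testing one representative per class gives implicit coverage of the
# rest of that class: 2 test runs stand in for all 4 devices here.
representatives = [ids[0] for ids in classes.values()]
```

Scaled up, this is how roughly 60 GPUs can represent 90% of a market of over 16,000 display device IDs.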
All modern computer devices are assigned a unique hardware ID based on the device vendor, type, and class. Most IDs (PCI, PC Card, USB, and IEEE 1394 devices) are assigned by the industry standards body associated with that device type.
Let’s look at the device ID of my display adapter:
If I visit PCI-SIG (the standards body associated with all PCI device ID assignment) and do a search on 10DE, I’m told this is an NVIDIA PCI ID. If I look further on my system in
I can find NVidia drivers (folders that start with nv_lh). If I open one of the driver .INF files on my machine I see this tell-tale line:
NVIDIA_G92.DEV_0611.1 = "NVIDIA GeForce 8800 GT"
Further inspection of the driver .INF file tells me that the same G92 GPU is used for all of these devices:
A bit of online research reveals other interesting information: “The 8800 GT, codenamed G92, was released on October 29, 2007. The card is the first to transition to 65 nm process, and supports PCI-Express 2.0. It has a single-slot cooler as opposed to the double slot cooler on the 8800 GTS and GTX, and uses less power than GTS and GTX due to its 65 nm process.” --Wikipedia
So in theory, if I were to run a test on my display adapter, there’s a good chance I’d get the same results as I would on any of these other related devices.
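The reasoning above can be sketched in a few lines of Python. This is purely illustrative: the device-to-chip mapping below is hypothetical (only DEV_0611, the GeForce 8800 GT, comes from the example above), and the functions are not part of any Windows tooling.

```python
# Illustrative sketch of equivalence classing: given a mapping of device IDs
# to the chip they are built on, testing one representative per chip
# implicitly covers every device in that class.
from collections import defaultdict

def build_equivalence_classes(device_to_chip):
    """Group device IDs by the chip they share."""
    classes = defaultdict(list)
    for device, chip in device_to_chip.items():
        classes[chip].append(device)
    return dict(classes)

def pick_representatives(classes):
    """One directly tested device stands in for its whole class."""
    return {chip: sorted(devices)[0] for chip, devices in classes.items()}

# Hypothetical device IDs; only DEV_0611 is taken from the text.
devices = {
    "DEV_0611": "G92", "DEV_0600": "G92", "DEV_0602": "G92",
    "DEV_06E4": "G98",
}
classes = build_equivalence_classes(devices)
reps = pick_representatives(classes)
print(len(classes), "classes cover", len(devices), "devices")
```

Directly testing the two representatives would, under the equivalence assumption, stand in for all four devices.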
One of our primary goals for Windows 7 is compatibility with all Vista certified drivers and to ensure that people have a seamless upgrade experience. This breaks down into several requirements that guide how we test:
One question we are asked about quite a bit is the availability of drivers. There are three primary reasons folks end up looking for drivers: a clean installation of Windows, attaching a device to a new computer, and wanting an updated driver. We definitely recognize that for the readers of this blog, both as enthusiasts and often the support/IT infrastructure for corporations, friends, and families, the ability to acquire drivers and reliably update machines is something of a “hobby” we all love to hate. We all want the latest and greatest—no more and no less.
A clean installation is one we all definitely value during the beta phase of Windows 7. It should be clear that a clean install, as important as it is to many of us, is not a routine/mainstream experience. Nevertheless, the combination of in-box drivers and those available via Windows Update will serve a very broad set of PCs (for example, you should see most of the drivers installed for the new Atom-based machines if you do a clean install). On the other hand, some drivers for PCs are only available from the PC maker and for a variety of reasons are not available for download from Windows Update or even the device manufacturer’s site. For example, mobile graphics drivers are generally available only from the PC maker and not from the graphics component maker—this is a decision they make because of the way these chipsets are delivered for each PC maker.
Obviously attaching an existing device to a new PC is a common occurrence. In this case you may have long ago lost the CD/DVD that came with a device and you just plug it in (because you ignored the warning saying “please run the setup program first”). Again, our goal is to provide these via Windows Update. Often IHVs have updates or significantly large downloads that for a number of reasons are not appropriate to deliver via Windows Update. In that case we can also alert you, with a link many times, to seek the driver from the vendor of the device.
Updating drivers is something we are all familiar with, as we often read “get the latest driver” to address issues. We see this particularly in the enthusiast gamer space, where newer drivers can improve performance or offer more features, in addition to overall improvements. The primary way to get updated drivers is generally through optional updates in Windows Update, though again many times the latest and greatest must be downloaded directly from an IHV (independent hardware vendor) site.
Our goal is clearly to make sure that drivers for the broadest set of devices are available and high quality. There are many equal partners that contribute to delivering a PC and all the associated devices and we work hard to develop a systematic way to reach the broadest set of customers with high quality software and support.
The table below provides examples of some of the explicit devices we have directly tested thus far during the development of Windows 7. This is just a sampling of that direct testing - many more devices have been directly tested that are not shown here or are covered through equivalence classing.
This information is available in many sources, such as the WHQL web site that lists all qualified devices. For the purposes of this blog we thought it would be fun to provide a list here which we think will most certainly serve as the basis for discussion.
Radeon X300/X550/X1050 Series
Radeon 9800 Pro
Radeon Xpress Series
Radeon Xpress 1200
Radeon X700 PRO
Radeon X800 CrossFire Edition
Mobility Radeon X300
Radeon X850 CrossFire Edition
Radeon X1950 Series
Mobility Radeon X1300
Mobility Radeon X1400
Mobility Radeon HD3200
Radeon HD 2600 XT
Radeon HD 3850
Radeon HD 3870
Radeon HD 3200
Radeon HD 2400
Radeon HD 2900 XT
Radeon HD 2600
Radeon HD 4850
ATI Technologies, Inc. RAGE XL PCI
RADEON 7000 Series
Analog Devices Inc.
iSight 640x480 Firewire
X5 Stereo BT Headset
Print / Scan
Digital Rebel XT
i470D Photo Printer
PowerShot A720 IS
CASIO COMPUTER CO.,LTD.
Live! Cam Optia AF
WebCam Live! USB
Webcam NoteBook 640x480 USB
WebCam Instant 352x288 USB
WebCam NX Pro 640x480 USB
Live! Cam Notebook Pro 640K USB 2.0
Live! Cam Video IM Pro VGA USB 2.0
Webcam Live Ultra 640x480 USB 2.0 Manual Focus Ring
Creative Labs, Inc.
Creative Technology Ltd
NOMAD MuVo TX
Zen Vision M
DSM - 520
DSM - 510
Stylus Color C88+
Stylus Color C84/C85
Stylus Color C86/C87
Stylus Color C64
Stylus Photo R265
Stylus Photo R220
Stylus Photo R320
Stylus Photo 1270
Stylus Photo R200
Stylus Photo 1280/1290
Stylus Color 900/N
Stylus Color C62
Stylus Photo 820
Stylus Color 660
Stylus Color 640
EasyCam USB PC Camera 640x480
Deskjet D1400 series
Deskjet D2400 Series
Deskjet F2100 series
Color LaserJet 2600
Deskjet 3900 Series
Deskjet D4200 Series
Officejet 6200 Series
Officejet 6300 Series
Officejet Pro L7500
Officejet Pro L7600
Officejet 7400 Series
Officejet 5510 Series
Officejet 7300 Series
LaserJet 3030 MFP
Officejet 6100 Series
Officejet V40 Series
Photosmart D7400 Series
PSC 950 Series
Officejet G Series
Photosmart Pro B8350
LaserJet 4345 MFP
Color LaserJet 4700
Color LaserJet 5550
Color LaserJet 3800
Color LaserJet 3600
Color LaserJet 3000
Business Inkjet 1200D
Color LaserJet 4550
Color LaserJet 4600
Color LaserJet CP4005
Color LaserJet 3700
Color LaserJet 3500
LaserJet 9000 MFP
LaserJet 4 Plus
Color LaserJet 1500L
LABTEC WEBCAM PRO 961358
Web Cam Plus 352x288 USB 2.0 Manual Focus Motion Detection
Z42 Color JetPrinter
Z25 Color JetPrinter
Z45 Color JetPrinter
QuickCam Pro 9000
Quickcam Communicate STX VGA Fixed Focus USB 2.0
QuickCam Chat VGA w/Image Capture USB 2.0
961400-0403 QuickCam Notebook Deluxe 1.3MP MF USB 2.0
QuickCam Pro 4000 640x480 USB 2.0
QuickCam Pro 5000 640x480 USB 2.0
Quickcam Vision Pro1
Quickcam Vision Pro2
961403 QuickCam Fusion 1.3MP USB 2.0
QuickCam Messenger 640x480 USB
QuickCam Messenger Refresh 640x480 USB
QuickCam Notebooks Pro 1.3MP USB 2.0
QuickCam Zoom 640x480 USB
QuickCam Communicate 640x480 USB 2.0
QuickCam Orbit MP 1.3MP USB 2.0
QuickCam Orbit 640x480 USB 2.0
QuickCam for Notebooks Pro
LifeCam VX-1000 VGA USB 2.0
LifeCam VX-6000 1.3MP USB 2.0
LifeCam VX-3000 1.3MP USB 2.0
Xbox Live Vision (Xbox 360)
Wireless Picture Frame
Nero8 Home Media
GeForce 7400 Go
GeForce 7950 GX2
GeForce 8400 GS
GeForce 8400M GS
GeForce 8600 GT
Quadro NVS 130M
GeForce 9600 GT
GeForce 8800 GT
GeForce 8400 GS (G98)
GeForce 9800 X2
GeForce GTX 260
GeForce4 MX 420
GeForce FX 5200
GeForce FX 5900
GeForce Go 6150
Microline 184 Turbo
Discovery 655 or 665
Realtek 262 HD Audio codec
Realtek 268 HD Audio codec
Realtek 660 HD Audio codec
Realtek 862 HD Audio codec
Realtek 883 HD Audio codec
Realtek 888 HD Audio codec
Realtek 885 HD Audio codec
Realtek 882 HD Audio codec
Realtek 861 HD Audio codec
Realtek 662 HD Audio codec
Realtek Semiconductor Corp
S3 Graphics Chrome 440/430 Series
Sansa View Mp3 Player
Zone player ZP80
Gigabeat V2 PMC
Audio Advantage Micro
About every decade we make the big decision to update what we refer to as the applets (note we’ll use applet, application, program, and tool all interchangeably as we write about these) in Windows—historically Calc (Calculator), Paint (or MS Paint, Paint Brush) and WordPad (or Write), and also the new Sticky Notes applet in Windows 7. As an old-timer, whenever I think of these tools I think of all the history behind them and how they came about. I’m sure many folks have seen the now “classic” video of our (now) CEO showing off Windows to our sales force (the last word of this video is the clue that this video was made for an audience inside Microsoft). Windows 7 seems like a great time to update these tools. The motivation for updating the applets this release is not the 10-year mark or just time to add some applet-specific features, but the new opportunities for developers to integrate their applications with the Windows 7 desktop experience. While many use the applets as primary tools, our view of these is much more about demonstrating the overall platform experience and providing guidance to developers about how to integrate and build on Windows 7, while at the same time providing “out of the box” value for everyone. There’s no real “tension” over adding more and more features to these tools, as our primary focus is on showing off what’s new in Windows—after all, there are many full-featured tools available that provide similar functionality for free. So let’s not fill the comments with requests for more bitmap editing features or advanced scientific calculator features :-).
The APIs discussed in this post are all described on MSDN in the updated developer area for Windows 7 where you can find the Windows 7 developer guide. Each of the areas discussed is also supported by the PDC and WinHEC sessions on those sites.
This post was written by several folks on our applications and gadgets team with Riyaz Pishori, the group program manager, leading the effort. --Steven
This blog post discusses some of the platform innovations in Windows 7 and how the Windows 7 applets have adopted and showcased them. It details some of the platform features that developers and partners can expect in Windows 7, and how the applets have been given a facelift both in terms of their functionality and their user experience by focusing on key Windows design principles and platform innovations. Finally, this post can serve as a pointer or guide for application developers and ISVs to get familiar with some of the key new Windows platform innovations, see them in action, and then figure out how they can build on these APIs in their own software.
The post is organized by each subsystem, and how Windows applets are using that particular subsystem.
The Windows Ribbon user interface is the next-generation user interface for Windows development. The Windows Ribbon brings the now-familiar Office 2007 Ribbon user interface to Windows 7, making it available to application developers and ISVs.
There are several advantages to adopting the Windows Ribbon user interface, many of which have been talked about in the Office 2007 blogs. The Ribbon provides a rich, graphical user interface for all commands in a single place, without the need to expose various functions and commands under different menus or toolbars. The Ribbon UI is direct and self-explanatory, and has a labeled grouping of logically related commands. While using an application built on the Ribbon UI platform, the user only needs to focus on their workflow and the context of their task, rather than worry about where a particular function is located. The Ribbon UI also takes care of layout and provides consistency, as compared to toolbars, which the user can customize in terms of size, location, and contents. It also has built-in and improved keyboard accessibility, and making the application DPI- and theme-aware becomes easier by using the Ribbon. Finally, development of and changes to the user interface are quick due to the XML markup-based programming model for the Ribbon user interface.
Paint and Wordpad are two of the first consumers of the Windows Ribbon UI platform. In Windows 7, both of these applications are enhanced with a set of new features, and their user interfaces also needed to be brought up to the Windows 7 experience and standards. The Windows Ribbon UI is a great fit for these applications to revamp their user experience, make it consistent, and make these applications rich, fun, and easy to use. The tasks and commands in these applications were well suited to the Ribbon UI framework, and this also served as an opportunity for popular native Windows applications to showcase the Windows Ribbon UI platform to consumers, as well as developers and ISVs. Many have asked about Windows Explorer and IE also using the ribbon, which we did not plan on for Windows 7. Our Windows 7 focus was on the platform and demonstrating the platform for document-centric applications such as Paint and Wordpad.
Both these applications showcase several elements of the Windows Ribbon UI. The Application Menu of both Paint and Wordpad exposes application-related commands that are typically available through the ‘File’ menu of an application. Both applications have a core tab set that consists of ‘Home’, which exposes most of the commands in the application, and ‘View’, which exposes the image or document viewing options in the application. The commands in both these tabs are laid out logically in groups of related functionality.
A quick access toolbar (QAT) is provided by both Paint and Wordpad, which comes with certain defaults like Save, Undo and Redo that are meaningful to the application. The user can customize the QAT by using the QAT drop-down, or right-click on any command or group in the ribbon and add it to the QAT.
Several ribbon commands are used in both these applications, like command buttons, split buttons, galleries, drop-downs, check boxes and toggle buttons.
Further, both applications provide a ‘Print preview’ mode which shows a print preview of the image or the document in context. While in a mode, all the core tabs are removed and only the mode’s tab is displayed for the user to interact with. On exiting the mode, the user is returned to the core tab set.
Paint also exposes a contextual tab for the Text tool, which is displayed only when a text control is drawn on the canvas. A contextual tab is shown next to the core tab set when the text tool is in focus, and removed when the text is applied to the image on the canvas. The contextual tab set contains the tools that are specific and relevant only to the text tool.
Both applications provide live previews through ribbon galleries, for example the font size and font name in Wordpad and Paint while formatting text, bullets and lists in Wordpad, and color selection, outline size selection, and outline and fill styles for shapes in Paint. A live preview allows the user to see the changes instantaneously on mouse hover, and then apply those changes to a selection. These previews are one of the key elements of the ribbon UI and demonstrate that the metaphor is much more than a “big toolbar”: it is a new interaction style.
By adopting the Ribbon User Interface, both the applications inherit built-in keyboard accessibility support using ribbon Keytips, have tooltips on all commands, and have ready support for DPI and Windows themes.
Paint and Wordpad can serve as examples of how the Ribbon UI can be easily used in MFC applications. The Windows Ribbon presents new opportunities and options for developers and ISVs to develop applications with the Ribbon User Interface. The Windows Scenic Ribbon programming model and architecture emphasizes the separation of the markup file and the C++ code files to help developers decouple the presentation and customization of the UI from the underlying application code. The platform also promotes developer-designer workflow, where the developer can focus on the application logic, while the designer can work on the UI presentation and layout. The ribbon UI is a significant investment for us and you should expect to see us continue to use it more throughout Microsoft, including an implementation in the .NET Framework as was demonstrated by Scott Guthrie at the PDC, which will be built on Windows 7 natively in the future.
Windows 7 provides support for multi-touch input data, as well as supporting multi-touch in Win32 via Windows messages. The investments in the multi-touch platform include the developer platform that exposes touch APIs to applications, enhancements to the core user interface in Windows 7 to optimize for touch experiences, and multi-touch gestures for applications to consume. Developers on Windows 7 can build on these APIs and decide on the appropriate level of touch support they would like to provide in their software.
Wordpad enhances the document reading experience by using the multi-touch platform and the zoom and pan gestures. Zooming, panning, and inertia let the user get to a particular piece of content very quickly in an intuitive fashion. By using the zoom gesture, the user can zoom in or out of the document, akin to using the zoom slider at the right of the Wordpad status bar. On multi-touch capable hardware, the user can zoom in and out of the document by placing their fingers anywhere within the document window and executing the zoom gesture. Wordpad also supports the pan gesture to pan through the pages of an open document. By executing the pan gesture, the user can scroll down or up a document, similar to using the scroll bar of the Wordpad application.
In Paint, multi-touch data is used to allow users to paint with multiple fingers. It is an example of an application that supports multi-touch input without the use of gestures. For Paint’s functionality, providing multiple-finger painting was more compelling and enriching than allowing zoom, pan, rotate, or other gestures that act on the picture in a read-only mode rather than an edit mode. The new brushes in Paint are multi-touch enabled; they handle touch input from multiple fingers and allow the user to simultaneously draw strokes on the canvas as fingers drag. These brushes are also pressure-sensitive, providing a realistic touch experience by varying the stroke width based on the pressure on the screen. While adopting the multi-touch platform to enhance the end-user experience in Paint, conscious design decisions were made to preserve the single-touch experience for functionality where a multi-touch scenario does not apply, such as the color picker, magnifier, and text tool.
By building with the multi-touch APIs, Paint and Wordpad have created more natural and intuitive interfaces on touch-enabled hardware and show “out of the box” how different capabilities can be exposed by developers in their software.
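To make the zoom gesture concrete, here is a hedged sketch, written in Python rather than against the Win32 message APIs, of the arithmetic behind a two-finger pinch/spread: the scale factor is simply the ratio of the current finger separation to the separation when the gesture began. All names here are hypothetical.

```python
# Illustrative sketch (not the Win32 gesture API): reduce a two-finger zoom
# gesture to the ratio of current vs. starting finger separation.
import math

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zoom_factor(start_fingers, current_fingers):
    """Return the scale factor implied by a two-finger pinch/spread."""
    return distance(*current_fingers) / distance(*start_fingers)

# Fingers spread from 100 px apart to 200 px apart -> 2x zoom.
factor = zoom_factor(((0, 0), (100, 0)), ((0, 0), (200, 0)))
print(factor)  # -> 2.0
```

An application would apply this factor to its current zoom level, clamped to whatever range its zoom slider supports.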
Sticky Notes (or just Notes) is an extension of a Tablet PC applet, available in Windows 7. One thing that was key to the Notes experience on the desktop was the ability to quickly take all the notes away and get them back, while still making it really easy to create a new note. We achieved this by having a single top-level window for the Sticky Notes application. You can minimize all your notes and view a stack of notes in the preview on the taskbar with a single click. The stacked preview is achieved using the new thumbnail preview APIs, which enable apps to override the default taskbar previews (essentially a redirected snapshot of the top-level application window) and provide their own. This lets applications decouple their previews from the top-level application window and provide a more productive preview based on the scenario. For example, this was very valuable in Sticky Notes scenarios, where a quick peek at the note that was last touched makes for quite a productive workflow. The taskbar also caches the preview thumbnail images, so once the preview is given to the taskbar the application does not need to keep it around; the application does, however, need to send an updated preview whenever it changes.
Another nifty customization end-point on the taskbar is the destination menu (aka jump list). This menu comes up when a user right-clicks on the application in the taskbar or hovers over the application icon in the Start Menu. The Sticky Notes application does not have a single main application window; this makes the application feel really lightweight and fits in well with the Windows 7 philosophy of creating simple and powerful user experiences. The challenge then was exposing functionality such as the ability to create a new note from a central location, or potentially other custom “tasks”. The destination menu helped expose these scenarios in a simple yet discoverable way.
The new taskbar functionality and built-in extensibility have the potential to make it a lot easier for people to work with applications and scenarios in a more productive and efficient manner when developers integrate their software with the Windows desktop.
Building on the long history of Search in Windows and the significant enhancements in Windows 7, there are APIs available to developers to deeply integrate their content types with the desktop search user experience affordances in Windows 7. Sticky Notes shows one example of how these APIs can be used.
The Sticky Notes application now allows users to get back to their notes by simply searching for content through Windows inline search within the Start Menu. This is in line with allowing users to reach the relevant note as quickly as possible, even when the application is closed. Even though search could be done for both text and ink content, it is restricted to text because of lower success rates with varied handwriting styles in ink. The application registers a protocol handler that generates a URL for each note. The Sticky Notes filter handler is asked for the content associated with each note, which is then indexed by the Search infrastructure. These indexes are then used to perform quick lookups when the user searches through the Search interfaces provided by the Windows Shell. When a user clicks on a result, Search invokes the associated application with the URL that the protocol handler generated and that the filter handler associated with the content it sent to the Search indexer.
The search platform also lets the filter handler specify the language of each chunk of content it passes on, overriding the default Search heuristics used to compute the language. This significantly increases Search accuracy and thereby enhances internationalization support across the entire ecosystem.
The reason Sticky Notes implemented a protocol handler in addition to a filter handler is that it implements its own integrated storage schema on top of the Windows file system: all the notes are represented by a single .snt file. The protocol handler generates URLs to individual entities (in this case, notes); the filter handler picks out the content for each of these URLs and gives it to Search for indexing.
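A toy model of that protocol-handler/filter-handler split might look like the following Python sketch. The URL scheme, function names, and note contents are invented for illustration and do not reflect the actual Sticky Notes implementation or the Windows Search interfaces.

```python
# Toy sketch of the protocol-handler / filter-handler split:
# the protocol handler enumerates URLs for items inside a single store file,
# and the filter handler returns the indexable text for each URL.

def protocol_handler(notes):
    """One URL per note inside the (single) hypothetical .snt store."""
    return ["stickynotes://note/%d" % note_id for note_id in notes]

def filter_handler(notes, url):
    """Return the indexable content for one URL."""
    note_id = int(url.rsplit("/", 1)[1])
    return notes[note_id]

def build_index(notes):
    # The indexer asks the filter handler for each URL's content.
    return {url: filter_handler(notes, url) for url in protocol_handler(notes)}

def search(index, term):
    """Return the URLs whose content contains the term (case-insensitive)."""
    return [url for url, text in index.items() if term.lower() in text.lower()]

notes = {1: "Buy milk", 2: "Call the dentist", 3: "Milk the feedback for ideas"}
index = build_index(notes)
print(search(index, "milk"))
```

Clicking a result would then hand the matching URL back to the application, which resolves it to the right note inside the store.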
This demonstrates the ease with which applications can plug into the search platform in Windows 7 and add search handlers that enhance the overall user experience, both from the application and from the platform.
Real-Time Stylus (RTS) is infrastructure that provides access to the stylus events coming from pen or touch digitizers. It provides information about strokes and points and provides access to ink-related events. Using RTS, applications can get access to stylus information and develop compelling end-user scenarios and experiences.
Sticky Notes now allows users to ink and type on notes depending on the availability of inking hardware. Users can use keyboard input to type on notes and use the stylus to ink on notes. Though the experience has been designed with the expectation that users will use either ink or text on a particular note, it does allow users to ink and type on the same note. However, these surfaces are maintained agnostic to each other. Sticky Notes also auto-grows the note while inking, providing a real-time experience of the note adjusting its size to fit the inked content.
Real Time Stylus (RTS) is used for inking features provided in Sticky Notes. Inking gestures are also available to applications, and the scratch out gesture has been implemented in Sticky Notes to delete content.
In addition, Paint uses RTS to get a stream of positional input from mouse, stylus, or touch, which is used for drawing strokes on canvas. Paint also captures additional input variables, like pressure and touch surface area, when such input is available from the digitizer, and maps these inputs into the stroke algorithms used to generate Paint strokes on canvas. Using this algorithm, the user is able to modulate stroke width and other parameters based on the pressure or touch area on canvas.
Using RTS allows the development of applications and software that can build on the inking platform and provide ways to interact with the application that go beyond mouse or keyboard. Using stylus, inking and gestures, developers can create interactive experiences for end-users.
The Windows Error Reporting (WER) infrastructure is a set of feedback technologies built into Windows 7 and earlier versions of Windows client and server. WER allows applications to register for notification of application failures and captures this data from end-users who agree to report it. This data can be accessed and analyzed, and can be used to monitor error trends and download debug information to help developers and ISVs determine the root cause of application failures.
WER can add value at various stages of software development: during development, during beta testing by getting early feedback from end-users, after the release of the product by analyzing and prioritizing the top fixes, and at the end of life of the product.
Related to failure recovery, applications can also register with WER for restart when a Windows patch terminates the application or an update reboots the computer, as well as after a failure caused by an application crash, hang, or not-responding state. Applications can optionally register for recovery of lost data and can develop their own mechanism for recovery.
Several Windows applications adopt the WER infrastructure to collect and analyze data. Calculator, Paint, and Wordpad register for restart and additionally recover the current data in the sessions of the application that were running. Sticky Notes also registers for restart and recovery, and returns the user to the set of notes open on the desktop. Using WER, end-users can allow Windows to capture and collect problem data and are then returned to the applications in the same state they were in earlier.
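The restart-and-recover pattern itself is simple to sketch. The following Python snippet is not the WER API (which on Windows is exposed through Win32 functions); it only illustrates the checkpoint/restore idea under hypothetical file and function names: state is checkpointed periodically so that, after a crash or update-driven restart, the application can put the user back where they left off.

```python
# Illustrative sketch (not the actual WER API) of restart-and-recover:
# periodically checkpoint session state, then restore it after a restart.
import json
import os
import tempfile

STATE_FILE = os.path.join(tempfile.gettempdir(), "applet_session.json")

def checkpoint(state):
    """Write the session state atomically so a crash mid-write
    cannot corrupt the last good checkpoint."""
    tmp = STATE_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def recover():
    """Restore the last checkpointed state, or an empty state if none."""
    if not os.path.exists(STATE_FILE):
        return {}
    with open(STATE_FILE) as f:
        return json.load(f)

checkpoint({"open_notes": [1, 2], "zoom": 1.5})
print(recover())
```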
As you can see, our primary effort for the applets in Windows 7 is to showcase some of the new platform APIs and innovations available to developers. As you get to try out these applications you will see that while showcasing the Windows 7 platform innovations, we have also added some commonly requested features and functionality. Some of them are: Check and correct, calculation modes and templates in Calculator, New brushes, shapes and multi-touch support in Paint, Open standards support in Wordpad and Ink and text, taskbar and search integration in Sticky notes. Maybe we won’t wait 10 years to update these again :-)
--Riyaz Pishori and team
We’ve seen some comments recently posted on a previous post on accessibility and a member of the User Interface Platform team wanted to offer some thoughts on the topic. Brett is a senior test lead who leads our efforts testing the Accessibility of Windows 7. --Steven
Hi, my name is Brett and I am the test lead for the Windows 7 Accessibility team. Back in November my colleague Michael wrote a blog post about the work our team is doing for Windows 7; I’m following up on that post and on some recent comments about our new screen Magnifier. On a personal note, I would like to mention that I’m a person with low vision and depend on some of the technologies that my team produces to help me in my work.
I’ve been using Windows 7 for my day-to-day work for several months; this is something we call “dogfooding”, which means using our own pre-release products long before the public ever sees a beta. I’ve been using Windows 7 as my primary operating system and have found our new Magnifier to be very useful to me.
Now, about our Magnifier. As you can imagine, the appeal of the many features in Windows varies from person to person; we often say that it is like making pizza for a billion people. The same is true for the features my team owns. I’ve read many comments about Magnifier since we released our Windows 7 beta: some are from people that have really benefited from our new work, some have suggestions, and others have concerns. I will say thanks for the feedback; we appreciate all types. Those of you that have benefited are mostly people that need basic magnification and appreciate the easy ability to zoom in and out as needed; I fall into this category myself. Those of you that need magnification in combination with custom colors, high contrast, or some screen readers probably haven’t been able to benefit from the new Magnifier; for you, we’ve made sure that the Vista magnifier continues to work. Let me explain a little more about what we’ve done in Windows 7.
To go into more detail about our implementation, I need to start with our graphics system in Windows. Over the last several years GPU technology has made huge advances, and in Vista we finally made the leap to a modern hardware-accelerated graphics system, which we call Aero, that takes advantage of the GPU. We often use the term Aero to refer to the specific visual elements of Windows, such as transparency and gradients. In practice it is more than that: the modern graphics rendering (technically the desktop window manager along with the DirectX APIs) is not just for aesthetics but for all forms of rendering, including text, 2D, and 3D, all using modern hardware-assisted graphics and a much richer API. It takes time, however, for Windows, software developers, and hardware manufacturers across the diverse ecosystem to adopt new technology, perhaps even over the course of several OS releases; so for a time we will have (and fully support) a mix of both old and new. For example, some screen readers do the great things they do by capturing the data that goes through the original Windows graphics system (GDI) and building their off-screen UI models, which is why they need to turn off the new rendering. On the other hand, our new Magnifier is integrated deeply into the desktop window manager (“Aero”) to leverage this graphics horsepower and deliver smooth full-screen, multi-monitor magnification.
While, as this demonstrates, these advances aren’t seamless, in Windows 7 my team has worked to make sure that we maintain Vista functionality and compatibility while making new investments. Magnifier is an example of this, we utilize the power of the GPU where we can to bring new capabilities to a broad spectrum of customers, and when Aero needs to be off, whether for screen readers, high-contrast or other needs, we maintain the existing capabilities in the product. And by maintaining compatibility as much as possible, many of the tools you depend on today will continue to work with Windows 7.
So, is Magnifier better for everyone? Not for everyone, but certainly for many people. More than that, I can honestly say that we have made advances to accessibility for everyone in Windows 7. As Michael noted in his posting, we invested in several areas: there’s not only the Magnifier and on-screen keyboard work, there is also significant work on the underlying accessibility APIs. We also actively support the community and recently made a grant to NV Access to help them improve their open source screen reader support for Windows 7 and Internet Explorer 8.
Thanks for reading, and thanks for your comments,