Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

Disk Defragmentation – Background and Engineering the Windows 7 Improvements



One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And from blogs we saw that a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. So many folks came to know that if your machine is slow you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is, if you turn your machine off every night then it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little bit more detail about the process, the perceptions and reality of performance gains, as well as the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. --Steven

In this blog, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is and why it matters.

Within the storage and memory hierarchy that forms the hardware pipeline between the hard disk and CPU, hard disks are comparatively slow and have comparatively high latency. Read/write times from and to a hard disk are measured in milliseconds (typically, 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can process that data in less than 10 nanoseconds (on average), once the data is in the L1 memory cache of the processor.

This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.

Graph of Historical Trends of CPU and IOPS Performance

Chart of Performance Improvements of Various Technologies

In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.

Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:

  1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.
  2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.

Both rules have a reasonably simple rationale:

  1. Each time an I/O is issued by the CPU, multiple software and hardware components have to do work to satisfy the request. This contributes toward increased latency, i.e., the amount of time until the request is satisfied. This latency is often directly experienced by users when reading data and leads to increased user frustration if expectations are not met.
  2. Movement of mechanical parts contributes substantially to incurred latency. For hard disks, the “rotational time” (time taken for the disk platter to rotate in order to get the right portion of the disk positioned under the disk head) and the “seek time” (time taken by the head to move so that it is positioned to be able to read/write the targeted track) are the two major culprits. By reading or writing in large chunks, the incurred costs are amortized over the larger amount of data that is transferred – in other words, the “per unit” data transfer costs decrease.
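
To make the second rule concrete, here is a minimal sketch (in C, using the Win32 API) that times reading the same file with small versus large requests. The file path is a placeholder and the exact numbers depend on your hardware, but on a conventional hard disk the large-request pass should finish in noticeably less wall-clock time, because the fixed per-request costs are paid far fewer times.

    /* Sketch: compare small vs. large read requests against the same file.
     * FILE_FLAG_NO_BUFFERING bypasses the OS cache so the two passes are
     * comparable; it requires sector-aligned buffers and transfer sizes
     * (VirtualAlloc returns page-aligned memory, and 4KB/1MB satisfy this). */
    #include <windows.h>
    #include <stdio.h>

    static double time_read_ms(const char *path, DWORD chunk)
    {
        HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                               NULL);
        if (h == INVALID_HANDLE_VALUE)
            return -1.0;                        /* open failed */

        void *buf = VirtualAlloc(NULL, chunk, MEM_COMMIT, PAGE_READWRITE);
        DWORD got = 0;
        ULONGLONG start = GetTickCount64();
        while (ReadFile(h, buf, chunk, &got, NULL) && got != 0)
            ;                                   /* data is discarded; we only time the I/O */
        double elapsed = (double)(GetTickCount64() - start);

        VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
        return elapsed;
    }

    int main(void)
    {
        const char *path = "C:\\temp\\sample.bin";   /* hypothetical large test file */
        printf("4KB requests: %.0f ms\n", time_read_ms(path, 4 * 1024));
        printf("1MB requests: %.0f ms\n", time_read_ms(path, 1024 * 1024));
        return 0;
    }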

File systems such as NTFS work quite hard to try and satisfy the above rules. As an example, consider the case when I listen to the song “Hotel California” by the Eagles (one of my all-time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try and find enough contiguous free space to be able to place the 5MB of data “together” on the disk. It does this because logically related data (e.g. contents of the same file or directory) is more likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used) which, in turn, will minimize mechanical movement of hard disk drive components and also ensure fewer issued I/Os.

Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find, leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.
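
If you are curious how fragmented a particular file is, NTFS will tell you: the documented FSCTL_GET_RETRIEVAL_POINTERS control code returns the list of extents (contiguous runs of clusters) that make up a file. The sketch below is a minimal C example; a real tool would grow the output buffer and loop on ERROR_MORE_DATA for heavily fragmented files.

    /* Sketch: print the extent list of a file; one extent == one contiguous run. */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                               FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        STARTING_VCN_INPUT_BUFFER in = {0};   /* start from the first cluster of the file */
        BYTE out[64 * 1024];                  /* room for many extents */
        DWORD bytes = 0;

        if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                            &in, sizeof(in), out, sizeof(out), &bytes, NULL)) {
            RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)out;
            LONGLONG vcn = rp->StartingVcn.QuadPart;
            printf("%lu extent(s)\n", rp->ExtentCount);
            for (DWORD i = 0; i < rp->ExtentCount; i++) {
                printf("  VCN %lld -> LCN %lld, %lld clusters\n",
                       vcn, rp->Extents[i].Lcn.QuadPart,
                       rp->Extents[i].NextVcn.QuadPart - vcn);
                vcn = rp->Extents[i].NextVcn.QuadPart;
            }
        } else if (GetLastError() == ERROR_HANDLE_EOF) {
            printf("No extents - the file is small enough to live inside the MFT.\n");
        } else {
            fprintf(stderr, "FSCTL_GET_RETRIEVAL_POINTERS failed: %lu\n", GetLastError());
        }

        CloseHandle(h);
        return 0;
    }

A file that reports a single extent is completely contiguous; many extents with scattered LCNs is exactly the fragmentation discussed above.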

So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:

  1. Any logically related content that was fragmented can be placed adjacently
  2. Free space can be coalesced so that new content can be written to the disk more efficiently

The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.

Example of disk blocks being defragmented.

Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism: whether, as in Windows, it’s a separate, schedulable task or whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.

Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have a more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often. Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. This got to the point that some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and the percentage of disk space in use for the average consumer is likely to be lower, causing relatively less fragmentation. Further, computers can utilize more RAM cheaply (often, enough to be able to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness. Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate fragmentation impact and deliver better responsiveness.

So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.

The incidence and extent of fragmentation in average home computers varies quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:

  • Availability of a system that has been “aged” to create fragmentation in a typical or representative manner. But, as noted above, there is no single, representative behavior. For example, the frequency and extent of fragmentation on a computer used primarily for web browsing will be different than a computer used as a file server.
  • Selection of meaningful disk-bound metrics, e.g. boot time and first-time application launch post boot.
  • Repeated measurements that can be statistically relevant

Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.

In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed. In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag are when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit. In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user-initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
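
The reasoning behind the 64MB cutoff can be sketched with simple arithmetic. The drive characteristics below are assumptions (a nominal desktop hard disk, not measurements from the defrag team), but they show how quickly an extra seek becomes noise once each fragment is large enough to be read sequentially for a long stretch:

    /* Back-of-the-envelope: cost of one extra seek relative to the time spent
     * sequentially transferring a fragment of a given size. Assumed numbers. */
    #include <stdio.h>

    int main(void)
    {
        const double seek_ms = 12.0;            /* assumed seek + rotational latency */
        const double throughput_mb_s = 80.0;    /* assumed sustained sequential read */
        const double fragment_mb[] = { 0.5, 4.0, 16.0, 64.0, 256.0 };

        for (int i = 0; i < 5; i++) {
            double transfer_ms = fragment_mb[i] / throughput_mb_s * 1000.0;
            double overhead = seek_ms / (seek_ms + transfer_ms) * 100.0;
            printf("%6.1f MB fragment: %7.1f ms transfer, seek adds %4.1f%% of total\n",
                   fragment_mb[i], transfer_ms, overhead);
        }
        return 0;
    }

With these assumed numbers, the extra seek accounts for roughly two thirds of the time spent on a 0.5MB fragment but well under 2% for a 64MB fragment – there is very little left to gain by gluing already-large pieces together.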

Note that a concept that seems relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex; accurately assessing its real impact requires a comprehensive evaluation of the entire system. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware & software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing towards system responsiveness that must be considered beyond a simple count of existing fragments.

The defragmentation engine and experience in Windows 7 have been revamped based on continuous and holistic analysis of impact on system responsiveness:

In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively. Further, defragmentation can be safely terminated at any time during the process, on any or all volumes, very simply (if required). The two screenshots below illustrate the ease-of-monitoring:

New Windows 7 Defrag User Interface

New Windows 7 Defrag User Interface

In Windows XP, defragmentation had to be a user-initiated (manual) activity, i.e. it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time. Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel, with no more waiting for one volume to finish before initiating the same operation on another volume! The screenshots below show how defragmentation can be concurrently scheduled on multiple volumes:

Windows 7 Defrag Schedule

Windows 7 Defrag Disk Selection

Among the other changes under the hood in Windows 7 are the following:

  • Defragmentation in Windows 7 is more comprehensive – many files that could not be re-located in Windows Vista or earlier versions can now be placed more optimally. In particular, a lot of work was done to make various NTFS metadata files movable. This ability to relocate NTFS metadata files also benefits volume shrink, since it enables the system to pack all files and file system metadata more closely and free up space “at the end”, which can be reclaimed if required.
  • If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and, in fact, could decrease overall media lifetime in certain cases (one way an application can query whether a disk reports a seek penalty is sketched after this list).
  • By default, defragmentation is disabled on Windows Server 2008 R2 (the Windows 7 server release). Given the variability of server workloads, defragmentation should be enabled and scheduled only by an administrator who understands those workloads.
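
For the curious, the storage stack exposes a “seek penalty” property that any application can query (assuming Windows 7-era SDK headers). The sketch below asks whether PhysicalDrive0 – the drive number is just an example – reports a seek penalty; this is one signal an application might look at, not necessarily the exact heuristic the defrag engine itself uses.

    /* Sketch: ask the storage stack whether a physical disk incurs a seek
     * penalty (rotating media) or not (likely solid-state). */
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", 0,        /* drive 0 assumed */
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "open failed: %lu\n", GetLastError());
            return 1;
        }

        STORAGE_PROPERTY_QUERY query = {0};
        query.PropertyId = StorageDeviceSeekPenaltyProperty;
        query.QueryType  = PropertyStandardQuery;

        DEVICE_SEEK_PENALTY_DESCRIPTOR desc = {0};
        DWORD bytes = 0;
        if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                            &query, sizeof(query), &desc, sizeof(desc), &bytes, NULL)) {
            printf("IncursSeekPenalty: %s\n",
                   desc.IncursSeekPenalty ? "yes (rotating media)" : "no (likely SSD)");
        } else {
            /* Older drives or drivers may not report this property at all. */
            fprintf(stderr, "query failed: %lu\n", GetLastError());
        }

        CloseHandle(h);
        return 0;
    }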

Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch-free listening to the Eagles :-).

Rajeev and Matt

Comments
  • Just writing to agree with sokolum: "It would be nice is the system would consider the file type and place them on a pre-reservered place on a harddisk."

    On a side note, much of the performance loss people complain about is related to Explorer add-ons (not disk defrag). Any chance for an Explorer add-on manager of some kind? Run 'em in a different process please!

  • Can you make it such that when multiple volumes are selected, they can be defragged one after the other (not in parallel) from the GUI? Otherwise this forces me to use defrag.exe and give up on the nice GUI. Defragmenting multiple volumes simultaneously takes a performance hit if I'm doing something else too on my PC.

  • I didn't see this - guess I've been busy using 7.  Anyway I just recently sent feedback that it's useless.  Got a Samsung 160GB Sata II small HD on my test machine and have used about 52 GB.

    Ran Defrag and it just puts 'pass 1 0.5%' and goes up to 100%, then it puts 'pass 2 35%', etc - apparently there are 10 passes - I didn't stop to watch, I watered the garden, did the washing and cooked a meal and it was still going... and going... after 4 hours it was on about 'pass 10 .05%'.

    There are commercial defragers that do a little bit in the background - or  - they may be bull..

    Whichever way - over 4 hours of not touching the computer to Defrag 52 GB means that Defrag is totally USELESS and will never work and/or run.

    160 GB is the smallest HD I could buy - most people will be buying 2 TB.  3 months to Defrag????

  • I was under the impression that the performance boost of defragging an SSD is like only 0.5%. As the access time is virtually 0 on an SSD, since it uses flash memory rather than a spinning platter, it didn't matter as much if the files were a bit muddled.... It could find them very quickly. A small overhead at most..... not worth the loss of lifetime and reliability.

    And the whole "Windows kernel (vs. Unix) is insecure" argument going on.... is just insane. Obviously the Windows kernel is more secure....

    I'll try and explain it with a metaphor. There are two towns in ye olde medieval days. One town is a large town of millions (Windows users); it's heavily fortified, there's always attackers trying to take over its lands and get inside and kill the people inside. Every now and then one does get in and gives a few people the plague, but they have pretty good doctors and they kill that type of plague.

    Now town 2. Well, it's a small little forest village. Only has about 500 people. Very community-like village. They have no riches. They have nothing. No one attacks them. But the big rich city always has people trying to attack it...

    So although at times Windows may seem insecure, it's EXTREMELY secure relatively. It's just a matter of perspective. Oh and sorry if my story sucked at trying to explain it to people who blatantly don't understand :). And most instability is indeed 3rd party. When Vista came out, it was like 70% of crashes were caused by nVidia, 5% ATI, 22% others, 3% Microsoft. Or something like that, I can't remember the specifics. But it was a mostly "nVidia's fault" scenario.

  • Mattisdada:

    That is a very common misconception, propagated by false claims from SSD manufacturers, software vendors, reviewers, etc.  People use the relatively low access time of SSDs to convince themselves and others that fragmentation is a non-issue.  But really, there are two main factors that determine how long it takes to read a file from start to finish: 1) access time (a time cost paid per I/O request) and 2) throughput (how much data can be read per unit of time).

    On a traditional spinning disk, the relatively high access time is what causes the most slowdown when trying to read a fragmented file.  If you need to read a file which is in 200 fragments, and your seek time is an average of 9ms, that's 1.8 seconds spent just seeking around.  On an SSD, that might be more like 200 * 0.1ms = 20ms.  A 90x improvement.

    But access time is only part of the picture.  The other part is throughput, and it's very important.  Let's say your SSD is capable of reading at 200MB/sec.  Well guess what - the throughput of an SSD is actually quite variable depending on I/O size.  The size of a single I/O request affects the throughput you get.  Look at the graphs here:

    http://www.guru3d.com/article/gskill-ssd-solid-state-disk-64-gb-review/6

    If you have to issue a bunch of I/O requests for say, 64KB and under, you're looking at 2-10x decrease in throughput for those requests.  The more fragmented a file is, the more small I/O requests are going to be needed to read it.  It will not be as fast as reading a contiguous file using larger I/O requests, and the difference could easily be much more than 0.5%.
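
    To put rough numbers on that (made-up but plausible figures, not measurements): model the total read time as one access cost per fragment plus the transfer time at whatever throughput the request sizes allow.

        /* Toy model: a contiguous read uses large requests at full throughput;
         * a 200-fragment read pays 200 access costs and lower throughput
         * because each request is small. All numbers are assumptions. */
        #include <stdio.h>

        static double read_time_ms(double file_mb, int fragments,
                                   double access_ms, double throughput_mb_s)
        {
            return fragments * access_ms + file_mb / throughput_mb_s * 1000.0;
        }

        int main(void)
        {
            double contiguous = read_time_ms(180.0, 1,   0.1, 200.0);
            double fragmented = read_time_ms(180.0, 200, 0.1, 160.0);
            printf("contiguous: %.0f ms, fragmented: %.0f ms (%.0f%% slower)\n",
                   contiguous, fragmented, (fragmented / contiguous - 1.0) * 100.0);
            return 0;
        }

    With those assumed numbers the fragmented read comes out roughly a quarter slower - nowhere near a 0.5% effect.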

    As I said before, I measured a 30% hit myself.  In that particular test, I used XP, an SATA2 128GB MLC SSD, a 180MB file in 200 fragments (downloaded by Firefox), SysInternals contig to measure and remove fragmentation, filemon to check actual I/O sizes, and a program I wrote to do read timing using different APIs, flags, and requested I/O sizes.

  • Nice summary of graphs and charts in the beginning, but I feel like a big "feature" is missing here.

    Perhaps it is not directly related to disk fragmentation (though I thought I heard it was, back in the early Vista days), but what about "aligning" of software for faster loads?

    A great example is during system boot - the OS knows it will need a lot of drivers, registry, executable files (dlls, etc). Aligning all of those for one or very few contiguous loads would significantly improve startup.

    Granted, this is something that can be "prepared" during the initial OS setup, but over time as you add hardware and update kernel pieces (security, anyone?), what will re-align these pieces for fast load?

    Plus, what about Icons and Background graphics and other such "nonsense" - this is all part of "creating" the user desktop, and the faster it happens the better the experience!

  • One more comment, to add my 2 cents to what tgrand mentions above:

    1. The SSD drives are in their infancy, and manufacturers (especially Intel, I hear) are making huge leaps forward with the way data is internally organized to provide huge seek/throughput improvements

    2. Due to the nature of Flash memory, writing and re-writing to the same memory regions degrades the media at an accelerated rate. To me, that says that the proper place to "defragment" a file is inside the internal firmware of the SSD drive, and not externally by the OS. Anyway, I understand most firmware already makes these decisions of where to physically write data, separate from the "logical" OS positioning, based in part on media degradation optimizations.

    On a last note, I'd like to see a post about what you're doing to optimize the OS for SSD drives. There is a world of functionality and improvements that can be gained, way beyond the silly "Boost" or whatever that thing is called in Vista. I am talking about scenarios where the boot partition is an SSD, or other kinds of "mix" where the system contains SSD and old-school drives. Another post, Win 7 team, perhaps?

  • adir1:

    You said "the proper place to 'defragment' a file is inside internal firmware of the SSD drive."  But how could this be done?  The kind of fragmentation we're talking about here occurs at the filesystem level - the "logical" OS positioning as you called it.  Only the OS can manage the filesystem.  A storage device can't possibly do it.

    It sounds like you're either mixing the concepts of filesystem fragmentation and wear leveling (they're really completely separate), or you're suggesting there should be some kind of new and very different interaction between OS and storage device...?

  • I must agree with adir1, solokum, hairs and shan.

    How Windows organizes the disk is far more relevant than the old-school 'defragmentation' routine. The first graph shows that disks are too slow compared to the CPU. So why not use the CPU power to determine the optimal place for a file when it is written? And use a continuously running service to optimize the disk when not in use (like Diskeeper).

    I've written more of how I'd like to see windows organize the disk @ http://www.larud.net/subtext/archive/2009/02/10/46.aspx

  • I would add this to the Disk Defrag option.

    Have an option of allowing disk defrag during a screen saver. This would help out since it is during an idle time. Make it a default and allow it to be turned off.

    I like the other suggestions provided too, but I think it might hinder performance if the OS is continually monitoring when "idle" time is available.

    Disk defrag during a screen saver is a good option. Although not everyone turns their screen saver on, it would be helpful.

  • Mr32bit, I think the OS always knows when the computer is idle.  It's not that hard to detect.

    And, screen savers (and screen power-off) on laptops that are running on battery power are there to save the battery; defrag while running on a battery may not be ideal.

  • I have mixed feelings about the following suggestions, but these are ideas I had while reading the post.

    1) Schedule a hard drive to turn on at a specified time and turn off once the defrag is complete. The drawback for this would be the additional power used in the middle of the night. The benefit would be a defragmented drive at little impact to the user. Of course, there would be no connection to a network or the internet when this occurs.

    2) Create an automatic defrag to occur when the system has been idle for 3-4 hours. Under normal use, this would only occur when your system has been left on overnight. Therefore, the user would only need to leave the system on during any given night. The current schedule listed above appears to require the user to remember to leave the system on on a given day (such as Wednesday). Many users would likely forget to leave their system on during their scheduled defrag time.

  • To schedule automatic defragmentation on idle, simply go to Control Panel > Administrative Tools > Task Scheduler. Expand Task Scheduler Library > Microsoft > Windows. Find the "Defrag" subfolder, right-click on the "ScheduledDefrag" task, and choose "Properties" from the context menu. Click the "Triggers" tab.

    Here you may add a new trigger. The only limit is your imagination :) For instance, choose "On idle" from the "Begin a task:" list. Press OK or fine-tune with the advanced settings.

    As simple as that.

  • Something good that I just read about Windows 7 is that "termination of defragmentation would not damage the system". But does this mean that it would on Vista? I mean, today I started defragmenting my Vista HD for the first time, when it has only 10 GB left on a 250 GB HD. After more than an hour, I noticed that defrag.exe was not even consuming any CPU at all in the task manager and assumed that it was all done, and so I killed the process. Only then did I notice it was still going on before I killed it. Would this have damaged any data on my drive?

    Having a status report while defrag is in process now has another reason: so you can actually tell that it is still going on!

  • I believe the defrag API and implementation in modern Windows is set up so that it shouldn't be possible to have data loss or corruption as a result of interrupting the defrag process - whether the interruption is you killing a process, a driver causing a BSOD, power to the system being cut, etc.  I highly doubt there was any fundamental change here between Vista and Windows 7.  But it would've been nice if you'd named your source.
