Engineering Windows 7

Welcome to our blog dedicated to the engineering of Microsoft Windows 7

Disk Defragmentation – Background and Engineering the Windows 7 Improvements


One of the features that you’ve been pretty clear about (I’ve received over 100 emails on this topic!) is the desire to improve the disk defrag utility in Windows 7. We did. And, judging from the blog posts we’ve seen, a few of you noticed, which is great. This is not as straightforward as it may appear. We know there’s a lot of history in defrag and how “back in the day” it was a very significant performance issue and also a big mystery to most people. So many folks came to believe that if your machine was slow, you had to go through the top-secret defrag process. In Windows Vista we decided to just put the process on autopilot with the intent that you’d never have to worry about it. In practice this turns out to be true, at least to the limits of automatically running a process (that is, if you turn your machine off every night then it will never run). We received a lot of feedback from knowledgeable folks wanting more information on defrag status, especially during execution, as well as more flexibility in terms of the overall management of the process. This post will detail the changes we made based on that feedback. In reading the mail and comments we received, we also thought it would be valuable to go into a little bit more detail about the process, the perceptions and reality of performance gains, as well as the specific improvements. This post is by Rajeev Nagar and Matt Garson, both Program Managers on our File System feature team. --Steven

In this blog, we focus on disk defragmentation in Windows 7. Before we discuss the changes introduced in Windows 7, let’s chat a bit about what fragmentation is, and its applicability.

Within the storage and memory hierarchy comprising the hardware pipeline between the hard disk and CPU, hard disks are relatively slower and have relatively higher latency. Read/write times from and to a hard disk are measured in milliseconds (typically, 2-5 ms) – which sounds quite fast until compared to a 2GHz CPU that can compute data in less than 10 nanoseconds (on average), once the data is in the L1 memory cache of the processor.
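
To put that gap in perspective, here is a quick back-of-the-envelope comparison. It is a minimal sketch that simply uses the illustrative figures from the paragraph above (roughly 5 ms per disk access and 10 ns per CPU operation), not measurements of any particular hardware:

    # Rough comparison of hard disk access latency vs. CPU operation latency,
    # using the illustrative figures quoted in the text above.
    disk_access_s = 5e-3   # ~5 ms for a random hard disk access
    cpu_op_s = 10e-9       # ~10 ns for a CPU operation on cached data

    ratio = disk_access_s / cpu_op_s
    print(f"One disk access takes as long as ~{ratio:,.0f} CPU operations")
    # -> One disk access takes as long as ~500,000 CPU operations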

This performance gap has only been increasing over the past 2 decades – the figures below are noteworthy.

Graph of Historical Trends of CPU and IOPS Performance

Chart of Performance Improvements of Various Technologies

In short, the figures illustrate that while disk capacities are increasing, their ability to transfer data or write new data is not increasing at an equivalent rate – so disks contain more data that takes longer to read or write. Consequently, fast CPUs are relatively idle, waiting for data to do work on.

Significant research in Computer Science has focused on improving overall system I/O performance, which has led to two principles that the operating system tries to follow:

  1. Perform less I/O, i.e. try and minimize the number of times a disk read or write request is issued.
  2. When I/O is issued, transfer data in relatively large chunks, i.e. read or write in bulk.

Both rules have a rationale that is reasonably simple to understand:

  1. Each time an I/O is issued by the CPU, multiple software and hardware components have to do work to satisfy the request. This contributes toward increased latency, i.e., the amount of time until the request is satisfied. This latency is often directly experienced by users when reading data and leads to increased user frustration if expectations are not met.
  2. Movement of mechanical parts contributes substantially to incurred latency. For hard disks, the “rotational time” (time taken for the disk platter to rotate in order to get the right portion of the disk positioned under the disk head) and the “seek time” (time taken by the head to move so that it is positioned to be able to read/write the targeted track) are the two major culprits. By reading or writing in large chunks, the incurred costs are amortized over the larger amount of data that is transferred – in other words, the “per unit” data transfer costs decrease.
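
To see why the second rule pays off, here is a minimal sketch of the amortization argument. The constants are illustrative assumptions (a fixed per-request overhead of about 12 ms for seek plus rotation, and a sequential transfer rate of about 100 MB/s); real drives vary, but the shape of the result is the point:

    # Effective throughput when reading data in chunks of different sizes.
    # The per-request overhead (seek + rotational delay) is amortized over
    # each chunk, so larger chunks waste less time on mechanical movement.
    # The constants below are illustrative assumptions, not measured values.
    PER_REQUEST_OVERHEAD_S = 0.012   # ~12 ms seek + rotation per I/O
    TRANSFER_RATE_MB_S = 100.0       # ~100 MB/s sequential transfer rate

    def effective_throughput(chunk_mb: float) -> float:
        """MB/s actually achieved when reading in chunks of chunk_mb."""
        time_per_chunk_s = PER_REQUEST_OVERHEAD_S + chunk_mb / TRANSFER_RATE_MB_S
        return chunk_mb / time_per_chunk_s

    for chunk in (0.064, 1, 8, 64):
        print(f"{chunk:>7} MB chunks -> {effective_throughput(chunk):6.1f} MB/s")
    #   0.064 MB chunks ->    5.1 MB/s
    #       1 MB chunks ->   45.5 MB/s
    #       8 MB chunks ->   87.0 MB/s
    #      64 MB chunks ->   98.2 MB/s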

File systems such as NTFS work quite hard to try and satisfy the above rules. As an example, consider the case when I listen to the song “Hotel California” by the Eagles (one of my all-time favorite bands). When I first save the 5MB file to my NTFS volume, the file system will try and find enough contiguous free space to be able to place the 5MB of data “together” on the disk, since logically related data (e.g. contents of the same file or directory) is more likely to be read or written around the same time. For example, I would typically play the entire song “Hotel California” and not just a portion of it. During the 3 minutes that the song is playing, the computer would be fetching portions of this “related content” (i.e. sub-portions of the file) from the disk until the entire file is consumed. By making sure the data is placed together, the system can issue read requests in larger chunks (often pre-reading data in anticipation that it will soon be used) which, in turn, will minimize mechanical movement of hard disk drive components and also ensure fewer issued I/Os.

Given that the file system tries to place data contiguously, when does fragmentation occur? Modifications to stored data (e.g. adding, changing, or deleting content) cause changes in the on-disk data layout and can result in fragmentation. For example, file deletion naturally causes space de-allocation and resultant “holes” in the allocated space map – a condition we will refer to as “fragmentation of available free space”. Over time, contiguous free space becomes harder to find leading to fragmentation of newly stored content. Obviously, deletion is not the only cause of fragmentation – as mentioned above, other file operations such as modifying content in place or appending data to an existing file can eventually lead to the same condition.

So how does defragmentation help? In essence, defragmentation helps by moving data around so that it is once again placed more optimally on the hard disk, providing the following benefits:

  1. Any logically related content that was fragmented can be placed adjacently
  2. Free space can be coalesced so that new content can be written to the disk efficiently

The following diagram will help illustrate what we’re discussing. The first illustration represents an ideal state of a disk – there are 3 files, A, B, and C, and all are stored in contiguous locations; there is no fragmentation. The second illustration represents a fragmented disk – a portion of data associated with File A is now located in a non-contiguous location (due to growth of the file). The third illustration shows how data on the disk would look once the disk was defragmented.

Example of disk blocks being defragmented.
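
To make the diagram concrete, here is a small toy model of the same idea: a disk as a row of blocks, a deletion that leaves a hole, a growing file that spills into the hole and becomes fragmented, and a defragmentation pass that rewrites each file contiguously and coalesces the free space. It is only a sketch of the concept; the file names and block counts are made up and it does not reflect how NTFS or the Windows defragmenter actually manage allocation:

    # Toy illustration of fragmentation and defragmentation on a block device.
    # Each slot holds a file's letter, or "." for a free block.
    disk = list("AAAABBBCCCCC") + ["."] * 4      # ideal layout: A, B, C contiguous

    # File B is deleted, leaving a hole in the middle of the volume...
    disk = [("." if blk == "B" else blk) for blk in disk]

    # ...and file A then grows by 5 blocks. The allocator fills the hole first,
    # so A ends up split across non-contiguous runs: A is now fragmented.
    grow = 5
    for i, blk in enumerate(disk):
        if blk == "." and grow:
            disk[i] = "A"
            grow -= 1

    print("fragmented:  ", "".join(disk))        # AAAAAAACCCCCAA..

    # Defragmentation: rewrite each file as a single contiguous run and push
    # all remaining free space to the end of the volume.
    defragged = []
    for name in sorted(set(disk) - {"."}):
        defragged += [name] * disk.count(name)
    defragged += ["."] * disk.count(".")

    print("defragmented:", "".join(defragged))   # AAAAAAAAACCCCC..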

Nearly all modern file systems support defragmentation – the differences generally are in the defragmentation mechanism: whether, as in Windows, it’s a separate, schedulable task or whether the mechanism is more implicitly managed and internal to the file system. The design decisions simply reflect the particular design goals of the system and the necessary tradeoffs. Furthermore, it’s unlikely that a general-purpose file system could be designed such that fragmentation never occurred.

Over the years, defragmentation has been given a lot of emphasis because, historically, fragmentation was a problem that could have more significant impact. In the early days of personal computing, when disk capacities were measured in megabytes, disks got full faster and fragmentation occurred more often. Further, memory caches were significantly limited and system responsiveness was increasingly predicated on disk I/O performance. It got to the point that some users ran their defrag tool weekly or even more often! Today, very large disk drives are available cheaply and % disk utilization for the average consumer is likely to be lower, causing relatively less fragmentation. Further, computers can utilize more RAM cheaply (often, enough to be able to cache the data set actively in use). That, together with improvements in file system allocation strategies as well as caching and pre-fetching algorithms, further helps improve overall responsiveness. Therefore, while the performance gap between the CPU and disks continues to grow and fragmentation does occur, combined hardware and software advances in other areas allow Windows to mitigate fragmentation impact and deliver better responsiveness.

So, how would we evaluate fragmentation given today’s software and hardware? A first question might be: how often does fragmentation actually occur and to what extent? After all, 500GB of data with 1% fragmentation is significantly different than 500GB with 50% fragmentation. Secondly, what is the actual performance penalty of fragmentation, given today’s hardware and software? Quite a few of you likely remember various products introduced over the past two decades offering various performance enhancements (e.g. RAM defragmentation, disk compression, etc.), many of which have since become obsolete due to hardware and software advances.

The incidence and extent of fragmentation in average home computers vary quite a bit depending on available disk capacity, disk consumption, and usage patterns. In other words, there is no general answer. The actual performance impact of fragmentation is the more interesting question, but even more complex to accurately quantify. A meaningful evaluation of the performance penalty of fragmentation would require the following:

  • Availability of a system that has been “aged” to create fragmentation in a typical or representative manner. But, as noted above, there is no single, representative behavior. For example, the frequency and extent of fragmentation on a computer used primarily for web browsing will be different than a computer used as a file server.
  • Selection of meaningful disk-bound metrics, e.g., boot time and first-time application launch after boot.
  • Repeated measurements, enough that the results are statistically relevant (see the sketch below).
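
For the last point above, here is a minimal sketch of what “repeated, statistically relevant measurements” could look like in practice: time the same disk-bound scenario several times and compare the means against the run-to-run spread. The numbers below are made-up placeholders, not measurements:

    # Summarize repeated timings of a disk-bound scenario (e.g. boot time or
    # first application launch after boot). A single run says little; the mean
    # plus the run-to-run spread is what makes a before/after comparison useful.
    from statistics import mean, stdev

    # Hypothetical timings in seconds, before and after defragmentation.
    before = [41.2, 39.8, 42.5, 40.9, 41.7]
    after = [38.9, 39.4, 38.1, 39.9, 38.5]

    for label, runs in (("before", before), ("after", after)):
        print(f"{label:>6}: mean {mean(runs):5.1f}s, stdev {stdev(runs):4.2f}s over {len(runs)} runs")

    # If the difference between the means is small compared to the spread,
    # the apparent "gain" may just be measurement noise.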

Let’s walk through an example that helps illustrate the complexity in directly correlating extent of fragmentation with user-visible performance.

In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. So, which one is correct? Well, before the question can be answered we must understand why defrag in Vista was changed. In Vista, we analyzed the impact of defragmentation and determined that the most significant performance gains from defrag are when pieces of files are combined into sufficiently large chunks such that the impact of disk-seek latency is not significant relative to the latency associated with sequentially reading the file. This means that there is a point after which combining fragmented pieces of files has no discernible benefit. In fact, there are actually negative consequences of doing so. For example, for defrag to combine fragments that are 64MB or larger requires significant amounts of disk I/O, which is against the principle of minimizing I/O that we discussed earlier (since it decreases total available disk bandwidth for user-initiated I/O), and puts more pressure on the system to find large, contiguous blocks of free space. Here is a scenario where a certain amount of fragmentation of data is just fine – doing nothing to decrease this fragmentation turns out to be the right answer!
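
The intuition behind a cut-off like 64MB can be sketched with the same kind of back-of-the-envelope math as earlier. Again, the constants are illustrative assumptions (about 12 ms of seek plus rotation to reach the next fragment, and about 100 MB/s sequential transfer); the exact figures differ per drive, but the diminishing returns are the point:

    # Fraction of total read time contributed by the extra seek needed to reach
    # the next fragment, as a function of fragment size. Once fragments are tens
    # of megabytes, the seek is noise next to the time spent streaming the data.
    # Constants are illustrative assumptions, not measurements.
    SEEK_S = 0.012        # ~12 ms to seek to the next fragment
    RATE_MB_S = 100.0     # ~100 MB/s sequential transfer rate

    for frag_mb in (1, 8, 64, 256):
        read_s = frag_mb / RATE_MB_S
        overhead = SEEK_S / (read_s + SEEK_S)
        print(f"{frag_mb:>4} MB fragment: the seek is {overhead:5.1%} of the total cost")
    #    1 MB fragment: the seek is 54.5% of the total cost
    #    8 MB fragment: the seek is 13.0% of the total cost
    #   64 MB fragment: the seek is  1.8% of the total cost
    #  256 MB fragment: the seek is  0.5% of the total cost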

Note that a concept that seems relatively simple to understand, such as the amount of fragmentation and its impact, is in reality much more complex; accurately assessing its real impact requires a comprehensive evaluation of the entire system. The different design decisions across Windows XP and Vista reflect this evaluation of the typical hardware & software environment used by customers. Ultimately, when thinking about defragmentation, it is important to realize that there are many additional factors contributing towards system responsiveness that must be considered beyond a simple count of existing fragments.

The defragmentation engine and experience in Windows 7 have been revamped based on continuous and holistic analysis of impact on system responsiveness:

In Windows Vista, we had removed all of the UI that would provide detailed defragmentation status. We received feedback that you didn’t like this decision, so we listened, evaluated the various tradeoffs, and have built a new GUI for defrag! As a result, in Windows 7, you can monitor status more easily and intuitively. Further, defragmentation can be safely terminated any time during the process and on all volumes very simply (if required). The two screenshots below illustrate the ease-of-monitoring:

New Windows 7 Defrag User Interface

New Windows 7 Defrag User Interface

 

In Windows XP, defragmentation had to be a user-initiated (manual) activity, i.e., it could not be scheduled. Windows Vista added the capability to schedule defragmentation – however, only one volume could be defragmented at any given time. Windows 7 removes this restriction – multiple volumes can now be defragmented in parallel, with no more waiting for one volume to be defragmented before initiating the same operation on some other volume! The screenshots below show how defragmentation can be concurrently scheduled on multiple volumes:

Windows 7 Defrag Schedule

Windows 7 Defrag Disk Selection

Among the other changes under the hood in Windows 7 are the following:

  • Defragmentation in Windows 7 is more comprehensive – many files that could not be re-located in Windows Vista or earlier versions can now be optimally re-placed. In particular, a lot of work was done to make various NTFS metadata files movable. This ability to relocate NTFS metadata files also benefits volume shrink, since it enables the system to pack all files and file system metadata more closely and free up space “at the end” which can be reclaimed if required.
  • If solid-state media is detected, Windows disables defragmentation on that disk. The physical nature of solid-state media is such that defragmentation is not needed and in fact, could decrease overall media lifetime in certain cases.
  • By default, defragmentation is disabled on Windows Server 2008 R2 (the Windows 7 server release). Given the variability of server workloads, defragmentation should be enabled and scheduled only by an administrator who understands those workloads.

Best practices for using defragmentation in Windows 7 are simple – you do not need to do anything! Defragmentation is scheduled to automatically run periodically and in the background with minimal impact to foreground activity. This ensures that data on your hard disk drives is efficiently placed so the system can provide optimal responsiveness and I can continue to enjoy glitch-free listening to the Eagles :-).

Rajeev and Matt

Comments
  • A suspicious mind might come to the conclusion that the decision to omit the graphical fragmentation map found in earlier versions of defrag might have to do with the admitted fact that fragments greater than 64MB are now ignored.

    Bad choice. It should have been an option flag.

    Choosing to make a function *less* powerful in order to *improve* the user experience is not really progress.

    BTW, in much earlier times, files were actually laid out on the memory drums such that the next required piece of stored data was just reaching the read heads at the time the head was ready for the next sector. This permitted the hardware to mimic, as much as possible, a continuously available data stream.

    Now, *that* is optimisation.

  • If I understand correctly, it sounds as if you are eliminating defrag of files above 64MB in size completely.

    I find this troubling since I manipulate database files as large as 2-3GB in size frequently. I used O&O to defrag these files in XP and it made a noticeable difference in access times.

    I don't understand the logic of eliminating defrag completely for large files.

  • Well, the new features of Diskeeper 2010 mean 85% of file fragmentation is prevented before it happens.. it also has the smarts to monitor and defrag only when resources are idle. Seems pretty sexy to me!

  • I thought that there was more to background defrag than a simple scheduled task :( I have used it since XP, when the command line "defrag" utility was introduced. A simple batch file worked great. Nevertheless, defrag in W7 is now what it should have been a long time ago.

  • I still want a graphical representation of the data on the disk!  In XP you could see if the pagefile was in 1 contiguous chunk because it was a green chunk (unmovable) (yes, I know there are a couple other files that can also be part of the green space but those can be turned off and removed).  I like my pagefile to have no holes in it for obvious reasons.  So, I disabled virtual memory (and hibernation, system restore, etc), then checked defrag to make sure there are no green blocks - good, now defrag. Now set the page file to a set size (same min and max, NOT system managed) and reboot - open defrag to verify that it was written in 1 chunk. Good, now turn system restore or whatever else back on.

    So, how can I ensure the page file is contiguous in 7???

  • Can't it be written so it optionally runs at Shutdown?

  • For those of you bashing Linux or Unix: several people said that Linux/Unix de-fragments as it writes. That is incorrect. Linux/Unix keeps track of what is written on the file system. Afterwards, when it writes, it knows where to put everything so it does not have to fragment files. Secondly, Linux is more secure than Windows. User privilege restrictions in Linux are strongly enforced. In Windows, last time I knew, there were serious bugs in DLL files that allowed non-privileged users to do privileged things, placing the system at risk.

    Next, Linux does not identify executables by extension. Windows does and this is a big flaw.

    Everyone likes to pick on the concept of “everything is a file” and say that’s so old.

    Well those files for devices really do not exist on the hard-drive, they exist in ram as mapped files which have backing in ram.

    Windows devices are also mapped into ram.  Both are different abstractions but they both are in ram, making them equally fast. By Linux/UNIX letting you treat devices as a file, it allows simple read and write commands to manipulate a device. Windows may have separate API commands for manipulating devices, which may number in the hundreds.

    For those of you who say UNIX is so old also... Most operating systems have barely changed..

    All operating systems and processors use the same seven logic gates.... AND, OR, XOR, NOT, !AND, !OR, !XOR for manipulating things... every operating system and processor uses that... So technically Windows is as old as dirt.

  • One more comment.

    When the Windows Update servers were being attacked, why did Microsoft choose to hide behind Linux servers?

  • As far as I understood the article, the Win7 defragmenter is improved and it can even fix MFT fragmentation.

    In my PC I have an XP-SP3 NTFS boot disk with 4 fragments in the MFT and XP can't fix it; no other files are reported as fragmented by the XP defragmenter, but I can see red marks in the defragmenter GUI on XP.

    I took this disk to a Win7 PC and ran defragmentation there. After that I moved it back to the XP machine and I see: the MFT fragmentation remains and I also got 10+ fragmented files reported. After defragmentation on XP these fragmented files were defragmented OK.

    Questions:

    1. Is this OK?

    2. Can I be sure that the Win7 defragmenter always keeps an XP-compatible NTFS disk as an XP-compatible one?

  • Really enjoyed that, thanks. I need to start leaving my PC on overnight tho!

  • Why does my Disk Defrag ALWAYS show the Last Run status as '0% fragmented' ... does that mean it never requires it?

  • Windows 7, Asus U80: the defrag command in a safe mode command prompt did not work. It only shows information about that command. Is this normal?

  • I hate not SEEING the process, like we used to! I want to be able to see the difference between the mess I had and the nice tidy disk I'm getting. Now it's just a number ticking. Nowhere near as satisfying. Can this be changed?!   =(

  • A very interesting read.

    Running Vista I find that turning OFF scheduled defrag and running "as and when" from a command prompt using cmd.exe and entering defrag c: -v -w is by far and away the best. You get a detailed text report on the level of fragmentation, all file sizes are defragmented, and once you have run the routine once, subsequent runs are very quick, usually around 2 to 5 minutes maximum.

    I do this every week or so.

    Also, as a user of incremental backups, it is of course best not to have defrag making changes, no matter how small, to the file system, as that greatly increases backup time and space.

    The auto defrag is great in theory; in practice it's not quite so good, for all the various reasons outlined in these pages.

  • You can tell this is a non-optimum solution due to the sheer number of defrag "competitors".

    The defrag-while-running solution leaves numerous unmovable files at the capacity end.  Case in point: try to shrink a volume right after a factory install.  I've got a >200Gb drive, using only 40G, and can't get it to shrink below 149Gb.  Locked files included Restore Points, the Windows Search service, the hibernate file, the memory cache, index.dat under the user account, Windows Update files under windows/SoftwareDistribution, and the UsrJrnl (created by chkdsk).  And those are the ones I was able to manually delete/move/recreate.

    I also attempted to run the defrag service from safe mode with all services disabled except disk defrag, but it has dependencies and failed to start.

    What is needed is an offline defrag that can run without services or pre/during-boot.

    Anyway, if you ever try to shrink a volume you'll see what the fuss is all about.  Event Viewer lists the last file that is "in the way" and you have to go from there.
