Welcome to our blog dedicated to the engineering of Microsoft Windows 7
This post is about disk space and the disk space “consumed” by Windows 7. Disk space is the sort of thing where everyone wants to use less, but the cost of using a bit more relative to the benefits has generally been a positive tradeoff. Things have changed recently with the availability of solid-state drives in capacities significantly smaller than the trend in spinning drives. Traditionally, most software, including Windows, would not hesitate to consume 100MB for a specific (justified) need when looking at a 60GB (or 1,500GB) drive; with desirable machines shipping with 16GB of solid-state storage, we are looking carefully at the disk space used by Windows—both at setup time and also as a PC “ages”. We also had a specific session at WinHEC on solid-state drives that might be interesting to folks. This post is authored by Michael Beck, a program manager in the core OS deployment feature team. --Steven
Let’s talk about “footprint”. For the purposes of this post, when I say “footprint” I’m talking about the total amount of physical disk space used by Windows. This includes not only the Windows binaries, but all disk space consumed or reserved for system operations. Later in this entry, I’ll discuss in detail how the disk footprint is consumed by various Windows technologies.
A number of comments have asked about disk footprint and what to expect in terms of Windows 7’s usage of disk space. Like many of the design issues we have talked about, disk space is one where there are tradeoffs involved, so this post goes into the details of some of those tradeoffs and also discusses some of the feedback we have received. It should be noted that we are not yet at the point where we are committing to system requirements for Windows 7, so consider this background and engineering focus.
To structure this post we’ll take two important points of feedback or questions we have received:
We’ll then talk about the focus and engineering of Windows 7.
We definitely get a lot of questions about the new (to Vista) Windows SxS directory (%System Root%\winsxs), and many folks believe this is a big consumer of disk space, as just bringing up the properties on a newly installed system shows over 3000 files and over 3.5 GB of disk consumed. Over time this directory grows to even higher numbers. Yikes--below is an example from Steven's home PC.
“Modularizing” the operating system was an engineering goal in Windows Vista. This was to solve a number of issues in legacy Windows related to installation, servicing and reliability. The Windows SxS directory represents the “installation and servicing state” of all system components. But in reality it doesn’t actually consume as much disk space as it appears when using the built-in tools (DIR and Explorer) to measure disk space used. The fact that we make it tricky for you to know how much space is actually consumed in a directory is definitely a fair point!
In practice, nearly every file in the WinSxS directory is a “hard link” to a physical file elsewhere on the system—meaning that the files are not actually in this directory. For instance, in WinSxS there might be a file called advapi32.dll that appears to take up >700K; however, what’s being reported is a hard link to the actual file, which lives in Windows\System32, and that file will be counted twice (or more) when simply looking at the individual directories from Windows Explorer.
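The double counting is easy to reproduce with ordinary files. The sketch below (Python, with purely illustrative file names) creates one ~700K file and a second hard link to it; both names report the full size, while the link count shows there is only one copy of the data on disk.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
original = os.path.join(tmp, "system32_advapi32.dll")  # names are illustrative
linked = os.path.join(tmp, "winsxs_advapi32.dll")

with open(original, "wb") as f:
    f.write(b"x" * 700 * 1024)          # ~700K of payload

os.link(original, linked)               # second directory entry, same data

# A naive per-entry sum, like DIR or Explorer, counts the data twice...
naive_total = os.path.getsize(original) + os.path.getsize(linked)

# ...but the file's link count reveals the two names share one copy.
st = os.stat(original)
print(st.st_nlink, naive_total)         # 2 1433600
```

The same behavior applies to NTFS hard links created with CreateHardLink or `mklink /h`: the data exists once, no matter how many directory entries point at it.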
The value of this is that the servicing platform (the tools that deliver patches and service packs) in Windows can query the WinSxS directory to determine a number of key details about the state of the system, like what’s installed, or available to be installed (optional components, more on those later), what versions, and what updates are on the system to help determine applicability of Windows patches to your specific system. This functionality gives us increased servicing reliability and performance, and supports future engineering efforts providing additional system layering and great configurability.
The WinSxS directory also enables offline servicing, and makes Windows Vista “safe for imaging”. Prior to Windows Vista, inbox deployment support was through “Setup” only. IT professionals would install a single system, and then leverage any number of 3rd party tools to capture the installed state as a general image they then deployed to multiple systems. Windows wasn’t built to be “image aware”. This meant that greater than 80% of systems were deployed and serviced using a technology that wasn’t supported natively, and required IT departments to create custom solutions to deploy and manage Windows effectively. In addition, state stored in the WinSxS directory can be queried “offline”, meaning the image doesn’t have to be booted or running, and patches can be applied to it. These two features of WinSxS give great flexibility and cost reductions to IT departments who deploy Windows Vista, making it easier to create and then service standard corporate images offline.
While it’s true that WinSxS does consume some disk space by simply existing, and there are a number of metadata files, folders, manifests, and catalogs in it, it’s significantly smaller than reported. The actual amount of storage consumed varies, but on a typical system it is about 400MB. While that is not small, we think the robustness provided for servicing is a reasonable tradeoff.
So why does the shell report hard links the way it does? Hard links work to optimize disk footprint for duplicate files all over the system. Application developers can use this functionality to optimize the disk consumption of their applications as well. It’s critical that any path expected by an application appear as a physical file in the file system to support the appropriate loading of the actual file. In this case, the shell is just another application reporting on the files it sees. As a result of this confusion and a desire to reduce disk footprint, many folks have endeavored to just delete this directory to save space.
There have been several blogs and even some “underground” tools that tell you it’s ok to delete the WinSxS directory, and it’s certainly true that after installation, you can remove it from the system and it will appear that the system boots and runs fine. But as described above, this is a very bad practice, as you’re removing the ability to reliably service all operating system components and the ability to update or configure optional components on your system. Windows Vista only supports the WinSxS directory on the physical drive in its originally installed location. Given the data described above, the risks of removing or relocating it far outweigh the gains.
As we all know, adding new functionality consumes additional disk space--in Windows or any software. In reality, “code” takes up a relatively small percentage of the overall Windows footprint. The actual code required for a Windows Vista Ultimate install is just over 2GB, with the rest of the footprint going to “data” broadly defined. Let’s dig deeper into the use of storage in a Windows Vista installation and what we mean by "data".
Reliability and security were core considerations during the engineering process that built Windows Vista. Much of the growth in footprint comes from a number of core reliability features that users depend on for system recovery, performance, data protection, and troubleshooting. Some of these include system restore, hibernation, page file, registry backup, and logging. Each of these represents “backup state” that is available to the system to recover from any number of situations, some planned and others not. Because we know that different customers will want to make different tradeoffs of disk space relative to recovery (especially on small footprint devices), with Windows 7 we want to make sure you have more control than you currently do to decide ahead of time how much disk space to use for these mechanisms, and we will also tune our defaults to be more sensitive to overall consumption given the changing nature of storage.
System restore and hibernation are features that help users confidently recover their system and prevent data loss in a number of situations, such as low battery (hibernation) or a bad application installation or other machine corruption (system restore). Combined, these features consume a large percentage of the footprint. Because of the amount of space they use, they are easy to identify and make decisions about.
System restore protects users by taking snapshots of the system prior to changes and at regular intervals. In Windows Vista, system restore is configured to consume a minimum of 300MB, and up to 15% of the physical disk. As that space fills with restore points, System Restore deletes older restore points to make room for new ones. The more space you have, the greater the number of restore points available to “roll back” to. We have definitely heard the feedback from Windows Vista customers around system restore and recognize that it takes significant space and is not easy to tune. Some have already seen that the Windows 7 pre-beta provides an interface to manage the space better.
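As a rough model of the sizing rule just described (the function name and exact formula here are illustrative, not the actual Windows implementation), the ceiling works out like this:

```python
def restore_ceiling_bytes(disk_bytes, floor=300 * 2**20, cap=0.15):
    """Illustrative model of the Vista-era rule described above:
    System Restore may grow to 15% of the physical disk, but the
    reserve is never smaller than a 300MB floor."""
    return max(floor, int(disk_bytes * cap))

GB = 2**30
# On a 16GB SSD, the 15% cap alone is roughly 2.4GB of a small drive.
print(restore_ceiling_bytes(16 * GB) / GB)
```

This is why a percentage-based default that was harmless on a 500GB spinning drive becomes a real cost on a 16GB solid-state device.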
Hibernate is primarily used on mobile PCs and saves your work to the hard disk and puts the computer in an extremely low power state. Hibernate is used on mobile PCs when the battery drains below a certain threshold or when turning the computer off without using Shut Down to extend battery life as much as possible. On Windows Vista, Hibernate is also automatically used with Sleep on desktop PCs to keep a backup copy of open programs and work. This feature is called Hybrid Sleep and is used to save state to the hard disk in case power fails while the computer is sleeping. Hibernate writes all of the content in memory (RAM) to a file on the hard drive named Hiberfil.sys. Therefore, the size of the reserved Hiberfil.sys is equal to the amount of RAM in the machine. In the Windows Vista timeframe, the amount of RAM being built into computers has increased significantly, thus the disk footprint of Hibernate is more noticeable than before. This space must be reserved up front to guarantee that in a critical low battery situation, the system can easily write memory contents to the disk. Any mobile PC user that has experienced their computer automatically entering Hibernate when the battery is critically low can appreciate the peace of mind this footprint growth provides. While we're talking about RAM and disk footprint in the same paragraph, Mark Russinovich has a post this week on virtual memory and how big the swapfile could/should/can be that you might find interesting.
Now it’s clear that in the description above, I don’t account for the entire footprint required by Windows Vista. For instance, we also include many sample files, videos, high resolution backgrounds that allow users to easily customize their experience, and try out new features, but we’ve covered a couple of the more common questions out there.
It’s important that we consider more than just the size of the system once deployed, but we must also look at how the system grows over time as services write logs, updates, and service packs are installed, system snapshots are taken etc. For many, the “growth” over time of the installation proves to be the most perplexing—and we hear that and need to do better to (a) make smarter choices and (b) make it clearer what space is being consumed and can be reclaimed.
The following table provides one view of the installation footprint of a Windows Vista Premium/Ultimate installation. This includes the full installation, but to make it digestible this has been broken down into some logical categories and also highlights some specific features. Part of the reason to highlight specific features is to illustrate the “costs” for items that have been raised as questions (or questionable).
Here are some items worth calling out:
Windows disk space consumption has trended larger over time. While not desirable, the degree to which it’s been allowed is due in large part to ever-increasing hard drive capacity, combined with customer needs and an engineering focus that weighted heavily toward recoverability, data protection, increasing breadth of device support, and demand for innovative new features. However, the proliferation of Solid State Drives (SSDs) has challenged this trend, and is pushing us to consider disk footprint much more thoughtfully and take it into account for Windows 7.
This doesn’t mean that we’re going to stop adding great features or make Windows less reliable or recoverable. As we look to the future, it’s critical that as we innovate, we treat the disk space consumed by our work as a valuable resource and have a clearer design for how Windows uses it. We want to make sure that we are making smart choices for the vast majority of customers and, for those desiring more control, providing places to fine-tune these choices as appropriate. This design goal isn’t about one type of machine or a specific design; all Windows editions benefit from efforts that focus on reducing the overall footprint.
For example, as we consider the driver support discussed above, Windows Vista with SP1 installs almost 1GB of drivers on the system to support plug and play of devices. This local cache can get out of date as IHVs release updates to their drivers, and as a result, users are pushed to Windows update to get the latest version once the device is installed.
Why not extend the PnP user experience to include (or only use) the Windows Update cache of drivers and save some disk space? This has several benefits:
With this example it’s easy to see how engineering for a minimal footprint might actually deliver a better experience for people when attaching new devices to their systems. At the same time, we want to be careful about going too far too soon. We get a tremendous amount of feedback regarding the “plug and play” experience or feedback about costly download times (if download is at all possible). For Windows 7 we are going to continue to be deliberate in what we include based on the telemetry of real world devices and reducing the inbox set to cover the most popular devices around the world. At the same time we will continue a very significant effort around having the best available Windows Update site for all devices we can possibly support.
Windows features installed by default make sense in most cases to support many scenarios. We should consider how we make some features/components (such as Media Center) optional when they are not required, rather than installing them by default on every system. We’re committed to making more features of Windows optionally installed. As you might notice today in Windows, when you choose to add a feature that was not installed, Windows does not require a source (a DVD or network location). This is because the feature is stashed away as part of a complete Windows install—this is itself a feature. We will always keep features available and will always service them even when components are not installed—that way, if you add a component later, you do not risk adding a piece of code that might have been exploited earlier. This is another important way we keep Windows up to date and secure, even for optional features.
System growth over time is an area where we need to provide more “transparency”. For instance, Windows will archive previous versions of updated system components to allow robust rollback. A new system will install patches as Windows Update makes them available, just as expected by design. When a Service Pack or other large update is installed that contains or supersedes any of the previous patches, we can simply recover the space used by the old updates sometime after the update is successfully installed.
Windows writes logs in many places to aid in troubleshooting and these logs can grow very large. For instance, when an application crashes, Windows will archive a very large dump file to support analysis of the failure. There are many good reasons for this behavior, but as we change our mindset towards footprint, we need to extend our scenarios to include discussions of how to manage the growth, and recover the disk space consumed whenever possible. Other areas where we are considering the default disk space reserved include System restore and hibernation. On a disk constrained system, the 1GB or more reserved to support hibernation is costly and there may be ways to shrink the size of hiberfil.sys. System restore should be configurable, and default in all cases to the minimally useful number of snapshots vs. a blanket 15% of the system disk.
At WinHEC we had several machines on display with 16GB drives/partitions, and on those you could see there was plenty of free disk space. As with all benchmarks, measuring disk space on the pre-beta is not something we’re encouraging at this time.
In conclusion, as we develop Windows 7 it’s likely that, with engineering efforts across the team, the system footprint will be smaller than Windows Vista’s, which should allow for greater flexibility in system designs by PC manufacturers. We will do so with more attention to defaults and more control available to OEMs, end users, and IT pros, and without compromising the reliability and robustness of Windows overall.
I would like to join the others who have expressed the desire to have more control over what gets installed and when. Especially I would like to point out that for most users the hardware present during installation will not change much, and if it does, it is so seldom that the user can live with having to use the installation disc or downloading drivers from the net. Also, when new hardware is installed it is usually newer than the Windows installation and there are no drivers for it on disc. In short, for the majority of drivers, if the driver is not needed during installation it probably will not be needed later either (printers are a prime example); of course there are exceptions such as USB memory sticks etc., but they should be easy to find. (Don't you get information about what drivers are installed from the Customer Experience Improvement Program?)
As others have pointed out, I don't particularly like the idea of spending lots of disc space on features (and updates to those features) I do not use and often will not use, ever.
And while vsp1cln.exe is a great tool, I guess that most users do not know that it exists, so why not make it easier for users by simply asking whether they want to keep uninstall information when the installation is completed? And once again, do you have any statistics on how often users actually uninstall a service pack?
Now a few answers to comments from other users
Yes, it is possible to standardise network drivers. You actually don't need more than a few drivers per network type and chip maker. This is how it works in the open source world, where one driver is enough to support all wireless cards that use the Ralink network chip. The reason it's not done like this in Windows is probably a combination of tradition and politics. And don't tell me that the device manufacturers' drivers provide more features and configurability; the open source drivers usually have better support than the proprietary ones (except for the proprietary features, which you won't be using anyway).
Fragmentation is still a problem on SSDs since it requires access to more locations on the disc and thus more than one DMA transfer, e.g. to read a file in four fragments you need to make four requests to the disc. Fragmentation will probably also result in suboptimal usage of disc space.
Regarding the hibernation-file:
It is true that you usually do not need to save the full size of the RAM, since you usually don't use more than perhaps half of it, much of it is buffers that don't have to be saved, and you could use compression. But to prevent catastrophic failure when, for example, the battery runs out, you must be sure that you have enough disk space to store everything that has to be saved. The problem is determining how much space is required, since it changes depending on what applications you have open, and the compression ratio depends on what's in that memory. In short, reserving space for all RAM is the safest option.
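The unpredictability of the compression ratio is easy to demonstrate: compressing a zero-filled buffer versus random data (standing in for idle versus busy memory pages) gives wildly different results, so a reserve sized for a "typical" ratio could fail exactly when it is needed. A small sketch, using zlib purely as a stand-in for whatever compressor a hibernation path might use:

```python
import os
import zlib

PAGE = 4096
zeroed = b"\x00" * (256 * PAGE)        # 1MB of idle, zero-filled pages
random_data = os.urandom(256 * PAGE)   # 1MB of incompressible content

for label, blob in (("zeroed", zeroed), ("random", random_data)):
    ratio = len(zlib.compress(blob)) / len(blob)
    print(f"{label} pages compress to {ratio:.1%} of original size")
```

The zeroed buffer shrinks to a fraction of a percent while the random one barely shrinks at all, which is exactly why sizing the reserve at 100% of RAM is the only guarantee.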
A few words on hardlinks:
A file on disc consists of three things: a name/path, the file data, and metadata (there is metadata associated with both the name and the data). A hardlink is the connection between the filename and the file data. What causes the confusion is when there are two (or more) names that link to the same data. Since they link to the same data, they both have the same size, thus it is not wrong to report the size of the SxS folder the way Explorer does. Mr. Beck claimed that the files in the SxS folder point to files in the System32 folder, but it is just as correct to say that files in the System32 folder point to files in the SxS folder. Generally speaking it is not possible to find the other names that point to some data (unless you search through all files); all you know is that they will be in the same filesystem. This makes it very hard to calculate the actual space used, since any user can create additional hardlinks.
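A hard-link-aware size calculation therefore has to key on the underlying file identity rather than on directory entries. A sketch of the idea (Python; on Windows, CPython reports the NTFS file reference number through st_ino, so the same approach should apply there, though that is an implementation detail rather than a guarantee):

```python
import os
import tempfile

def tree_sizes(root):
    """Sum file sizes two ways: naively per directory entry (roughly
    what Explorer does) and per unique file, counting each
    (device, inode) pair only once."""
    naive, actual, seen = 0, 0, set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            st = os.stat(os.path.join(dirpath, name))
            naive += st.st_size
            if (st.st_dev, st.st_ino) not in seen:
                seen.add((st.st_dev, st.st_ino))
                actual += st.st_size
    return naive, actual

# Demo: one 1K file visible under two names (names are illustrative).
root = tempfile.mkdtemp()
a = os.path.join(root, "system32.dll")
b = os.path.join(root, "winsxs.dll")
with open(a, "wb") as f:
    f.write(b"x" * 1024)
os.link(a, b)
print(tree_sizes(root))                 # (2048, 1024)
```

Note that this only corrects the total within the tree being walked; as the comment above says, links from outside the tree can still make any single-directory number misleading.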
Very happy to know it's only ~ 400 MB and not several GBs. Still I feel MS can allow users to see the free and used space more effectively in Windows 7 (WinSxS as well as the space taken by Volume Shadow Copies). And what about the performance impact of WinSxS??? "Please wait while Windows configures updates". This is *NOT* what I want to see in Windows 7, nor did I see it in Windows XP.
Regarding hibernate, I would like to know how the Windows 9x family (98 and especially Windows Me) required only half the amount of disk space of installed RAM for hibernation. And make installing language components optional (MUI resources and IMEs); I just don't want any language other than English, for example.
As for the %Windir%\system32\Driverstore folder, like the innumerable comments here, I want MS to get it loud and clear that I would want to save that disk space, or at least I most certainly want to be able to decide as a user. Didn't XP support a large number of devices as well (more, in fact, than Vista, including very, very old ones)? Then why does Vista create a DriverStore folder? To make driver reinstallation/rollback easier? I don't want that. I welcome the Windows Update only decision in this regard. For driver installation, I *want to have* "Have Disk" functionality during the initial wizard (which Vista removed).
And MS does need to do something about the 'footprint' of Windows Installer and cache MSIs as well. I can find several MSIs of apps like QuickTime and Windows Live Messenger (even MSN Messenger) on my PC.
For patches, I would prefer to have the /nobackup switch back. Remember, you need to let the user decide. The default option can be to keep backups for those who don't know/aren't aware. During/after service pack installation, a checkbox can make more users aware instead of vsp1cln.exe.
Lastly, I know Explorer is sort of symbolic link aware starting with Vista, but it should be completely aware of all NTFS attributes (hardlinks, sparse files etc., everything) and be able to manage them (have a GUI for that).
Since one of the main reasons to reduce the footprint of Windows is SSDs, is it possible to have the OS files been configured into a more speed-critical and more-writes part, and another part that is less-used and mostly read-only?
With such a separation, it would be possible to have the more-used part on an actual SSD while the other part on a slower but cheaper flash-based drive, like the hardware in some eeePCs.
Looking at the above table it would seem that some things like the printer drivers can be placed in the slower drive.
MS, please take a good look at the post by email@example.com because it contains everything you want to know about how to manage this.
If allowed, I would specify I want some installable elements to be local and some remote. If I specify remote then I will definitely need the ability to re-install any network drivers - they are critical to get what I need. I'm never going to need Arabic fonts, get them off my system!!
The amount of space taken up by printer drivers is ridiculous. Really, how different are all those HP printer drivers? There are thousands of them, and highly duplicative I'm sure. Further though, why not expand the same idea to local devices as network ones, and get the drivers directly from the device itself? On a network, you can download the driver from the node acting as the spooler. If you incorporate a sensible driver model, you could have a miniature disk image and have the drivers needed for the device on the device itself, with a simple protocol to grab it and install it. You could use tools to flash a new driver image onto the device itself too.
Windows should be able to boot from a device with less than 4Gb RAM. This RAM/memory should be on the motherboard of the PC itself, like the BIOS is. Windows should flash its minimal image into this area. Anything which is 'optional' can be on the primary disk, but the machine need not boot from disk.
According to a post on the Core Team blog, the WinSxS directory works in the exact opposite fashion as described in this post. Instead of files being linked into WinSxS from everywhere else, they claim the actual files exist in the WinSxS and are hardlinked out to wherever they are needed.
PLEASE don't forget c:\windows\installer which is also almost 10GBs if you use SQL, VS, Office and some other stuff.
I feel that this post didn't touch on the most important aspect -- the versioning that started with Vista and happens "behind the scenes". I find it utterly impossible to use on partitions where very large media files are involved and being edited. Its creation of "shadow versions" of files causes disk space to "magically disappear".
Will this be reviewed? Why can't I control to tell it (also for PERFORMANCE benefits) not to VERSION certain files?! (Think Media, Gaming, etc!)
As far as the file system is concerned, they are the same file with two different references pointing to the same data on disk. The actual data on the disk isn't removed until all references (hard links) to it are removed.
So, in reality, neither is an "owner" of the file, so it's not really correct to say that the "actual files" exist in one location or the other. They are both just references to the same data on disk. If another hardlink is created to the same file, it's just another reference.
Both this article and the one you linked to explain the concept the way they do in an attempt to simplify it. In reality, neither is technically 100% true (deleting one of the locations but not the other will never remove the actual data), but for all intents and purposes both explanations are close enough.
Sorry for not being active, but I didn't want to repeat myself that much (not to mention those cosmetic posts in which I am not interested); however, I must repeat myself again now. Do it as modularly as possible:
First and foremost, the simplest and most obvious way to keep a small footprint is to NOT install what is not needed (a wide concept that includes everything from drivers, through "fancyware", to architectural optimizations).
Secondly, all the info one should need to uninstall anything is a plain text list of what was installed/changed (compressed, preferably, since if it is used at all it will be used once); there is no need to keep all those installer/uninstaller packages. As for upgrades, there is no need to uninstall them: if something goes wrong, since the version and the problem are known, simply fix it again (or there is that system restore thing eating space for some reason; by the way, if the user knows how to disable it then he also has maturity enough to handle some bugs). Also, a proper system for external backups, or backups in another partition where you would want to temporarily keep them (system restore data included, if you like) until they are not needed anymore (too old/deprecated) or are committed to an external medium, is not a bad idea, especially if there is a selective restore option.
Thirdly, it is to be expected that conscious users (a minority, unfortunately) are going to use several partitions or drives or both; I would even go as far as suggesting that you, by default, isolate the system folder in an independent dedicated partition. I myself have six: four on one disk for the system, the games, other programs (unable to express how much of a PAIN that default "c:\program files" is, noticeably those "common files" that some use more than the main install folder; not to mention, how is it that you change the Internet Explorer add-ons installation path again... Pandasoft ActiveScan, for instance, eats up 150MB alone), and miscellaneous. The two others, on my secondary drive, are one for my personal files and the other for swap (page file, if that's what you call it).
As for links, call it nostalgia for Linux hard/soft links, but it is a very good concept. I mean, if the administrator decides that it is a good idea to move one folder of a program or the system to another partition without breaking it, then a hard link would do the trick flawlessly (move the folder where you want and replace it with the hard link; I know there is already some functionality in this regard that can be done with dynamic disks and NTFS, but it is not good enough yet, too many prerequisites and limitations).
Lastly, and maybe a bit unrelated: do not auto-run anything automatically (I am including, though it is not necessarily the same case, those reads that decide which program to launch, like the movie player if there is video or the audio player if audio; yes, I know you can tell it to do nothing, but I am talking about the "read" itself, and the brilliancy of reading even a portable HDD for such a purpose is beyond me; I disable the service that does it myself, and the result is so much better... seriously, it should not even exist, much less by default). Removable storage (pendrives in particular) is such a pain to deal with due to that lazy autorun.inf that all sorts of viruses seem to use nowadays.
About the drivers … I’m of two minds on this. One tells me that yes, it is good to plug something in and have it Just Work. The other mind is staring at this 1GB glob of stuff, 98% of which will never be used in the lifetime of my computer, and thinking “Why keep this around all the time?” I’d be more in favor of going the Windows Update route, but that has been horribly hit or miss for me. The most egregious of misses would be when I plug in something made after Vista shipped, thus no inbuilt drivers to autoload. I say “Sure, look online for drivers! Should be newer than the ones in the box.” Only instead of finding anything, it spends a veritable age searching only to tell me it hasn’t found anything. So I back up and toss in the driver CD that came with said device and install the drivers from there. Suddenly Windows Update is yelling at me, “Hey! Hey! I have a newer version of that thing you just installed!” Why it couldn’t manage to find it the first time around I cannot say.
One thing, though, can be used to help cut down on this “wasted” space: NTFS file compression. Driver cache? Installer cache? Unused Windows components in winsxs? All of them should be compressed. While it isn’t very powerful, NTFS compression is almost free on modern CPUs. Just compressing whatever was not in use while in Safe Mode inside the winsxs folder slimmed mine from 9.6GB to 6.2GB of actually used disk space. Even greater savings could be had if unused versions were CABed or ZIPed (although that would add more steps and complications, whereas NTFS compression is totally transparent).
@caywen (and anyone else asking about why the hibernation file acts as it does)
Eriwik mentioned much of the reasoning behind why compression or only saving out what’s used isn’t done: it’s more complex, and thus has more points of failure. Another reason is the way in which NT boots up. After the BIOS (or EFI) hands over boot control to NT, the bootloader fires up and loads basic hardware support. It then reads the boot information to know where the OS is. It’s at this point that, if a hibernation file is present and “turned active,” the loader reads it into memory by simply copying it byte for byte. This is all before the kernel is loaded, before any complex file system drivers are present. Thus to support compression or partial sizes or anything else fancy, more logic would have to be pushed down to that layer. Or, alternatively, load the hibernation data after loading the kernel and all the associated drivers and memory management. The first option would make the system more complex (and thus more prone to failures) while the second option would make boot times even longer.
Yes, you are “doing something wrong.” Or, more correctly, you’re overlooking something. Just click the far left arrow on the breadcrumb bar. You can now breadcrumb all the way back to the Desktop, as well as hit anything FROM the Desktop in one go.
@Laith, Leeoniya, and anyone else asking about Explorer’s behavior in regard to hardlinks
The reason that Explorer can’t tell you the correct amount of actual disk space taken up by files is because of the way it calculates the information. The code is most likely left over from the ancient days when FAT32 ruled the desktop. On a FAT32 partition, you’d never find yourself in a situation where what the directory structure presented was not what was actually on the disk [in situations when the disk is healthy]. Thus it could afford to be stupid and just keep adding file sizes to the total as it recursively went down the chain. And, like a lot of code from this age, it has probably not changed much; it isn’t causing system crashes and reworking it would take time away from doing new things.
Now, even given that, I do believe this is erroneous behavior. Much like the path length limitations Explorer imposes that NTFS doesn’t have, it is something that needs to be fixed. The Size on Disk part SHOULD report the actual size on disk, taking into account hardlinks. (The Size column can probably report the doubles as counting as two.) Since there are already APIs in place to check reference counts (and under Vista, even APIs to enumerate where all the hardlinks actually are), Explorer could actually be modified to take them into account. One way is to store the unique file ID along with the reference count, and then decrement one from the count when it adds the file size to the running total. If the count goes to zero, remove it from the list. Then compare all new file IDs to what is in the list. If an ID is new, repeat the same process. If it matches one in the list, decrement the counter and do not add to the total. This will be a bit slower, but accumulating file size info is already a slow process.
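The bookkeeping described above can be sketched even more simply: rather than decrementing link counts, the hedged Python below just remembers each unique file ID it has already counted. This is illustrative code, not Explorer's actual implementation; it assumes the (device, inode) pair from os.stat serves as the unique file ID, which on NTFS corresponds to the volume serial number and file index.

```python
import os

def size_on_disk(root):
    """Total up file sizes under root, counting each hardlinked
    file's data only once by remembering its (device, inode) pair."""
    seen = set()
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            key = (st.st_dev, st.st_ino)  # unique file ID
            if key in seen:
                continue  # another hardlink to data already counted
            seen.add(key)
            total += st.st_size
    return total
```

The tradeoff versus the decrement scheme: a plain seen-set never shrinks during the walk, so it uses a little more memory, but it needs no reference-count queries at all.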
I wholeheartedly agree with Jalf's comments - the key thing required 'in the box' is working ethernet/mouse/touchpad/keyboard/basic video drivers/USB mass storage. Users can then pick whether further updates should come from Windows Update (hopefully with some slightly more updated drivers) or from the installation DVD. 800MB of printer drivers really seems excessive, especially as all new printers come with an installation CD anyway, or you can download the drivers from the manufacturer's website.
Let's get the basic installation footprint down :-)
PS - loving these blogs, roll on public beta of Windows 7!
We need fewer backup files!
We need different editions for SSD and normal HDD!
We need an expert version so we can choose what we want to install!
We want different editions and more install choices for the beta; not everyone needs accessibility or Tablet PC or Media Center!!! <- these take 1.5GB and slow down installation!
XP was like 600-800MB; with more choices we could also have a 900MB or 800MB installation that fits on a CD!!!
We want programs that won't create junk files and leftover files! <- that's why I use portable program versions: they don't touch my computer's space so much, badly developed ones can simply be deleted rather than uninstalled, and there's no need to worry about useless startup entries. When we already have Prefetch enabled, why do programs still create their own startup prefetchers???
Why is the codec pack taking 60MB and WMV in Win7 also around 60MB? <- can't it be improved? And why can't it prefetch videos and pictures so previews don't load so slowly?
Test on older systems and compare with XP, not Vista. We know XP is faster than Win7, but we want to know how development is going compared to old XP. Set more startup drivers and services to manual so the OS can start them when programs need them! <- doesn't seem hard to fix! I had 15 services running on XP; at least make it 16-26 services during startup on Win7, not 40!!!!
I really like the fact that I can plug a device into my pc and Vista automatically finds the driver and installs it from the local driver collection. To decrease disk footprint, it certainly is an option to remove the drivers. But not everyone has a fast internet connection with a large cap. (over here you're lucky if you have anything more than dialup) I'd actually like a larger collection of drivers, so more devices install instantly. To decrease footprint - have an option at installation to copy the drivers or not.
Speaking of SSDs, why not have an option to configure the Windows OS to accommodate running on a smaller-capacity SSD? Simply select whether the OS must run SSD-style or HDD-style, with the latter going wild with restore checkpoints and the like while the former keeps them to a minimum. That way you can make full use of a 1TB hard drive or an 80GB SSD, whichever your computer boots from. Come to think of it, someone who can afford a high-performance SSD will most certainly have a fast internet connection - and consequently won't need the driver collection.
DrizztVD, the internet connection requirement is going to trip a lot of people up..
I actually want to pick on one thing Michael mentioned in this post and that is the system protection/restore/copy-on-write incremental backups feature that is included with Windows. The Volume Shadow Copy Service, VSS, was presumably written by Microsoft. So was the NTFS file system. Why, then, is VSS unable to determine if data is moved via defragmentation as opposed to an actual change in the file on an NTFS drive formatted with <16KB clusters? See KB312067, yes it says it's for Server 2003, but "previous versions" are shadow copies and System Restore uses the same, as far as I know. Since most defragmentation programs these days--Windows's own defragger, PerfectDisk, Diskeeper, O&O Defrag, etc.--use the system's own defrag API, surely VSS should be able to recognize these calls and largely ignore them.
I know this is a somewhat petty complaint, but when a system tool, Windows's defrag, is automatically set to run by default and that tool invalidates Vista's vaunted recovery ability there is something wrong. Is there any chance this limitation will be addressed some day?
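To make the suggestion concrete, here is a toy Python model (hypothetical classes, not the actual VSS implementation) of a copy-on-write snapshot that distinguishes a defrag move from a genuine content change. Only the latter forces old data into the snapshot's diff area, which is the behavior the comment argues VSS should have on small-cluster volumes.

```python
class Volume:
    """Toy block device: a dict mapping cluster number -> bytes."""
    def __init__(self):
        self.clusters = {}

class Snapshot:
    """Minimal copy-on-write snapshot. Before a cluster's *content*
    changes, the old bytes are saved. A defragmenter relocating the
    same bytes to a new cluster is recorded as a move, not a change,
    so nothing is copied into the snapshot's diff area."""
    def __init__(self, volume):
        self.volume = volume
        self.saved = {}  # cluster -> original bytes (the diff area)

    def write(self, cluster, data):
        # Real data change: preserve the old content first (COW).
        if cluster in self.volume.clusters and cluster not in self.saved:
            self.saved[cluster] = self.volume.clusters[cluster]
        self.volume.clusters[cluster] = data

    def defrag_move(self, src, dst):
        # Content is unchanged, only its location moves; a move-aware
        # snapshot has nothing to preserve here.
        self.volume.clusters[dst] = self.volume.clusters.pop(src)
```

In this model, a full-disk defrag leaves the diff area empty, whereas a snapshot that treated every relocated cluster as a write would balloon its diff area and eventually evict older restore points, which is precisely the KB312067 complaint.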