Larry Osterman's WebLog

Confessions of an Old Fogey

So why does NT require such a wonking great big paging file on my machine?


UPDATE 5/5/2004: I posted a correction to this post here.  Sorry if there was any inconvenience.

Raymond Chen posted a fascinating comment the other day on the dangers that paging poses for server applications.

One of the people commenting asked one of the most common questions about NT:

Why does the OS keep on asking for a wonking great big paging file?

I'm not on the Memory Management team, but I've heard the answer enough times that I think I can answer it :)

Basically, the reason is that part of the contract NT maintains with applications guarantees that every page of virtual memory can be flushed from physical memory if necessary.

If the page is backed by an executable (in other words it's code or static data), then the page will be reloaded from the executable file on disk. If the page isn't backed by an executable, then NT needs to have a place to put it.

And that place is the paging file.
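
To make the distinction concrete, here's a minimal sketch of the two kinds of pages, using standard Win32 calls (the file name example.dat is invented for illustration, and error checking is omitted):

    #include <windows.h>

    int main(void)
    {
        /* Pages backed by a file on disk: if the memory manager needs the
           physical pages back, it can write any dirty ones to example.dat
           and reload them later.  No paging file space is required. */
        HANDLE file = CreateFileA("example.dat", GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                            0, 64 * 1024, NULL);
        char *fileBacked = (char *)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);

        /* Private pages: there's no file behind them, so if they're ever
           evicted from RAM, the paging file is the only place they can go. */
        char *privatePages = (char *)VirtualAlloc(NULL, 64 * 1024,
                                                  MEM_COMMIT | MEM_RESERVE,
                                                  PAGE_READWRITE);

        fileBacked[0] = 'a';   /* dirty page -> written back to example.dat */
        privatePages[0] = 'b'; /* dirty page -> can only go to the paging file */

        UnmapViewOfFile(fileBacked);
        CloseHandle(mapping);
        CloseHandle(file);
        VirtualFree(privatePages, 0, MEM_RELEASE);
        return 0;
    }

The first allocation can always be rebuilt from (or flushed to) example.dat; the second has nowhere to go but the paging file.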

You see, even if your machine has 16G of physical RAM, NT can't guarantee that you won't suddenly decide to open up 15 copies of Adobe Photoshop and start editing your digital photo album. Or that you won't all of a sudden decide to work on editing the movie collection you shot in ultra high-def. And when you do that, all of a sudden all that old data that Eudora had in RAM needs to be discarded to make room for the new applications that want to use it.

The operating system has two choices in this case:

1. Prevent the new application from accessing the memory it wants. And this means that an old application that you haven’t looked at for weeks is stopping you from doing the work you want to do.

2. Page out the memory being used by the old application, and give the memory to the new application.

Clearly #2's the way to go. But once again, there's a problem (why are there ALWAYS problems?)

The first problem is that you need a place to put the old memory. Obviously it goes into the paging file, but what if there's no room in the paging file? Then you need to extend it. But what if there's not enough room on disk for the extension? That's "bad" - you have to fail Photoshop’s allocation.

But again, there's a solution to the problem - what if you reserve the space in the paging file for Eudora's memory when Eudora allocates it? In other words, if the system guarantees that there's a place for the memory allocation in the paging file when memory's allocated, then you can always get rid of the memory. Ok, problem solved.

So, to guarantee that memory can always be paged out when it needs to be - and that future allocations therefore have a better chance of succeeding - NT pre-allocates space in the paging file for all non-code pages at the time the memory is allocated. Cool.
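
You can actually watch that charge happen at commit time rather than at first touch. Here's a minimal sketch - ullAvailPageFile is only an approximation of the remaining commit charge, so treat the numbers as illustrative:

    #include <windows.h>
    #include <stdio.h>

    /* Print roughly how much commit (RAM + paging file) is still available. */
    static void PrintAvailableCommit(const char *label)
    {
        MEMORYSTATUSEX status = { sizeof(status) };
        GlobalMemoryStatusEx(&status);
        printf("%s: ~%llu MB of commit available\n",
               label, status.ullAvailPageFile / (1024 * 1024));
    }

    int main(void)
    {
        PrintAvailableCommit("before");

        /* MEM_RESERVE only carves out address space - no pages and no
           paging file charge, so the available commit barely moves. */
        void *reserved = VirtualAlloc(NULL, 256 * 1024 * 1024,
                                      MEM_RESERVE, PAGE_READWRITE);
        PrintAvailableCommit("after reserve");

        /* MEM_COMMIT is the promise described above: the full 256MB is
           charged against RAM + paging file right here, and the call
           fails now - not later on some random page fault - if the
           system can't keep that promise. */
        void *committed = VirtualAlloc(reserved, 256 * 1024 * 1024,
                                       MEM_COMMIT, PAGE_READWRITE);
        PrintAvailableCommit("after commit");

        if (committed == NULL)
            printf("commit failed - the system couldn't back the allocation\n");

        VirtualFree(reserved, 0, MEM_RELEASE);
        return 0;
    }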

But that doesn't explain why NT wants such a great big file at startup. I mean, why on earth would I want NT to use 4G of my hard disk just for a paging file if I never actually allocate a gigabyte of virtual memory?

Well, again, there are two reasons. One has to do with the way that paging I/O works on NT; the second has to do with what happens when the system blue screens.

Paging I/O in NT is special. All of the code AND data associated with the paging file must be in non-pageable memory (it's very, very bad if you page out the pager). This includes all the metadata that's used to describe the paging file. And if your paging file is highly fragmented, then this metadata gets really big. One of the easiest ways of guaranteeing that the paging file isn't fragmented is to allocate it all in one great big honking chunk at system startup time. Which is what NT tries to do - it tries to allocate as much space as possible when the paging file is created, to help keep the file contiguous. It doesn't always work, but...
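
Incidentally, the initial size that gets carved out at boot is configurable per volume. Here's a sketch of reading the setting - the key and value name (PagingFiles under the Memory Management key) are an assumption from memory, not something this post documents:

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        HKEY key;
        char data[1024];
        DWORD size = sizeof(data);

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
                0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        /* PagingFiles is (assumed to be) a REG_MULTI_SZ: one
           "path initial-MB maximum-MB" string per paging file.  The
           initial size is what the system tries to allocate (hopefully
           contiguously) at boot. */
        if (RegQueryValueExA(key, "PagingFiles", NULL, NULL,
                             (LPBYTE)data, &size) == ERROR_SUCCESS)
        {
            for (const char *p = data; *p != '\0'; p += strlen(p) + 1)
                printf("paging file: %s\n", p);
        }

        RegCloseKey(key);
        return 0;
    }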

The other reason has to do with blue screens. When the system crashes, there's a bit of code that runs that tries to write out the state of RAM to a dump file that Microsoft can use to help diagnose the cause of the failure. But once again, it needs a place to hold the data. Well, if the paging file's as large as physical RAM, then it becomes a convenient place to write the data - the writes to that file aren't going to fail because the disk is full, after all.

Nowadays, NT doesn't always write the entire contents of memory out - it's controlled by a setting in the Startup and Recovery settings dialog on the Advanced tab of the System control panel applet. There are four choices: none, a small "minidump", a kernel memory dump, and a full memory dump. Only the full memory dump writes all of RAM; the others limit the amount of memory that's written out. But whichever you pick, it still goes to the paging file.
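
If you'd rather check that setting programmatically than click through the dialog, something like this sketch should do it - the CrashControl key, the CrashDumpEnabled value, and the meaning of each number are my recollection, so treat them as assumptions:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD dumpType = 0;
        DWORD size = sizeof(dumpType);

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                          "SYSTEM\\CurrentControlSet\\Control\\CrashControl",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;

        if (RegQueryValueExA(key, "CrashDumpEnabled", NULL, NULL,
                             (LPBYTE)&dumpType, &size) == ERROR_SUCCESS)
        {
            /* Assumed to mirror the four choices in the Startup and
               Recovery dialog: 0 = none, 1 = full memory dump,
               2 = kernel memory dump, 3 = small "minidump". */
            printf("CrashDumpEnabled = %lu\n", dumpType);
        }

        RegCloseKey(key);
        return 0;
    }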

 

  • Thanks! That's answered my question from Raymond's blog... (And now I've added yet another thing to read to my list; at this rate I'll never get any work done!)
  • Forgot the trackback: http://yesihaveabeard.blogspot.com/2004_03_01_yesihaveabeard_archive.html#107963321132301660
  • Larry, I think there's a generally wrong set of assumptions about paging file size. There are some "magic formulas" for the right size of the paging file, but most are based on the hardware (and memory) we had in the 90s. Now it is common to have a PC with 512MB of RAM; I work all day with a 1.5GB notebook and a 2GB desktop. In these cases it may not be so smart to have a paging file of 2 or 3GB (like some of these "magic formulas" suggest). Sometimes it's better to avoid having no paging file at all (an option you have starting with Windows XP), but a reasonable 256/512MB is the right size even if you have gigabytes of RAM.
  • Marco, remember that the paging file has to be big enough to hold a snapshot of all of physical RAM (for the memory dump system recovery option). That's the minimum paging file size the system sets.

    It may be incorrect, but that's where the numbers come from.
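
    For illustration, here's a back-of-the-envelope sketch of that minimum. ullTotalPageFile is the commit limit (roughly RAM plus paging file), so the paging file size below is only an approximation:

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            MEMORYSTATUSEX status = { sizeof(status) };
            GlobalMemoryStatusEx(&status);

            ULONGLONG ramMB = status.ullTotalPhys / (1024 * 1024);
            /* The commit limit minus physical RAM approximates the
               current paging file size. */
            ULONGLONG pagingMB =
                (status.ullTotalPageFile - status.ullTotalPhys) / (1024 * 1024);

            printf("RAM: %llu MB, paging file: ~%llu MB "
                   "(a full memory dump needs >= %llu MB)\n",
                   ramMB, pagingMB, ramMB);
            return 0;
        }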
  • It's not 100% on topic, and this is just IMHO and AFAIK (I am not an NT guy). I am not trying to flame anyone.
    NT's paging algorithms are somewhat suboptimal. NT was designed from scratch, and its authors put lots of untested conceptual ideas into it. Some are very nice and help memory management big time (some depending on hardware). For instance, page coloring was a fairly modern feature at the time, and it does help to reduce cache conflicts. There are innovations like that outside of MM too; the best known one would be the registry. But this is totally OT - I wanted to talk about the MM.
    NT decides which pages to clean (sync with the backing store) based on the working set model. Most other common OSes use some kind of LRU approximation. Some systems have WS as a second, advisory algorithm, but LRU normally takes priority.
    The problem with WS is that it has a hard time adapting to changing load conditions. Hence the horrible performance under NT when someone decides to run a large query: the computer almost goes offline, all the foreground memory gets pushed to the backing store, and once the query ends, pretty much the same data gets reloaded back into the same frames.
    Anyone care to comment?

    PS. Personally, I don't understand why MS didn't hire 3/4 MM programmers/consultants and fix this. It should probably take 3 months to get alpha-quality code.
  • IS, I deleted the first of your dup'ed comments; I'm assuming your browser timed out, since there didn't appear to be any difference between the two.
  • Yes, thank you. I am not sure what happened, but it was not my intention to post the same thing twice.
    Hope everyone understands - I am not trying to flame/blame. I think I even understand why paging was written the way it was. I was simply curious whether anyone cares to comment on my observations and/or why this has not been addressed [yet].
  • Unfortunately I can't answer your comment :( As I said above, I'm not an MM guy. I'm a networking/security/email/lotsaotherstuff guy, but not MM :(
  • Is there any reason to set the paging file greater than 4095MB (i.e. the maximum addressable memory space)? If so, why?
  • "Is there any reason to set the paging file greater than 4095" -- because EACH APP gets the same virtual memory model.