“Out Of Memory” Does Not Refer to Physical Memory

I started programming on x86 machines during a period of large and rapid change in the memory management strategies enabled by Intel processors. The pain of having to know the difference between “extended memory” and “expanded memory” has, fortunately, faded with time, along with my memory of the exact difference.

As a result of that early experience, I am occasionally surprised by the fact that many professional programmers seem to have ideas about memory management that haven’t been true since before the “80286 protected mode” days.

For example, I occasionally get the question “I got an ‘out of memory’ error but I checked and the machine has plenty of RAM, what’s up with that?”

Imagine, thinking that the amount of memory you have in your machine is relevant when you run out of it! How charming! :-)

The problem, I think, with most approaches to describing modern virtual memory management is that they start from the DOS-era assumption that “memory” equals RAM, a.k.a. “physical memory”, and that “virtual memory” is just a clever trick to make the physical memory seem bigger. That is historically how virtual memory evolved on Windows, and it is a reasonable way to think about it, but it is not how I personally conceptualize virtual memory management.

So, here is a quick sketch of my somewhat backwards conceptualization of virtual memory. But first, a caveat. The modern Windows memory management system is far more complex and interesting than this brief sketch, which is intended only to give the flavour of virtual memory management systems in general and to provide some mental tools for thinking clearly about the relationship between storage and addressing. It is not by any means a tutorial on the real memory manager. (For more details on how it actually works, try this MSDN article.)

I’m going to start by assuming that you understand two concepts that need no additional explanation: the operating system manages processes, and the operating system manages files on disk.

Each process can have as much data storage as it wants. It asks the operating system to create for it a certain amount of data storage, and the operating system does so.

Now, already I am sure that myths and preconceptions are starting to crowd in. Surely the process cannot ask for “as much as it wants”. Surely a 32 bit process can only ask for 2 GB, tops. Or surely a 32 bit process can only ask for as much data storage as there is RAM. Neither of those assumptions is true. The amount of data storage reserved for a process is limited only by the amount of space that the operating system can get on the disk. (*)

This is the key point: the data storage that we call “process memory” is in my opinion best visualized as a massive file on disk.

So, suppose the 32 bit process requires huge amounts of storage, and it asks for storage many times. Perhaps it requires a total of 5 GB of storage. The operating system finds enough disk space for 5GB in files and tells the process that sure, the storage is available. How does the process then write to that storage? The process only has 32 bit pointers, but uniquely identifying every byte in 5GB worth of storage would require at least 33 bits.

Solving that problem is where things start to get a bit tricky.

The 5GB of storage is split up into chunks, typically 4KB each, called “pages”. The operating system gives the process a 4GB “virtual address space” – over a million pages - which can be addressed by a 32 bit pointer. The process then tells the operating system which pages from the 5GB of on-disk storage should be “mapped” into the 32 bit address space. (How? Here’s a page where Raymond Chen gives an example of how to allocate a 4GB chunk and map a portion of it.)
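
To make that concrete, here is my own rough illustrative sketch in C of that kind of thing (it is not Raymond’s code, and error handling is reduced to bare checks): it asks the operating system for 5 GB of storage backed by the paging file, and then maps a 64 MB window of that storage into the 32 bit address space.

    /* Minimal sketch: ask for 5 GB of storage backed by the paging file,
       then map a 64 MB window of it into the 32 bit address space. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long long size = 5ULL * 1024 * 1024 * 1024;  /* 5 GB of storage */
        HANDLE section = CreateFileMapping(
            INVALID_HANDLE_VALUE,            /* back it with the paging file */
            NULL, PAGE_READWRITE,
            (DWORD)(size >> 32),             /* high 32 bits of the size     */
            (DWORD)(size & 0xFFFFFFFF),      /* low 32 bits of the size      */
            NULL);
        if (section == NULL) { printf("no storage available\n"); return 1; }

        /* Map only a 64 MB window of that storage, starting 4 GB in. */
        unsigned long long offset = 4ULL * 1024 * 1024 * 1024;
        char *view = (char *)MapViewOfFile(
            section, FILE_MAP_ALL_ACCESS,
            (DWORD)(offset >> 32), (DWORD)(offset & 0xFFFFFFFF),
            64 * 1024 * 1024);
        if (view == NULL) { printf("no contiguous address space\n"); return 1; }

        view[0] = 42;              /* touches a byte 4 GB into the storage */
        UnmapViewOfFile(view);     /* gives the address range back         */
        CloseHandle(section);
        return 0;
    }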

Once the mapping is done, the operating system knows that when process #98 attempts to use pointer 0x12340000 in its address space, that pointer corresponds to, say, the byte at the beginning of page #2477, and the operating system knows where that page is stored on disk. When that pointer is read from or written to, the operating system can figure out what byte of the disk storage is referred to, and do the appropriate read or write operation.
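
As a rough illustration of the arithmetic (the real translation is of course done by the memory manager and the processor, not by user code): with 4 KB pages, the top 20 bits of a 32 bit pointer select a page of the address space, and the bottom 12 bits select a byte within that page. The mapping then tells the operating system which storage page (page #2477 in the example) that address-space page corresponds to.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t pointer = 0x12340000u;
        uint32_t page    = pointer >> 12;     /* which 4 KB page of the address space */
        uint32_t offset  = pointer & 0xFFFu;  /* which byte within that page          */
        printf("address-space page 0x%05X, offset %u\n",
               (unsigned)page, (unsigned)offset);
        return 0;
    }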

An “out of memory” error almost never happens because there’s not enough storage available; as we’ve seen, storage is disk space, and disks are huge these days. Rather, an “out of memory” error happens because the process is unable to find a large enough section of contiguous unused pages in its virtual address space to do the requested mapping.

Half (or, in some cases, a quarter) of the 4GB address space is reserved for the operating system to store its process-specific data. Of the remaining “user” half of the address space, significant amounts are taken up by the EXE and DLL files that make up the application’s code. Even if there is enough space in total, there might not be an unmapped “hole” in the address space large enough to meet the process’s needs.
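
Here is a hedged sketch of that situation, assuming it is compiled as a 32 bit process; the addresses and sizes are illustrative, not anything the real loader does. It deliberately fragments the user address space with tiny reservations, and then a large allocation fails even though almost all of the address space (and plenty of disk storage) is still free in total.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Reserve a tiny 64 KB region at every 64 MB boundary, so that no
           free hole can be much larger than about 64 MB.  Only about 2 MB
           is actually reserved in total. */
        for (ULONG_PTR addr = 0x04000000; addr < 0x7FFE0000; addr += 0x04000000) {
            VirtualAlloc((void *)addr, 0x10000, MEM_RESERVE, PAGE_NOACCESS);
            /* Some of these fail where the address is already in use by an
               EXE, DLL, heap or stack; that just adds to the fragmentation. */
        }

        /* Most of the address space is still free in total, but there is
           no single contiguous 256 MB hole left. */
        void *big = VirtualAlloc(NULL, 256 * 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS);
        printf(big ? "got a contiguous 256 MB region\n"
                   : "\"out of memory\": no contiguous 256 MB hole\n");
        return 0;
    }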

The process can deal with this situation by identifying portions of the virtual address space that no longer need to be mapped, “unmapping” them, and then mapping them to some other pages in the storage file. If the 32 bit process is designed to handle massive multi-GB data storage, obviously that’s what it’s got to do. Typically such programs are doing video processing or some such thing, and they can safely and easily re-map big chunks of the address space to some other part of the “memory file”.
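
A sketch of that “sliding window” pattern, assuming a storage section like the one created in the earlier sketch; the names here (g_section, remap_window) are hypothetical helpers of my own, not a real API beyond MapViewOfFile and UnmapViewOfFile.

    #include <windows.h>

    #define WINDOW_BYTES (64 * 1024 * 1024)   /* map 64 MB of the storage at a time */

    static HANDLE g_section;   /* assumed: the big storage section created earlier */
    static char  *g_view;      /* the window currently mapped into address space   */

    /* Move the mapped window so that it begins 'offset' bytes into the storage.
       The offset must be a multiple of the allocation granularity (64 KB).
       Returns NULL if no suitable hole in the address space could be found. */
    char *remap_window(unsigned long long offset)
    {
        if (g_view != NULL)
            UnmapViewOfFile(g_view);          /* give the old address range back */

        g_view = (char *)MapViewOfFile(
            g_section, FILE_MAP_ALL_ACCESS,
            (DWORD)(offset >> 32), (DWORD)(offset & 0xFFFFFFFF),
            WINDOW_BYTES);
        return g_view;
    }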

But what if it isn’t? What if the process is a much more normal, well-behaved process that just wants a few hundred million bytes of storage? If such a process is just ticking along normally, and it then tries to allocate some massive string, the operating system will almost certainly be able to provide the disk space. But how will the process map the massive string’s pages into address space?

If by chance there isn’t enough contiguous address space, then the process will be unable to obtain a pointer to that data, and the data is effectively useless. In that case the process issues an “out of memory” error, which is a misnomer these days. It really should be an “unable to find enough contiguous address space” error; there’s plenty of memory, because memory equals disk space.

I haven’t yet mentioned RAM. RAM can be seen as merely a performance optimization. Accessing data in RAM, where the information is stored in electric fields that propagate at close to the speed of light is much faster than accessing data on disk, where information is stored in enormous, heavy ferrous metal molecules that move at close to the speed of my Miata. (**)

The operating system keeps track of what pages of storage from which processes are being accessed most frequently, and makes a copy of them in RAM, to get the speed increase. When a process accesses a pointer corresponding to a page that is not currently cached in RAM, the operating system does a “page fault”, goes out to the disk, and makes a copy of the page from disk to RAM, making the reasonable assumption that it’s about to be accessed again some time soon.

The operating system is also very smart about sharing read-only resources. If two processes both load the same page of code from the same DLL, then the operating system can share the RAM cache between the two processes. Since the code is presumably not going to be changed by either process, it's perfectly sensible to save the duplicate page of RAM by sharing it.

But even with clever sharing, eventually this caching system is going to run out of RAM. When that happens, the operating system makes a guess about which pages are least likely to be accessed again soon, writes them out to disk if they’ve changed, and frees up that RAM to read in something that is more likely to be accessed again soon.

When the operating system guesses incorrectly, or, more likely, when there simply is not enough RAM to store all the frequently-accessed pages in all the running processes, then the machine starts “thrashing”. The operating system spends all of its time writing and reading the expensive disk storage, the disk runs constantly, and you don’t get any work done.

This also means that "running out of RAM" seldom(***) results in an “out of memory” error. Instead of an error, it results in bad performance because the full cost of the fact that storage is actually on disk suddenly becomes relevant.

Another way of looking at this is that the total amount of virtual memory your program consumes is really not hugely relevant to its performance. What is relevant is not the total amount of virtual memory consumed, but rather, (1) how much of that memory is not shared with other processes, (2) how big the "working set" of commonly-used pages is, and (3) whether the working sets of all active processes are larger than available RAM.
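
If you want to look at those numbers for your own process, here is a small sketch using GetProcessMemoryInfo from psapi; it reports the working set (the pages of this process currently cached in RAM), the pagefile-backed usage, and the running count of page faults.

    #include <windows.h>
    #include <psapi.h>      /* link with psapi.lib */
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        {
            printf("working set:    %Iu KB\n", pmc.WorkingSetSize / 1024);
            printf("pagefile usage: %Iu KB\n", pmc.PagefileUsage / 1024);
            printf("page faults:    %lu\n",    pmc.PageFaultCount);
        }
        return 0;
    }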

By now it should be clear why “out of memory” errors usually have nothing to do with how much physical memory you have, or even how much storage is available. It’s almost always about the address space, which on 32 bit Windows is relatively small and easily fragmented.

And of course, many of these problems effectively go away on 64 bit Windows, where the address space is billions of times larger and therefore much harder to fragment. (The problem of thrashing of course still occurs if physical memory is smaller than total working set, no matter how big the address space gets.)

This way of conceptualizing virtual memory is completely backwards from how it is usually conceived. Usually it’s conceived as storage being a chunk of physical memory, and that the contents of physical memory are swapped out to disk when physical memory gets too full. But I much prefer to think of storage as being a chunk of disk storage, and physical memory being a smart caching mechanism that makes the disk look faster. Maybe I’m crazy, but that helps me understand it better.

*************

(*) OK, I lied. 32 bit Windows limits the total amount of process storage on disk to 16 TB, and 64 bit Windows limits it to 256 TB. But there is no reason why a single process could not allocate multiple GB of that if there’s enough disk space.

(**) Numerous electrical engineers pointed out to me that of course the individual electrons do not move fast at all; it's the field that moves so fast. I've updated the text; I hope you're all happy with it now.

(***) It is possible in some virtual memory systems to mark a page as “the performance of this page is so crucial that it must always remain in RAM”. If there are more such pages than there are pages of RAM available, then you could get an “out of memory” error from not having enough RAM. But this is a much more rare occurrence than running out of address space.

  • >>For example, when many applications I use attempt to cross the ~2GB limit, they error or crash.

    >If your app author is unwilling to take on the pain of writing its own memory mapper/unmapper, then clearly it is stuck with not allocating more memory than it can map. -- Eric

    Which is it? Your main article appeared to say 'It Just Works' but this response seems to contradict that earlier statement.

    It's always been my experience that if you're going to work with very, very large amounts of data, you have to be able to process it without needing it all in memory at the same time. When you hit that stage, you're working with files on disk from the start.

    It's the same idea as iterating over an enumerator. You don't always want to assume you can Count() or ToList() it, because what happens if it's streaming data?

  • "Accessing data in RAM, where the information is stored in tiny, lightweight electrons that move at close to the speed of light is much faster than accessing data on disk, where information is stored in enormous, heavy iron oxide molecules that move at close to the speed of my Miata."

    Ha, this was great. But speaking as an old hard-disk guy, iron oxide is almost as outdated as expanded/extended memory. Your new shiny computer probably uses cobalt thin film media rather than rust on its disk platters. So maybe more like your Ferrari (come on, we know you have one)?

  • Well, in my case, I sometimes prefer to disable the paging file, thus removing

    1. the cost of unnecessary disk I/O

    2. the chance of allocating more memory than I have as RAM.

    All in all, if I don't need too much memory on my laptop (I have 4GB of RAM and it's almost always enough), this method reduces the I/O and also helps keep my always-moving laptop's hard drive physically safer (less I/O, less chance of bad sectors).

    In this case, there is no use for page faults, and it's a happy system as long as you don't need big chunks of memory. And trust me, it boosts the performance, since Windows DOES do paging even if everything fits in RAM, but is unable to do so with this method.

  • "The amount of data storage reserved for a process is only limited by the amount of space that the operating system can get on the disk."

    "RAM can be seen as merely a performance optimization."

    This is not always correct because Windows can operate without a paging file. And if you have a lot of RAM (which btw is very cheap these days) you should kill the paging file.

    Indeed. Re-read the sixth paragraph. -- Eric

    I have 3 computers at home (3GB, 3GB and 8GB) and 2 at work (4GB and 16GB) and none of them uses a paging file. Having a paging file makes sense if you don't have a lot of memory, but nowadays it is less and less useful. 2GB of DDR2 can be bought for $30 or less. Buy RAM, kill the paging file, and do not think of RAM as data in a file on disk.

  • Cool. Reminded me of my operating systems classes in college. A brief explanation of the dining philosophers problem would have been excellent.

  • I started programming on 32-bit systems (x86). I think this makes a lot of sense, although I work mostly with managed code. It explains why an app running on a system with almost 4GB of RAM and 20GB of free disk space throws an out of memory exception when its RAM usage is barely 1.5GB (after running for long hours).

    And it is the only major app running on the system.

    What do you suggest will solve the problem? (Leaving mapping/unmapping memory aside, as it is managed code).

    My suggestion: use the CLR memory profiler to figure out why you are using so much memory. If you have a memory leak, fix it. If you don't, figure out what the largest consumer of memory is, and optimize it so that it doesn't consume so much memory. -- Eric

  • In all of this, I'm curious: what does this set of memory gurus recommend as a swap file setting? Is it Swap File = RAM, or Swap File = RAM * 1.5 (or some other setting)? Or do you really recommend letting Windows manage the swap file size? I'd heard (long ago) that letting Windows manage it just eats up system resources, so I've always opted to set it to 1.5 times the RAM.

    Thanks!

  • To dive deeper into this article I need to know if you race your Miata? If yes, autocross or road race?

    Seriously, thank you for the article!

    Bob

  • Excellent article. One question. I have a website that relies heavily on caching to reduce database access. About once a day I start to receive OutOfMemory exceptions and the worker process recycles itself. Would you recommend upgrading the RAM (currently 4GB) or the OS to 64 bit?

    Better than either of those strategies would be to fix the actual bug, which is that the caching logic leaks memory. If the cache is growing without bound and eventually filling up all of memory, giving it more room to grow is just making the problem worse, not better. -- Eric

  • I'm sure this won't be a popular question: I have a number of boxes running XP, and have had for years. I used to be able to run dozens of applications at the same time. Now, after the past few service packs, I've noticed that GIVEN THE SAME INSTALLATIONS AND THE SAME HARDWARE I cannot run nearly as many applications at the same time. A number of colleagues have noticed similar issues. It's almost as though something has been pushed out in a service pack to cap the number of concurrent apps, or limit memory in some way. Of course this might encourage some to "upgrade" to x64, but given the number of everyday apps that don't install or run on x64, that is not an option. Just to reiterate: this is not the gradual slowing down of performance we have all come to expect; this has been a distinct change over the past few packs, on several machines. Any thoughts?

  • Reminded me of my operating systems classes in Uni

  • I do like this article quite a bit, and the concept of treating RAM as a disk cache is refreshing. Of course, you did neglect to acknowledge memory allocations that neither (a) mirror data on disk nor (b) are ever intended to reach the disk. I'm thinking mainly of the application stack, but also of temporary allocations used for decompressing data, and other such transient operations. Perhaps it's best to ignore these exceptions for sanity's sake!

  • Loddie, the OS already uses the 1.5 x RAM rule for the minimum when there is less than 2GB fitted or 1 x RAM for more than 2GB. The maximum is 3 x RAM in both cases. At least, that's my recollection, I don't have my copy of Windows Internals 5th Edition to hand right now.

    There really is no right setting for the swap file. In Task Manager, before Windows Vista, Commit Charge Peak shows the maximum amount of the swap file that has been used (actually physical + swap file) since the system was booted. Limit is the current maximum possible commit. ('Commit Charge' - you 'commit' virtual memory to make it usable, 'reserve' just stops anything else from using that address range, but you get access violation exceptions if you try to reference reserved, but not committed, memory. The process is 'charged' for memory it commits, so 'commit charge' is the total memory committed by all processes.)

    Windows Vista rather more sensibly shows the actual page file usage. Commit Charge Peak isn't shown.

  • While horribly late to the party (way beyond fashionable), I'd still like to ask a question.

    What do I do if this "out of memory" error happens in a managed app running under the CLR? I don't get direct access to any memory/pages. So while the problem is clear to me after reading this article, I am in complete darkness as to what I should do about it.

    Rewrite the program to not use so much memory per process. Or, less good, tell the user to stop throwing problems at the program that require so much memory. -- Eric
