When v1.0 was released, the only OS that ASP.NET supported was Win2k, the only process model was aspnet_wp, and the only architecture we supported was x86.  The aspnet_wp process model had a memory limit that was calculated at runtime during startup.  The limit was configurable (<processModel memoryLimit/>) as a percentage of physical RAM, and the default was 60%.  This limit prevents the process from consuming too much memory, especially in the face of a memory leak caused by user code, and allows the process to recycle gracefully.  We also had a feature known as the ASP.NET cache, which allows you to store objects with various expiration and validation policies.  The ASP.NET cache had built-in logic that would drop entries when private bytes came too close to the private bytes memory limit for the process.  The actual percentage at which the cache began to drop entries is an implementation detail, and it differs across hardware.  Suffice it to say that the cache dropped entries when memory usage approached the process memory limit.
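For reference, this v1.x limit lives in machine.config; the default described above corresponds to something like this (the attribute value shown is just the default, spelled out):

```xml
<!-- machine.config (ASP.NET v1.x): memoryLimit is a percentage of physical RAM -->
<system.web>
  <processModel memoryLimit="60" />
</system.web>
```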

This default limit of 60% worked okay for machines with small amounts of RAM.  The 60% value was chosen to allow plenty of breathing room in the case where the limit was exceeded, since when that happens the new process is created before the old process has completely drained existing requests.  Stress runs showed that memory limits higher than this resulted in too much memory paging during process recycles.  However, there was still a problem on boxes with large amounts of RAM.  For example, a box with 4GB of RAM had a default memory limit of 2.4GB (60%).  This obviously doesn't work, given that the user-mode address space is only 2GB.  Furthermore, ASP.NET apps typically had a very fragmented virtual address space.  We often saw apps throwing OutOfMemoryExceptions when virtual bytes reached about 1.5GB.  We found through experimentation that on x86, with a 2GB user-mode virtual address space, a conservative private bytes limit of 800MB worked for most people.  We began recommending that people use this as a cap on private bytes.  Of course some applications could go beyond this, but if you wanted to play it safe, 800MB was a good limit for private bytes.

In v1.1, we also supported WS03.  WS03 used a different process model (w3wp).  This process model gets its private bytes limit from IIS configuration ("Maximum Used Memory" on the Recycling tab of the Application Pool properties in IIS Manager), not the aforementioned <processModel memoryLimit/>.  Unfortunately, this limit was not set by default.  So if the application used the ASP.NET cache, we would never drop entries, and eventually you would start seeing OutOfMemoryExceptions.  These are non-recoverable and require human intervention, since the process would typically stay up and serve responses with a nicely formatted OutOfMemoryException error page from that point forward.

In v2.0, we fixed this by exposing new configuration for the cache, <caching><cache privateBytesLimit/></caching>.  Now the cache could have a memory limit independent of the process memory limit.  For backward compatibility, we also applied the process memory limit if it was set.  Unfortunately, this complicated things a bit, and the way we calculate the cache memory limit is hidden from the user.  If you don't set a cache or a process memory limit, we calculate one for you.  If the user-mode address space is 2GB, we use MIN(60% physical RAM, 800MB).  If the user-mode address space is greater than 2GB and the process is 32-bit, we use MIN(60% physical RAM, 1800MB).  And for 64-bit processes, we use MIN(60% physical RAM, 1TB).  That's what happens if you don't set any limits.  However, if you set both a cache memory limit and a process memory limit, we will use the minimum of the two.  And if you only set one, we will use the one you set.  Confused?  You'll be happy to know that the actual limit we use is exposed by the property Cache.EffectivePrivateBytesLimit.
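The selection logic above can be sketched roughly as follows.  This is purely illustrative; the names and structure are hypothetical, not the actual internal implementation:

```csharp
using System;

// Illustrative sketch of how the effective cache limit is chosen.
// All names here are made up; the real logic is internal to ASP.NET.
static class CacheLimitSketch
{
    const long MB = 1L << 20;

    public static long EffectiveLimit(
        long cacheLimit,        // <cache privateBytesLimit/>, 0 if unset
        long processLimit,      // process private bytes limit, 0 if unset
        long physicalRam,       // total physical RAM in bytes
        bool is64BitProcess,
        bool largeAddressSpace) // 32-bit with user-mode address space > 2GB
    {
        // Both limits set: use the smaller of the two.
        if (cacheLimit > 0 && processLimit > 0)
            return Math.Min(cacheLimit, processLimit);

        // Only one set: use it.
        if (cacheLimit > 0) return cacheLimit;
        if (processLimit > 0) return processLimit;

        // Neither set: MIN(60% of physical RAM, an address-space cap).
        long cap = is64BitProcess ? 1L << 40     // 1TB for 64-bit
                 : largeAddressSpace ? 1800 * MB // 32-bit, >2GB address space
                 : 800 * MB;                     // 32-bit, 2GB address space
        return Math.Min((long)(physicalRam * 0.6), cap);
    }
}
```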

While 60% may not work for boxes with 1TB of RAM, this value is configurable.

Enough about private bytes.  The cache also has a physical memory limit that was introduced in v2.0.  It was introduced because the garbage collector (GC) becomes very aggressive in low memory conditions.  If the cache is consuming a bunch of memory and inducing the low memory condition, then it needs to release entries to alleviate the pressure on the GC.  In 2.0, the cache dropped entries when available memory was <= 11%.  We later discovered this was too aggressive, and have backed it off in 2.0 SP1 so that now we can use much more physical memory before dropping entries.  The actual limits that we use are an implementation detail, and they are different for different hardware.
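As a hedged example, the cache limits can be configured in web.config along these lines (the values shown are arbitrary examples, not recommendations; the percentagePhysicalMemoryUsedLimit attribute is, to my knowledge, only available once the 2.0 SP1 bits are installed):

```xml
<!-- web.config sketch: cache memory limits (values are examples only) -->
<system.web>
  <caching>
    <!-- privateBytesLimit is in bytes; 0 means "let ASP.NET decide" -->
    <cache privateBytesLimit="838860800"
           percentagePhysicalMemoryUsedLimit="90" />
  </caching>
</system.web>
```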

The v2.0 SP1 cache work was requested as a QFE by MS.com.  The KB article for this is at http://support.microsoft.com/kb/938276.  Anyone using the v2.0 ASP.NET cache should install this QFE.  It will of course be included in v2.0 SP1, when it is released.

The cache memory manager should not be the primary eviction mechanism.  It is better to use expiration policies on the entries, so that they expire before encountering memory pressure.  Most of the issues surrounding memory stem from the fact that the ASP.NET cache is not able to detect how much memory it is using.  It knows the number of entries, but not their sizes.  It uses private bytes for the process and available physical memory for the machine to determine when to drop entries, even though the cache may not be the cause of the memory pressure.  I suggest thinking of the cache memory manager as a safety net or fallback, and using expiration policies or other forms of validation to ensure that your cache entries are removed before encountering memory pressure.  If you only have a handful of cache entries this is not really an issue, and you can rely on the cache memory manager.  But if you're inserting unique entries on a per-request basis, or if you simply have a very large number of entries, it makes sense to use expiration and/or validation policies.
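For example, explicit expiration policies on Cache.Insert look like this.  The keys and the value-producing methods are made up for illustration:

```csharp
using System;
using System.Web;
using System.Web.Caching;

// Absolute expiration: the entry is removed 10 minutes after insertion,
// regardless of how often it is accessed.
HttpRuntime.Cache.Insert(
    "dailyReport",                   // hypothetical key
    BuildReport(),                   // hypothetical expensive-to-build value
    null,                            // no CacheDependency
    DateTime.UtcNow.AddMinutes(10),  // absolute expiration
    Cache.NoSlidingExpiration);

// Sliding expiration: the entry is removed after 5 minutes of inactivity.
HttpRuntime.Cache.Insert(
    "userPrefs",                     // hypothetical key
    LoadPreferences(),               // hypothetical value
    null,
    Cache.NoAbsoluteExpiration,
    TimeSpan.FromMinutes(5));
```

Either way, the entries leave the cache on your schedule rather than waiting for the memory manager to notice pressure.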