A cache reduces the performance impact of accessing data that resides on slower storage media. Without it your PC would crawl along and become nearly unusable. If data or code pages for a file reside on the hard disk, it can take the system 10 milliseconds to access the page. If that same page resides in physical RAM, it can take the system 10 nanoseconds to access the page, making access to physical RAM about 1 million times faster than access to a hard drive. It would be great if we could load the entire contents of the hard drive into RAM, but that scenario is cost prohibitive, and RAM is volatile. Hard disk space is far less costly and is non-volatile (the data persists even when disconnected from a power source).
Since we are limited in how much RAM we can put in a box, we have to make the most of it. We have to share this crucial physical resource among all running processes, the kernel, and the file system cache. You can read more about how this works in "The Memory Shell Game" post.
The file system cache resides in kernel address space. It is used to buffer access to the much slower hard drive. The file system cache will map and unmap sections of files based on access patterns, application requests and I/O demand. The file system cache operates like a process working set. You can monitor the size of your file system cache's working set using the Memory\System Cache Resident Bytes performance monitor counter. This value will only show you the system cache's current working set. Once a page is removed from the cache's working set it is placed on the standby list. You should consider the standby pages from the cache manager as a part of your file cache. You can also consider these standby pages to be available pages. This is what the pre-Vista Task Manager does. Most of what you see as available pages is probably standby pages for the system cache. Once again, you can read more about this in "The Memory Shell Game" post.
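If you prefer to sample that counter from code rather than from Performance Monitor, a quick sketch against the PDH API follows. This example is my illustration, not part of the original post; note that PdhAddEnglishCounterW requires Windows Vista/Server 2008 or later (use PdhAddCounterW with a localized counter path on older systems).

    // Minimal sketch: read \Memory\System Cache Resident Bytes via the
    // PDH API. Link with pdh.lib.
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>

    #pragma comment(lib, "pdh.lib")

    int wmain(void)
    {
        PDH_HQUERY query = NULL;
        PDH_HCOUNTER counter = NULL;
        PDH_FMT_COUNTERVALUE value;

        if (PdhOpenQueryW(NULL, 0, &query) != ERROR_SUCCESS)
            return 1;

        // The same counter path you would add in Performance Monitor.
        if (PdhAddEnglishCounterW(query, L"\\Memory\\System Cache Resident Bytes",
                                  0, &counter) != ERROR_SUCCESS)
            return 1;

        // This is an instantaneous counter, so one collection is enough.
        if (PdhCollectQueryData(query) == ERROR_SUCCESS &&
            PdhGetFormattedCounterValue(counter, PDH_FMT_LARGE, NULL,
                                        &value) == ERROR_SUCCESS)
        {
            wprintf(L"System Cache Resident Bytes: %lld (%.1f MB)\n",
                    value.largeValue, value.largeValue / (1024.0 * 1024.0));
        }

        PdhCloseQuery(query);
        return 0;
    }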
Too Much Cache is a Bad Thing
The memory manager works on a demand-based algorithm: physical pages go wherever the current demand is. If the demand isn't satisfied, the memory manager will start pulling pages from other areas, scrubbing them, and sending them to help meet the growing demand. Just like any process, the system file cache can consume physical memory if there is sufficient demand.
Having a lot of cache is generally not a bad thing, but if it comes at the expense of other processes it can be detrimental to system performance. There are two ways this can occur: through read I/O and through write I/O.
Excessive Cached Write I/O
Applications and services can dump lots of write I/O to files through the system file cache. The system cache's working set will grow as it buffers this write I/O, and system threads will start flushing these dirty pages to disk. Typically the disk can't keep up with the I/O speed of an application, so the writes get buffered into the system cache. At a certain point the cache manager reaches a dirty page threshold and starts to throttle I/O into the cache manager. It does this to prevent applications from overwhelming physical RAM with write I/O. There are, however, some isolated scenarios where this throttle doesn't work as well as we would expect, whether due to bad applications or drivers, or to not having enough memory. Fortunately, we can tune the number of dirty pages allowed before the system starts throttling cached write I/O. This is handled by the SystemCacheDirtyPageThreshold registry value, as described in Knowledge Base article 920739: http://support.microsoft.com/default.aspx?scid=kb;EN-US;920739
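For illustration, the value can also be created programmatically. The sketch below takes the key path and REG_DWORD type from KB 920739; the threshold itself is an arbitrary example (it is a count of pages, and should be sized for your workload), and as noted at the end of this post the value is only honored on Windows Server 2003 SP2, or SP1 with the KB920739 hotfix.

    // Sketch: create the SystemCacheDirtyPageThreshold value described
    // in KB 920739. The threshold is a page count; the number below is
    // an arbitrary example, not a recommendation. Run elevated.
    #include <windows.h>
    #include <stdio.h>

    int wmain(void)
    {
        HKEY key = NULL;
        LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
            L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
            0, KEY_SET_VALUE, &key);
        if (rc != ERROR_SUCCESS)
        {
            wprintf(L"RegOpenKeyExW failed: %ld\n", rc);
            return 1;
        }

        DWORD pages = 0x8000; // example only; size this for your workload
        rc = RegSetValueExW(key, L"SystemCacheDirtyPageThreshold", 0,
                            REG_DWORD, (const BYTE *)&pages, sizeof(pages));
        RegCloseKey(key);
        return (rc == ERROR_SUCCESS) ? 0 : 1;
    }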
Excessive Cached Read I/O
While the SystemCacheDirtyPageThreshold registry value can tune the number of write/dirty pages in physical memory, it does not affect the number of read pages in the system cache. If an application or driver opens many files and actively reads from them continuously through the cache manager, then the memory manager will move more physical pages to the cache manager. If this demand continues to grow, the cache manager can grow to consume physical memory, and other processes (with less memory demand) will get paged out to disk. This read I/O demand may be legitimate or may be due to poor application scalability. The memory manager doesn't know whether the demand is due to bad behavior or not, so pages are moved simply because there is demand for them. On a 32 bit system, the file system cache working set is essentially limited to 1 GB. This is the maximum size that we blocked off in the kernel for the system cache working set. Since most systems have more than 1 GB of physical RAM today, having the system cache working set consume physical RAM with read I/O is less likely.
This scenario, however, is more prevalent on 64 bit systems. With the increase in pointer length, the kernel's address space is greatly expanded, and the system cache's working set limit can, and typically does, exceed how much memory is installed in the system. It is much easier for applications and drivers to load up the system cache with read I/O. If the demand is sustained, the system cache's working set can grow to consume physical memory. This pushes other process and kernel resources out to the page file and can be very detrimental to system performance.
Fortunately we can also tune the server for this scenario. We have added two APIs to query and set the system file cache size - GetSystemFileCacheSize() and SetSystemFileCacheSize(). We chose to implement this tuning option via API calls to allow setting the cache working set size dynamically. I’ve uploaded the source code and compiled binaries for a sample application that calls these APIs. The source code can be compiled using the Windows DDK, or you can use the included binaries. The 32 bit version is limited to setting the cache working set to a maximum of 4 GB. The 64 bit version does not have this limitation. The sample code and included binaries are completely unsupported. It is just a quick and dirty implementation with little error handling.
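To give a feel for how these APIs are called, here is a minimal sketch (this is my illustration, not the uploaded sample itself). The 16 MB minimum and 1 GB maximum are arbitrary example values. SetSystemFileCacheSize() requires the SeIncreaseQuotaPrivilege privilege to be enabled in the calling process, so run it elevated.

    // Minimal sketch: query the current file cache limits, then cap the
    // cache working set. The 16 MB / 1 GB values are arbitrary examples.
    #include <windows.h>
    #include <stdio.h>

    // SetSystemFileCacheSize() requires SeIncreaseQuotaPrivilege.
    static BOOL EnablePrivilege(LPCWSTR name)
    {
        HANDLE token;
        TOKEN_PRIVILEGES tp;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
            return FALSE;

        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;

        BOOL ok = LookupPrivilegeValueW(NULL, name, &tp.Privileges[0].Luid) &&
                  AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) &&
                  GetLastError() == ERROR_SUCCESS; // catches ERROR_NOT_ALL_ASSIGNED
        CloseHandle(token);
        return ok;
    }

    int wmain(void)
    {
        SIZE_T minSize = 0, maxSize = 0;
        DWORD flags = 0;

        if (!GetSystemFileCacheSize(&minSize, &maxSize, &flags))
        {
            wprintf(L"GetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        wprintf(L"Current: min=%Iu max=%Iu flags=0x%lx\n",
                minSize, maxSize, flags);

        if (!EnablePrivilege(L"SeIncreaseQuotaPrivilege"))
        {
            wprintf(L"Could not enable SeIncreaseQuotaPrivilege\n");
            return 1;
        }

        // Cap the cache working set at 1 GB.
        if (!SetSystemFileCacheSize((SIZE_T)16 * 1024 * 1024,
                                    (SIZE_T)1024 * 1024 * 1024,
                                    FILE_CACHE_MAX_HARD_ENABLE))
        {
            wprintf(L"SetSystemFileCacheSize failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }

Note that the hard-max flag is what makes the limit enforced: the minimum and maximum values are only honored when the corresponding FILE_CACHE_MIN_HARD_ENABLE / FILE_CACHE_MAX_HARD_ENABLE flags are set.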
The tool works fine on 2008 x64 in that it commits the change immediately after running the command. However, the setting doesn't stick after a server reboot; it defaults back to 8386607MB. Is there a way to make the setting permanent, or do we have to resort to running the tool during the Windows startup sequence?
Thank you very much for your post. I am so sorry I did not find it 2 years ago... I am trying to download the binaries with no success. I have tried several ISPs. Please help!
CecoM, this has now been replaced with Microsoft Windows Dynamic Cache Service - grab it from http://www.microsoft.com/downloads/details.aspx?FamilyID=e24ade0a-5efe-43c8-b9c3-5d0ecb2f39af&displaylang=en
I just tried to install this tool on Windows 2008 R2 but it failed to start with the notification that the tool was written for an earlier version of Windows.
Will there be an update soon or is there a tool provided within R2 that manages the cache size?
I wanted to get some clarification after reading the post related to the Windows Dynamic Cache Service. I am running a Windows 2008 R2 NFS server. Our workload performs heavy I/O operations on large files, and we are running out of memory during peak workloads. You (the team) mentioned that some architectural changes to memory management in R2 may address these issues. Will this service help my situation? Does it even apply to R2? If so, your help and guidance would be appreciated, as I truly believe this to be our issue! P.S. I need your help badly if I'm going to solidify Windows' continued use for our application... please help!
(Question from another blog reader) I am seeing similar issues on Windows 7. What should I do to try to address/investigate such an issue since they are not supposed to exist anymore?
I'm a SQL Server Analysis Services MVP, and I'm very interested in the interaction between the system file cache and SSAS (as it leverages the system file cache heavily). Do you know any experts on that topic from Microsoft that I should ping? I've written some C# code that lets you clear the Windows system file cache. The main reason for this is to repeatably retest the performance of an Analysis Services query on a completely cold system file cache (without a server reboot). Two quick questions: 1. It uses NtSetSystemInformation. Is this doing anything different than SetSystemFileCacheSize? You can see the code here: asstoredprocedures.svn.codeplex.com/.../FileSystemCache.cs 2. From your article, it appears that limiting the system file cache doesn't zero the memory that's trimmed, but rather moves it to standby. Is there an API (or any other way) of clearing standby memory in the system file cache? The code mentioned above isn't producing repeatable cold system file cache tests, and I believe it's because soft faults from standby are so much faster than hard faults.
[SetSystemFileCacheSize() internally uses NtSetSystemInformation(). I would recommend using SetSystemFileCacheSize() over NtSetSystemInformation() because SetSystemFileCacheSize() is a public API. While you can use NtSetSystemInformation(), you run a greater risk of your application breaking if we change the interface.
[If you want to clear out physical RAM without a server reboot, I would recommend creating an application that will consume most of available memory and then dump the pages onto the free list. First get the current File System Cache's working set size, then set the limit to something fairly low (but not too low). Next find out how much available memory is on the system, and then allocate that much memory in your process (leave about 64 MB free). Write at least one byte per page to guarantee that it will be committed to your process's working set. Finally, restore the System File Cache's working set to where it was and then exit the process.]
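For illustration, the sequence described in that reply might look roughly like the sketch below. All the sizes are example values; it assumes a 64-bit build (so one allocation can exceed 4 GB) and that SeIncreaseQuotaPrivilege is already enabled, as in the earlier sketch.

    // Illustrative sketch of the sequence in the reply above: shrink the
    // cache, consume available RAM so standby pages are repurposed, then
    // restore the original limits and exit.
    #include <windows.h>

    int wmain(void)
    {
        SIZE_T origMin = 0, origMax = 0;
        DWORD origFlags = 0;

        // 1. Remember the file cache's current working set limits.
        if (!GetSystemFileCacheSize(&origMin, &origMax, &origFlags))
            return 1;

        // 2. Clamp the cache fairly low (example values) so it gives up pages.
        SetSystemFileCacheSize((SIZE_T)16 * 1024 * 1024,
                               (SIZE_T)64 * 1024 * 1024,
                               FILE_CACHE_MAX_HARD_ENABLE);

        // 3. Find available memory and allocate it, leaving ~64 MB free.
        MEMORYSTATUSEX ms = { sizeof(ms) };
        SYSTEM_INFO si;
        GlobalMemoryStatusEx(&ms);
        GetSystemInfo(&si);

        if (ms.ullAvailPhys > 128ull * 1024 * 1024)
        {
            SIZE_T toAlloc = (SIZE_T)(ms.ullAvailPhys - 64ull * 1024 * 1024);
            BYTE *p = (BYTE *)VirtualAlloc(NULL, toAlloc,
                                           MEM_RESERVE | MEM_COMMIT,
                                           PAGE_READWRITE);
            if (p)
            {
                // 4. Touch one byte per page so every page is committed
                //    to this process's working set.
                for (SIZE_T i = 0; i < toAlloc; i += si.dwPageSize)
                    p[i] = 1;
                VirtualFree(p, 0, MEM_RELEASE);
            }
        }

        // 5. Restore the original limits and exit. (Flag restoration is
        //    simplified: if no hard limit was set before, disable ours.)
        SetSystemFileCacheSize(origMin, origMax,
                               origFlags ? origFlags
                                         : FILE_CACHE_MAX_HARD_DISABLE);
        return 0;
    }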
I'm not able to start the DynCache service on my Windows Server 2008 SP2 64-bit system. I have been struggling with this for some time now. Please help!!!
Windows could not start the Dynamic Cache Service service on Local Computer.
Error 216: 0xd8
On Windows 7 SP1 I had to hardcode a limit into the LimitCache function, and now it works :) (the upper limit is 512 MB, as reported by Sysinternals CacheSet, and once it reached this limit, it didn't cross it).
Hello, very interesting article.
I am using Windows Server 2008 R2 and am seeing this runaway file cache issue consuming all of the available physical RAM.
My application does a ton of random access reads and writes.
What were the changes to the memory manager between Windows 2008 and 2008 R2?
I am curious, since the runaway cache problem is still there.
What is your recommendation for dealing with it on Windows 2008 R2?
[The cache manager in Windows Server 2008 R2 handles almost all scenarios more efficiently, and usually avoids the need for DynCache. Unless you have many individual files open, the cache manager should not encounter the scenario described in this article on R2. It is difficult to provide 1:1 support through blog comments; if you need troubleshooting assistance you may want to open a support incident so that our engineers can assist you.]
Ugh, this is terribad. With the current Steam sale lots of downloading is being done, and while Steam is downloading, 'Cache WS' in Process Explorer is constantly growing. I saw 3 GB of Cache WS on a 4 GB system, and instead of discarding this clearly useless memory, it started swapping running programs to disk, aarrgh. It's currently so bad I just run Cacheset.exe 1024 1024 half-hourly. Not that it sticks to anywhere near 1024 KB, but at least it clears the cache instantly.
[You may benefit from the service described in this article: http://blogs.msdn.com/b/ntdebugging/archive/2009/02/06/microsoft-windows-dynamic-cache-service.aspx.]
Is SystemCacheDirtyPageThreshold still relevant for Windows Server 2012?
[That setting is only relevant on Windows Server 2003 SP2, or SP1 with KB920739 installed. For more information refer to http://support.microsoft.com/kb/920739.]