Excessive cached read I/O is a growing problem. For over a year we have been working on this problem with several companies. You can read more about it in the original "Too Much Cache" blog post.
On 32 bit systems, the kernel can address at most 2 GB of virtual memory. This address range is shared and divided up among the many resources the system needs, one of which is the System File Cache's working set. On 32 bit systems the theoretical limit for the cache's working set is almost 1 GB; however, when a page is removed from the working set it ends up on the standby page list, so the system can cache more than 1 GB if there is available memory. The working set itself, however, is limited to what can be allocated within the kernel's 2 GB virtual address range. Since most modern systems have more than 1 GB of physical RAM, the System File Cache's working set size on a 32 bit system typically isn't a problem.
With 64 bit systems, the kernel virtual address space is very large, typically larger than the physical RAM on most systems. On these systems the System File Cache's working set can be very large, typically about equal to the size of physical RAM. If applications or file sharing perform a lot of sustained cached read I/O, the System File Cache's working set can grow to take over all of physical RAM. When this happens, process working sets are paged out, everyone starts fighting for physical pages, and performance suffers.
The only way to mitigate this problem is to use the provided APIs of GetSystemFileCacheSize() and SetSystemFileCacheSize(). The previous blog post "Too Much Cache" contains sample code and a compiled utility that can be used to manually set the System File Cache's working set size.
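To make the API shapes concrete, here is a minimal C sketch. The helper name `cap_file_cache` and the 100 MB / 1 GB limits are illustrative assumptions, not the sample utility's actual code, and the caller needs SeIncreaseQuotaPrivilege for the Set call to succeed:

```c
/* Sketch only: capping the System File Cache working set with the
 * documented APIs. The Windows-specific part is guarded so the helper
 * compiles anywhere; the limits used below are illustrative values,
 * not recommendations. */
#include <stddef.h>

/* Cache sizes are usually discussed in MB; the APIs take bytes. */
static size_t mb_to_bytes(size_t mb)
{
    return mb * 1024u * 1024u;
}

#ifdef _WIN32
#include <windows.h>

/* Caller must hold SeIncreaseQuotaPrivilege for SetSystemFileCacheSize. */
static int cap_file_cache(size_t min_mb, size_t max_mb)
{
    SIZE_T cur_min, cur_max;
    DWORD flags;

    /* Read the current limits first (also a quick sanity check). */
    if (!GetSystemFileCacheSize(&cur_min, &cur_max, &flags))
        return -1;

    /* Enforce the new maximum as a hard limit. */
    if (!SetSystemFileCacheSize(mb_to_bytes(min_mb),
                                mb_to_bytes(max_mb),
                                FILE_CACHE_MAX_HARD_ENABLE))
        return -1;
    return 0;
}
#endif
```

Calling `cap_file_cache(100, 1024)` would pin the cache's working set between 100 MB and a hard 1 GB ceiling, roughly what the "Too Much Cache" sample does.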
These APIs offer one mitigation strategy, but they have a couple of limitations:
1) There is no conflict resolution between multiple applications. If you have two applications trying to set the System File Cache's working set size, the last one to call SetSystemFileCacheSize() will win. There is no centralized control of the System File Cache's working set size.
2) There is no guidance on what to set the System File Cache's working set size to. There is no one-size-fits-all solution. A high cache working set size is good for file servers but bad for large memory applications, and a low working set size could hurt everyone's I/O performance. It is essentially up to third-party developers or IT administrators to determine what is best for their server, and often the limits are determined by a best guesstimate backed by some testing.
We fully understand that while we provide one way to mitigate this problem, the solution is not ideal. We spent a considerable amount of time reviewing and testing other options. The problem is that there are so many varied scenarios on how users and applications rely on the System File Cache. Some strategies worked well for the majority of usage scenarios, but ended up negatively impacting others. We could not release any code change that would knowingly hurt several applications.
We also investigated changing some memory manager architecture and algorithms to address these issues with a more elegant solution; however the necessary code changes are too extensive. We are experimenting with these changes in Windows 7 and there is no way that we could back port them to the current operating systems. If we did, we would be changing the underlying infrastructure that everyone has been accustomed to. Such a change would require stress tests of all applications that run on Windows. The test matrix and the chance of regression are far too large.
So that brings us back to the only provided solution - use the provided APIs. While this isn't an ideal solution, it does work, but with the limitations mentioned above. In order to help address these limitations, I've updated the SetCache utility to the Microsoft Windows Dynamic Cache Service. While this service does not completely address the limitations above, it does provide some additional relief.
The Microsoft Windows Dynamic Cache Service uses these APIs and centralizes management of the System File Cache's working set size. With this service, you can define a list of processes that you want to prioritize over the System File Cache; the service monitors the working set sizes of those processes and backs off the System File Cache's working set size accordingly. It runs in the background, continuously monitoring and dynamically adjusting the System File Cache's working set size. The service provides many options, such as adding additional slack space to each process' working set or backing off during a low memory event.
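The adjustment the service makes can be illustrated with a small, portable C sketch. The names, the structure, and the slack handling here are assumptions for illustration, not the service's actual implementation:

```c
#include <stddef.h>

/* Illustration of the Dynamic Cache Service idea (not its actual code):
 * start from physical RAM, subtract the working set of every prioritized
 * process plus some slack, and clamp the result to the configured
 * [min, max] range before applying it to the file cache. */
typedef struct {
    size_t working_set_bytes; /* current working set of one prioritized process */
} prioritized_proc;

static size_t compute_cache_max(size_t phys_ram,
                                const prioritized_proc *procs, size_t nprocs,
                                size_t slack_per_proc,
                                size_t cfg_min, size_t cfg_max)
{
    size_t reserved = 0;
    for (size_t i = 0; i < nprocs; i++)
        reserved += procs[i].working_set_bytes + slack_per_proc;

    /* Whatever the prioritized processes don't need goes to the cache. */
    size_t target = (reserved < phys_ram) ? phys_ram - reserved : cfg_min;
    if (target < cfg_min) target = cfg_min;
    if (target > cfg_max) target = cfg_max;
    return target;
}
```

For example, on a 16 GB server with one prioritized SQL Server process holding a 12 GB working set and 1 GB of slack, the cache ceiling works out to 3 GB; if the process grows, the ceiling shrinks toward the configured minimum.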
Please note that this service is experimental and includes sample source code and a compiled binary. Anyone is free to re-use this code in their own solution. Please note that you may experience some performance side effects while using this service, as it cannot possibly address all usage scenarios; some edge usage scenarios may be negatively impacted. The service only attempts to improve the situation given the current limitations. Please report any bugs or observations here on this blog post. While we may not be able to fix every usage problem, we will try to offer best-effort support.
Side Effects may include:
Cache page churn - If the System File Cache's working set is too low and there is sustained cached read I/O, the memory manager may not be able to properly age pages. When forced to remove some pages in order to make room for new cache pages, the memory manager may inadvertently remove the wrong pages. This could result in cached page churn and decreased disk performance for all applications.
Version 1.0.0 - Initial Release
NOTE: The memory management algorithms in Windows 7 and Windows Server 2008 R2 operating systems were updated to address many file caching problems found in previous versions of Windows. There are only certain unique situations when you need to implement the Dynamic Cache service on computers that are running Windows 7 or Windows Server 2008 R2. For more information on how to determine if you are experiencing this issue and how to resolve it, please see the More Information section of Microsoft Knowledge Base article 976618 - You experience performance issues in applications and services when the system file cache consumes most of the physical RAM.
I downloaded DynCache for x64 systems. The ReadMe file indicates
"3) Import the DynCache.reg registry file. This registry file contains default settings that you will probably want to modify. "
Contents of the file are reproduced below. Can anyone offer clarification on what I can modify? (The server in question is x64, 16 GB RAM, SQL 2005 x64.)
Windows Registry Editor Version 5.00
"MinSystemCacheMBytes"=dword:00000064 (MIN > MAX ???)
"AdditionalBackOffCounter"="\\SQLServer:Memory Manager\\Total Server Memory (KB)"
Why are there folders for i386 and IA64 in the released package?
I'm experiencing system slowness due to slow writes to an SSD. I'm aware of the risks, but is it possible to increase the system write cache and/or reduce the pace at which dirty pages are written from the cache to disk? What is the role of "WriteWatch" under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management (no info on TechNet about it)?
I have been reading your posts about memory management, but I can't see a way to preserve part of the cache for writes (I can set the maximum amount of memory for the cache with your excellent Microsoft Windows Dynamic Cache Service and the amount of dirty pages with SystemCacheDirtyPageThreshold).
I'm using the LargeSystemCache setting, and the disk CacheIsPowerProtected and UserWriteCacheSetting options are set. All I want is to cache more writes in memory. It would be fantastic to be able to cache "write through" requests as well (more risk, but I can handle it).
Thanks for your answer!
Most of your recommendations are generic for "old SSDs" that have too many problems stuttering due to writes. I don't have enough memory to disable the page file (only 4 GB), but I have configured the page file on another, mechanical disk and I have disabled defrag and the paging executive.
I know that preventing "write through" writes can be a bad idea in general and the system may end up unbootable, but it is like having your OS on a RAID 0. Enabling the "write caching" and "advanced performance" options on disks without a real backup power supply is also risky, but it is not so bad if you know what you are doing.
In the end you risk the integrity of the system to gain performance... Like everything in life, it is about tradeoffs. I don't expect you to recommend this as a "best practice", but it would be interesting to know how to achieve this behavior, or whether it is impossible on Windows.
For example, it can be very useful for demo virtual machines that need to run as fast as possible, where you will discard the changes after the demo anyway.
The folder has amd64, i386, and ia64.
Where is the folder just for 64-bit, NOT ia64 and amd64? Where is the Windows Dynamic Cache service for a 64-bit server that is NOT IA or AMD?
You mentioned above: "So that brings us back to the only provided solution - use the provided APIs."
This solution is only for 64-bit machines right? Based on your explanations above, it sounds like a 32-bit OS wouldn't suffer from this memory issue. Meaning someone could 'downgrade' their OS to 32-bit and basically escape this issue altogether right?
I have a SQL Server 2005 install that has 15 active instances, none of them the default one. I think this may help my issue with "There is insufficient system memory to run this query", but I would need to add the different counters for the different instances (each instance has its own counter in the form of MSSQL$INSTANCENAME). Any recommendations on how to tackle this?
I'm trying to mimic the setcache.exe behavior. If I set MaxSystemCacheMBytes to 4096 and leave MinSystemCacheMBytes at 0 (the default, which means 100 MB), start the Dynamic Cache Service, and then run setcache.exe, it appears that the settings have not been activated. Is this expected behavior, or am I missing something?
I am getting this error in the application event log when I load the service. Should I be concerned about this?
Windows cannot open the 32-bit extensible counter DLL ASP.NET in a 64-bit environment. Contact the file vendor to obtain a 64-bit version. Alternatively, you can open the 32-bit extensible counter DLL by using the 32-bit version of Performance Monitor. To use this tool, open the Windows folder, open the Syswow64 folder, and then start Perfmon.exe
[The service uses performance counters if you enable active monitoring. It looks like you have a 32 bit only perf counter. You can ignore this error, or you can disable ASP.NET counters if you aren't using them for anything else.]
I have a Windows 2008 R2 EE server and we are using HP Data Protector 6.2 on it. We have 4 GB of RAM on this server. Whenever backup jobs are running, memory utilization is 3.82 GB. If we stop the HP DP services, memory utilization comes down. We referred this issue to HP and they told us to follow this article. As per the article I installed the service, but when I try to start it, I get error 1153: "The specified program was written for an earlier version of Windows." Can you help? I copied the dyncache.exe file from the retail/amd64 folder.
[At this time we do not have a publicly available version of dyncache that works on R2. If you open a support incident with Microsoft Support, they can supply you with a version of the tool that works on R2. You can find more information on opening such an incident at http://support.microsoft.com/select/Default.aspx?target=assistance.]
All our x64 file servers seem to have this issue to a greater or lesser extent. It seems a total pain to make those of us who have upgraded to the latest version jump through the hoops of opening a support case to get the fix.
[The memory management algorithms in the Windows 7 and Windows Server 2008 R2 operating systems were updated to address many file caching problems found in previous versions of Windows. There are only certain unique situations when you need to implement this service on computers that are running Windows 7 or Windows Server 2008 R2. We would like to know more about those situations where the Dynamic Cache Service is required so we can identify areas for additional improvement. For example, the fix in KB 2564236, "I/O throughput is low when large files are read sequentially in Windows 7 or in Windows Server 2008 R2," might be more appropriate than running the Dynamic Cache Service.]
It's a shame; this should be provided as a hotfix. When I call Microsoft Support they want me to pay for a case. I can't believe Microsoft sells the R2 patch to fix the issue.
[The current publicly available version of dyncache will run on R2. We maintained the same download link, so you can get it from the link in this article.]
There is a hideous hack in the DynCache source code: it makes the same mistake as here: blogs.msdn.com/.../58973.aspx
[Unfortunately that link didn't come through. DynCache has worked well but is now largely obsolete because this issue has been better addressed in more recent versions of Windows.]
I'm running this service in Windows Server 2008 SP2. I'm using the AMD64, debug version. However, running Debug View does not show anything. I've tried running Debug View as administrator as well - still nothing. The service starts alright, but doesn't seem to do anything. The memory consumed by the system cache still exceeds the values set in the registry.
[By default debug output is not displayed. You need to set a debug print filter mask. Under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter create a value named DEFAULT and set it to 0xF and reboot. After this dbgview should work.]
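For reference, the same change described in that reply can be captured in a .reg file (a reboot is still required afterwards):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Debug Print Filter]
"DEFAULT"=dword:0000000f
```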