Yesterday I found this really nice Channel 9 interview with Maoni Stephens (Dev Owner of the CLR GC) and Andrew Pardoe (Program manager for the CLR GC) where they talked about the new Background GC in CLR 4.0.
She also talks about it here, and there is not much value in me repeating what she already says there, but the main points of the video and the post are:
Concurrent GC is being replaced by Background GC in CLR 4.0
Concurrent GC is the mode of the GC that you use in desktop applications for example. The goal of the concurrent GC is to minimize pause time, and it does so by allowing you to still allocate while a GC is in progress (hence the concurrent part).
Concurrent GC is only available in workstation mode.
In server mode (which is what you use in ASP.NET, for example, when you have multiple processors/cores), put simply, all managed threads are suspended while a GC is in progress, which means that you can't allocate anything. This means that the process pauses slightly during a GC, but on the other hand what you lose in pause time, you gain in throughput, as GCs are performed by x GC threads concurrently, where x is #procs * #cores.
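Which GC flavor a process gets is chosen in its configuration file. As a sketch (using the standard .NET runtime configuration elements; note that ASP.NET hosts default to server GC on multi-proc machines, so you rarely set this by hand there):

```xml
<!-- app.config / web.config sketch: selecting the GC flavor -->
<configuration>
  <runtime>
    <!-- Server GC: one GC thread per logical processor, optimized for throughput -->
    <gcServer enabled="true"/>
    <!-- Concurrent GC (replaced by Background GC in CLR 4.0): optimized for
         low pause times; this setting is ignored when server GC is active -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```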
With concurrent GC you are allowed to allocate while a GC is in progress, but you are not allowed to start another GC while one is already running. This in turn means that the maximum you can allocate during a GC is whatever space is left on the current segment (currently 16 MB in workstation mode) minus anything that is already allocated there.
The difference in background mode is that you are allowed to start a new GC (gen 0/1) while a full background GC is in progress, and this even allows a new segment to be created to allocate in if necessary. In short, the blocking that could occur before, once you had allocated all you could in one segment, won't happen anymore.
Background GC will be available in the Silverlight CLR as well
The CoreCLR uses the same GC as the regular CLR, so this means that Silverlight apps benefit from this as well…
As server mode does not use concurrent GC, this will not be available in the Server GC
Having this in server mode would be incredibly cool, as GCs can get pretty hefty, especially in 64-bit apps with very large heaps, but as Maoni mentions in the video and in the post, this work on the concurrent GC lays the foundation for the same work being done in the Server GC. Because of the complexities involved in doing this in the Server GC, though, it is not included in v4.0.
If you do see a lot of latency due to heavy pause times during full garbage collections, there is a feature introduced in 3.5 SP1 that allows you to be notified when a full GC is about to occur. You can then, for example, redirect to another server in a cluster while the GC occurs.
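The notification feature mentioned here is the GC.RegisterForFullGCNotification API (it requires concurrent GC to be disabled). A minimal sketch of a monitoring loop, where the "redirect traffic" actions are placeholders for whatever your load balancer integration looks like:

```csharp
using System;
using System.Threading;

class FullGCMonitor
{
    static void Main()
    {
        // Ask the CLR to raise notifications when a full GC is approaching.
        // The thresholds (1-99) control how early the warning fires.
        GC.RegisterForFullGCNotification(10, 10);

        var monitor = new Thread(() =>
        {
            while (true)
            {
                // Blocks until a full GC is about to start.
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                {
                    // Placeholder: tell the load balancer to drain this node.
                    Console.WriteLine("Full GC approaching - redirect traffic");
                }

                // Blocks until the full GC has finished.
                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                {
                    // Placeholder: put this node back into rotation.
                    Console.WriteLine("Full GC complete - resume traffic");
                }
            }
        });
        monitor.IsBackground = true;
        monitor.Start();
    }
}
```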
I just want to mention that the fact that this is not in the Server GC does not mean that you should switch your server apps (ASP.NET etc.) to workstation mode with concurrent GC; Server GC is optimized for these scenarios and should still be used there.
Have fun, Tess
"...allows you to be notified when a full GC is about to occur. You can then redirect to another server in a cluster for example while the GC occurs. "
Wow. This is so removed from my world where I deploy to a single server and get a few hundred visits a day. Do you have any details about the kind of application that needed this feature?
any site with high load and high mem usage running on 64 bit... I would say that about half my customers are running in clustered/load balanced environments, then again the other half are the types of sites you're talking about:)
Thanks for giving this nice post. Very important key points.
Came across this article quite late...but really helpful. Precise and self-explanatory.