I got a comment on one of the posts asking this question, and I started writing a comment answer, but it turned into a long-winded rant, so I decided to blog it instead :)
So you're looking at a dump and run !gcroot on your object, but it doesn't find a root. Why is the object still around on the heap?
There are many reasons for this but the short answer is:
It was still alive (rooted) last time a garbage collection for that specific generation was run.
This is not completely true... it could be that there is a problem with !gcroot in this specific case causing it to miss the root, but that would be pretty rare. It could also be that you are running the workstation version of the GC, where only partial collections are done if a garbage collection takes too long; the workstation GC is optimized for applications with a UI, and we don't want to block the UI threads and cause the UI to flicker.
Short of this, we can go back to "the object was alive during the last collection" and take a look at some of the cases where this applies.
Garbage collection is allocation triggered, except in a few cases, such as the app manually calling GC.Collect() or the ASP.NET cache reduction mechanism kicking in.
What does this mean?
Simplified, each generation (0, 1, 2 and large object) has a limit, i.e. how much data can be stored in that generation before a garbage collection occurs. These limits are changed dynamically based on the application's memory usage.
When the application allocates a new object and the limit for generation 0 is exceeded, a gen 0 GC is triggered. Objects in use (rooted) are then moved to Gen 1. If this causes the Gen 1 limit to be exceeded, a Gen 1 GC is triggered and in-use objects move to Gen 2, etc. Gen 2 is currently the max generation. Large objects are treated separately.
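This promotion path can be observed directly with GC.GetGeneration. A minimal sketch (assuming the current layout where gen 2 is the max generation):

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        object o = new object();
        Console.WriteLine(GC.GetGeneration(o));  // freshly allocated: gen 0

        GC.Collect();                            // o is rooted, so it survives and is promoted
        Console.WriteLine(GC.GetGeneration(o));  // gen 1

        GC.Collect();                            // survives again
        Console.WriteLine(GC.GetGeneration(o));  // gen 2, the max generation

        GC.KeepAlive(o);                         // keep o rooted through the calls above
    }
}
```

GC.KeepAlive matters here: without it, the JIT is free to treat `o` as dead before the collections run, and the object would simply be collected instead of promoted.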
So let's say you perform a stress test and then leave the server idle for 4 hours with no requests. This means that no new allocations are made, and thus no new GCs are triggered, so the memory used by the process will never be reduced. In essence, this does not mean that you have a memory leak, it just means that you are not triggering any new GCs. A proper stress test should have some kind of slow-down period after the main stress.
Going back to the objects that are not rooted...
So now we know that the most likely reason is that a garbage collection of that generation has not occurred since the object was unrooted.
Another alternative is that the object has a finalizer, and is thus registered with (and rooted by) the finalize queue, so it will be held until the finalizer thread gets around to finalizing it. Or it could be a member variable of an object with a finalizer.
This root would not show up in !gcroot, but you can see the object show up in !finalizequeue.
Implementing a finalizer (even one with no code in it) will automatically put your object on the finalize queue, which means that it will survive at least one garbage collection, so you should carefully consider whether your object really needs a finalizer.
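A sketch of that extra survival (the class and method names here are made up for illustration; a WeakReference created with trackResurrection: true is needed, since a plain short weak reference is cleared before finalization runs):

```csharp
using System;
using System.Runtime.CompilerServices;

class WithFinalizer
{
    ~WithFinalizer() { }  // empty finalizer: still puts the object on the finalize queue
}

class FinalizerDemo
{
    // Allocate in a separate, non-inlined method so no stack reference survives.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference Allocate()
    {
        return new WeakReference(new WithFinalizer(), trackResurrection: true);
    }

    static void Main()
    {
        WeakReference wr = Allocate();

        GC.Collect();                    // unrooted, but the finalize queue keeps it alive
        Console.WriteLine(wr.IsAlive);   // typically True: waiting to be finalized

        GC.WaitForPendingFinalizers();   // let the finalizer thread run
        GC.Collect();                    // now it can really be collected
        Console.WriteLine(wr.IsAlive);   // typically False
    }
}
```

The exact output can vary with build configuration (a debug build may extend lifetimes), which is also why testing this kind of thing is tricky.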
Worst case, your finalizer might be blocked, so it will take a long time for your object to be finalized, if it ever is... (run !threads to identify the finalizer thread and check whether it is stuck finalizing an object). I have a "case study" on blocked finalizers on my to-do list.
Another alternative is that the object you are looking at is a "large object", or a member variable of a "large object". Garbage collection of the large object heap is much more infrequent than the small object heap, which means that any object that is stored on the large object heap may stay around for a substantial amount of time.
Finally, if you have a repro, you can try calling GC.Collect(), then GC.WaitForPendingFinalizers(), then GC.Collect() again, and see if your object is still around after executing this.
This will garbage collect all generations (including large object), then execute any finalizers, and then garbage collect again to take care of all the objects that had finalizers.
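That sequence can be wrapped in a small helper for repro tests (CheckCollected is my name for it, not a framework API):

```csharp
using System;

class ReproCheck
{
    // Returns true if the object behind wr was reclaimed by a full
    // collect / finalize / collect cycle.
    static bool CheckCollected(WeakReference wr)
    {
        GC.Collect();                   // all generations, including large objects
        GC.WaitForPendingFinalizers();  // run any pending finalizers
        GC.Collect();                   // collect the objects that were just finalized
        return !wr.IsAlive;
    }

    static void Main()
    {
        // The temporary object is never stored in a local, so nothing roots it.
        WeakReference wr = new WeakReference(new object());
        Console.WriteLine(CheckCollected(wr));
    }
}
```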
The specific question in the comment was "could this be because it is used in interop?" The answer is no; with interop, if your object were still in use it would be rooted by a refcount, or as a pinned object or similar, depending on how it was created and used.
Note: this is by no means an exhaustive list of all reasons, just the most common ones that came to mind when reading the comment, but hopefully it should at least somewhat explain why you see unrooted objects on the heap...
I would recommend that you take a peek at Maoni's blog on using the GC efficiently (in the "blogs I read" section) if you want an interesting read on what the GC does.
Is there any way to compact the allocated memory used by the large object heap? As it gets more and more fragmented, we see our allocated memory grow to mammoth sizes.
There is no way to force compaction. Technically you could collect it using GC.Collect, but that is not recommended. Instead you should take a close look at what is on the large object heap and try to avoid using it as much as possible, e.g. by chunking up the data, since high usage of the LOH causes high CPU in the GC.
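A sketch of the chunking idea: arrays over roughly 85,000 bytes land on the large object heap (which reports as generation 2), so keeping each chunk below that threshold keeps the data on the small object heap (the threshold value and ChunkSize here are illustrative):

```csharp
using System;
using System.Collections.Generic;

class LohDemo
{
    const int ChunkSize = 64 * 1024;  // safely below the ~85,000 byte LOH threshold

    static void Main()
    {
        byte[] small = new byte[10000];
        byte[] large = new byte[100000];
        Console.WriteLine(GC.GetGeneration(small));  // 0: small object heap
        Console.WriteLine(GC.GetGeneration(large));  // 2: large object heap

        // Instead of one 1 MB array, hold the same data as a list of small chunks.
        List<byte[]> chunks = new List<byte[]>();
        for (int remaining = 1000000; remaining > 0; remaining -= ChunkSize)
            chunks.Add(new byte[Math.Min(ChunkSize, remaining)]);
    }
}
```

The trade-off is slightly more bookkeeping when reading and writing the data, in exchange for avoiding LOH fragmentation.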
A few days ago I posted a question I had gotten on email (look here for the complete post): "We use Page.Cache
I created an instance of a class through the assembly.CreateInstance() method. I want it to be GC'd once I have run some of the methods inside it. To make sure, I call GC.Collect() after finishing, but it has not been removed, and with a for loop of 10 iterations with 10 GC.Collect() calls, my memory increases without stopping.
Is there anything special to be aware of with CreateInstance()?
Oh, and how can I trace the references to my object?
Without the details of the repro test, I'm going to guess that your object is still kept alive on the stack (referenced by a stack pointer) when the GC.Collect call is made. Testing stuff like this in a loop is tricky. If you want to test it, you should preferably have another button (outside of the one with the loop) that calls GC.Collect, GC.WaitForPendingFinalizers and GC.Collect again, to make sure the object has been finalized as well if it needs to be.
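A sketch of that separation (CreateAndRelease is a made-up helper standing in for the assembly.CreateInstance call): allocating in a separate, non-inlined method guarantees that no live stack reference remains when the caller triggers the collection.

```csharp
using System;
using System.Runtime.CompilerServices;

class StackRootDemo
{
    // Allocating here, rather than in Main, means the stack slot holding
    // "instance" is gone by the time the caller runs GC.Collect.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference CreateAndRelease()
    {
        object instance = new object();  // stands in for assembly.CreateInstance(...)
        return new WeakReference(instance);
    }

    static void Main()
    {
        WeakReference wr = CreateAndRelease();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine(wr.IsAlive);   // False: no stack reference keeps it rooted
    }
}
```

Had the object been created and collected inside the same method, the local variable could still root it through the GC.Collect call, which is exactly the pitfall described above.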
If that is not enough, run !gcroot on the object to see where it is rooted.
Very useful and good article, but I have one question.
We only have generations 0, 1 and 2, so what does GC.Collect(3) do?
We have generations 0, 1 and 2, plus the large object heap. GC.Collect(3) or GC.Collect() performs a full collection of all of these.
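This can be sanity-checked in code: GC.MaxGeneration reports the highest small-object generation, and the per-generation collection counters show which generations a given GC.Collect(n) call touched.

```csharp
using System;

class MaxGenDemo
{
    static void Main()
    {
        Console.WriteLine(GC.MaxGeneration);  // 2

        int gen2Before = GC.CollectionCount(2);
        GC.Collect(0);                        // gen 0 only: gen 2 counter stays put
        GC.Collect();                         // full collection: gen 2 counter goes up
        Console.WriteLine(GC.CollectionCount(2) > gen2Before);
    }
}
```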
During our ASP.NET debugging chat there were many questions around the GC and the different generations.
I am using WMI queries in my Windows Service. My service has a timer which performs the WMI queries to fetch the currently running processes from the machine and write them to a log every 60 seconds. Fetching all the running processes and writing the log takes only 4 seconds. When I look at my Task Manager for the first 60 secs, the mem usage is 7 MB, and when the process runs it goes up to 19 MB and never comes down again. I have disposed all the objects and am not sure what I am missing. I am using GC.Collect, GC.WaitForPendingFinalizers and GC.Collect in the timer when the operation is over, but still it's not reducing the mem usage. Any help on this would be appreciated.
Hi There, I can't believe I've found someone who can help me...
First, I would like to say that I love your blog and read your work weekly.
After a lot of reading I finally came to this post, and I very much hope you can answer me.
I have come across a problem that I have read about in many forums and blogs, and people don't actually have a good solution for it: .NET memory consumption.
The Microsoft guidelines blame our system design, but at the same time I don't believe that I have a poor design.
I have the following VPS environment that host my asp.net application:
Windows 2003 Server
The ASP.NET is layered as following:
1 - UI layer: ASP.NET files
2 - Service Interface: a façade object that mediates between the UI layer and the Business Layer
3 - Business Layer
4 – Data Access Layer
This is a Microsoft guideline design standard, and most architects should be familiar with it.
In the service layer, I implement IDisposable for all objects, and I believe that when the GC collects an object in the service layer it will work on all layers beneath.
On the method implementation of IDisposable I have the following code:
This application used to hold almost 200MB when I looked at it in Task Manager.
I have done a lot of research looking for better performance; I used many tools for debugging memory, performance counters and so on. So I changed a lot of user controls into asp.net pages, and I made some changes following the Microsoft guidelines for .NET performance and managed code performance.
I finally got my ASP.NET application to start up holding 30MB (before it was 70MB).
When my 3 users start working in it, the asp.net worker process goes to 70MB.
CRYSTAL REPORTS AND LOOPING OVER 10,000 ITEMS
When users use crystal reports for pdf or word creation, the application goes to 150MB, depending on the size and complexity of the report.
These reports are generated in the UI layer with the data produced by the Business layer and the other layers beneath. Also, when I run heavy logic in the business layer (looping over 10 thousand items) the memory goes up quickly.
A POSSIBLE SOLUTION AND DRAWBACKS
I have made so many changes looking to improve the application, because I would like to host another application on the same server.
The following code works for me, but nobody recommends using it, and I don't believe it is a good idea to use in an ASP.NET environment.
So, the simple question is: how can I free the memory and stop an ASP.NET application from holding on to so much memory before the IIS recycle kicks in?
First off, it is important to know what you are looking at in Task Manager, i.e. whether it is working set, private bytes or virtual bytes. Preferably you should instead look in perfmon at # Bytes in all Heaps to see if it really is your .net objects that are causing the memory increase.
As you know, .net reserves memory in heap segments (32 MB, 16 MB or 64 MB depending on the .net framework version and GC mode), so if you are looking at virtual memory, that could potentially go up and not come down. But having a high amount of virtual memory reserved (as long as it is not high enough to cause an OOM in the app) doesn't really matter, as long as it isn't committed. I say this with some reservations, but for example, if you added another application to the same application pool, it would be able to use memory out of these reserved segments. If you have it in a different app pool it wouldn't matter either, as unused memory pages would be swapped from RAM out to disk.
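A rough way to compare these views from inside the process (GC.GetTotalMemory is approximately what the # Bytes in all Heaps counter measures, while the working set is roughly what Task Manager's "Mem Usage" column shows):

```csharp
using System;
using System.Diagnostics;

class MemoryView
{
    static void Main()
    {
        long managedBytes = GC.GetTotalMemory(false);  // managed heap only
        long workingSet = Environment.WorkingSet;      // physical memory in use by the process
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;

        Console.WriteLine("Managed heap:  " + managedBytes);
        Console.WriteLine("Working set:   " + workingSet);
        Console.WriteLine("Private bytes: " + privateBytes);
    }
}
```

If the working set or private bytes are large while the managed heap is small, the growth is not in your .net objects, and no amount of GC.Collect will bring it down.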
Now, if it is # Bytes in all Heaps that is increasing, there are two reasons why the objects wouldn't be released:
1. you have a reference to them (in that case, go through some of my memory investigations to debug it further), or
2. the GC hasn't run yet and collected them.
In case #1 a GC.Collect wouldn't help, and in case #2 you still wouldn't need to do a GC.Collect, because if you were to allocate more memory so that you needed the memory currently used by these objects, a GC would run and collect them anyway...
Note: If there are no allocations, the GC will not run, so if you create a large report and use up 150 MB and then let the process idle, the memory will not get released as mentioned in the article...
Basically, it's hardly ever a good thing to run GC.Collect, because you end up messing with the GC's optimizations, and in 99.99% of cases you don't actually need it.
On the other hand, an application that uses 80 MB to create a report (150 - 70) doesn't seem like a very scalable application. It sounds like you might want to look into why it needs that much memory for that operation...