.NET Memory Allocation Profiling with Visual Studio 2012


This post was written by Stephen Toub, a frequent contributor to the Parallel Programming in .NET blog. He shows us how Visual Studio 2012 and an attention to detail can help you discover unnecessary allocations in your app that can prevent it from achieving higher performance.

Visual Studio 2012 has a wealth of valuable functionality, so much so that I periodically hear developers that already use Visual Studio asking for a feature the IDE already has and that they’ve just never discovered. Other times, I hear developers asking about a specific feature, thinking it’s meant for one purpose, not realizing it’s really intended for another.

Both of these cases apply to Visual Studio’s .NET memory allocation profiler. Many developers that could benefit from it don’t know it exists, and other developers have an incorrect expectation for its purpose. This is unfortunate, as the feature can provide a lot of value for particular scenarios; many developers will benefit from understanding first that it exists, and second the intended scenarios for its use.

Why memory profiling?

When it comes to .NET and memory analysis, there are two primary reasons one would want to use a diagnostics tool:

  1. To discover memory leaks. Leaks on a garbage-collecting runtime like the CLR manifest differently than leaks in a non-garbage-collected environment, such as code written in C/C++. A leak in the latter typically occurs when the developer fails to manually free memory that was previously allocated. In a garbage-collected environment, manually freeing memory isn’t required; that’s the duty of the garbage collector (GC). However, the GC can only release memory that is provably no longer in use, meaning memory to which no rooted references remain. Leaks in .NET code manifest when memory that should have been collected is incorrectly still rooted, e.g. a reference to the object remains in an event handler registered with a static event. A good memory analysis tool might help you find such leaks, for example by allowing you to take snapshots of the process at two different points in time and then comparing them to see which objects were still alive at the second point, and more importantly, why.
  2. To discover unnecessary allocations. In .NET, allocation is often quite cheap. That cost is deceptive, however, as there are additional costs later when the GC needs to clean up. The more memory that gets allocated, the more frequently the GC will need to run, and the more objects that survive collections, the more work the GC must do to determine which objects are no longer reachable. Thus, the more allocations a program performs, the higher the GC costs will be. These GC costs are often negligible in a program’s performance profile, but for certain kinds of apps, especially server apps that require high throughput, these costs can add up quickly and make a noticeable impact on the app’s performance. As such, a good memory analysis tool might help you understand all of the allocation being done by the program, in order to help spot allocations you can potentially avoid.
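To see allocation volume translate into GC work directly, one quick check (no profiler required) is to compare GC.CollectionCount before and after an allocation-heavy loop. This is a minimal sketch, not code from the post; GcPressureDemo is a name introduced here for illustration:

```csharp
using System;

class GcPressureDemo
{
    static void Main()
    {
        int gen0Before = GC.CollectionCount(0);

        // Allocate a million small objects; each boxing operation allocates on the heap.
        for (int i = 0; i < 1000000; i++)
        {
            object boxed = i;
            GC.KeepAlive(boxed);
        }

        int gen0After = GC.CollectionCount(0);

        // The delta shows how many gen-0 collections the loop forced.
        Console.WriteLine("Gen0 collections triggered: " + (gen0After - gen0Before));
    }
}
```

On a typical workstation GC configuration the delta will be non-zero, which is exactly the kind of pressure the profiler helps you trace back to specific call sites.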

The .NET memory profiler included in Visual Studio 2012 (Professional and higher versions) was designed primarily to address the latter case of helping to discover unnecessary allocations, and it’s quite useful towards that goal, as the rest of this post will explore. The tool is not tuned for the former case of finding and fixing memory leaks, though this is an area the Visual Studio diagnostics team is looking to address in depth for the future (you can see such an experience for JavaScript that was added to Visual Studio as part of VS2012.1). While the tool today does have an advanced option to track when objects are collected, it doesn’t help you to understand why objects weren’t collected or why they were held onto longer than was expected.

There are also other useful tools in this space. The downloadable PerfView tool doesn’t provide as user-friendly an interface as does the .NET memory profiler in Visual Studio 2012, but it is a very powerful tool that supports both tasks of finding memory leaks and discovering unnecessary allocations. It also supports profiling Windows Store apps, which the .NET memory allocation profiler in Visual Studio 2012 does not currently support as of the writing of this post.

Example to be optimized

To better understand the memory profiler’s role and how it can help, let’s walk through an example. We’ll start with the core method that we’ll be looking to optimize (in a real-world case, you’d likely be analyzing your whole application and narrowing in on the particular offending areas, but for the purpose of this example, we’ll keep this constrained):

public static async Task<T> WithCancellation1<T>(
    this Task<T> task, CancellationToken cancellationToken)
{
    var tcs = new TaskCompletionSource<bool>();
    using (cancellationToken.Register(() => tcs.TrySetResult(true)))
    {
        if (task != await Task.WhenAny(task, tcs.Task))
            throw new OperationCanceledException(cancellationToken);
    }
    return await task;
}

The purpose of this small method is to enable code to await a task in a cancelable manner, meaning that regardless of whether the task has completed, the developer wants to be able to stop waiting for it. Instead of writing code like:

T result = await someTask;

the developer would write:

T result = await someTask.WithCancellation1(token);

and if cancellation is requested on the relevant CancellationToken before the task completes, an OperationCanceledException will be thrown. This is achieved in WithCancellation1 by wrapping the original task in an async method. The method creates a second task that will complete when cancellation is requested (by Registering a call to TrySetResult with the CancellationToken), and then uses Task.WhenAny to wait for either the original task or the cancellation task to complete. As soon as either does, the async method completes, either by throwing a cancellation exception if the cancellation task completed first, or by propagating the outcome of the original task by awaiting it. (For more details, see the blog post “How do I cancel non-cancelable async operations?”)
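To make that behavior concrete, here is a small standalone demo (a sketch introduced here, not from the original post) that pairs the WithCancellation1 implementation above with a task that never completes, showing the wait being abandoned when the token is canceled:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class Demo
{
    // The WithCancellation1 implementation from the post.
    public static async Task<T> WithCancellation1<T>(
        this Task<T> task, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<bool>();
        using (cancellationToken.Register(() => tcs.TrySetResult(true)))
        {
            if (task != await Task.WhenAny(task, tcs.Task))
                throw new OperationCanceledException(cancellationToken);
        }
        return await task;
    }

    static void Main()
    {
        // A task that will never complete on its own.
        Task<int> neverCompletes = new TaskCompletionSource<int>().Task;

        // Request cancellation after 100ms.
        var cts = new CancellationTokenSource(100);

        try
        {
            // Blocks until the wait is canceled; .Result wraps the
            // OperationCanceledException in an AggregateException.
            int result = neverCompletes.WithCancellation1(cts.Token).Result;
        }
        catch (AggregateException ae)
        {
            if (ae.InnerException is OperationCanceledException)
                Console.WriteLine("Wait was canceled, as expected.");
        }
    }
}
```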

To understand the allocations involved in this method, we’ll use a small harness method:

 

using System;
using System.Threading;
using System.Threading.Tasks;
 
class Harness
{
    static void Main()
    {
        Console.ReadLine(); // wait until the profiler attaches
        TestAsync().Wait();
    }

    static async Task TestAsync()
    {
        var token = CancellationToken.None;
        for (int i = 0; i < 100000; i++)
            await Task.FromResult(42).WithCancellation1(token);
    }
}
 

static class Extensions
{
    public static async Task<T> WithCancellation1<T>(
        this Task<T> task, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<bool>();
        using (cancellationToken.Register(() => tcs.TrySetResult(true)))
        {
            if (task != await Task.WhenAny(task, tcs.Task))
                throw new OperationCanceledException(cancellationToken);
        }
        return await task;
    }
}

The TestAsync method will iterate 100,000 times. Each time, it creates a new task, invokes WithCancellation1 on it, and awaits the result of that WithCancellation1 call. This await will complete synchronously, as the task created by Task.FromResult is returned in an already completed state, and the WithCancellation1 method itself doesn’t introduce any additional asynchrony, so the task it returns completes synchronously as well.
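That claim of synchronous completion is easy to verify: a task produced by Task.FromResult is already in the RanToCompletion state the moment it’s returned. A quick check (not from the post):

```csharp
using System;
using System.Threading.Tasks;

class FromResultCheck
{
    static void Main()
    {
        Task<int> t = Task.FromResult(42);

        // FromResult hands back a task that is already finished...
        Console.WriteLine(t.Status);      // RanToCompletion
        Console.WriteLine(t.IsCompleted); // True

        // ...so awaiting it never yields; the continuation runs inline.
    }
}
```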

Running the .NET memory allocation profiler

To start the memory allocation profiler, in Visual Studio go to the Analyze menu and select “Launch Performance Wizard…”. This will open a dialog like the following:

[Screenshot: Performance Wizard dialog]

Choose “.NET memory allocation (sampling)”, click Next twice, followed by Finish (if this is the first time you’ve used the profiler since you logged into Windows, you’ll need to accept the elevation prompt so the profiler can start). At that point, the application will be launched and the profiler will start monitoring it for allocations (the harness code above also requires that you press ‘Enter’, in order to ensure the profiler has attached by the time the program starts the real test). When the app completes, or when you manually choose to stop profiling, the profiler will load symbols and will start analyzing the trace. That’s a good time to go and get yourself a cup of coffee, or lunch, as depending on how many allocations occurred, the tool can take a while to do this analysis.

When the analysis completes, we’re presented with a summary of the allocations that occurred, including highlighting the functions that allocated the most memory, the types that resulted in the most memory allocated, and the types with the most instances allocated:

[Screenshot: profiling report Summary view]

From there, we can drill in further, by looking at the allocations summary (choose “Allocation” from the “Current View” dropdown):

[Screenshot: Allocation view, one row per allocated type]

Here, we get to see a row for each type that was allocated, with the columns showing information about how many allocations were tracked, how much space was associated with those allocations, and what percentage of allocations mapped back to that type. We can also expand an entry to see the stack of method calls that resulted in these allocations:

[Screenshot: an allocation entry expanded to show its call stack]

By selecting the “Functions” view, we can get a different pivot on this data, highlighting which functions allocated the most objects and bytes:

[Screenshot: Functions view]

Interpreting and acting on the profiling results

With this capability, we can analyze our example’s results. First, we can see that there’s a substantial number of allocations here, which might be surprising. After all, in our example we were using WithCancellation1 with a task that was already completed, which means there should have been very little work to do (with the task already done, there is nothing to cancel), and yet from the above trace we can see that each iteration of our example is resulting in:

  • Three allocations of Task`1 (we ran the harness 100K times and can see there were ~300K allocations)
  • Two allocations of Task[]
  • One allocation each of TaskCompletionSource`1, Action, a compiler-generated type called <>c__DisplayClass2`1, and some type called CompleteOnInvokePromise

That’s nine allocations for a case where we might expect only one (the task allocation we explicitly asked for in the harness by calling Task.FromResult), with our WithCancellation1 method incurring eight allocations.

For helper operations on tasks, it’s actually fairly common to deal with already completed tasks, as operations implemented to be asynchronous often complete synchronously (e.g. one read operation on a network stream may buffer enough additional data into memory to fulfill a subsequent read operation). As such, optimizing for the already-completed case can be really beneficial for performance. Let’s try. Here’s a second attempt at WithCancellation, one that optimizes for several “already completed” cases:

public static Task<T> WithCancellation2<T>(
    this Task<T> task, CancellationToken cancellationToken)
{
    if (task.IsCompleted || !cancellationToken.CanBeCanceled)
        return task;
    else if (cancellationToken.IsCancellationRequested)
        return new Task<T>(() => default(T), cancellationToken);
    else
        return task.WithCancellation1(cancellationToken);
}

This implementation checks:

  • First, whether the task is already completed or whether the supplied CancellationToken can’t be canceled; in both of those cases, there’s no additional work needed, as cancellation can’t be applied, and as such we can just return the original task immediately rather than spending any time or memory creating a new one.
  • Then whether cancellation has already been requested; if it has, we can allocate a single already-canceled task to be returned, rather than spending the eight allocations we previously paid to invoke our original implementation.
  • Finally, if none of these fast paths apply, we fall through to calling the original implementation.
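As an aside on the second bullet: the code above relies on the Task constructor transitioning a task to the Canceled state when handed an already-canceled token. Another way to materialize an already-canceled task is via a TaskCompletionSource. This is a sketch under that assumption; GetCanceledTask is a name introduced here for illustration, not from the post:

```csharp
using System.Threading.Tasks;

static class CanceledTaskHelper
{
    // Produces a Task<T> that is already in the Canceled state.
    public static Task<T> GetCanceledTask<T>()
    {
        var tcs = new TaskCompletionSource<T>();
        tcs.TrySetCanceled(); // transitions the underlying task to Canceled immediately
        return tcs.Task;
    }
}
```

Awaiting the returned task throws a TaskCanceledException (a subclass of OperationCanceledException), so callers observe the same cancellation behavior.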

Re-profiling our micro-benchmark while using WithCancellation2 instead of WithCancellation1 provides a much improved outlook (you’ll likely notice that the analysis completes much more quickly than it did before, already a sign that we’ve significantly decreased memory allocation). Now we just have the primary allocation we expected, the one from Task.FromResult called from our TestAsync method in the harness:

[Screenshot: profiling results for WithCancellation2, showing only the Task.FromResult allocation]

So, we’ve now successfully optimized the case where the task is already completed, where cancellation can’t be requested, or where cancellation has already been requested. What about the case where we do actually need to invoke the more complicated logic? Are there any improvements that can be made there?

Let’s change our benchmark to use a task that’s not already completed by the time we invoke WithCancellation2, and also to use a token that can have cancellation requested. This will ensure we make it to the “slow” path:

    static async Task TestAsync()
    {
        var token = new CancellationTokenSource().Token;
        for (int i = 0; i < 100000; i++)
        {
            var tcs = new TaskCompletionSource<int>();
            var t = tcs.Task.WithCancellation2(token);
            tcs.SetResult(42);
            await t;
        }
    }

Profiling again provides more insight:

[Screenshot: profiling results for the slow path]

On this slow path, there are now 14 total allocations per iteration, including the 2 from our TestAsync harness (the TaskCompletionSource<int> we explicitly create, and the Task<int> it creates). At this point, we can use all of the information provided by the profiling results to understand where the remaining 12 allocations are coming from and then address them where relevant and possible. For example, let’s look at two allocations specifically: the <>c__DisplayClass2`1 instance and one of the two Action instances. These two allocations will be familiar to anyone who knows how the C# compiler handles closures. Why do we have a closure? Because of this line:

using(cancellationToken.Register(() => tcs.TrySetResult(true)))

The call to Register is closing over the ‘tcs’ variable. But this isn’t strictly necessary: the Register method has another overload which instead of taking an Action takes an Action<object> and the object state to be passed to it. If we instead rewrite this line to use that state-based overload, along with a manually cached delegate, we can avoid the closure and those two allocations:

private static readonly Action<object> s_cancellationRegistration =
    s => ((TaskCompletionSource<bool>)s).TrySetResult(true);
…
using(cancellationToken.Register(s_cancellationRegistration, tcs))
  

Rerunning the profiler confirms those two allocations are no longer occurring.
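Putting the fast paths and the closure-free registration together, the combined method might look like the following. This is a sketch introduced here for illustration (WithCancellation3 and WithCancellationSlow are names not from the post), assuming, as the earlier code does, that a Task constructed with an already-canceled token transitions to the Canceled state:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class TaskExtensions
{
    // Cached delegate: avoids allocating a new Action<object> per call.
    private static readonly Action<object> s_cancellationRegistration =
        s => ((TaskCompletionSource<bool>)s).TrySetResult(true);

    public static Task<T> WithCancellation3<T>(
        this Task<T> task, CancellationToken cancellationToken)
    {
        if (task.IsCompleted || !cancellationToken.CanBeCanceled)
            return task; // nothing to cancel: hand back the original task
        else if (cancellationToken.IsCancellationRequested)
            return new Task<T>(() => default(T), cancellationToken); // already canceled
        else
            return WithCancellationSlow(task, cancellationToken);
    }

    private static async Task<T> WithCancellationSlow<T>(
        Task<T> task, CancellationToken cancellationToken)
    {
        var tcs = new TaskCompletionSource<bool>();
        // State-based Register overload: no closure, no extra Action allocation.
        using (cancellationToken.Register(s_cancellationRegistration, tcs))
        {
            if (task != await Task.WhenAny(task, tcs.Task))
                throw new OperationCanceledException(cancellationToken);
        }
        return await task;
    }
}
```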

Start profiling today!

This cycle of profiling, finding and eliminating hotspots, and then going around again is a common approach towards improving the performance of your code, whether using a CPU profiler or a memory profiler. So, if you find yourself in a scenario where you determine that minimizing allocations is important for the performance of your code, give the .NET memory allocation profiler in Visual Studio 2012 a try. Feel free to download the sample project used in this blog post.

For more on profiling, see the blog of the Visual Studio Diagnostics team, and ask them questions in the Visual Studio Diagnostics forum.

Stephen Toub

Comments
  • This is something which now ,I can ask my customer to get us VS2012.

  • Cool. Although I'd "discovered" the .NET memory allocation profiler and played around with it a bit, I soon felt lost in the profiling results, and really didn't know where to begin. Now I understand. Stephen, you have a gift for making the complicated easy. Thanks for a great post.

    Btw, there is a tiny typo in the section on interpreting and acting on the profiler results. After the declaration of WithCancellation2, the second bullet point is missing its first character, and reads as:

    hen whether cancellation has already been requested; if it has, we can allocate a single already-canceled task to be returned, rather than spending the eight allocations we previously paid to invoke our original implementation.

  • Can we get a method like System.Diagnostics.Debug.HasReferences(object abc) to return true if an object still has an event handler referring to it?  This would greatly help in designing classes that clean up all of their member variable event handlers, such as collection changed handlers on an ObservableCollection member variable.  

    We would use the diagnostic method to verify that our member variables are un-hooked correctly when closing an XAML based dialogue, for example.

  • @santosh poojari: Great :)

    @Jerome: Thanks for the nice words; I'm glad this helped.  And thanks for pointing out the typo (the first word should be "Then"); we'll get this fixed in the post.

    @Ted: I'll pass along your request.  In the meantime, you could try using something like this to see if there are any other references still keeping the object alive:

    static bool HasReference(ref object obj)
    {
        var wr = new WeakReference(obj);
        obj = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return wr.IsAlive;
    }

  • @Ted: You can use WMemoryProfiler in your unit tests to achieve it in a more indirect way. You can take a snapshot of the managed heap and check later whether your expected objects are gone (i.e. are no longer referenced).

    This way you can check not only for one object but for all of them in e.g. your namespace to look out for strange artefacts.

    @Stephen: I have used the Visual Studio Memory Profiler but compared to commercial ones it is lacking a ton of features. E.g. the SciTech .NET Memory Profiler can examine memory dumps to analyze the managed heap which is very cool for tracking down issues on customer machines.

    You should also mention that PerfView from Vance Morrison can do similar stuff with a not so pretty UI but it can do lots of stuff to already running processes which were not started under a profiler at all.

  • @Alois Kraus: Yes, thanks, I actually did mention PerfView in the post (see the last paragraph in the "Why memory profiling?" section).

  • visual studio 2012 is awesome.

  • You are mentioning "such as by allowing you to take snapshots of the process at two different points and then comparing those snapshots to see which objects stuck around for the second point, and more importantly"

    How can I make snapshots? Is this ability not included in the Professional edition of 2012 and 2013?

  • @Vinculum: That paragraph you reference wasn't about any particular tool, it was simply stating the kinds of features developers typically look for in such profiling tools.  However, the memory analysis tool in Visual Studio 2013 does allow you to compare snapshots... see Andrew Hall's posts at blogs.msdn.com/.../using-visual-studio-2013-to-diagnose-net-memory-issues-in-production.aspx and blogs.msdn.com/.../net-memory-analysis-enhancements-in-visual-studio-2013.aspx for more info.

  • get a cup of coffee ?? I have waited 12 hours and after that the status bar shows 1/4, so will take 2 days!

  • PS and created an 80 Gig File

  • @Marcus: the memory profiler records and analyzes every single allocation in your application, and this turns out to be a significant amount of data to collect and process. For this reason it is best to use it for a short period of time on isolated workloads (such as the benchmark shown in this blog post), otherwise you will get large files and it will take a very long time to process.

    To collect a smaller set of data, you can start the memory profiler paused (Analyze -> Profiler -> Start with Profiling Paused) and then start profiling once your application is at the point that you want to analyze. We are also looking into improving the performance for future releases.

  • When I perform the .NET memory allocation profiling I will get a different output. I won't get the three tables with functions allocating most memory, Types With Most Memory Allocated and Types With Most Instances. I will get two tables with "Hot Path" and "Functions Doing Most Individual Work". Do you know why ?

  • @David: it sounds like you are looking at a CPU Sampling report. Did you forget to change the default from "CPU Sampling" to ".NET Memory Allocation" on the first page of the performance wizard?

  • I selected the correct method in the Wizard (.NET memory allocation. What I am doing is starting my app by doubleclicking the exe in the bin, only then I start the Wizard and at the final step I uncheck the "Launch profiling after wizard finishes". After finishing the Wizard I open the Performance Explorer and attach the process I want to profile (my app) and the profiling will start. If you follow this procedure the output of the .NET memory allocation profiling is the same as the CPU sampling. I also tried with your sample application and it also behave this way.
