The Microsoft Dynamics CRM Blog
News and views from the Microsoft Dynamics CRM Team

Best Practices for CRM Memory Usage



Are you trying to improve the memory usage of your .NET application? I’ve spent some time recently improving the memory usage of our .NET code. In this blog, I would like to share techniques for debugging memory issues, as well as coding practices that help.

The .NET Framework does a great job of hiding the complexity of memory management from application developers. However, you will find that you need a deep understanding of how the .NET garbage collector (GC) works in order to understand and improve your application’s memory usage.

There are many excellent articles and resources on the web that you can use to learn about troubleshooting .NET GC issues. I’ve included a few links that I found helpful at the end of the blog.

Overview

The articles listed at the end of the blog do a great job of explaining how the GC works and how to diagnose memory issues. I’d like to provide a quick overview of the process here and encourage you to read the reference articles for more details.

You will need to understand the types of objects being instantiated by your application. You can gather this data by adding your own instrumentation to your application, through WinDbg/SOS debugger commands, or by using profiler tools available from various vendors.

You will also need to understand what kinds of GC heap allocations your application is causing. Short-lived objects use the Gen 0 heap, long-lived objects such as caches use the Gen 2 heap, and large objects (85,000 bytes or larger) use the large object heap (LOH). Excessive use of the Gen 2 and LOH heaps is typically bad for GC performance, since these heaps are garbage collected relatively infrequently.
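As a quick illustration of where allocations land, here is a minimal sketch (the class and array sizes are illustrative, not from the original post). `GC.GetGeneration` reports which generation an object currently lives in; objects of 85,000 bytes or more bypass the small-object heaps and are collected along with Gen 2:

```csharp
using System;

class GenerationDemo
{
    static void Main()
    {
        // A freshly allocated small object starts life in Gen 0.
        var small = new byte[100];
        Console.WriteLine(GC.GetGeneration(small));   // typically 0

        // Objects of 85,000 bytes or more go straight to the LOH,
        // which is reported as (and collected with) the highest generation.
        var large = new byte[100000];
        Console.WriteLine(GC.GetGeneration(large));
    }
}
```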

The above data points will in turn help identify where you can focus your optimization efforts.

Best Practices to follow

Here, I’d like to talk a little bit about coding practices and patterns that we found helpful in improving our memory usage. The exact bottlenecks in your application might be different, but the following practices are broadly useful.

1) Dispose objects - Improving Managed Code Performance has additional details on the Dispose/Finalize pattern. If your code creates an instance of a disposable type (one that implements IDisposable), ensure your code disposes the object (either by wrapping its usage inside a using statement, or by calling Dispose explicitly in a finally block).

Disposing objects correctly gives you a lot of benefits including:

     a. Ensuring scarce unmanaged resources such as SQL connections, handles, etc. are released for reuse as early as possible.

     b. Preventing long buildups in the finalizer queue (which is processed by a single thread), which can improve GC performance.
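Both patterns above can be sketched as follows. The `Resource` class here is a hypothetical stand-in for any IDisposable type (a SqlConnection, a file stream, and so on), added purely for illustration:

```csharp
using System;

// Hypothetical stand-in for any type that holds a scarce resource
// (SQL connection, OS handle, etc.).
class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() { Disposed = true; }
}

class DisposeDemo
{
    // Preferred: the using statement calls Dispose even if the block throws.
    public static Resource WithUsing()
    {
        var r = new Resource();
        using (r)
        {
            // ... use the resource ...
        }
        return r;
    }

    // Equivalent explicit pattern: Dispose in a finally block.
    public static Resource WithFinally()
    {
        Resource r = null;
        try
        {
            r = new Resource();
            // ... use the resource ...
        }
        finally
        {
            if (r != null) r.Dispose();
        }
        return r;
    }
}
```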

2) Use the StringBuilder class – If you have code that concatenates dynamically generated strings (for example, when building XML input/output), use the StringBuilder class rather than direct string concatenation. In addition to reducing the number of allocations, it will also reduce the number of large object allocations (which are very expensive) if you are dealing with large XML strings, for example.
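A minimal sketch of the XML-building case described above (the method and element names are illustrative). Each `+` on strings allocates a new string, while StringBuilder grows a single internal buffer:

```csharp
using System.Text;

class StringBuilderDemo
{
    // Concatenating with + inside the loop would allocate a new,
    // progressively larger string on every iteration; StringBuilder
    // appends into one reusable buffer instead.
    public static string BuildXml(string[] values)
    {
        var sb = new StringBuilder();
        sb.Append("<items>");
        foreach (var v in values)
        {
            sb.Append("<item>").Append(v).Append("</item>");
        }
        sb.Append("</items>");
        return sb.ToString();
    }
}
```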

3) Refactor web services into smaller, standalone DLLs – If you have ASMX-style web services ([WebMethod] entry points), it is common to have the code that surfaces the web methods in the same DLL as the underlying implementation/business logic. If you expose a lot of web methods, this assembly can get pretty large.

We noticed that for many incoming web requests, the ASP.NET runtime was allocating large byte[] arrays (rooted under an instance of the System.Security.Policy.Evidence class). The contents of the byte[] matched the contents of our web service assembly (which was pretty big). Under load, this was causing heavy use of the Large Object Heap (LOH) and the associated side effects.

Luckily, there is an easy workaround: refactor the code so that the web service methods are in smaller, standalone DLLs, and separate the implementation into other DLLs. The following article also discusses a similar issue/workaround.

4) Initialize dictionaries/hash tables to reasonable sizes – If your code maintains dictionaries/hash tables with a large number of entries, pay attention to the initial size of the dictionary when you are instantiating it. As you add items into the dictionary, the dictionary has to be resized each time its capacity is exceeded.

The resizing algorithm used by the .NET classes is fairly sophisticated (it doubles the current size and adjusts it to the nearest prime number); however, each resize triggers a copy of the dictionary’s contents. This can get expensive if a resize is triggered on a dictionary with a large number of entries, since it can mean an allocation on the LOH.
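The fix described above can be sketched as follows (the method name and entry count are illustrative). Passing the expected entry count to the `Dictionary<TKey, TValue>(int capacity)` constructor avoids the resize-and-copy cycles that would otherwise occur while filling it:

```csharp
using System.Collections.Generic;

class DictionaryDemo
{
    // Sizing the dictionary up front avoids repeated resize-and-copy
    // cycles as entries are added.
    public static Dictionary<int, string> LoadLookup(int expectedCount)
    {
        var lookup = new Dictionary<int, string>(expectedCount);
        for (int i = 0; i < expectedCount; i++)
        {
            lookup[i] = "value" + i;
        }
        return lookup;
    }
}
```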

5) XmlSerializer usage - If your code uses the XmlSerializer class, be very careful about which constructors you use to instantiate instances. Some constructors result in a new temporary assembly being generated for each instance, and you will easily run into out-of-memory situations due to the assembly leak. The MSDN documentation for XmlSerializer has more information on this. You can work around this by maintaining your own cache of the XmlSerializer instances.
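The caching workaround mentioned above might be sketched like this. Per the MSDN documentation, only the `XmlSerializer(Type)` and `XmlSerializer(Type, String)` constructors cache their generated assemblies internally; this hypothetical `SerializerCache` class caches instances built with one of the leaky constructors:

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

static class SerializerCache
{
    private static readonly Dictionary<string, XmlSerializer> cache =
        new Dictionary<string, XmlSerializer>();
    private static readonly object sync = new object();

    // The XmlSerializer(Type, XmlRootAttribute) constructor generates a
    // new temporary assembly per call, so we create each serializer once
    // and reuse it.
    public static XmlSerializer Get(Type type, XmlRootAttribute root)
    {
        string key = type.FullName + ":" + root.ElementName;
        lock (sync)
        {
            XmlSerializer serializer;
            if (!cache.TryGetValue(key, out serializer))
            {
                serializer = new XmlSerializer(type, root);
                cache.Add(key, serializer);
            }
            return serializer;
        }
    }
}
```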

If you have a favorite tip or technique related to optimizing .net memory usage, I’d welcome you to post a comment.

Useful Articles about .Net GC

Cheers,

Jagan Peri
