I got an interesting question from our Support team today and thought I’d blog the answer since it identified an area that can easily cause some confusion. In the BOL documentation for CREATE EVENT SESSION the description for the MAX_MEMORY session option states the following:

“Specifies the maximum amount of memory to allocate to the session for event buffering. The default is 4 MB. size is a whole number and can be a kilobyte (kb) or a megabyte (MB) value.” [Underline is mine.]
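For reference, the option appears in the WITH clause of the session DDL. A minimal sketch (the session name, event, and target here are illustrative only):

```sql
-- Hypothetical session: MAX_MEMORY caps the event buffer at 8 MB.
CREATE EVENT SESSION [buffer_demo] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY = 8 MB);   -- can also be given in KB
```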

An observant Extended Events user noticed that their event session appeared to be using more memory than the value specified for MAX_MEMORY, and that the amount over MAX_MEMORY could be quite large. So what’s up with that?

Buffer memory versus session memory

You’ve probably already guessed that I underscored the words “event buffering” in the BOL description because it was important. To better understand the importance, you need a crash course in how the asynchronous targets in Extended Events work. This is the real short version…

When an event that is being tracked by an event session fires, and that session includes asynchronous targets, the event data is written to an in-memory event buffer. The event data sits in that event buffer until it is processed to the targets. The event buffer is processed based on two options, MAX_MEMORY and MAX_DISPATCH_LATENCY. I’ve discussed these options in more detail in a previous post.
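You can see the buffer allocation that MAX_MEMORY produced for a running session in the sys.dm_xe_sessions DMV:

```sql
-- Event buffer memory per running session: buffer count x buffer size.
SELECT name,
       total_regular_buffers,
       regular_buffer_size,
       total_buffer_size      -- total bytes allocated for event buffering
FROM sys.dm_xe_sessions;
```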

So the MAX_MEMORY option of the session is really only specifying the amount of memory to allocate for the event buffer, not for the entire event session. The amount of memory above MAX_MEMORY required for an event session is primarily related to the asynchronous in-memory targets (e.g. ring buffer, pairing, etc.). Most of the time it is fairly obvious how these targets will contribute to memory allocation; for example, the ring buffer has its own MAX_MEMORY option to define the size of the memory allocated for use by the ring buffer. This is separate from the memory used for the event buffer.
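To make the distinction concrete, here is a sketch of a session where the two settings appear side by side; the target’s max_memory (in KB) and the session’s MAX_MEMORY are independent allocations (session and event names are illustrative):

```sql
CREATE EVENT SESSION [ring_demo] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer
    (SET max_memory = 4096)   -- ring buffer's own memory, in KB
WITH (MAX_MEMORY = 4 MB);     -- event buffer only
```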

So what’s the catch?

The most interesting in-memory target in terms of giving you “unexpected” behavior (but I’m telling you about it now, so it won’t be unexpected anymore) is the pairing target. The pairing target handles memory like this:

  1. The event buffer is processed to the pairing target.
  2. The target finds all the “pairs” and throws them out.
  3. The target allocates memory for the “orphans” and stores them.
  4. The event buffer is free to collect more event data.
  5. Lather, Rinse, Repeat.
  6. The next time around the target will attempt to match any orphans that it has stored with the new data from the event buffer (and throw out the pairs) and then allocate memory to store the new orphans along with any remaining orphans.

If you happen to have a large number of orphans being generated, you’ll find that the pairing target increases in size each time the event buffer is processed. If you’ve picked events that don’t pair well, then the pairing target can start eating up a lot of memory. The pairing target does have an option, RESPOND_TO_MEMORY_PRESSURE, that will force it to stop collecting orphan events when under memory pressure, but if you have plenty of memory the target will happily use it.
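As a sketch, that option is set on the target itself (the events chosen here pair cleanly and are illustrative only; see the pairing target’s documented options for the full list):

```sql
CREATE EVENT SESSION [pairing_demo] ON SERVER
ADD EVENT sqlserver.sql_statement_starting,
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.pairing
    (SET begin_event = 'sqlserver.sql_statement_starting',
         end_event   = 'sqlserver.sql_statement_completed',
         respond_to_memory_pressure = 1)  -- drop orphans under memory pressure
WITH (MAX_MEMORY = 4 MB);
```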

What not to do

The canonical example of events that people would like to pair but that “don’t pair well” is the lock_acquired / lock_released pair. These events are just begging to be paired, but it’s common for the SQL engine to release multiple, related locks as a single event. This results in a situation where multiple lock_acquired events are fired but only a single lock_released event is fired, so you end up with an increasing number of lock_acquired events being tracked in the pairing target that don’t represent legitimate lock orphans.

The typical reason for wanting to do this type of pairing is to identify blocking, but in this case it is better to use the sys.dm_tran_locks DMV to track blocking.
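A minimal blocking check with that DMV might look like this (a request_status of WAIT means the request is blocked):

```sql
-- Lock requests currently waiting, i.e. blocked.
SELECT request_session_id,
       resource_type,
       resource_database_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT';
```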

Hopefully this clarifies the MAX_MEMORY option and what it is actually controlling.

-Mike