SQL Server Performance Assessment – What are my top 20 expensive batches

Previously, here, I discussed how to go about collecting data when you are experiencing a SQL Server performance problem. A performance problem can intimidate people, as they are often not sure where to start. This post details one of my preferred ways to get a quick overview of the situation.

Let’s assume that, using one of the techniques described there, you have some SQL Profiler data. One of the first things I like to do is aggregate the data into a database, so that I can query it with TSQL, either for an overview of the aggregated problems or to fetch the details of individual batches quickly and easily.

My current preferred method of doing this is to use the readtrace tool that installs as part of RML Utilities. You might choose to run readtrace.exe from the command line (as I do), or you might prefer to also install SQLNexus and set the options there to have it run readtrace for you. Either way, at the end of it you’ll be left with a database that contains the readtrace schema.
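
If you go down the command line route, a minimal invocation looks something like the following. The trace file path, server, and database names are placeholders for illustration; readtrace supports plenty of other switches, which are covered in the RML Utilities documentation.

    readtrace -IC:\traces\MyTrace.trc -SMyServer -dPerfAnalysis -oC:\traces\output -E

Here -I is the input trace file, -S and -d name the server and database to load the results into, -o is the output directory for the log files, and -E specifies a trusted connection.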

What I like to discover, first off, is the answer to the question:

“Are there actually SQL batches that are taking a long time to run?”

This may sound obvious but, considering the number of different types of performance problems and the different subsets of data you might want to analyze, it provides an immediate split of the problem into separate areas. With this data you can follow a line of questioning like this:

1. Do I have long running SQL batches?
2. If yes – is it the same batches that are long running all the time?
3. Is it a small number or even a single batch that is long running?
4. Can I recognize a pattern to this behavior? Is it the result of a recent application change?
5. Is this a crazy query being issued manually by a user?

And so on. The point being, as well, that if the answer to the first question is “no”, then maybe you should be looking elsewhere within your application architecture for the problem, or maybe you should be collecting additional or different diagnostics. There’s no hard and fast rule here, but it’s always nice to have some idea of what direction you might wish to proceed in.

Consider the following example. You make a small change to a previously healthy application and suddenly all your users start complaining of unbearably slow performance. You collect the profiler data and run it through readtrace. When you query the readtrace data, you find that a single SQL batch sits at the top of every list of the most expensive batches, in every category. It is a stored procedure which acts as a common entry point for all application functionality. Knowing this, you can work out that every user is experiencing application slowness because this common stored procedure always runs slowly. You can now focus on why this specific piece of code runs slowly and deal with it as necessary.

So anyway, the script is attached to this post. It’s based upon one of the old scripts we used internally before the advent of SQLNexus, amended to suit my requirements. I often use it to email customers summaries of their badly performing code, so it’s designed to have its results output as text. This is also why it only prints the first 100 characters of each batch, but you can change this easily enough if you have long SQL batches.
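
To give you a flavour of what the attached script does, here’s a minimal sketch of a query of this type. It assumes the standard readtrace schema, where ReadTrace.tblBatches holds one row per batch execution and ReadTrace.tblUniqueBatches holds the normalized text of each distinct batch, joined on HashID. The attached script produces similar output ordered by CPU, reads and writes as well.

    -- Minimal sketch: top 20 batches by total duration, assuming the
    -- standard readtrace schema. Duration units depend on your
    -- readtrace version, but the relative ordering is what matters.
    SELECT TOP 20
           LEFT(CONVERT(nvarchar(200), u.NormText), 100) AS NormalizedBatch,
           agg.Executions,
           agg.TotalDuration,
           agg.AvgDuration,
           agg.TotalCPU,
           agg.TotalReads
    FROM (
            SELECT HashID,
                   COUNT(*)      AS Executions,
                   SUM(Duration) AS TotalDuration,
                   AVG(Duration) AS AvgDuration,
                   SUM(CPU)      AS TotalCPU,
                   SUM(Reads)    AS TotalReads
            FROM   ReadTrace.tblBatches
            GROUP  BY HashID
         ) AS agg
    JOIN   ReadTrace.tblUniqueBatches u
           ON u.HashID = agg.HashID
    ORDER  BY agg.TotalDuration DESC
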
You might note that it only does batches, not statements. This is because, when I have a random performance problem, I don’t tend to collect statement level data in profiler in the first instance; it’s so much more expensive to collect that it can sometimes exaggerate the situation. I work on the basis of taking a summary collection first and then going back for more targeted data if need be. readtrace does know how to analyze statement level data if you collect it (in fact, it warns you if you don’t), so you can always write your own version for statement level data by following this principle and looking at the readtrace schema, which comes with the documentation.
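
For what it’s worth, the statement level equivalent follows exactly the same shape. The table names below are my reading of the readtrace schema (ReadTrace.tblStatements and ReadTrace.tblUniqueStatements), so do verify them against the schema documentation that ships with your version.

    -- Statement level sketch, same shape as the batch query above.
    -- Table and column names should be verified against the readtrace
    -- schema documentation for your version.
    SELECT TOP 20
           LEFT(CONVERT(nvarchar(200), u.NormText), 100) AS NormalizedStatement,
           agg.Executions,
           agg.TotalDuration
    FROM (
            SELECT HashID,
                   COUNT(*)      AS Executions,
                   SUM(Duration) AS TotalDuration
            FROM   ReadTrace.tblStatements
            GROUP  BY HashID
         ) AS agg
    JOIN   ReadTrace.tblUniqueStatements u
           ON u.HashID = agg.HashID
    ORDER  BY agg.TotalDuration DESC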

Finally, you might ask: why not do this with DMVs and look at the data in real time? Well, you can, and sometimes I do, but that’s not what this post is about! There are both advantages and disadvantages to that method, and I do have an archive of DMV queries ready to run should I ever do online troubleshooting of this nature. They’re the type of queries you can also see if you download the performance dashboard reports. But as I say, I really like readtrace because, once you have the data in this format, you can run your stock queries like the one attached here, and if you find something out of the ordinary, you can write something custom to analyze the data in more detail.
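
For completeness, a DMV query in this vein uses sys.dm_exec_query_stats with sys.dm_exec_sql_text. Bear in mind that it only covers plans currently in cache, and the counters reset when SQL Server restarts.

    -- Top 20 cached statements by total elapsed time, from the DMVs.
    -- Only sees what is currently in the plan cache.
    SELECT TOP 20
           SUBSTRING(t.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(t.text)
                           ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text,
           qs.execution_count,
           qs.total_elapsed_time,
           qs.total_worker_time   AS total_cpu_time,
           qs.total_logical_reads
    FROM   sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) t
    ORDER  BY qs.total_elapsed_time DESC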

Attachment: top20batches.sql