I do a lot of performance analysis on SQL Server instances, and I normally start with a series of performance counters from both Windows and SQL Server. This lets me narrow down what I need to look at by showing which hardware and Windows components (and SQL Server-specific features, such as locks and blocks) are under the most pressure. It isn't where I stop, but it is where I start.

After I collect the metrics, I put them into (what else) Excel for analysis. I've also done this in a SQL Server database, but Excel gives me charts and lets me ship the results around to other folks, some of whom don't know T-SQL.

In any case, I then measure the MAX, MIN, and in some cases the SUM of these numbers. I also include the AVERAGE, which is what a lot of folks focus on, and it is a good number to look at. If you do this as well, though, remember to include the Standard Deviation calculation alongside it.
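Here's a minimal sketch of those calculations in Python, using the standard `statistics` module. The sample values are made up for illustration (imagine a Batch Requests/sec counter with one spike); they aren't from a real capture.

```python
from statistics import mean, stdev

# Hypothetical counter samples -- the 900 represents a brief spike.
samples = [120, 135, 128, 900, 118, 122, 131, 125]

print(f"MIN:     {min(samples)}")
print(f"MAX:     {max(samples)}")
print(f"SUM:     {sum(samples)}")
print(f"AVERAGE: {mean(samples):.1f}")
print(f"STDEV:   {stdev(samples):.1f}")
```

Note that in this contrived series, the single spike drags the average up to around 222 while most samples sit near 125, and the standard deviation actually exceeds the average, which is exactly the warning sign described next.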

The Standard Deviation is one measurement that shows how much you can trust the Average. If the Standard Deviation is significant (say, almost half of the Average, or even equal to or higher than it), that means the Average isn't really all that average. When I see this, I'll also include the MODE to see whether one number or another is a better guide.
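That check can be sketched in a few lines of Python, again with made-up counter values. The half-of-the-average threshold comes straight from the rule of thumb above; the samples are chosen so one outlier skews the mean while 120 is the most common reading.

```python
from statistics import mean, stdev, mode

# Hypothetical counter samples: mostly ~120-135, one spike at 900.
samples = [120, 135, 128, 900, 118, 122, 131, 125, 120, 120]

avg = mean(samples)
sd = stdev(samples)

# If the standard deviation is at least half the average, the average
# is suspect -- fall back to the mode (the most frequent value).
if sd >= avg / 2:
    print(f"Average {avg:.1f} looks unreliable (stdev {sd:.1f}); "
          f"MODE = {mode(samples)} may be a better guide")
else:
    print(f"Average {avg:.1f} looks trustworthy (stdev {sd:.1f})")
```

In this series the spike pulls the average to roughly 202, while the mode of 120 is much closer to what the system was actually doing most of the time. (Excel's MODE function does the same job on a column of counter values.)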

The point is, analysis isn't just about looking through a few totals and making assumptions; it's about understanding what you're looking at and taking the numbers together to come up with a hypothesis to test.