This week we will show how you can use another simple tool to capture useful data for a performance analysis of SQL Server. This tool is included in the SQL Server installation and it is called the Profiler tool. It is very useful when troubleshooting performance issues, because you can identify which queries are causing high CPU (CPU time) and high IO (reads). Running a trace can add performance overhead if you choose to collect too many events, so I like to keep it simple for a start.

Here is how you can capture a simple and lightweight Profiler trace:

a. Click Start and go to Programs | Microsoft SQL Server 20XX | Performance Tools. Choose “SQL Server Profiler”.
b. Click File | New Trace and connect to the affected SQL Server instance.
c. Give a name to the trace and use the “Standard” template.
d. Check the option “Save to file”. Choose to save the trace to a disk that has adequate free disk space.
e. Enable file rollover and set maximum file size to 1024 MB.
f. On the Events Selection tab, remove the Audit Login and Audit Logout events.
g. Click Run to start collecting events.
h. Profiler will create a new 1 GB trace file each time the current one fills up. If the performance issue takes a long time to appear, you can keep deleting the older trace files to free up disk space.
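
By the way, if you prefer to script steps a through h instead of clicking through the Profiler GUI (for example on a busy production server, where a server-side trace is lighter than a Profiler session because it writes straight to the file), the same trace can be defined with the sp_trace_* stored procedures. Here is a minimal sketch with the same settings as above; the file path is just an example, and I am only collecting the SQL:BatchCompleted and RPC:Completed event classes with the columns used by the analysis queries later in this post:

DECLARE @TraceID int
DECLARE @On bit
DECLARE @MaxFileSize bigint
SET @On = 1
SET @MaxFileSize = 1024   -- in MB, matches step e above

-- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended automatically,
-- so pass the path without it. Point it to a disk with adequate free space.
EXEC sp_trace_create @TraceID OUTPUT, 2, N'c:\mytracefiles\mytrace', @MaxFileSize, NULL

-- SQL:BatchCompleted (event 12) with TextData (1), ApplicationName (10),
-- LoginName (11), SPID (12), Duration (13), StartTime (14), EndTime (15),
-- Reads (16), Writes (17) and CPU (18)
EXEC sp_trace_setevent @TraceID, 12, 1,  @On
EXEC sp_trace_setevent @TraceID, 12, 10, @On
EXEC sp_trace_setevent @TraceID, 12, 11, @On
EXEC sp_trace_setevent @TraceID, 12, 12, @On
EXEC sp_trace_setevent @TraceID, 12, 13, @On
EXEC sp_trace_setevent @TraceID, 12, 14, @On
EXEC sp_trace_setevent @TraceID, 12, 15, @On
EXEC sp_trace_setevent @TraceID, 12, 16, @On
EXEC sp_trace_setevent @TraceID, 12, 17, @On
EXEC sp_trace_setevent @TraceID, 12, 18, @On

-- RPC:Completed (event 10) with the same columns
EXEC sp_trace_setevent @TraceID, 10, 1,  @On
EXEC sp_trace_setevent @TraceID, 10, 10, @On
EXEC sp_trace_setevent @TraceID, 10, 11, @On
EXEC sp_trace_setevent @TraceID, 10, 12, @On
EXEC sp_trace_setevent @TraceID, 10, 13, @On
EXEC sp_trace_setevent @TraceID, 10, 14, @On
EXEC sp_trace_setevent @TraceID, 10, 15, @On
EXEC sp_trace_setevent @TraceID, 10, 16, @On
EXEC sp_trace_setevent @TraceID, 10, 17, @On
EXEC sp_trace_setevent @TraceID, 10, 18, @On

-- Start the trace and note the returned ID: you will need it later
-- to stop the trace (sp_trace_setstatus @TraceID, 0) and to delete its
-- definition from the server (sp_trace_setstatus @TraceID, 2).
EXEC sp_trace_setstatus @TraceID, 1
SELECT @TraceID AS TraceID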

These steps will produce a set of .trc files that share the same base filename, with a rollover number appended (mytrace.trc, mytrace_1.trc, mytrace_2.trc, and so on). They will have captured all the queries that were executed, so you can then open them and check which queries are causing the IO issues.

There are many ways to analyze these files:
1- You can open each of them inside the Profiler tool (this can make Profiler hang if the files are too big or too many)
2- You can load them into the SQLNexus tool for automatic performance analysis: http://sqlnexus.codeplex.com/
3- You can import the files into a table and run ad-hoc queries against them

Methods 1 and 2 are pretty straightforward, so I will show how you can use Method 3 with a homemade script:

-- Import all the trace files/events into the mytable table
-- Passing default as the second argument makes fn_trace_gettable read the
-- initial file plus all of its rollover files (mytrace_1.trc, mytrace_2.trc, ...)
SELECT * INTO mytable
FROM sys.fn_trace_gettable('c:\mytracefiles\mytrace.trc', default)
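
Before running the heavier analysis queries below, I like to do a quick sanity check on the imported table, to confirm how many events were captured and what time window the trace actually covers (mytable is the example table name from the import above):

-- Quick sanity check: event count and the time window the trace covers
select count(*) as Event_Count,
min(starttime) as First_Event,
max(endtime) as Last_Event
from mytable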

-- Find out the top 100 queries with the longest duration
-- Duration is stored in microseconds in the trace file, hence the division
select top 100 textdata as TSQL_Statement,
(duration/1000000) as Duration_in_Seconds, reads as IO_Reads, writes as IO_Writes, cpu as CPU_Milliseconds,
loginname as Login_Name, ApplicationName as Application_Name, starttime as Start_Time, endtime as End_Time
from mytable
order by duration desc

-- Find out the top 100 queries by IO (Reads)
-- Each Read is 8 KB and these are logical Reads, not physical
select top 100 textdata as TSQL_Statement,
(duration/1000000) as Duration_in_Seconds, reads as IO_Reads, writes as IO_Writes, cpu as CPU_Milliseconds,
loginname as Login_Name, ApplicationName as Application_Name, starttime as Start_Time, endtime as End_Time
from mytable
order by reads desc

-- Find out the top 100 queries by CPU time
-- The cpu column is reported in milliseconds
select top 100 textdata as TSQL_Statement,
(duration/1000000) as Duration_in_Seconds, reads as IO_Reads, writes as IO_Writes, cpu as CPU_Milliseconds,
loginname as Login_Name, ApplicationName as Application_Name, starttime as Start_Time, endtime as End_Time
from mytable
order by CPU desc
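
The three queries above rank individual executions. A query that is cheap on its own can still be your biggest problem if it runs thousands of times, so I also like to aggregate by statement text. Here is one way to do it; the CAST is needed because textdata comes back as ntext, which GROUP BY does not accept (note that statements longer than 4000 characters will be grouped by their first 4000 characters):

-- Find out the top 100 queries by cumulative IO across all executions
select top 100 cast(textdata as nvarchar(4000)) as TSQL_Statement,
count(*) as Executions,
sum(duration)/1000000 as Total_Duration_in_Seconds,
sum(reads) as Total_IO_Reads, sum(writes) as Total_IO_Writes, sum(cpu) as Total_CPU_Milliseconds
from mytable
where textdata is not null
group by cast(textdata as nvarchar(4000))
order by sum(reads) desc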

That's it for now! Next week I will start discussing how to troubleshoot more specialized performance issues.