I know, I know, this is the canonical starting-point topic everyone has already covered with the Whidbey profiler. I just needed something easy to psych myself up before launching into the more technical issues we've come across while getting Beta 2 ready, so bear with me and consider this one of those fluffy first chapters that plague most technical literature:

The profiler tool shipping with Whidbey is an amalgamation of two profiling tools that originated in Microsoft Research and have been used inside the company for some time: one a sampling profiler, the other an instrumentation profiler. Now we are shipping a product that includes both modes of profiling in a single tool. To make the right choice about which mode to use, it's important to understand the differences:

  • Sampling takes statistical samples of the running application so that, in the aggregate, you get a feel for overall performance and problem areas. It is "low overhead" because collecting only periodic samples does not interfere as much with the performance of the software under test. Sampling also produces a smaller file once the collected performance data is flushed to disk. (There's a toy sketch of the sampling idea right after this list.)
  • Instrumentation actually modifies the binary, inserting probes at function entry and exit that collect performance data every step of the way while the application is running. (The second sketch after this list shows the entry/exit probe idea.) It is not as low overhead as sampling, because those additional probe instructions (while as fast as we know how to make them) still affect the performance of the running application compared to the non-instrumented version. The thing that really seems to surprise first-time users, though, is the amount of data instrumentation can collect compared to sampling. In some cases users have inadvertently instrumented so much of an application and left it running for so long that they came back the next morning to discover it had consumed the entire hard drive. Instrumentation was overkill in those cases, but used appropriately it can capture a level of detail and exactness that sampling cannot. For example, if you are worried about a small portion of user code being dwarfed by time spent in dependency code you don't control, you may choose to instrument just that piece of code. Sampling would collect data on everything, but targeted instrumentation lets you pinpoint the code you care about and front-load that decision, so the job of separating signal from noise during the analysis stage becomes that much easier.
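To make the sampling idea concrete, here is a deliberately over-simplified sketch in plain C++. It has nothing to do with the actual profiler internals (a real sampler interrupts the process and walks call stacks); a background "sampler" thread just wakes up on a timer and records which function a pretend worker happens to be in. The function names and timings below are invented purely for illustration.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <map>
#include <string>
#include <thread>

// Marker the "application" updates; the sampler reads it periodically.
std::atomic<const char*> g_currentFunction{"idle"};

void HotFunction() {
    g_currentFunction = "HotFunction";
    std::this_thread::sleep_for(std::chrono::milliseconds(8));  // pretend work
}

void ColdFunction() {
    g_currentFunction = "ColdFunction";
    std::this_thread::sleep_for(std::chrono::milliseconds(2));  // pretend work
}

int main() {
    std::map<std::string, int> samples;
    std::atomic<bool> done{false};

    // The "sampler": wake up every millisecond and note where the app is.
    std::thread sampler([&] {
        while (!done) {
            samples[g_currentFunction.load()]++;
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    });

    for (int i = 0; i < 100; ++i) { HotFunction(); ColdFunction(); }
    done = true;
    sampler.join();

    // HotFunction should dominate the sample counts -- that's the aggregate
    // "feel" for where time goes, without ever touching the app's binary.
    for (const auto& entry : samples)
        std::printf("%s: %d samples\n", entry.first.c_str(), entry.second);
}
```

Notice that the worker code never records anything itself; the sampler only gets a statistical picture, which is exactly why the overhead and the output file stay small.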
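And here is the instrumentation side of the coin, again just a toy sketch and not the probes the real tool injects into your binary. An RAII object stands in for the entry/exit probes, so every single call gets timed and recorded rather than statistically sampled. The names here (FunctionProbe, DoWork) are made up for the example.

```cpp
#include <chrono>
#include <cstdio>
#include <map>
#include <string>

// Accumulated time and call counts per function.
static std::map<std::string, long long> g_totalMicros;
static std::map<std::string, long long> g_callCounts;

// Stand-in for the probes an instrumenting profiler inserts: construction is
// the "enter" probe, destruction is the "exit" probe.
class FunctionProbe {
public:
    explicit FunctionProbe(const char* name)
        : name_(name), start_(std::chrono::steady_clock::now()) {}
    ~FunctionProbe() {
        auto elapsed = std::chrono::duration_cast<std::chrono::microseconds>(
                           std::chrono::steady_clock::now() - start_).count();
        g_totalMicros[name_] += elapsed;  // every call is recorded, not sampled
        g_callCounts[name_] += 1;
    }
private:
    std::string name_;
    std::chrono::steady_clock::time_point start_;
};

void DoWork() {
    FunctionProbe probe("DoWork");  // imagine this injected into every function
    volatile long sum = 0;
    for (long i = 0; i < 100000; ++i) sum += i;  // pretend work
}

int main() {
    for (int i = 0; i < 1000; ++i) DoWork();
    for (const auto& entry : g_totalMicros)
        std::printf("%s: %lld us over %lld calls\n", entry.first.c_str(),
                    entry.second, g_callCounts[entry.first]);
}
```

Run something like that over every function in a big application all night and you can see how the data piles up fast enough to fill a hard drive; point it at just the code you care about and you get exact counts and timings that sampling can't give you.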

So basically those are the two modes for profiling.  If you don't mind my lame comparisons, it's like having a hedge trimmer and a lawn mower.  You could probably mow your lawn with the hedge trimmer and trim your hedge with the lawn mower, and nobody would stop you.  But if you mow your lawn with the lawn mower, you can have that personal satisfaction of using the right tool for the job and get the job done before it gets dark.  But of course never, ever, wear your best shoes while mowing the lawn or they'll turn green.