Hi! My name is Bill Wert, and I’m a member of the CLR Performance Test team. I’m going to write a couple of blog posts describing at a high level how we test startup time and working set on my team, and pointing out some tools you can use in your development and testing processes to keep track of the performance of your application.
This first post is going to outline a few things you should consider before you begin.
There are several things to consider when you are setting up a machine for performance testing. The goal is to provide a highly consistent environment for testing. This will allow you to compare the performance of your product from build to build, with confidence that any deltas are due to changes in your product and not to some environmental impact. You may be using multiple computers for performance testing, either because several developers perform their own measurements or because the machines are in a lab. In that case, it’s important to make the machines identical in hardware and software configuration. That isn’t to suggest that you should test on only a single configuration, however. It’s common practice to have a high-end, a mid-range, and a low-end specification. The key is to avoid comparing timings across those platforms.
In addition to hardware, several configuration settings and kinds of installed software can have an impact on performance testing. These include:
· Virus scanners
· Unneeded services running in the background
· Processes accessing the disks while your test is executing
· Network traffic causing test machines to spend processor cycles responding to packets
· Screen savers
· System power states (sleep and hibernate are the obvious ones, but modern processors will also dynamically change frequency.)
In performance testing, there is a balance to be struck between emulating the customer environment and having a pristine environment that makes testing more conclusive. My recommendation for daily regression runs is to err toward creating that pristine environment. It’s important to consider the impact of things like virus scanners on your product, but these are largely outside of your control. With that in mind, it can make sense to try to minimize the impact of these external forces during development, so you can better focus on the things you can control and fix.
When deciding what to build a performance testing suite around, you will of course want to focus on your key scenarios. For example, a word processing program might focus on the time from program start to an empty editor window, and the time from the user double-clicking a document until the program is open and ready for editing.
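A key-scenario measurement can be as simple as timing the process from launch to exit over several runs and taking the median, which is less sensitive to a single noisy run than the mean. Here is a minimal sketch in Python rather than managed code; the `measure_startup` helper is invented for illustration, and the Python interpreter itself stands in for the application under test.

```python
import statistics
import subprocess
import sys
import time

def measure_startup(cmd, runs=5):
    """Launch `cmd` several times and return the median wall-clock
    time (in seconds) from launch until the process exits."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    # Median rather than mean: one run perturbed by background
    # activity won't skew the reported number as much.
    return statistics.median(samples)

# Stand-in for the application under test: a process that exits
# as soon as it has "started".
elapsed = measure_startup([sys.executable, "-c", "pass"])
print(f"median startup: {elapsed * 1000:.1f} ms")
```

In a real harness you would point `cmd` at your application and define a scenario-specific end point rather than relying on process exit.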
For performance testing, it’s important to have automated scenarios. If the scenarios aren’t automated, it will be difficult, if not impossible, to get repeatable results. Of the many things you should consider while building test automation, I’d like to point out two here. The first is the position of the mouse. In our labs, we’ve seen variance in test scenarios depending on whether the mouse is over a window or not, and which control it’s over. It’s best to ensure the mouse is in a consistent location every time (such as 0, 0). The second is window positioning. By default, for non-full-screen windows, Windows will change the position of the window each time it launches. If possible, when you start your process, pass the correct flags to make it start full screen. (For example, use start /max from a batch script, or set ProcessStartInfo.WindowStyle.) The final note on test scenarios is that you may need to instrument the application for startup time testing to indicate the “end point” of startup. I will elaborate on this in the next post, but I wanted to mention it here.
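The instrumentation idea above can be sketched without any special tooling: have the application emit a marker when it considers startup complete, and have the harness time from launch until it observes the marker. This is a hedged sketch, again in Python for brevity; the `STARTUP_COMPLETE` marker, the inline stand-in application, and the `time_to_marker` helper are all invented for illustration.

```python
import subprocess
import sys
import time

# Hypothetical "application": does some startup work, then prints a
# marker line that the harness treats as the end point of startup.
APP_CODE = """
total = sum(range(100000))   # stand-in for startup work
print("STARTUP_COMPLETE", flush=True)
"""

def time_to_marker(marker="STARTUP_COMPLETE"):
    """Return seconds from process launch until the app reports
    that its startup end point has been reached."""
    start = time.perf_counter()
    proc = subprocess.Popen(
        [sys.executable, "-c", APP_CODE],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        if marker in line:
            elapsed = time.perf_counter() - start
            break
    else:
        raise RuntimeError("startup marker never observed")
    proc.wait()
    return elapsed

print(f"startup took {time_to_marker() * 1000:.1f} ms")
```

In production testing you would more likely emit an ETW event than a line of console output, but the structure is the same: the application, not the harness, decides when startup is done.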
Of course, to measure the performance of an application accurately, we’ll need some tools. If you’re reaching for the stopwatch button on your wrist watch right now, stop!
The first is the Windows Performance Toolkit, which is available at http://msdn.microsoft.com/en-us/performance/default.aspx. The key tool here is xperf.exe. We’ll look at these more closely when we talk about startup time testing in the next post.
VMMap is a relatively new tool available from Sysinternals. You can download it here: http://technet.microsoft.com/en-us/sysinternals/dd535533.aspx. This tool is excellent for analyzing both virtual address space usage and memory usage.
CLR Profiler is a tool for analyzing the managed heap. It will be discussed more in depth in a later post. It is available here: http://www.microsoft.com/downloads/details.aspx?FamilyID=a362781c-3870-43be-8926-862b40aa0cd0&DisplayLang=en.
In the next two posts, I’ll cover startup time measurement first, and then memory usage. I look forward to seeing any questions you may have in the comments!