I have been very busy this past year working with a particular client revamping their entire development operations strategy. My favorite part of the revamp is the changes to their testing strategy. Because sub-second performance is extremely important to them, we decided to automate the profiling process so that we can track trends and see the impact of changes on our performance metrics.
This post serves as an overview of what we built and the technologies we used.
What we built!
The end result is scheduled, automated profiling of the latest successful build from the nightly build system, run against an isolated (distributed) test environment, for C# libraries as well as ASP.NET web sites and endpoints. The primary technologies leveraged in this system are:
1. The Visual Studio Profiling Tools (vsperfclrenv, vsperfcmd, vsinstr.exe, vsperfreport)
2. Team Build
3. MSTest (test controller/agent set up, load tests, unit tests, web performance tests)
4. Team Foundation Server
When we first decided to give this a shot, we thought we would use a custom Data Diagnostic Adapter. The Data Diagnostic Adapter is pretty cool, but given how the profiling tools work in their current state, it is not the route you want to go; it will give you a run for your money. Instead, the way we built the system is:
1. Created our environment (Test Controller/Agents with appropriate software stacks)
2. "Borrowed" the logic for finding the latest successful build from the Default Lab Template.
3. Built it
4. Instrumented all of the resulting binaries via an activity in MSBuild
5. Set VSPerfClrEnv, started VSPerfCmd, and called MSTest, passing our load test container and .testsettings file
6. Shut down the profiling session
7. Called vsperfreport to finalize the report with symbols
8. Copied the results to our output directory for other teams to use as they see fit.
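To make steps 4 through 8 more concrete, here is a rough batch sketch of the command sequence. All paths, binary names, the share, and the .vsp/.loadtest/.testsettings file names are placeholders, and exact switches can vary by Visual Studio version, so treat this as an outline rather than a drop-in script:

```bat
:: Run from a Visual Studio command prompt on the test machine.

:: 4. Instrument the build output for trace profiling (placeholder assembly name)
vsinstr.exe MyLibrary.dll

:: 5. Enable CLR trace profiling, start the profiler, then run the tests
call vsperfclrenv /traceon
vsperfcmd /start:trace /output:Results\nightly.vsp
mstest /testcontainer:PerfTests.loadtest /testsettings:Remote.testsettings

:: 6. Shut down the profiling session and clear the environment
vsperfcmd /shutdown
call vsperfclrenv /off

:: 7. Finalize the report with symbols
vsperfreport Results\nightly.vsp /summary:all /output:Results

:: 8. Copy the results for other teams (placeholder share)
xcopy /y Results\*.* \\server\PerfDrop\
```

In our setup the equivalent calls are driven from the build workflow rather than a standalone batch file, but the ordering is the same.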
Our testing environment consists of one test controller and two test agents. The controller is configured for load testing, and both agents are configured to run as a service. SQL Server is installed on the controller (not recommended); one agent is set up as a web server and the other as a client box, each with the software stack appropriate to its role, and every machine has the data collectors and profiling tools installed. This environment serves as a sample that approximates production. Our .testsettings file then references the controller. Do not forget to use tagging, or your binaries will be sent to every agent and tested there. I will talk about test mixes (unit vs. web performance) later in the series.
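For reference, a minimal remote-execution .testsettings file looks roughly like the sketch below. The settings name, id, and controller name are placeholders, and the exact schema depends on your Visual Studio version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of a remote-execution test settings file.
     Name, id, and controller name are placeholders. -->
<TestSettings name="RemotePerf"
              id="00000000-0000-0000-0000-000000000001"
              xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Description>Run tests remotely through the test controller.</Description>
  <Execution location="Remote">
    <!-- Agents registered with this controller pick up the work. -->
    <RemoteController name="perf-controller01" />
  </Execution>
</TestSettings>
```

The tags themselves are assigned to each agent in the Test Agent Configuration tool; the test run then only dispatches work to agents whose properties match.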
To Be Continued...
Hopefully this has gotten you excited about the series and what we will cover. Please feel free to ask questions about specifics; I will do my best to respond to comments.