In my experience, many testers identify with certain measurements and metrics that, on the surface, seem desirable to maximize. These statistics provide feedback to the tester that they are making progress and getting something done over the course of their work. However, when one looks below the surface, “overachieving” on these statistics may in fact be an indicator of an unhealthy software development system. Let’s take a look at a couple of tester-related statistics which I believe are worth scrutinizing: Number of Tests, and Number of Bugs Found.
I think many would agree that it is good to provide regression coverage by authoring stable automated tests – they give us a way to repeatedly check that the system has not deviated from a well-known “good” state. Writing test automation is part of the daily job of the SDET, something most of us do on a regular basis. And heck, if writing one test case to provide coverage is good, then writing 100 cases must be better, right? This line of thought unfortunately seems to be quite common. I have seen multiple instances of testers being proud of the sheer volume of tests they added to the automation system:
“As part of my feature, I added 450 tests to the BVTs, isn’t it great?”
Not necessarily :) One should consider the ramifications before claiming that “more is always better”: every test added to the system carries an ongoing cost in execution time, maintenance, and failure investigation, and a large volume of overlapping cases can easily cost more than the coverage it adds.
What then should be the tester’s goal when writing automation? In my opinion, testers should shoot for the “Minimal Maximal Set”, a term I use to describe the smallest set of test cases that provides the maximum amount of coverage. If you can write 2 robust test cases that provide the same coverage as another set of 100, the total cost of ownership (TCO) of the minimal set will likely be far lower over the lifetime of the cases, a net positive for team efficiency.
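One well-known way to approach a minimal-maximal set for configuration coverage is all-pairs (pairwise) testing: instead of running every combination of parameter values, select just enough cases that every pair of values across any two parameters appears at least once. The sketch below uses a classic greedy heuristic and an invented configuration matrix purely for illustration; the function name and parameters are my own, not from this post.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs reduction: pick test cases until every pair of
    values across any two parameters appears in at least one case.
    (A common heuristic; the result is small but not guaranteed minimal.)"""
    names = list(params)
    values = [params[n] for n in names]

    # Every (param i, value, param j, value) pair that must be covered.
    uncovered = set()
    for i, j in combinations(range(len(names)), 2):
        for va, vb in product(values[i], values[j]):
            uncovered.add((i, va, j, vb))

    all_cases = list(product(*values))

    def gain(case):
        # How many still-uncovered pairs this candidate case would cover.
        return sum((i, case[i], j, case[j]) in uncovered
                   for i, j in combinations(range(len(names)), 2))

    suite = []
    while uncovered:
        best = max(all_cases, key=gain)
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard((i, best[i], j, best[j]))
        suite.append(dict(zip(names, best)))
    return suite

# Hypothetical configuration matrix: 3 x 2 x 2 = 12 exhaustive cases.
params = {
    "os": ["win", "mac", "linux"],
    "browser": ["edge", "chrome"],
    "locale": ["en-US", "fr-FR"],
}
suite = pairwise_suite(params)
print(f"{len(suite)} cases instead of 12 cover all value pairs")
```

On this toy matrix the greedy pass needs only around half of the exhaustive 12 combinations; on real matrices with dozens of parameters the reduction is far more dramatic, which is exactly the TCO argument above.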
Testers find bugs; it is part of their DNA. On the surface, it is great to see testers find many defects that would otherwise make their way to the customer. It is also great to see testers carry a certain pride in being able to uncover bugs in the course of their work. However, this is again a case of “more is [not always] better”. Some types of bugs, found again and again, can indicate intrinsic problems in the way we develop and test the system; a defect class that keeps reappearing is a signal that its root cause has never been addressed.
Again, what should be the goal here? While good testers certainly find lots of bugs, I believe we should look far beyond the numbers and ask ourselves “What do these bugs tell us about the product?” and “What do these bugs tell us about the way we test the product?” Many folks gravitate toward simple bug statistics because they are easy to measure, but unfortunately they don’t tell us much in isolation.
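As a concrete illustration of looking beyond the raw count, tallying bugs by root cause instead of just counting them can surface the systemic problems the post describes. The bug records below are entirely hypothetical, invented only to show the technique:

```python
from collections import Counter

# Hypothetical bug records: (bug id, root cause) -- illustrative only.
bugs = [
    (101, "missing input validation"),
    (102, "race condition"),
    (103, "missing input validation"),
    (104, "ambiguous spec"),
    (105, "missing input validation"),
    (106, "race condition"),
]

# The raw metric says "6 bugs found"; grouping by root cause says
# "half of them share one underlying problem worth fixing upstream".
by_cause = Counter(cause for _, cause in bugs)
for cause, count in by_cause.most_common():
    print(f"{count:2d}  {cause}")
```

The same ten minutes spent on this kind of breakdown tells you where to invest in prevention (code review checklists, design changes, better specs), which a bare bug count never will.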
Only when we look at what the statistics are really telling us can we begin to make meaningful progress...
-Liam Price, Microsoft