One of the most common questions I hear toward the end of a product cycle is "are we ready to ship?"  The answer depends on many factors, most importantly the status of the quality metrics.

Today, I would like to spend some time talking about some of the key quality metrics.  In later parts of this series (I expect a couple more), I will go into more detail, including how to interpret quality metrics and how to identify warning signs.

Pass Rate
Pass rate is one of the most straightforward metrics.  It measures the percentage of tests that passed during the test run.
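As a quick sketch of the calculation (the function name and signature are mine, not from any particular test framework):

```python
def pass_rate(passed: int, total: int) -> float:
    """Percentage of tests that passed during a test run."""
    if total == 0:
        raise ValueError("no tests were run")
    return 100.0 * passed / total

# 190 of 200 tests passed in this run.
print(pass_rate(190, 200))
```

Note that a high pass rate is only meaningful relative to the set of tests that actually ran; skipped or blocked tests should be reported separately rather than folded into the denominator.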

Bug / Defect Counts
When looking at bug counts as a quality metric, it is important to include resolved as well as active defects.  Active bugs (the number of known defects in the product) are a well-understood metric.

The more interesting number, for me, is the count of resolved (not yet closed) bugs -- especially those resolved as "fixed".  For each of the bugs that have been fixed, a change has been made to the product.  Every change introduces a risk of some part of the product being broken.  It is very important to keep a close eye on the resolved bugs and to make sure each one of them is re-tested to ensure that the bug was indeed fixed and that no other issues arose due to the change(s).

One related metric, which I consider to be part of the bug counts, is the percentage of reactivated bugs.  These are the bugs that, once resolved, were reactivated because the original issue was not completely fixed.  As a rule, I only reactivate a bug if the issue described within was not fixed.  Any additional issues related to the fix, such as a new bug being introduced, should be tracked in a separate entry in the bug database.
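A minimal sketch of that percentage (the names are illustrative, not from any bug-tracking product):

```python
def reactivation_percentage(reactivated: int, resolved_as_fixed: int) -> float:
    """Percentage of bugs resolved as 'fixed' that were later reactivated
    because the original issue was not completely fixed."""
    if resolved_as_fixed == 0:
        return 0.0  # nothing has been fixed yet, so nothing could bounce back
    return 100.0 * reactivated / resolved_as_fixed

# 3 of 60 fixed bugs came back.
print(reactivation_percentage(3, 60))
```

A rising reactivation percentage is a warning sign: fixes are not sticking, which suggests either rushed changes or incomplete verification.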

Mean Time To Failure (MTTF) / Mean Time Between Failures (MTBF)
One of the most important aspects of testing a product is placing it under stress.  A while back, I described my two classifications of stress tests (long haul and short haul) and mentioned that the results of long haul stress testing feed into the product's MTTF/MTBF metric.  These results should be tracked between stress test passes (looking for changes that cause robustness to decrease) and measured against the product specification.
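One common way to derive MTBF from a long haul run is to average the intervals between observed failures.  This sketch assumes failure timestamps recorded in hours from the start of the run; the function name and data shape are my own:

```python
def mean_time_between_failures(failure_times: list[float]) -> float:
    """Mean interval between failures, given timestamps (in hours,
    measured from the start of the run) at which failures occurred."""
    if not failure_times:
        raise ValueError("no failures observed; MTBF is undefined for this run")
    times = sorted(failure_times)
    # First interval runs from the start of the run (t=0) to the first failure.
    intervals = [times[0]] + [b - a for a, b in zip(times, times[1:])]
    return sum(intervals) / len(intervals)

# Failures at hours 10, 30, and 60 give intervals of 10, 20, and 30 hours.
print(mean_time_between_failures([10.0, 30.0, 60.0]))
```

Comparing this number across successive long haul passes makes robustness regressions visible early, before they show up as field issues.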

Performance
As with stress results, it is important to track performance data between test passes and measure against the product specification.  I discuss measuring performance here.

The above metrics directly measure the quality of a product: how well it works, how much work is left to do, how long it will run and how fast.  The next two I will talk about are often discussed in quality meetings and status reports.  While not direct measurements of quality, they are still very important as they are measurements of risk.

Code Coverage
Code coverage is one of the best measurements of the testing being performed on a product.  If the code coverage data is very low (below 50%), the product is not being adequately tested.  Any portion of a product that is not being covered by testing is a risk.  Blocks of code that are not being tested cannot have their quality properly assessed.  If the code has not been exercised, there is no way to be sure of the quality.  Even when code is reviewed carefully (and appears correct), it does not always do what the developer intended.

Code Volatility / Code Churn
By measuring the amount of change in the product from one build to the next, the amount of testing required to validate quality can be assessed.  The larger the amount of change, the more testing is required to ensure the quality of the product.  As a product gets closer to the target ship date, the changes in the code should decrease, leading to a stable product.  I like to think of the weeks leading up to releasing a product like watching an airplane land.  A smooth, gradual descent leads to a very comfortable landing.  A steep descent is significantly less comforting. 
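One simple way to approximate churn between two builds of a file is to count the lines added and removed by a diff.  This sketch uses Python's standard difflib; the function name is mine:

```python
import difflib

def churned_lines(old_lines: list[str], new_lines: list[str]) -> int:
    """Count lines added or removed between two builds of a file --
    a rough measure of how much re-testing the change demands."""
    added = removed = 0
    for entry in difflib.ndiff(old_lines, new_lines):
        if entry.startswith("+ "):
            added += 1
        elif entry.startswith("- "):
            removed += 1
    return added + removed

# One line replaced: one removal plus one addition.
print(churned_lines(["a", "b", "c"], ["a", "x", "c"]))
```

Summing this across all changed files, build over build, produces the descending curve you want to see as the ship date approaches.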

Take care!
-- DK

Disclaimer(s):
This posting is provided "AS IS" with no warranties, and confers no rights.