Recently we’ve been reviewing what Excel reports we could offer to support TFS customers. We think Excel is a powerful platform for manipulating and exploring your TFS data beyond what RDL reports (SQL Server Reporting Services) offer.

The particular report I want to describe is used by some larger teams at Microsoft, and is useful for predicting and tracking the active bug count on projects that have long, standard cycle times (e.g. a year or more) and fixed end dates. In basic form, the Excel chart looks like this:

The blue area represents the predicted active bug count for the whole eight-month project, while the dashed red line tracks the actual active bug count from the start of the project until today. Looking at this chart, you might start to ask some probing questions about what is happening on the project: why is there such a divergence between the predicted and the actual bug count in April? We expected a bug spike in mid-May, when we released a CTP (for example), so why is this big increase in bugs arriving earlier in the project? Researching the answers to these questions could help to unblock testing (i.e. improve the closure rate of active bugs), and could also inform a decision not to accept additional scope or new features until the bug count tracks our prediction more closely.

Reading the Excel chart this way assumes certain things, most notably that our prediction makes some kind of sense! How can we produce a prediction of future active bug counts? As mentioned above, the technique relies on certain fundamentals: planning to a fixed end date, and having comparable cycle times and consistent milestones in each major release of the software. The ability to predict bugs for a future cycle of a project depends on being able to baseline the rates at which the team can both produce and fix bugs. This means we need to know the number of testers (bug producers) and developers (bug fixers), and the estimated ability of the team to raise and resolve bugs each week. These latter numbers will vary as the team moves through different activities during the project (setup, development, stabilization, CTP, more development, etc.).

The Excel spreadsheet is structured as a series of weekly predictions about opening and closing bugs, given the size of the testing and development teams:

For each week we record the number of testers, based on what we know about our resourcing. Below this we calculate the maximum number of bugs we expect a testing team of this size to be able to open, based on the greatest number of new bugs opened in any week during the previous cycle of the project. Next we estimate the percentage of its effort the testing team can apply to opening bugs during each week; this may vary depending on other testing activities (such as test setup, server configuration, etc., which are not bug-finding activities as such). We also record the actual percentage of effort the test team was able to apply to opening bugs. Following this you can see the projected number of bugs we expect the testing team to open this week, which is a simple calculation based on the numbers above. Finally we record the actual number of bugs opened this week, and the cumulative count of open bugs as of this week.
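As a rough illustration of the arithmetic behind those cells, here is a minimal sketch in Python with invented numbers (the real workbook does this with cell formulas; the rates and resourcing below are hypothetical):

# A minimal sketch of the test-team rows, with invented numbers.
# Projected bugs opened = testers * max bugs per tester per week * estimated effort %.
# The per-tester rate is baselined from the busiest bug-opening week of the previous cycle.

testers_per_week    = [6, 6, 7, 7, 7]                  # hypothetical resourcing plan
max_bugs_per_tester = 12                               # baseline from the previous cycle
estimated_effort    = [0.50, 0.75, 1.00, 1.00, 0.60]   # share of effort spent opening bugs

projected_opened = [
    round(testers * max_bugs_per_tester * effort)
    for testers, effort in zip(testers_per_week, estimated_effort)
]

# Running total of bugs we expect to have been opened by the end of each week.
cumulative_opened = []
total = 0
for opened in projected_opened:
    total += opened
    cumulative_opened.append(total)

print(projected_opened)     # [36, 54, 84, 84, 50]
print(cumulative_opened)    # [36, 90, 174, 258, 308]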

The Excel cells for the development team are similar to those for the testing team, with developer counts and bug resolution rates in place of tester counts and bug opening rates.
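Combining the two halves gives the predicted active bug curve (the blue area in the first chart). A sketch of that combination, again with made-up numbers and reusing the projection above:

# Predicted active bugs = cumulative bugs opened - cumulative bugs resolved.
# The development-team rows mirror the test-team rows: developer count, a
# baselined maximum fix rate per developer, and an estimated effort percentage.

projected_opened    = [36, 54, 84, 84, 50]   # from the test-team sketch above
developers_per_week = [8, 8, 8, 9, 9]        # hypothetical resourcing plan
max_fixes_per_dev   = 9                      # baseline from the previous cycle
dev_effort          = [0.25, 0.50, 0.75, 1.00, 1.00]

projected_resolved = [
    round(devs * max_fixes_per_dev * effort)
    for devs, effort in zip(developers_per_week, dev_effort)
]

predicted_active = []
opened_total = resolved_total = 0
for opened, resolved in zip(projected_opened, projected_resolved):
    opened_total   += opened
    resolved_total += resolved
    predicted_active.append(opened_total - resolved_total)

print(predicted_active)     # [18, 36, 66, 69, 38]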

The following is a chart from a project, showing the original ‘locked’ bug prediction, based on our estimated bug generation and resolution rates over time, and our current tracking against that prediction (the thick red line). The original prediction is fixed so that you can continue to track the actual count against the original estimate. In addition, a dotted red line plots our most recent estimate of the future bug count.

You will note that a couple of milestones have been added to the chart, Beta 2 and Beta 3 (these are simply drawing objects added to the chart for annotation). While the chart described so far accounts for bugs opened and closed by our own team, issuing a Beta causes external bugs (from outside the team) to arrive in greater numbers. For this reason we extend the Excel data collection sheet with an additional column each week: the number of bugs opened from outside the test team. Whenever a build is issued for wider examination (CTP, Beta, etc.) we expect to see an increase in bugs coming from outside the test team, so we record this in the workbook prediction as follows:
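As a rough sketch of how that extra column could feed the weekly totals (the milestone week and spike sizes here are invented, not taken from the real workbook):

# Estimated bugs opened from outside the test team, week by week. Most weeks
# this is near zero; in the weeks after a Beta or CTP ships we expect a spike.
weeks = range(1, 11)
external_estimate = {6: 40, 7: 25}   # hypothetical: Beta ships in week 6

external_opened = [external_estimate.get(week, 2) for week in weeks]

# The external column is simply added to the team's weekly opened count
# before the cumulative active bug prediction is computed.
team_opened  = [30] * 10             # flat internal rate, for illustration only
total_opened = [team + ext for team, ext in zip(team_opened, external_opened)]
print(total_opened)                  # [32, 32, 32, 32, 32, 70, 55, 32, 32, 32]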

Charts of this nature are useful because of the questions they prompt us to ask. Bug counts can be considered one proxy for the progress of the solution under development. For example, based on a large divergence from our predicted bug count, we may decide not to accept additional scope on the project until we understand in more detail why we are diverging from our original bug trend prediction.

For projects with a reasonably standard development and release cycle, we can plot the bug trends of several releases on the same chart to examine the profile of bug counts across many versions. This encourages us to try to understand the behavior of our projects, and in combination with other reports and charts from TFS it can shed light on the complex flow of software development.

Here is a chart showing two past versions of a project (v4 in blue and v5 in orange). The current development (v6) is shown by the dotted red line:

You can see the effect of the various milestone dates on the past bug counts – the counts rise shortly after the release of a Beta version. We have also added predicted dates for the v6 Beta milestones (red vertical bars) and added our estimates of increased external bug counts at those milestones.
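Outside Excel, the same overlay is easy to sketch. Here is a rough matplotlib equivalent of the chart above, with invented data, aligning each release by week of its cycle rather than by calendar date:

import matplotlib.pyplot as plt

# Invented active-bug-count series for two past releases and the current one.
weeks     = list(range(1, 10))
v4        = [10, 40, 90, 150, 170, 140, 90, 40, 10]
v5        = [15, 50, 110, 180, 200, 160, 100, 50, 15]
v6_so_far = [12, 45, 100, 160]                  # current cycle, to date

plt.plot(weeks, v4, color="blue", label="v4")
plt.plot(weeks, v5, color="orange", label="v5")
plt.plot(weeks[:len(v6_so_far)], v6_so_far, color="red",
         linestyle="--", label="v6 (current)")
plt.axvline(x=6, color="red", linewidth=2, alpha=0.4)   # predicted v6 Beta milestone
plt.xlabel("Week of cycle")
plt.ylabel("Active bug count")
plt.legend()
plt.show()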

Some teams at Microsoft use these Excel-based reports as one tool to help understand the progress of complex developments and to signal problems for further investigation. Would such an Excel report be useful to you? Should we ship something like this? How should we deliver it? We could develop the report above into an in-box offering; let us know if you think this is a good idea. Our view is that this is a non-trivial workbook to set up and maintain, being more of a tool than a report, and that it is applicable to a particular style of management of larger, fixed-cycle projects.

We’d be interested to hear feedback on this, and also how you predict bug counts on your larger projects – or any other techniques you have developed. I look forward to your comments.