In this article we’ll explore how Microsoft Test Manager (MTM) can help testers and developers work more closely together within a product development iteration – such as an agile sprint – to build quality upstream and keep their project on track.
First let’s build a diagram representing an imaginary ‘Iteration N’ that occurs in the middle of a product cycle and consider the collaborative challenges faced by testers and developers at key points. Developer activity will be shown along an upper blue arrow, with time flowing left to right. Tester activity will be shown on the lower yellow arrow. Green arrows in the middle represent nightly builds of the application under development.
Let’s say our team begins the iteration with a planning exercise, and decides to move two user stories (US#1 and US#2) from the product backlog to the iteration backlog.
After the planning exercise, the developers begin implementing the user stories in order of priority, while the test team gets itself prepared to test each user story as soon as possible, ideally when it becomes available with a nightly build.
As the iteration progresses, the developers finish implementing the first user story, US#1, and immediately forge ahead with the second, US#2. They can’t focus exclusively on US#2, however, because the testers begin testing US#1 and filing bugs against it. The developers must therefore split their effort between implementing new features and addressing bugs. This lengthens the time it takes to implement US#2:
Eventually the devs check in the code for US#2 along with a set of fixes (they hope) for US#1. Now it is the testers' turn to multitask, as they have to begin testing US#2 as well as verifying the supposed fixes for US#1. Some of the bugs will be sent back with a resolution of “no repro”, and others will be resolved with fixes that fail to adequately address the issues, or that result in new issues that necessitate the filing of new bugs.
Thus, the remainder of the iteration will play itself out with developers chasing bugs, and testers engaged in verifying, reactivating, and filing new bugs as needed:
Toward the end of the sprint the bug count trends lower as the team focuses on quality, but a hidden danger lurks. The testers might well wonder whether any of the test cases that passed earlier in the sprint have been impacted by more recent code churn. For example, what if a particular test for US#1 passed when originally run against Build #3, but would now fail if run against a more recent build?
The testers will need to address this risk by investing some effort in checking for regressions. Let’s account for this in our timeline by adding a “regress impacted tests” block near the end of the iteration:
Having walked through this idealized iteration (and having lived through more complex iterations in real life), we can imagine how such an exercise can get off to a decent start and finish badly by the end. There is a tendency toward mayhem as devs and testers react (or fail to react) to each other's actions, play ping-pong with low quality bugs and fixes, and ride roughshod over previously validated code. By the end of the sprint the team may well find itself in a schedule jam, forced to cut features and postpone dealing with issues.
Fortunately, Microsoft Test Manager (MTM) can help the team stay on track and achieve a happy ending by adding value at key points in the iteration cycle:
Let’s discuss these five benefits within the context of the iteration cycle we built above…
A key challenge for testers in the early part of an iteration is to plan for test coverage and be in position to begin testing as quickly as possible, ideally ahead of development.
MTM keeps the test team ahead of development by providing rich tools for planning/authoring the test effort:
The snapshot below shows requirement-based test planning in progress using MTM:
To learn more about requirement-test-planning features see the blog entry “No More Missed Requirements”.
While planning test coverage -- and throughout the iteration -- it is vital that the test team maintains awareness of what changes the devs are checking in to the builds. The test team should be poised to jump on any newly implemented requirements or make the appropriate course correction when the scope of a requirement has changed.
The MTM Assign Build feature helps testers tune in to the build cycle so they are aware of requirements and other changes that become available in new builds:
To learn more about the MTM Assign Builds feature, see the blog entry “No More Missed Requirements”.
The bugs will start flying by the middle stretch of the iteration, and if they are of low quality they will introduce a serious drag on the team. A low quality bug that is hard to repro or hard to investigate wastes the time of a developer who could instead be focusing on implementing the remaining user stories. A bug sent back as “no repro” wastes the time of a tester who could instead be covering untested areas of the product.
A high quality bug, in contrast, is easy to repro, investigate, and resolve correctly, leading to a short lifecycle and minimal drag on the team.
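The cost difference can be pictured as a simple state machine: a high quality bug moves Active → Resolved → Closed in a single round trip, while a low quality bug loops back through reactivation. Here is a minimal illustrative sketch in Python; the state names follow the familiar bug work item workflow, but the `Bug` class and the handoff counting are hypothetical, meant only to show how “no repro” round trips inflate the number of dev/test context switches:

```python
# Illustrative sketch of a bug work item's lifecycle (Active -> Resolved
# -> Closed). The Bug class and handoff counting are hypothetical; the
# point is that every extra round trip is a context switch for someone.

class Bug:
    def __init__(self, title):
        self.title = title
        self.state = "Active"
        self.handoffs = 0          # dev <-> test context switches

    def resolve(self, resolution):
        # Developer resolves the bug, e.g. "Fixed" or "No Repro"
        assert self.state == "Active"
        self.state = "Resolved"
        self.resolution = resolution
        self.handoffs += 1

    def verify(self, fix_actually_works):
        # Tester verifies: close if the fix holds, else reactivate
        assert self.state == "Resolved"
        self.handoffs += 1
        if self.resolution == "Fixed" and fix_actually_works:
            self.state = "Closed"
        else:
            self.state = "Active"  # ping-pong: back to the developer

# High quality bug: one round trip and done.
good = Bug("Crash when saving an empty file")
good.resolve("Fixed")
good.verify(fix_actually_works=True)

# Low quality bug: a "no repro" round trip, then a fix that doesn't hold.
bad = Bug("Sometimes it breaks?")
bad.resolve("No Repro")
bad.verify(fix_actually_works=False)   # reactivated
bad.resolve("Fixed")
bad.verify(fix_actually_works=False)   # fix didn't hold, reactivated again
bad.resolve("Fixed")
bad.verify(fix_actually_works=True)

print(good.handoffs)  # 2
print(bad.handoffs)   # 6
```

The low quality bug consumes three times the handoffs of the high quality one, which is exactly the drag described above.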
MTM helps testers file high quality bugs by capturing pertinent information during manual testing. When the developer gets a bug, it may have the following types of rich data attached:
As a dramatic example, the following screenshots show a crashing bug created with MTM that links to historical debugging information that enables the developer to click on a thread and identify the exact line of code where the crash occurred. This is a bug that will not be sent back “no repro.”
For more information about how MTM helps testers capture data and file rich bugs, see the blog entry “Create Actionable Bugs”.
Resolved bugs tend to pile up on a tester’s plate. Processing them may require a lot of context switching on the tester’s part, including recalling information and rerunning test cases.
MTM streamlines this process with the following features:
The following sequence of screenshots demonstrates the streamlined workflow for (1) clicking ‘Verify’ for a resolved bug in MTM My Bugs, (2-3) playing back the recorded steps in MTM Test Runner and resolving it ‘passed’, and (4) closing out the bug.
To learn more about using MTM My Bugs to verify resolved bugs, see the blog article “Making the Most of Your Bugs”.
To learn more about recording and playback, see the help article “Recording and Playing Back Manual Tests”.
Toward the end of an iteration the testers should invest some effort in checking for regressions. The team likely won’t have the resources to rerun all test cases. Fortunately, MTM helps the team focus on the subset of test cases that have been impacted by code churn.
The MTM Recommended Tests feature compares a recent build against an earlier build and identifies which test cases were impacted by code churn. Testers can select some or all of the impacted tests, reset them to “active”, and rerun them to test for regressions.
For more information about MTM Recommended Tests, see the blog article, “Test Impact Walkthrough”.
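Conceptually, this kind of test impact analysis intersects each test's recorded code coverage with the set of code elements that changed between the two builds. The sketch below illustrates that idea; the dictionaries and test/method names are hypothetical stand-ins for the build and coverage data that MTM gathers automatically:

```python
# Sketch of the idea behind impacted/recommended tests: intersect each
# test's recorded coverage with the methods that changed between builds.
# All names below are hypothetical illustration data.

# Per-test coverage captured on an earlier build (test -> methods exercised)
coverage = {
    "US1_save_document": {"Doc.Save", "Doc.Validate"},
    "US1_open_document": {"Doc.Open", "Doc.Validate"},
    "US2_export_pdf":    {"Export.ToPdf"},
}

# Methods whose code changed between the earlier build and the current one
churned = {"Doc.Validate"}

def recommended_tests(coverage, churned):
    """Return the previously run tests impacted by the code churn."""
    return sorted(
        test for test, methods in coverage.items()
        if methods & churned          # any overlap => rerun this test
    )

print(recommended_tests(coverage, churned))
# -> ['US1_open_document', 'US1_save_document']
```

Only the two tests that exercised the churned method are flagged for a rerun; the export test is left alone, which is what lets the team regress a focused subset instead of the whole test plan.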
To recap, in this article we’ve used a simple product iteration to illustrate how MTM adds value at key points in the cycle:
These features help testers partner with the developers in their org to build quality upstream and keep the project on track.
We hope you will try out these features in the Beta release, and we welcome your feedback.
Thanks and Regards,
Michael Rigler
Sr. Program Manager, VSTS TeamTest