I have been having various conversations of late about exploratory testing and Session-Based Test Management. Interesting conversations. Conversations I never expected to have, in some cases.

One tester thought exploratory testing was nifty. He was also concerned that it would mean he would never code again. "I spent all this time learning to program, and now that I am exploratory testing I don't get to do the coding I love to do?"

I am finding this to be a common misconception: that exploratory testing, or manual testing, means nothing other than a mouse and keyboard comes between a tester and the product they are testing. This is patently false. An effective tester will use every tool at their disposal. If my test mission is to boundary-test every input widget in my application, spending half an hour to write a small automated script that blasts thousands of data values into every widget may well be a better use of my time - and find more bugs faster - than doing the same task manually. Especially if I watch it as it runs and so can notice unexpected side effects. (And if by some chance my testing cannot benefit at all from any tool, almost certainly some part of the rest of my job can!)
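
To make that concrete: here is the flavor of quick script I have in mind. This is a minimal sketch in Python; the URL, the form field, and the particular boundary values are all hypothetical stand-ins, not anything from a real application.

    import requests  # assumes the third-party 'requests' package is installed

    # Classic boundary values plus a few hostile strings.
    candidates = (
        ["", "a", "a" * 255, "a" * 256, "a" * 65536]        # length boundaries
        + ["0", "-1", "2147483647", "2147483648"]           # 32-bit integer edges
        + ["<script>alert(1)</script>", "'; DROP TABLE users;--"]  # nastiness
    )

    for value in candidates:
        # Hypothetical widget: a 'username' field on a local test deployment.
        resp = requests.post("http://localhost:8080/signup", data={"username": value})
        # A server error (rather than a polite validation message) merits a look.
        if resp.status_code >= 500:
            print(f"Possible bug: {value!r} -> HTTP {resp.status_code}")

The script only flags the loudest failures, which is one more reason to watch it as it runs: the quieter misbehaviors are still on me to notice.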

Another tester said he did not see how exploratory testing and SBTM could work for functional testing.

I goggled at him a bit. SBTM seems to me custom-built for functional testing, as its primary goal is to provide a view onto what testing has been done and what testing is still left to do. I explained this in detail and suggested he give it a whirl for several weeks and see how it goes. If it turned out not to work for him, I reminded him, he could always go back to writing scripted test cases.

A test lead and I had a similar discussion. He was embarking on the potentially challenging task of introducing exploratory testing into his management-gets-warm-fuzzies-from-knowing-we-run-oodles-of-scripted-test-cases environment. He was curious how to ensure coverage of requirements and use cases and such when using exploratory testing.

I explained how I base my test missions on functional areas. For example, a web application might have missions such as:

  • Login - Functionality
  • Login - Security
  • Login - Accessibility
  • Login - Globalization and Localization

I generally also have one or more test missions for use cases and user scenarios, where the number of missions depends on how many use cases/scenarios there are, how complicated they are, and into what sorts of logical groupings they fall. Similarly, I cover risks by having an explicit test mission for each risk, including in the test mission description any specific concrete issues I want to be sure are covered.
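
One way I picture this - purely an illustration, not anything prescribed by SBTM, and with every mission name, charter, and count invented for the example - is as a small inventory that can always answer "what is done, and what is left?":

    from dataclasses import dataclass

    @dataclass
    class Mission:
        charter: str               # what this set of sessions is trying to learn
        sessions_planned: int = 1
        sessions_done: int = 0

    # Hypothetical inventory: one mission per functional area, scenario
    # grouping, and explicit risk.
    missions = {
        "Login - Functionality": Mission("Explore every login path", 3, 1),
        "Login - Security": Mission("Probe login for injection and lockout", 2),
        "Scenario: new-user signup": Mission("Walk the signup use cases end to end", 2),
        "Risk: data loss on save": Mission("Recheck the reported steps, then hunt nearby", 2),
    }

    # The payoff: a view onto what testing has been done and what is left to do.
    for name, m in missions.items():
        print(f"{name}: {m.sessions_done}/{m.sessions_planned} sessions")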

This test lead was planning to have his more experienced testers do exploratory testing for functional areas and system-level scenarios while his less experienced testers executed scripted test cases for areas of high risk.

Well. This seemed backwards to me, and I said so. If ever there is a case where I want my testers using their brains, and where I want to apply my most experienced testers, it is certainly on high-risk areas! If a specific problem occurs that a) must never, ever occur again, and b) depends on the particular steps taken, I will generally create a new test mission along the lines of "Ensure these specific steps no longer cause this problem, and then look for other ways this or a similar problem could occur". This way I know that those particular steps work, and I also get a tester's brain looking for similar issues.

If scripted test cases were necessary in order to make my management happy, I would work with them to understand what risks they believe will be mitigated and/or what information they wish to be provided, develop a small set of scripts to cover those risks and provide that information, and then create a test mission for each script along the lines of "Run this script verbatim, and then use this script as a jumping-off point for the remainder of the test session".

I tell you all this for several reasons. One is to provide you some thoughts to chew on, if you are pondering similar questions. Another is to provide you some insight into my current thinking, so that you can identify areas where you think I am bonkers and engage me in a conversation where you tell me so and why and we discuss from there. Another is so I can remember what I said the next time somebody asks me a similar question. <g/>


*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great testing and coding skills required.