I know there are people out there who have discovered Spec Explorer 2010 as a great testing tool but are wondering what all that "Model-Based Testing" buzz is about. If you are in this group, this post should be a good place to start.

Of course, you can also search the Web for Model-Based Testing (MBT, for short) and you'll find plenty of information, since MBT is neither new nor restricted to Spec Explorer. It's in fact the subject of ongoing academic research and an industrial practice that has been around for a while and is here to stay. Tools like Spec Explorer just make it easier to learn and use, and more accessible to a larger audience.

So I'll cut to the chase and explain what MBT is and what it's used for. Model-Based Testing is considered a lightweight formal method for validating software systems.

It's formal because it works from a formal (that is, machine-readable) specification (or model) of the software system one intends to test (usually called the implementation, or System Under Test, SUT).

It's lightweight because, unlike other formal methods, MBT doesn't aim at mathematically proving that the implementation matches the spec under all possible circumstances. What MBT does is systematically generate from the model a collection of tests (a "test suite") that, when run against the SUT, will provide sufficient confidence that it behaves as the model predicted it would.

So the difference between lightweight and heavyweight formal methods basically comes down to sufficient confidence vs. complete certainty. Now, the price to pay for absolute certainty is steep, which makes heavyweight formal methods very hard (sometimes prohibitively hard) to apply to real-life projects. MBT, on the other hand, scales much better and has been used to test full-size systems in huge projects, some of them inside Microsoft.

Here's a simplified diagram showing how MBT works.

[Figure: Model-Based Testing in a Nutshell]

The method starts from a set of requirements, usually written in prose or sketches, or just shared by the development team as tribal knowledge.

The first task in the process (#1) is to create a machine-readable model that expresses all the possible behaviors of a system meeting the requirements. This has to be done by humans and is arguably the most work-intensive step. A key ingredient to keep it manageable is to work at the right level of abstraction. That is, as a modeler you should focus on certain aspects of the system that will be tested (such as UI elements or API calls) and forget about the rest. Other models can be created for other system aspects, as long as each of them is kept at a clear abstraction level.

In the case of Spec Explorer, models are written as sets of rules in a mainstream programming language (C#), which makes the learning curve much less steep than with tools that require learning an ad-hoc formal language. Spec Explorer runs as an add-on to the Microsoft Visual Studio integrated development environment, which provides extensive support for .NET languages, such as syntax coloring, auto-completion and refactoring. A small but powerful configuration language called Cord (short for "Coordination Language") complements the C# code by providing features to combine models, generate test data and select certain scenarios that are especially relevant for testing.
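To make this concrete, here is a minimal sketch of what such a rule-based model might look like, using a toy accumulator as the system being modeled. The class and action names are invented for this example; the [Rule] attribute and the Condition class are assumed to come from Spec Explorer 2010's Microsoft.Modeling library.

```csharp
// Sketch of a Spec Explorer-style model program for a toy accumulator.
// Assumes the Spec Explorer 2010 modeling library (Microsoft.Modeling);
// all names below are made up for illustration.
using Microsoft.Modeling;

namespace AccumulatorModel
{
    static class ModelProgram
    {
        // Abstract model state: at this level of abstraction only the
        // running total matters; everything else is deliberately ignored.
        static int total;

        // Each [Rule] method describes one action the system can take.
        [Rule]
        static void Add(int x)
        {
            Condition.IsTrue(x >= 0); // enabling condition for this action
            total += x;               // the state change the model predicts
        }

        [Rule]
        static int ReadTotal()
        {
            return total;             // the expected output (the oracle's prediction)
        }
    }
}
```

A Cord script then declares the actions shared between the model and the SUT and bounds the exploration of the model. Again a sketch; the action signatures and switch values are illustrative:

```
// Illustrative Cord script: "Main" declares the shared actions and some
// exploration bounds; the machine explores the model program above.
config Main
{
    action abstract static void Accumulator.Add(int x);
    action abstract static int Accumulator.ReadTotal();

    switch StepBound = 128;      // keep exploration finite
    switch StateBound = 128;
    switch TestClassBase = "vs"; // emit Visual Studio unit tests
}

machine AccumulatorModelProgram() : Main
{
    construct model program from Main
    where scope = "AccumulatorModel"
}
```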

I said authoring a model is the most work-intensive step in an MBT process, but it also has a big payoff! Just by trying to turn informal requirements into a machine-readable model you are very likely to discover inconsistencies and missing pieces in the requirements ("What on Earth is the system expected to do when I press the Esc key twice?"). This is illustrated by flow #2, where feedback about the requirements is obtained from the model.

Once the model is in place, a Model-Based Testing tool such as Spec Explorer can do its magic: it can automatically take a model and generate standalone test cases from it (#3). Spec Explorer is among the MBT tools that generate complete test suites from a model, including both the inputs to be provided to the SUT and the expected outputs (also called a test oracle).
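In Cord, this step typically amounts to one more machine that slices the explored model into a finite test suite. Continuing the illustrative accumulator example from above ("ShortTests" is one of the built-in test-generation strategies):

```
// Generate standalone test cases from the explored model (sketch).
machine AccumulatorTestSuite() : Main
{
    construct test cases where strategy = "ShortTests"
    for AccumulatorModelProgram()
}
```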

So the test cases automatically generated by Spec Explorer run independently of the model, in a standard unit test framework such as the one included in Visual Studio, or NUnit. The tests provide the test sequences (and data) to control (#4) the implementation. But they also observe (#5) the results coming back from the system, compare them with the outputs expected by the model, and issue a verdict (#6) of Pass or Fail, together with log information for diagnostics. Moreover, this execution of the test cases in lockstep with the SUT can be repeated to reproduce bugs and stepped through in a debugger to understand what went wrong.
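To give a feel for what such a generated test boils down to, here is a simplified sketch in plain MSTest form (the code Spec Explorer actually emits goes through its own test-manager plumbing; "Accumulator" is the hypothetical SUT class from the Cord script above):

```csharp
// Simplified sketch of a generated test case, in MSTest form.
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AccumulatorTestSuite
{
    [TestMethod]
    public void TestCaseS0()
    {
        // #4: control the SUT with the inputs the model chose
        Accumulator.Add(2);
        Accumulator.Add(3);

        // #5 and #6: observe the SUT's output, compare it with the
        // model's prediction, and issue the Pass/Fail verdict.
        Assert.AreEqual(5, Accumulator.ReadTotal(),
            "ReadTotal differs from the value predicted by the model");
    }
}
```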

Of course, such a verdict provides feedback about the implementation (#7). After all, finding bugs or gaining confidence in the implementation is the goal of the whole thing, right? Well, yes and no. A failing test case might also mean that the implementation is behaving correctly, but an error was made while creating the model! Or perhaps the model faithfully reflects the documented requirements, but the requirements were wrong to begin with! If this happens, you shouldn't freak out. One of the big advantages of MBT is that it makes test suite maintenance much easier compared with manually written test cases. Just take the result as normal feedback about the model or requirements (#8), fix the model to convey the behavior actually expected from the running system, re-generate your test cases, and voilà! You are back on track. The previously failing test cases should now pass. Keith, Wolfgang and I have shown an example of this workflow in our Channel 9 video.

I hope this both satisfies your curiosity about Model-Based Testing and feeds your curiosity about Spec Explorer, at least enough to install it and start using it!