One of the more interesting things about working at Microsoft is the close collaboration the Microsoft Research (MSR) group has with academic researchers at universities around the world.  Today I was able to go to a talk from one of the researchers I most admire in the software engineering field: Prof. Andreas Zeller of Saarland University in Germany.  Prof. Zeller has a long history of research into bugs - what causes them, how to detect them and how to fix them.  His book "Why Programs Fail" is one of very few books to cover this area.

Prof. Zeller's talk was on using mutation testing to understand test suite effectiveness.  You can find out more, including a pre-recorded version of pretty much the same talk I attended, here:

http://www.st.cs.uni-saarland.de/mutation/

While I had some questions about the mutations that Prof. Zeller's team used, and my own concerns about how well test teams manage to model their test suites after users' behaviours, there is no doubt that this research area is very interesting.  The team is as yet unable to draw any conclusions about the effectiveness of these techniques compared to existing coverage metrics.
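For anyone unfamiliar with the idea, here is a minimal sketch of how mutation testing works in principle.  The function, the mutant, and the tiny test suite are all my own invented example, not Prof. Zeller's actual tooling: a mutation tool injects a small syntactic change (the "mutant") and checks whether the test suite notices.

```python
# A toy function under test.
def max_of(a, b):
    return a if a > b else b

# A hand-written "mutant": the comparison operator is flipped,
# simulating the kind of small change a mutation tool would inject.
def max_of_mutant(a, b):
    return a if a < b else b

def run_suite(fn):
    """Return True if every test case passes against the given implementation."""
    cases = [((3, 1), 3), ((1, 3), 3)]
    return all(fn(*args) == expected for args, expected in cases)

# The original passes; the mutant fails, so this suite "kills" the mutant.
print(run_suite(max_of))         # True
print(run_suite(max_of_mutant))  # False
```

The fraction of mutants a suite kills is the signal: a suite that lets many mutants survive probably isn't checking its outputs very carefully, however high its coverage number is.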

My own perspective here is that anything that gets people thinking more about test effectiveness, and less about specific numbers for metrics like statement coverage, the better.  I'm continually asked "what coverage number should I shoot for" and "how much coverage does Microsoft aim for".  Chasing a number is a fruitless task in my opinion.  Coverage tells you one thing and one thing only - what code was not exercised at all by your test bed.  You then need to make a judgement call about what to do about it.  Is the functionality important?  Is it really dead code?  All of these go into the mix when deciding whether to do the work and write new tests.
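To make that concrete, here is a contrived example of my own showing why a coverage number alone tells you so little: a buggy function can reach 100% statement coverage while the bug goes completely undetected.

```python
def safe_ratio(a, b):
    # Intended behaviour: return 0 when b is 0.
    # Deliberate bug: the guard tests a instead of b.
    if a == 0:
        return 0
    return a / b

# These two calls together execute every statement in safe_ratio,
# so a statement-coverage tool would report 100%...
assert safe_ratio(6, 3) == 2
assert safe_ratio(0, 5) == 0

# ...yet the bug is still there: this call raises ZeroDivisionError.
try:
    safe_ratio(1, 0)
    print("no error")
except ZeroDivisionError:
    print("ZeroDivisionError - bug undetected by 100% coverage")
```

The coverage report flags nothing; only a test that actually exercises the intended b == 0 behaviour would catch it.  That judgement - which behaviours matter - is exactly what the number can't give you.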

Whatever your own opinion on these questions, Prof. Zeller's work continues to be thought provoking, so please go check it out.

John