Prompted by a response to an earlier post, I decided to stop procrastinating and get my thoughts out on the 'schools' of testing before others try to put a label on me.
I am not a big fan of the so-called 4 schools of testing. I don't like segregation of any form because it can lead to biased opinions, incorrect assumptions, and a general disregard for things that are "different." Most importantly, though, segregation stifles innovative thought, creative collaboration, and the ability to expand a person's knowledge and in-depth understanding of the 'system' as a whole.
Bret Pettichord stated that schools are based on a relationship or attraction rather than specific principles or doctrine, and that each school is defined by standards of criticism, exemplar techniques, and hierarchies of values. I think Bret's definition of 'school' is good, primarily because it dismisses the idea of basing a school on dogmatic teachings. But I still think the notion of 4 distinct 'schools' of testing condones an "us versus them" sort of debate. Bret lists the 4 schools as the Analytic, Factory, Quality, and Context-Driven schools.
Reviewing the descriptions of the different 'schools,' I don't particularly align myself with any single one. Instead of affinity to one 'school,' we should understand the values, techniques, and standards of all four. The testing community needs to embrace the diverse values and mores of these various schools of thought in order to extend the impact of testers and mature testing into a professional discipline in the field of computer science.
People are paramount to any successful software endeavor. It has been said that Microsoft is an innovation company. I think that is generally true of any software company, because building and testing software require individuals with an incredible amount of creativity, ingenuity, innovative prowess, and technical competence.
Repeatable processes that do not restrict an organization's ability to innovate significantly improve product quality and are a hallmark of mature organizations. I hate to compare software development with manufacturing, so I won't go on that tangent. However, implementing processes such as static code analysis, which detects certain classes of defects before code check-in, has resulted in higher quality and reduced costs.
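As a concrete illustration of the kind of defect class a static analyzer can catch before check-in, consider Python's classic mutable-default-argument bug, which tools such as pylint flag (as `dangerous-default-value`) without ever running the code. This is a generic sketch, not an example from any specific product:

```python
# A defect class static analysis catches before check-in:
# a mutable default argument is created once and shared across calls.
def append_item(item, bucket=[]):  # pylint flags this as dangerous-default-value
    bucket.append(item)
    return bucket

# The shared default silently accumulates state between calls:
first = append_item("a")   # returns ["a"]
second = append_item("b")  # returns ["a", "b"], not ["b"] as likely intended
```

Catching this in a pre-check-in analysis gate is far cheaper than debugging the intermittent failures it causes in production.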
Quality measurement is an important objective of any testing endeavor. Managers are no longer content with feel-good or best-guess opinions, and tracking bug-finding rates while waiting for a downward trend to declare good-enough quality is an immature and highly unreliable metric. No doubt software metrics are difficult; this is perhaps one of the hardest challenges the discipline of testing faces. The ability to quantify testing effectiveness and qualify quality in terms of meaningful metrics will become increasingly important.
Technical testing is not limited to academia; in fact, rigorous and technical testing is demanded in many software projects. I don't imagine many of us would feel very comfortable flying on airplanes whose avionics systems were not tested rigorously at a deep technical level. Microsoft, IBM, and others have been doing low-level 'technical' testing for years. Professional testers must be able to perform API-level tests, analyze code coverage results and design structural tests to efficiently exercise previously untested code segments, engage in formal code reviews (the single most effective method of early detection of security and other classes of defects), and review developer-designed unit tests.
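To make "API-level tests" and "structural tests" concrete, here is a minimal sketch using Python's standard unittest framework. The function under test, `parse_version`, is hypothetical and exists only for illustration; the point is that the test exercises the API contract directly, including the error path that coverage analysis would otherwise report as untested:

```python
import unittest

def parse_version(text):
    """Hypothetical API under test: parse 'major.minor' into an int tuple."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionApiTests(unittest.TestCase):
    def test_nominal_input(self):
        # API-level test: exercises the public contract, not the UI.
        self.assertEqual(parse_version("3.11"), (3, 11))

    def test_rejects_non_numeric(self):
        # Structural test: deliberately drives the error path that
        # code coverage analysis would flag as unexercised.
        with self.assertRaises(ValueError):
            parse_version("3.x")

if __name__ == "__main__":
    unittest.main()
```

Pairing tests like these with coverage results shows which code segments remain untested, which is exactly the structural-testing feedback loop described above.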
So, isolating oneself, or a group of people, into one 'school' simply doesn't make much sense. One of the greatest characteristics of professional testers is their ability to excel in diverse and dynamic situations, their skill in and knowledge of the values and exemplar techniques of all the schools, and their ability to objectively critique assumptions and assertions.