It never ceases to amaze me that every time we see a calamity involving software, the immediate reaction of the sensationalist media types and other generally misinformed people is to blame inadequate testing. Recently, it seems that Joel Spolsky fell victim not only to rumors and misinformation, but also to the lunacy of blaming the testing organization for the rather lackluster adoption of Windows Vista by individual consumers.
Sales of Windows Vista are slow for a myriad of reasons. The UI is a radical departure (which most design experts applaud), the security features are annoying (which most security experts applaud, yet many individual consumers dislike), the machine requirements are steep (meaning many would have to purchase new hardware), there are performance issues, etc. But inadequate testing or too much automation? Pah...leaze! I usually like reading Spolsky's blog, but I must admit this is the first time I can recall him using unsubstantiated rumors and innuendo to reach a conclusion as faulty as the one suggested in his post. Too bad...
Joel's attack was not about testing per se, but more about his assumption that Windows Vista suffered because of too much test automation. He based his conclusion on information from a blog noted for its petty bitching about things inside of Microsoft. (Let me say this...there are a lot of things about MS and corporate policy that I don't like. Bitching about those things is easy and usually non-productive. But working with the diversity of people who make up one of the most influential software companies of our time to effect change is difficult and challenging. I mostly choose the latter route.) And the non-productive bitch-fest blog based its disinformation on the observations in Chris Pirillo's blog post from May of 2006, which exposed several issues he considered critical.
This is not to say that Chris' feedback was ignored or not valued (I also enjoy reading Chris' blog from time to time). Personally, I would like to think we investigate a lot of the feedback we get from our customers (we simply don't act on all of it because...well, we have bigger problems to solve than allowing users to change the color of the close button).
Spolsky stated, "The old testers at Microsoft checked lots of things: they checked if fonts were consistent and legible, they checked that the location of controls on dialog boxes was reasonable and neatly aligned, they checked whether the screen flickered when you did things, they looked at how the UI flowed, they considered how easy the software was to use, how consistent the wording was, they worried about performance, they checked the spelling and grammar of all the error messages, and they spent a lot of time making sure that the user interface was consistent from one part of the product to another, because a consistent user interface is easier to use than an inconsistent one. And none of those things could be checked by automated scripts."
Unfortunately, I guess Joel hasn't remained abreast of advances in test automation and perhaps assumes that automation is still primarily record/playback or simple rote scripts in some proprietary or limited scripting language. As software becomes more complex and the capabilities of test automation improve, the fact is that quite a lot can be done with automation.
For example, checking the consistency of fonts in each dialog throughout a product is more effectively and efficiently accomplished via automation than by human eyes. Automation can also detect differences in the alignment of controls on dialog boxes, and can detect clipping and truncation of controls or of text within controls more efficiently than humans can (and we actually have empirical data to prove it). Spelling errors and politically sensitive words are also more efficiently detected via automation. Most testers and developers that I know are not experts in English grammar, and I would never hire an English language major as a tester just to test grammar. Instead, we made our documentation/content experts responsible for messages and other 'string' content, thus pushing quality upstream!
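To make the truncation-detection point concrete, here is a minimal sketch of the kind of check automation can run across every control in every dialog. It is purely illustrative, not Microsoft's actual tooling: the `Control` class, the `CHAR_WIDTHS` font metrics, and the padding value are all hypothetical stand-ins for what a real harness would query from the UI framework and the font renderer.

```python
# Hypothetical sketch: flag controls whose label text cannot fit
# inside the control's bounding box (i.e., it would be truncated).
# CHAR_WIDTHS approximates per-character pixel widths for an
# imaginary UI font; a real harness would ask the font engine.
from dataclasses import dataclass

CHAR_WIDTHS = {c: 7 for c in "abcdefghijklmnopqrstuvwxyz"}
CHAR_WIDTHS.update({c: 9 for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"})
CHAR_WIDTHS[" "] = 4
PADDING = 6  # assumed internal padding, in pixels, on each side

@dataclass
class Control:
    name: str
    text: str
    width: int  # bounding-box width in pixels

def rendered_width(text: str) -> int:
    """Approximate the pixel width of the rendered string."""
    return sum(CHAR_WIDTHS.get(c, 8) for c in text)

def find_truncated(controls):
    """Return names of controls whose text overflows the box."""
    return [c.name for c in controls
            if rendered_width(c.text) + 2 * PADDING > c.width]

dialog = [
    Control("ok_button", "OK", 60),
    Control("apply_button", "Apply these settings now", 90),
]
print(find_truncated(dialog))  # ['apply_button']
```

Run against every dialog in a build, a check like this catches truncation in minutes across all localized versions, which is exactly the kind of tedious, exhaustive inspection where automation beats human eyes.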
(Also, as a side note...the majority of the testers who worked on Windows Vista were the "old testers at Microsoft," doing the same work they did shipping Windows XP.)
Isn't it funny that when certain things don't meet our personal expectations (Vista sucks), or we are fearful of change (test automation is evil and we should continue to hire hordes of non-technical people to bang on keyboards), it is so easy to jump to faulty conclusions, base assumptions on unfounded hearsay and rumor, constantly utter petty, unproductive complaints, and generally play the victim. Of course, being a victim gets you attention from others, and useless bitching and wild speculation are way more fun than facts and research...besides, it makes people laugh, and when people laugh they are happy, and the role of the tester is to make people happy...right?