I'm a software development engineer, and I like to write code. I remember when I got the job offer from Microsoft, I told Professor Khurshid, my Master's degree supervisor, that "I will just go and automate things in a couple of months, then enjoy the free pay". Yea, right. (Btw, do you realize "yea, right" is a positive+positive=negative phrase?) 1) I didn't automate everything, 2) the pay was not free.

Let's be realistic: these days I simply don't believe we can automate everything. To be honest, I don't think we can really automate that much. Consider the following:

  1. We usually start test development at about the same time as product development, or later. As the product developers implement and re-implement their code, they usually start to deviate from the original spec. That is the healthy thing to do when done right, but it does put pressure on SDETs to track those changes in the tests. In short, we can't react as quickly as manual testing can. (Let's talk about test-driven development another day. But I believe even in that case, the tests have to change accordingly when the design or implementation of the product changes.)
  2. I read in some research paper years ago that about 3 lines of test code are required to test 1 line of product code. (Correct me if I'm misquoting here.) We simply are not staffed to churn out that many lines of test code if we want the code to be of good quality. Microsoft (Windows) probably already has an above-average SDET:SDE ratio. (At least in our team, over the course of 10-20 years we did manage to churn out more code than the devs, but not all of it is the best code in the world.)
  3. It's not always cheaper to automate a test. Some people think that once you get a test automated, it's all goodness. Aside from the two points above, the truth is that test code is just like product code: it needs maintenance and bug fixes. And a lot of cases that are hard to automate are cheaper to test manually. For example, I once needed to test Bluetooth link loss, basically simulating what happens when a Bluetooth headset goes out of range. I investigated a few different solutions: a Faraday cage with a motor-controlled door to cut off the wireless transmission, a toy train that carries the device away from the PC, and a simulator based on the Device Simulation Foundation (DSF) that completely simulates the Bluetooth device stack. The first two are fun but hard to deploy widely in the test labs, and the last one would simply take 2 man-years to develop, while I had 6 weeks to write this test and many, many others for the product. Taking the device and walking away from the PC to test once in a while doesn't sound like a bad solution after all. (See the sketch after this list for a taste of what even a bare-bones software simulation involves.)
  4. Last but not least, the software we ship is for human beings, not computers. We need human beings to look at the real experience.
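On that Bluetooth example: here's a minimal sketch in Python of what even the simplest software-only link-loss simulation looks like. To be clear, this is not DSF code (the real thing would be native code against the driver stack), and FakeBluetoothLink and HeadsetClient are names I made up purely for illustration.

```python
import unittest


class FakeBluetoothLink:
    """Software-only stand-in for the radio link to a headset."""

    def __init__(self):
        self.connected = True

    def drop(self):
        # Simulates the headset walking out of range.
        self.connected = False

    def restore(self):
        # Simulates the headset coming back into range.
        self.connected = True


class HeadsetClient:
    """Toy model of the code under test: it should notice a lost
    link and recover once the link comes back."""

    def __init__(self, link):
        self.link = link
        self.state = "connected"

    def poll(self):
        if not self.link.connected:
            self.state = "reconnecting"
        elif self.state == "reconnecting":
            self.state = "connected"
        return self.state


class LinkLossTest(unittest.TestCase):
    def test_reconnect_after_link_loss(self):
        link = FakeBluetoothLink()
        client = HeadsetClient(link)

        # Fake the headset going out of range, then coming back.
        link.drop()
        self.assertEqual(client.poll(), "reconnecting")

        link.restore()
        self.assertEqual(client.poll(), "connected")


if __name__ == "__main__":
    unittest.main()
```

Even this toy version needs its own scaffolding and covers exactly one happy path. A real simulator has to model pairing, audio routing, timing, and dozens of failure modes, and that's where the 2 man-years go.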

I'm sure there are many other reasons, and pros and cons to manual testing. There are still folks who think "thou shalt automate". But to ship a good product, a healthy combination of automation and manual testing is needed, and I think the right ratio really depends on the specific feature area you are in. Here you go, my first rant (blog) about my work.