Modern software developers bear little resemblance to our forebears. We’ve forsaken their jackets and ties in favor of hoodies and t-shirts. We’ve quit their offices and cubicles to occupy hacker hostels and corner cafés. They had floppies and sneakernet. We have GitHub. They printed and stored; we share and post. They worked for big companies with distribution channels. The world is our distribution channel. Where, with all these changes, do we stand with software testing?
Let’s face it, the 1990s were the golden age of software testing. As an industry we were still figuring things out. Global or local data? File and variable naming conventions? Time constraints versus memory utilization? Library, procedure, or inline code? Use or reuse? And the granddaddy of them all: how do we fix bugs that occur in the field when the only way to get bug reports is by phone or email, and the only way to update our software is by mailing a new set of floppies? We weren’t very experienced at writing code, and once that code shipped, fixing it was a really, really painful proposition.
No wonder we put so much time and effort into testing. We had no choice but to double check developers’ work and try to ensure that as few bugs as possible made it into the released product. Like I said, it was a golden age for software testers. Small chance of getting it right, large chance of expensive rework. Testers were the insurance policy no company could afford to decline.
But then the world changed. First it was the web, which made software updates a small matter of refreshing a web page. All those floppies were F5-ed into oblivion. And then along came mobile apps, which could collect their own user telemetry, create their own failure reports, and prompt users to update when necessary. At the same time, the risk of shipped defects was decreasing dramatically. So-called waterfall development models were replaced with agile methods that produced better code out of the box. A collective intelligence around how to code emerged, and the body of knowledge of coding practices matured. The art of coding has become downright pedestrian.
Quality is no less important, of course, but achieving it requires a different focus than in the past. Hiring a bunch of testers will ensure that you need those testers. Testers are a crutch. A self-fulfilling prophecy. The more you hire, the more you will need. Testing is much less an actual role now than an activity that has blended into the other activities developers perform every day. You can continue to test like it’s 1999, but why would you?
You can’t test in quality, but you can code it in.
And at the tail end of the lifecycle, testing can now involve users at a level that it never could in the past. Who, after all, is the better judge of a bug: the user who is honestly trying to use the software to get work (or pleasure) done or a tester who has a preconceived (and unavoidably biased) notion of how the software is supposed to work? Why must a tester serve as the intermediary between the developer and the user when the user is only a click away? Can you imagine the impact on quality when developers and users have no middleman getting in their way?
Quality, and therefore testing, is not something separate from software development unless your software is going into a nuclear power plant, a medical device, or an airplane, where it is difficult (for now) to recall post-deployment. For the vast majority of app development on this planet, software testing is an activity within the development process, and it keeps happening after the software is released. Modern testing is an activity and doesn’t require a separate role to perform it. It’s time to bring quality into the 21st century, where testing is such an integral part of software development that you’ll often forget you are doing it because it has become so familiar. Hey, wouldn’t that be awesome … that testing gets done without making such a big fuss about it?
This is not your father’s application development process. It’s yours. Own it.
Hey James, glad to hear you talking about testing and quality again!
I will point out that even airplanes get Tested in Production, it’s called a test flight. :-)
What you are saying is very well aligned with the message Ken Johnston is delivering on EaaSy (Everything as a Service), most recently at ALM forum (www.alm-forum.com/.../pr_test). Myself, I’ve been preaching similar gospel about Data-Driven Quality (www.setheliot.com/.../your-path-to-data-driven-qualityalm-forum-2014) which focuses on production or near-production
Not to monopolize the comments, but perhaps even better thematically aligned with your message, I cautioned against returning to the '90s with our testing practices: www.softwaretestpro.com/.../5683
I tend to disagree with the viewpoint that 'testers' may not be needed and that we can leave it all up to the developers. That is letting the fox into the henhouse. Due to the time pressures of our modern era (and even agile methods and practices are complicit), the problem of too little testing and defects escaping into the wild is still a big one. Unfortunately, testing is always the last thing done and gets shorted when push comes to shove.
Agile alone is not the cure; there are too many companies and teams that claim they're agile when in fact they are more fragile and 'wagile' (still doing things in a waterfall-style model with super-short cycle times). You need highly disciplined and collaborative teams to do justice to an 'agile' approach. Iterative/spiral models (RUP, MSF) work well (agile is essentially one of these with very short iterations), but the key to any of them is the 'team'. In both, the whole team starts at the same time, and while the team's emphasis changes at different points, the end goal of delivering a working product is the same throughout.
Now the influence of agile has had positive impact on development and helping to 'bake' in the testing and 'quality' of the code. Unit test harnesses and TDD have really been great, but they have to be used all the time. Developers have always had this as a task to perform, just too many have been lazy about it.
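The unit-test harnesses this comment praises don't have to be heavyweight. Here is a minimal sketch using Python's built-in `unittest` module; the `apply_discount` function is hypothetical, invented purely to have something to test:

```python
import unittest

def apply_discount(price: float, pct: float) -> float:
    """Return price after a percentage discount (hypothetical example)."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        # 10% off $50.00 should be $45.00
        self.assertEqual(apply_discount(50.00, 10), 45.00)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(100.00, 0), 100.00)

    def test_rejects_bad_percentage(self):
        # The harness also documents the error contract
        with self.assertRaises(ValueError):
            apply_discount(50.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Running `python -m unittest` against a file like this is exactly the kind of always-on developer task the comment says too many have been lazy about.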
Back in the mainframe and mid-range days, developers had to do desk checking and unit testing of their code before submission into the system. They worked with the business people to do 'system' testing. With the advent of the PC generation and cowboy programming, the task of unit testing fell by the wayside. Thus the 'professional' tester arose in the mid-'80s to early '90s.
Now what we need is a more cooperative and collaborative effort between Development and Test. A 'two sides of the same coin' approach. This allows for a multi-layer approach with checks and balances that, over the long run, will help make a better product and lower the risk of defects.
It's like a football team; the developers are the quarterbacks and testers are the offensive line. You need them to help protect the quarterback.
Isn't this based on the assumption that development is far simpler and smoother than it actually is? This is a fantasy from a purely idealistic world. The risk of releasing bugs may be reduced a hundredfold, but watch how bugs in any software break users' trust and respect.
Saying that testing can become just an activity for programmers is like saying dentists should pull their own teeth. There are a myriad of reasons why that notion is unrealistic and impractical.
I agree with Jim. Tasking developers with doing the testing themselves will just mean there's no testing at the end, because of time pressure.
And management doesn't want to invest in testing either. I had a project that took about a year to develop. When I tried to add a week for writing tests, they said "no" because no budget had been assigned for it. So it's an unrealistic burden for most of us here.
I love the sentiment of this post, but struggle with some of the execution practicalities. With the right quality culture and leadership, teams are more than capable of testing more and earlier - reducing the need for independent testing as part of the process. But the right culture and right leadership are key.
Most of the points raised seem to be geared towards delivery of software as a service, where the environment is tightly controlled and updates are almost seamless. Where this breaks down a little for me is in the rest of the software industry, where we are delivering desktop applications or enterprise software for customer configuration and deployment. I can't get my head around an approach where it is OK to put the burden of bug detection on the customer's shoulders in any of those environments.
Which leaves us requiring some intermediary representing the customer - aka test as a role.
I would love to live in a world where all we deliver is a service others consume, but there are huge chunks of the software industry where this isn't true, and won't be for a while. I would love to hear any advice, James, on how we can apply some of your thinking to these areas of software development.
Hello James, I thought you left the testing field, but you still have thoughts around testing. Thanks for your post.
One thing I'd like to call out here is the software distribution mechanism. For web applications and web service applications, the software company has total control over what customers get and experience. Even for the Chrome browser, Google updates the software for the user whenever it can. By having this control, the software company can utilize users as testers. Of course, the company should have a super-detailed monitoring system tracing every single user activity, a rollback mechanism, different versioning (in the web service case), and so on. DevOps developed in this area because of this distribution control.
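The monitoring-plus-rollback control this comment describes can be sketched as a simple canary gate: ship a new version to a small slice of users, compare its crash rate against the stable baseline, and roll back automatically on regression. All function names, field names, and thresholds below are illustrative assumptions, not any real company's pipeline:

```python
# Hypothetical canary-release gate. Assumes crash and session counts are
# already aggregated per version by some telemetry backend.

def crash_rate(crashes: int, sessions: int) -> float:
    """Crashes per session, guarding against division by zero."""
    return crashes / sessions if sessions else 0.0

def canary_decision(baseline: dict, canary: dict, tolerance: float = 1.5) -> str:
    """Return 'promote' if the canary's crash rate stays within
    `tolerance` times the baseline's rate, else 'rollback'."""
    base = crash_rate(baseline["crashes"], baseline["sessions"])
    new = crash_rate(canary["crashes"], canary["sessions"])
    return "promote" if new <= base * tolerance else "rollback"

# Illustrative numbers: the candidate crashes in 0.9% of sessions
# versus a 0.012% baseline, so the gate rejects it.
stable = {"crashes": 12, "sessions": 100_000}
candidate = {"crashes": 45, "sessions": 5_000}

print(canary_decision(stable, candidate))  # rollback
```

The point of the sketch is the comment's own: with this kind of distribution control, the users in the canary slice effectively become the testers, and the gate limits the blast radius.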
I don't think mobile applications will give the software company as much control as web applications do. There will be opt-out options, and prompting for updates all the time will annoy users. If mobile becomes the next big thing, then every update should be carefully tested, since it will be hard to roll back. It will not be like delivering CDs, but a similar hassle will follow for updates.
Finally, I think it's totally up to companies whether to hire testers or not. Maybe testers will go away, just as the waterfall model disappeared while the agile model produced more and more success stories. The software industry will eventually decide whether testers are needed. I don't think a few individuals or companies can change that. What you say might work at Google or Microsoft, but the software industry is far bigger than Google and Microsoft.
I am wondering what happened to your realization in 2008 that not all things could be automated. Maybe Microsoft, with its slower, more annoying deploys, belongs on the exception list. Either way, there are non-functional components that can't be completely automated at present. Unless you are suggesting that things like usability/design, security, localization, documentation, scalability, compatibility, and performance are all in the area of A/B testing or part of development (automation)? I hate to mention that even the F5 key fails for caching, and that unit testing doesn't tell you everything you need to know.
I think testing is changing into something that it wasn’t in the 90s, and that isn’t a bad thing, but that doesn’t mean testing is dead as a profession. Automatic bug reports and metrics tell you about crashes and perhaps something about the usage of users, but how do you know if it is usage patterns or bugs? A/B testing tells you which is preferred, but it doesn’t tell you if either is actually right. A/B testing on Amazon.com where A ships with 10% off will obviously be preferred to B, but might be wrong functionally. Automated reports tell you nothing unless discount rate is a metric you capture and isn't affected by said functional change. Millions of dollars of losses at the end of the month seems too late to find out. Who finds this out? Customers of course, but they aren’t going to tell Amazon.
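The comment's point, that an A/B preference win can hide a functional regression, can be illustrated with a guardrail check alongside the preference metric. Every name and number below is made up for illustration; it is a sketch of the idea, not a real experiment framework:

```python
# Hypothetical A/B readout: the variant "wins" on conversion, but a
# guardrail metric (revenue per order) catches the functional problem
# the preference metric misses.

def evaluate(variant: dict, control: dict, guardrail_drop: float = 0.05) -> str:
    """Ship the variant only if conversion improves AND revenue per
    order does not fall more than `guardrail_drop` (5% by default)."""
    lift = variant["conversion"] - control["conversion"]
    rpo_change = (variant["revenue_per_order"] - control["revenue_per_order"]) \
        / control["revenue_per_order"]
    if lift > 0 and rpo_change >= -guardrail_drop:
        return "ship"
    return "hold"

control = {"conversion": 0.030, "revenue_per_order": 48.00}
# Variant A accidentally applies a 10% discount: users prefer it,
# but revenue per order drops 10%, past the 5% guardrail.
variant_a = {"conversion": 0.036, "revenue_per_order": 43.20}

print(evaluate(variant_a, control))  # hold
```

As the comment argues, this only works if the damaged quantity happens to be a metric you thought to capture; without the guardrail, the preference signal alone would have said "ship."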
I should note that I do agree with your sentiment, testing is changing and the test profession might shrink in some areas: www.stickyminds.com/.../why-testers-need-get-used-change . But calling it dead is premature I think.
Testing has definitely changed since the '90s, but it is far from dead or unnecessary. In a true agile environment where sprints are as short as a week, most companies work on very thin margins and can't risk even a few hours of degraded service. The downside of a bad customer experience can have a snowball effect: a user who experiences poor performance on Monday may not bother to come back on Tuesday when the software is fixed. Users are pickier now, and when an app doesn't perform as expected, they just install the competing app and are back to work a few seconds later. In such an environment, companies can't afford to use their customers as their testers. They have to get it right, or as close to right as possible.
Imagine a publisher saying they don't need editors anymore because their authors use spell check, and their big data solution tells them whenever a customer finds a paragraph confusing. The author can just rewrite the paragraph as the customer reads it! Nonsense. Customers don't put up with that kind of experience, especially with so many competing choices available. Software developers need test to find the issues first, to tell them what their own tests missed, what data their BI solution is failing to capture, and what the feature looks like to the end customer.
Test needs to change, and is changing. Testers need to be more self-driven, more professional, and more knowledgeable than before. They need to be able to pivot faster and respond to product changes on an hourly basis. They need a new, less jaded perspective on the customer experience. Most of all, as Dave said earlier, the culture and the leadership need to change. I think everyone knows that the old "Test vs. Dev" paradigm that ruled the '90s was a bad model, but it's a lethal poison in an agile environment. Leadership must foster a culture of respect and a passion for shipping.
Today's successful tester is a combination of many skills. The expertise required is a kaleidoscope of differing strengths and perspectives. A tester must at once understand the code repo architecture, the details of building a correct environment, product vision, customer outlook, deployment process, business intelligence, planning, international considerations, and product security, while bringing clear and concise communication and people skills. And they need to be able to find bugs, too. It's a tall order, a role that fewer and fewer people seem able to fill. But it doesn't mean the discipline is dead.
Thank you for the article.
I happened to work for a medical device company.
Should we stick with waterfall?
Is there a better way to recall our product and fix the issues for our customers?
Is there a better way to utilize Agile method so that we can have a more balanced approach?
Thanks again for your insight.
I enjoyed most of your books and YouTube videos.