Not every bug is the same. A bug that frequently freezes an app gets more attention than an extra line of green pixels in a border. An embarrassing typo in a prominent feature is more urgent to fix than an inappropriate exception thrown by a misused API. Customer data loss or compromise requires instant attention, while a rare app crash can be fixed later. We adjust our approach to each bug based on its severity and impact.
Engineers get bug priorities, so why are they clueless about test priorities? We approach testing credit card transactions the same way we approach testing web page layout. Of course, we spend more time scrutinizing credit card flows—we’re not complete idiots—but we go through the same test planning, test case development, test automation, and test triage for both. That’s crazy.
Not all features are viewed by customers the same way. Some are critical to quality—like those protecting customers or ensuring a smooth customer experience. Some are what my dad calls “fruit salad”—nice to do well, but not as important. In these days of the Internet and cloud services, there’s no reason to continue testing everything the same way.
It used to be that we got one shot at releasing a quality product. Even then, our packaged product releases would have numerous issues that wouldn’t receive attention until the first service pack was released six to nine months later.
These days, our packaged products are much better. They are of higher quality from the start thanks to instrumentation, early bug fixes, and integration of only those features that are complete and tested using practices like Scrum and feature crews. Packaged products now go through multiple layers of internal and external previews and betas that resolve most issues before final release. These products also integrate with services that can be updated and improved after release.
Our websites and cloud services are even more flexible. Many of them update monthly or even daily. This rapid iteration and release cadence unveils new approaches to quality and testing that previously were not practical. But we have to break with the past before we can embrace our future.
Customers aren’t shocked when a beta is less than perfect. Some imperfection here and there is fine, as long as those imperfections don’t compromise customer trust and don’t disrupt the overall customer experience.
Customers take the same “beta” attitude toward the Web in general. Connectivity is shoddy and browsers are finicky, so customers expect Web properties and applications to be a little flaky. That’s fine as long as websites add value, perform quickly, and protect customers from hackers or misuse.
Customers are particularly enamored with betas and Web properties that improve regularly in response to customer needs—spoken and unspoken. You see this all over forums and customer feedback.
Therein lies the opportunity to change our approach to quality and testing for betas, websites, and cloud services.
We can’t compromise on security, privacy, or reliability (trustworthy computing areas), so we must continue their thorough, in-depth, pre-release testing. The overall customer experience must be delightful and compelling, so end-to-end scenario testing is critical.
Therefore, for rapidly updated products and services, the test team should focus its in-depth work on trustworthy computing and the overall customer experience—including penetration testing, injection testing, stress testing, and end-to-end scenario testing. All other areas can rely on design and code reviews, code analysis (like compilers and FxCop®), unit and component testing, and system sanity checks.
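To make the injection-testing half of that split concrete, here is a minimal sketch of what an in-depth injection test might look like. The `validate_order_id` helper and the payload list are hypothetical, invented for illustration; the point is that trust-critical input paths get adversarial test cases, not just happy-path checks.

```python
import re

def validate_order_id(order_id: str) -> bool:
    """Hypothetical validator: accept only short alphanumeric order IDs."""
    return bool(re.fullmatch(r"[A-Za-z0-9]{6,12}", order_id))

# Classic hostile payloads an injection test pass would throw at the validator.
INJECTION_PAYLOADS = [
    "1; DROP TABLE orders--",
    "' OR '1'='1",
    "<script>alert(1)</script>",
    "../../etc/passwd",
]

def test_rejects_injection_payloads():
    for payload in INJECTION_PAYLOADS:
        assert not validate_order_id(payload), payload

def test_accepts_well_formed_ids():
    assert validate_order_id("ABC123XYZ")

test_rejects_injection_payloads()
test_accepts_well_formed_ids()
print("all injection checks passed")
```

The lighter-weight areas would get only the unit-test half of this: the `test_accepts_well_formed_ids` style of check, written by the developer, with no dedicated adversarial pass.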
Yes, this puts more responsibility for initial quality on developers. That’s good. Testers should be focused on critical areas and overall quality assurance, and developers should take more direct responsibility for quality. This change is not a big risk for betas and Web properties because customers prefer that we be responsive to what they truly care about rather than perfect about everything. Now we can ship more frequently and actually experiment with different solutions for customers—just like Amazon has been doing for years.
Microsoft old-timers might ask, “But what about those nasty bugs that take so long to find?” That’s why we differentiate between flaky issues (Heisenbugs) and repeatable issues (Bohrbugs). Heisenbugs are the nasty ones because they are so unpredictable. However, they are also the issues that customers dismiss and that websites and cloud services can easily retry and resolve.
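The "retry and resolve" point is worth a sketch. Services routinely paper over Heisenbug-style transient failures with retry loops. This is a generic illustration, not any particular product's code; `TransientError` and `flaky_fetch` are invented names, and the exponential backoff with jitter is one common policy among several.

```python
import random
import time

class TransientError(Exception):
    """Stands in for a timeout or other intermittent, Heisenbug-style failure."""

def retry(operation, attempts=3, base_delay=0.1):
    # Retry with exponential backoff and jitter; re-raise on the final failure.
    for attempt in range(attempts):
        try:
            return operation()
        except TransientError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Demo: a hypothetical operation that fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("intermittent timeout")
    return "ok"

print(retry(flaky_fetch))  # → ok
```

The customer never sees the two failed attempts, which is exactly why these unpredictable issues matter less for frequently updated services than they did for packaged products.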
The terms Bohrbug and Heisenbug date back to Jim Gray’s 1985 paper, "Why Do Computers Stop and What Can Be Done About It?"
So instead of dragging every feature through test planning, test case development, test automation, and test triage, we focus on being responsive to customer needs, protecting customer trust, and assuring an overall great experience. We ship frequently with constantly improving customer solutions that are well instrumented, so the product team can analyze usage patterns and continue the virtuous cycle.
It’s the old-timers again: “Wait a minute! You can’t be serious about releasing untested code!” First of all, it’s not untested. It’s design and code reviewed, code analyzed, unit and component tested, and sanity checked. The trustworthy computing areas are fully tested pre-release and the end-to-end scenarios are validated. That’s more testing than most of our competitors do.
However, the old-timers are right. There is still significant functionality being released without the traditional heavily planned, managed, and executed test pass. Shouldn't we be worried? No, we should be embarrassed to think all that extra overhead was worth it. Our test labs aren't the same as customer systems. As I wrote last year, "There's no place like production."
We roll out changes to only part of the whole customer base at a time through betas or exposure control. We validate that features are working in real-world scenarios. We roll back if there’s a serious issue. And we focus on continuously improving, using the real customer feedback and usage data that would otherwise be guesswork.
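Exposure control usually comes down to a stable bucketing function: hash the user and feature into a bucket, and enable the feature only for buckets below the current rollout percentage. This is a minimal sketch of the idea, with invented names (`in_rollout`, the feature string); real systems layer targeting rules and kill switches on top.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in a bucket (0-99) for a feature.

    The same user always lands in the same bucket, so decisions are
    consistent across requests, and raising `percent` only adds users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# Demo: at a 10% rollout, roughly a tenth of users see the feature.
exposed = sum(in_rollout(f"user-{i}", "new-checkout", 10) for i in range(10000))
print(f"{exposed} of 10000 users exposed")
```

Rolling back is then just setting `percent` to zero; the instrumentation on the exposed slice supplies the real usage data the rest of the paragraph describes.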
We can’t change our testing approach for all products. The change works best for betas, websites, and cloud services. However, we owe it to our customers to utilize modern, feedback-based, rapid iteration methods; grow our testers into quality assurance, trustworthy computing, and experimentation experts; and put more responsibility on our developers for quality.
It’s time Microsoft joined the modern era of software development. With all our years of enterprise experience, we can lead the industry with quality, while delighting our customers with rapid updates that address their every spoken and unspoken need.