In a blog post, Brent Jensen relays a conversation he had with an executive mentor. In this conversation, his mentor told him, "Test doesn't understand the customer." When I read this, my initial reaction was the same as Brent's: "No way!" If test is focused on one thing, it is our customer. Then, as I thought about it more, my second reaction was also the same as Brent's: "I agree. Test no longer cares about the customer." We have come to care about correctness over quality.

Let me give an example. This example goes way back (probably to Windows XP), but similar things happen even today. We do a lot of self-hosting at Microsoft. That means we use an early version of the software in our day-to-day work. Too often, I am involved in a conversation like the following.

Me: I can't use the arrow keys to navigate a DVD menu in Windows Media Player.

Them: Yes you can. You just need to tab until the menu is selected.

Me: That works, but it takes (literally) 20 presses of the tab key and even then only a faint grey box indicates that I am in the right place.

Them: That's what the spec says. It's by design.

Me: It's by bad design.

What happened here? I was trying out the feature from the standpoint of a user. It wasn't fit for my purpose: if I were disabled, or if my mouse were broken, I couldn't navigate the menu on a DVD. The person I was interacting with was focused too much on the correctness of the feature and not enough on the context in which it would be used. Without paying attention to context, quality is impossible to determine. Put another way, if I don't understand how the behavior feels to the user, I can't claim it is high quality.

How did we get to this point? The long version is in my post on the history of testing. Here is the condensed version. In the beginning, developers made software and then just threw it over the wall to testers. We had no specifications. We didn't know how things were supposed to work, so we just had to try to use the software as we imagined an end user would. This worked great for finding issues, but it didn't scale. When we turned to test automation to scale the testing, automation required us to spell out the expected cases in advance, which increased the need for detailed specifications from which to drive our tests.
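To make that shift concrete, here is a minimal sketch, in Python, of what a spec-driven automated check might look like. Everything in it is hypothetical (there is no real MediaPlayerUI API); the point is that the test passes as long as the behavior matches the spec, and the 20 tab presses that frustrate a real user never register as a problem.

```python
# A minimal, hypothetical sketch of a spec-driven check. MediaPlayerUI
# and its tab order are invented for illustration, not a real API.

class MediaPlayerUI:
    """Toy model of a player's keyboard focus order, as a spec might define it."""

    # Per the (hypothetical) spec: 20 other controls precede the DVD menu.
    TAB_ORDER = [f"control_{i}" for i in range(20)] + ["dvd_menu"]

    def __init__(self):
        self.focus_index = 0  # focus starts on the first control

    def press_tab(self):
        # Move focus to the next control, wrapping around at the end.
        self.focus_index = (self.focus_index + 1) % len(self.TAB_ORDER)

    @property
    def focused_control(self):
        return self.TAB_ORDER[self.focus_index]


def test_dvd_menu_reachable_by_tab():
    # Verifies exactly what the spec says, and says nothing about whether
    # 20 tab presses is an acceptable experience for a real user.
    ui = MediaPlayerUI()
    for _ in range(20):
        ui.press_tab()
    assert ui.focused_control == "dvd_menu"
```

The test is "correct" in the narrowest sense: the behavior matches the specification, which is precisely why it can never tell us the feature is by bad design.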

Instead of a process like this:

[diagram: the original testing process]

We ended up with a process like this:

[diagram: the specification-driven testing process]

As you can see in the second, the perspective of the user is quickly lost. Rather than verifying that the software meets the needs of the customer, test begins to verify that the software meets the specification. If quality is indeed fitness for function, then the needs of the user are necessary for any determination of quality. If the user's needs could be completely captured in the specification, this might not be a problem, but in practice it is.

First, I have yet to see a specification that totally captures the perspective of the user. At best, that perspective is used only as an input to the specification. As changes happen, ambiguities are resolved, or the feature is scoped (reduced), the needs of the user are forgotten.

Second, it is impossible to test everything. A completely thorough specification would be nearly impossible to test in a reasonable amount of time. There would be too many combinations. Some would have to be prioritized, but without the user perspective, which ones?
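To get a feel for the scale, here is an illustrative back-of-the-envelope calculation. The numbers are invented, not drawn from any real specification: a feature with just ten independent settings, four values each, already produces over a million combinations.

```python
# Illustrative numbers only: a feature with 10 independent settings,
# each with 4 possible values. No real product is being modeled.
values_per_setting = 4
settings = 10

total_cases = values_per_setting ** settings
print(total_cases)                    # 1048576 combinations
print(total_cases / (60 * 60 * 24))  # ~12 days at one test per second
```

Even at one automated test per second, exhaustively covering that one hypothetical feature would take nearly two weeks.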

Reality isn't quite this bleak. The user is not totally forgotten. They come up often in conversation, but they are not systematically present, and they are easily forgotten for periods of time. The question of fitness for function may come up for one decision, or even one feature, but not for another.

Test has lost its way. We started as the advocate for the user and have become the advocate for the specification. We too often blindly represent the specification over the needs of real people. Going back to our roots doesn't solve the problem.

Even in the best-case scenario, where the specification accounts for the needs of the user and the testers always keep the user at the forefront of their minds, the system breaks down. Who is this mythical user? It isn't the tester. We colloquially call this the "98052" problem. 98052 is the zip code for Redmond, Washington, where Microsoft is located. The people who live there aren't representative of most of the other zip codes in the country, let alone the world.

Sometimes we create aggregated users, called personas, to help us think about other users. This works up to a point, but "Maggie" is not a real user. She doesn't have real needs. I can't ask her anything, and I can't put her in a user study. Instead, Maggie represents tens of thousands of users, all with slightly different needs.

Going back to our roots in manual testing also brings back the scale problem. We really can't go home; home doesn't exist anymore. Is there a way forward that puts users, real users, back at the center, but scales to the needs of a modern software process? I will tackle one possible solution in my next post.