When we testers find yet another "Did you even run this?" bug, it's easy to believe developers purposely inject bugs just to taunt us. I have worked with a lot of developers over the years, and I've found that they generally do try to test their code but simply don't know how to go about doing so effectively.

As testers gain experience, they build up a checklist of common problems: classes of bugs that crop up over and over, and areas that tend to be troublesome. Developers don't have our experience, so they don't have anything similar.

If developers don't know how to test very well, and testers have a simple checklist we use to find the most common bugs, then giving our checklist to developers should help out both sides.

That's my theory anyway, and this is my checklist. I'm testing it right now with my developers. Test it with yours/yourself and let me know what happens!

  • Customize This List: If you get a bug, determine what test (or even better, what general class of tests or testing technique) would have caught the bug, then add it to this list.
  • Use Your Tester: Brainstorm tests with your tester. Review your (planned/actual) tests with your feature team. Coordinate your testing with your tester, especially with respect to tests they have already written/are currently writing.
  • Focus On Your Customer: Think, "Where would the presence of bugs hurt our customers the most?", then let your answers drive your testing.
  • Test Around Your Change: Consider what your change might affect beyond its immediate intended target. Think about related functionality that might have similar issues. If fixing these surrounding problems is not relevant to your change, log bugs for them. To quote a smart person I know, "Don't just scratch it. What's the bug bite telling you?"
  • Use Code Coverage: Code coverage can tell you what functionality has not yet been tested, but don't just write a test case to hit the code. Instead, let the uncovered code point you to the classes of testing and test cases you are missing.
  • Consider Testability. Hopefully you have considered testability throughout your design and implementation process. If not, think about what someone else will have to do to test your code. What can you do / do you need to do in order to allow proper, authorized verification? (Hint: Test Driven Design is a great way to achieve testable code right from the get-go!)
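One concrete way to bake in testability is dependency injection: pass in the things your code depends on rather than reaching for them globally. A minimal sketch (the function and dates are hypothetical, purely for illustration):

```python
from datetime import date

# A hard-to-test version would call date.today() directly, so verifying it
# would mean waiting for a Saturday. Injecting the date makes the logic
# checkable on demand.
def is_weekend_rate(today=None):
    """Return True when the (hypothetical) weekend rate applies."""
    today = today or date.today()
    return today.weekday() >= 5  # Monday is 0, Sunday is 6

# A tester (or you) can now pin the input:
assert is_weekend_rate(date(2020, 1, 4)) is True   # a Saturday
assert is_weekend_rate(date(2020, 1, 6)) is False  # a Monday
```

The production caller just omits the argument and gets today's date; the test caller supplies whatever date exercises the branch it cares about.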
  • Ways To Find Common Bugs:
    • Reset to default values after testing other values (e.g., pairwise tests, boundary condition tests).
    • Look for hard coded data (e.g., "c:\temp" rather than using system APIs to retrieve the temporary folder), run the application from unusual locations, open documents from and save to unusual locations.
    • Run under different locales and language packs.
    • Run under different accessibility schemes (e.g., large fonts, high contrast).
    • Save/Close/Reopen after any edit.
    • Undo, Redo after any edit.
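The hard-coded-data bullet above is worth a tiny example. A sketch of the fix, in Python, using the standard library to ask the system for its temporary folder instead of assuming one exists (the function name is made up for illustration):

```python
import os
import tempfile

# Hypothetical fix for a hard-coded path: "c:\temp" may not exist, may not
# be writable, and certainly isn't there on other operating systems.
# tempfile.gettempdir() asks the platform instead.
def scratch_file_path(name):
    return os.path.join(tempfile.gettempdir(), name)

# Works regardless of OS, locale, or where the application was launched from:
path = scratch_file_path("report.tmp")
assert os.path.dirname(path) == tempfile.gettempdir()
```

The same principle applies to user folders, installation directories, and separators: retrieve them through system APIs, then test by running from unusual locations to prove you did.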
  • Test Boundary Conditions: Determine the boundary conditions and equivalence classes, and then test just below, at, in the middle, and just above each condition. If multiple data types can be used, repeat this for each option (even if your change is to handle a specific type). For numbers, common boundaries include:
    • smallest valid value
    • at, just below, and just above the smallest possible value
    • -1
    • 0
    • 1
    • some
    • many
    • at, just below, and just above the largest possible value
    • largest valid value
    • invalid values
    • different-but-similar datatypes (e.g., unsigned values where signed values are expected)
    • for objects, remember to test with null and invalid instances
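The numeric boundary list above translates directly into a table-driven test. Here is a sketch against a hypothetical percentage field that must accept integers from 0 to 100 (the setter is invented for illustration):

```python
def set_percentage(value):
    """Hypothetical setter for a field that must be an int from 0 to 100."""
    if not isinstance(value, int) or isinstance(value, bool):
        raise TypeError(f"expected int, got {type(value).__name__}")
    if not 0 <= value <= 100:
        raise ValueError(f"{value} is outside 0..100")
    return value

# Just below, at, in the middle of, and just above each boundary:
for valid in (0, 1, 50, 99, 100):
    assert set_percentage(valid) == valid

# -1, just above the largest valid value, and a huge value:
for invalid in (-1, 101, 2**31):
    try:
        set_percentage(invalid)
        assert False, f"{invalid} should have been rejected"
    except ValueError:
        pass

# Different-but-similar datatypes, plus null:
for wrong_type in (1.0, "50", None, True):
    try:
        set_percentage(wrong_type)
        assert False, f"{wrong_type!r} should have been rejected"
    except TypeError:
        pass
```

Keeping the values in tables like this makes it cheap to repeat the whole walk for each additional datatype the feature accepts.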
  • Other Helpful Techniques:
    • Do a variety of smallish pairwise tests to mix-and-match parameters, boundary conditions, etc. One axis that often brings results is testing both before and after resetting to default values.
    • Repeat the same action over and over and over, both doing exactly the same thing and changing things up.
    • Verify that every last bit of functionality you have implemented matches what the spec describes should happen. Then look past the spec and think about what is not happening but should be.
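For the mix-and-match bullet above: when the parameter sets are small, the full cross-product is cheap and trivially covers every pair of values; dedicated pairwise tools only pay off once the product explodes. A sketch with hypothetical document-save options (the axes are invented for illustration):

```python
import itertools

# Hypothetical parameter axes for a "save document" operation.
formats   = ["txt", "rtf", "pdf"]
encodings = ["utf-8", "utf-16"]
locations = ["local", "network", "removable", "read-only"]

# 3 * 2 * 4 = 24 cases: small enough to run them all, which by definition
# covers every pairing of values across the axes.
cases = list(itertools.product(formats, encodings, locations))
assert len(cases) == 24

for fmt, enc, loc in cases:
    # In a real test, invoke the save path here and verify the result,
    # ideally both before and after resetting options to their defaults.
    pass
```

When the product does explode, a greedy pairwise generator shrinks the case count while still exercising every value pair at least once.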
  • "But a user would never do that!": To quote Jerry Weinberg, When a developer says, "a user would never do that," I say, "Okay, then it won't be a problem to any user if you write a little code to catch that circumstance and stop some user from doing it by accident, giving a clear message of what happened and why it can't be done." If it doesn't make sense to do it, no user will ever complain about being stopped.


Thanks to all who contributed to this checklist, especially the SHAPE forum, where this checklist spawned a thread jam-packed with great advice for both developers and testers.


*** Want a fun job on a great team? I need a tester! Interested? Let's talk: Michael dot J dot Hunter at microsoft dot com. Great coding skills required.