Recently I came across a way of describing any particular stress test that I liked:

First, define the sets of qualities you care about:

  • Input
  • Execution entry point
  • Resources
  • Validation

for example.

Next, define the qualities within each set you are interested in; maybe:

  • Input
    • Concurrency of operations
    • Data quality / noisiness
    • Load / frequency of operations
    • Velocity / speed of operations
  • Execution entry point
    • Public APIs
    • Private APIs
    • User interface
  • Resources
    • CPU
    • Hard drive space
    • Memory pressure
    • Network flakiness
    • Duration of test
  • Validation
    • Application health
    • Assertions
    • Crashes
    • Data coherency
    • Deadlocks
    • Global expectations of system state
    • Live locks
    • Operation-specific expectations
    • System coherency

Finally, rate each quality from 0 to 10, where 0 means "This test does not target <quality>" and 10 means "This test is *all* about <quality>".

Now you have a point in a multi-dimensional space - one dimension per quality - (I'd give you a picture if I knew a good way to portray it!) that defines this particular stress test. Do this for all the rest of your stress tests and you can easily see how well you are covering the qualities you care about - and what tests you're missing!
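The rating-and-coverage idea above can be sketched in a few lines of code. This is a minimal illustration, not part of the original framework: the quality names, test names, and ratings below are all hypothetical, and each stress test is just a mapping from quality to its 0-10 rating.

```python
# Hypothetical subset of qualities (illustrative only).
QUALITIES = ["concurrency", "load", "cpu", "memory", "assertions", "crashes"]

# Hypothetical ratings for three imaginary stress tests.
# Each test is a point in the quality space: one rating per dimension.
tests = {
    "soak_test":    {"concurrency": 3,  "load": 8, "cpu": 5, "memory": 3, "assertions": 1, "crashes": 9},
    "fuzz_api":     {"concurrency": 1,  "load": 2, "cpu": 2, "memory": 2, "assertions": 8, "crashes": 8},
    "thread_storm": {"concurrency": 10, "load": 6, "cpu": 8, "memory": 3, "assertions": 2, "crashes": 7},
}

def coverage(tests, qualities):
    """For each quality, the strongest rating any test gives it.

    A low maximum means no test in the suite really targets that quality.
    """
    return {q: max(t.get(q, 0) for t in tests.values()) for q in qualities}

# Qualities no test rates at 5 or above are coverage gaps.
gaps = [q for q, score in coverage(tests, QUALITIES).items() if score < 5]
print("under-covered qualities:", gaps)  # here: ['memory']
```

The threshold of 5 is arbitrary; the point is that once every test is a vector of ratings, gaps in your suite fall out of a one-line comparison.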

I'm finding this framework helps me talk about stress testing with people: every person seems to have a different idea of what a "stress test" is, and this framework helps us clarify what exactly we mean by it. For example, I often hear dissension over the amount of validation stress tests should do: "Everything a functional test does!" "Nothing other than watching for crashes and asserts!" "Basic functional testing plus catching crashes and asserts!" These are three different types of stress tests, all of which can be useful - it depends on the intent of the test.

Which is perhaps the most important use of this framework: clearly defining the intent of that stress test you're about to write!