Way back in November 2008 I mentioned a code review of some test scripts.  That reminded me of something I saw many months ago (more than a year, perhaps) that I had wanted to revisit.

 

The purpose of most of our automation scripts is to verify that the functionality we expect to be in the product actually works.  It's a pretty simple concept to understand so I won't belabor it too much.

 

Suppose, though, I had a script that adds a section to a notebook.  One of the verifications I would need to perform would be to ensure the number of sections increased by exactly one.  I would also need to add this information to my log file so that I have a trail to investigate if something goes awry.

 

Here’s some pseudocode to do this:

 

Int numberOfSectionsBefore = Notebook.GetSectionCount();

Notebook.AddSection("My New Section Name");

Int numberOfSectionsAfter = Notebook.GetSectionCount();

 

A verification I have seen some people use is:

 

If( numberOfSectionsBefore != numberOfSectionsAfter-1 )

{

Log.Append("Number of sections does not match");

//fail the test, exit

}

 

This type of verification turns out to be a pet peeve of mine.  While it works, it does not tell me the whole picture when the values do not match.  I can think of three ways this could fail:

  1. The new section did not get added, so the number of sections remains unchanged.
  2. The new section got added, but somehow an existing section got deleted, so the number of sections remains the same.
  3. Two (or more) sections got added, so the counts are off by 2 (or more) instead of the expected 1.
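To make the problem concrete, here is a small Python sketch (the section names and lists are made up for illustration) simulating the three failure modes.  Notice that a count-only check cannot tell cases 1 and 2 apart at all:

```python
# Simulated section lists for the three failure modes (names are invented).
before = ["Intro", "Notes"]

# Case 1: the add silently failed -> count unchanged.
case1 = ["Intro", "Notes"]
# Case 2: the new section was added but an existing one was deleted -> count unchanged.
case2 = ["Intro", "My New Section Name"]
# Case 3: the section got added twice -> count off by 2.
case3 = ["Intro", "Notes", "My New Section Name", "My New Section Name"]

for after in (case1, case2, case3):
    delta = len(after) - len(before)
    # Cases 1 and 2 both produce delta == 0; the count alone is ambiguous.
    print(delta)
```

The first two cases both report a delta of zero, even though very different things went wrong under the covers.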

 

And there are a ton of variations of these cases you can imagine for yourself.

 

To help isolate what might have gone wrong, I would add some code to log the values as well:

If( numberOfSectionsBefore != numberOfSectionsAfter-1 )

{

Log.Append("Number of sections does not match");

Log.Append("The number of sections before adding was " + numberOfSectionsBefore );

Log.Append("The number of sections after adding was " + numberOfSectionsAfter);

//fail the test, exit

}

 

And then I would probably also log the names of the sections that exist, to help differentiate failure case 2 from case 1 if it was hit.
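In Python terms, a name-based verification might look something like this.  This is a sketch, not our actual framework; the helper name and log interface are mine:

```python
def verify_section_added(before_names, after_names, new_name, log):
    """Compare section names, not just counts, so each failure mode
    leaves a distinguishable trail in the log."""
    added = set(after_names) - set(before_names)
    removed = set(before_names) - set(after_names)
    # Always log the raw state; this is the trail for later investigation.
    log.append("Sections before: " + ", ".join(before_names))
    log.append("Sections after: " + ", ".join(after_names))
    # The count check catches duplicate adds, which the sets alone would hide.
    if added == {new_name} and not removed and len(after_names) == len(before_names) + 1:
        return True
    log.append("Added: " + str(sorted(added)) + ", removed: " + str(sorted(removed)))
    return False
```

With this, case 1 logs an empty "added" set, case 2 logs the deleted name, and case 3 trips the count check.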

 

Kind of boring and basic, I know.  But I think this needs to be done all the time.

 

Oh, and we have methods in our automation system that will log values being compared automatically, so there really is no reason to miss logging something obvious like this.
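I can't share our actual helper methods, but the idea is along these lines (a minimal sketch; the function name and signature are invented):

```python
def verify_equal(expected, actual, description, log):
    """Log both sides of every comparison automatically,
    so a failed check is never silent about the values involved."""
    if expected != actual:
        log.append(f"FAIL: {description}: expected {expected}, got {actual}")
        return False
    log.append(f"PASS: {description}: {expected}")
    return True
```

Because the helper does the logging, the script author cannot forget to record the values being compared.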

 

Questions, comments, concerns and criticisms always welcome,

John