Our automation is structured in roughly two parts. The first part is the automated script. This is the part of the test that closely mimics user behavior and should move the application from one known, expected state into another.

The second part of our automation system is our task library. It provides the actions that actually transition the application from one state to another.

As an example, let's say I wanted to write a test to verify I can embed an attached file on a page. What I would need to do is:

  1. Create (or copy or otherwise procure) a file to be embedded.
  2. Start OneNote.
  3. Create a test notebook, with
      a. a section and
      b. a page.
  4. Copy the file to the clipboard.
  5. Paste it on the page.
  6. Verify the file was pasted correctly.

In this case, steps 2-5 would be handled by the task library; only steps 1 and 6 would be in my test script. The logic here is that the task library handles the actions needed to create the notebook, paste the file and so on, and should be written well enough that many tests can use those actions (especially the code that creates the notebook, section and page). I would only log a pass or failure in step 6 - the final verification of all the actions above. I should be left in a state with the file embedded on the active page. If so, I log a pass. If not, I log a failure.
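To make the split concrete, here is a minimal Python sketch. The class and function names (`Page`, `create_notebook_section_page`, `paste_file`) are hypothetical stand-ins, not OneNote's actual automation API; the point is only where the pass/fail decision lives.

```python
# Stand-in for a OneNote page (illustrative only; not a real automation object).
class Page:
    def __init__(self, locked=False):
        self.locked = locked   # True if the enclosing section is locked
        self.files = []        # files embedded on the page


# --- Task library: performs transitions and reports outcomes; never logs pass/fail ---

def create_notebook_section_page():
    """Steps 2-3: create a notebook, section and page; return the active page."""
    return Page()

def paste_file(page, filename):
    """Steps 4-5: attempt the paste; return True ("I pasted") or False ("could not paste")."""
    if page is None or page.locked:
        return False
    page.files.append(filename)
    return True


# --- Test script: procures the file, drives the tasks, and owns the verification ---

def test_embed_file():
    page = create_notebook_section_page()
    paste_file(page, "report.txt")
    # Step 6: the only place this test logs a pass or failure.
    return "PASS" if "report.txt" in page.files else "FAIL"
```

Note that `paste_file` only reports whether the paste happened; deciding what that means for the test stays in `test_embed_file`.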

So now suppose I wanted to write a test that checks our error handling - I want to copy a file and paste it into OneNote, but paste when there is no active page to paste onto. Or maybe I want to test pasting into a password-protected section that has not been unlocked. In that case, the test would be similar, but I would insert a step:

      3c - add a password and lock the current section.

The rest of the test would be the same, but the verification would change.  In this case, my script would log a pass if there was NO file on the page, and log a failure if somehow the file was pasted.

Now, going back a bit, I could have logged a result in step 5, based on whether or not the paste command was able to complete. But the paste command does not know what I am trying to test; it should just return "I pasted" or "Could not paste" and leave it up to the caller to decide what counts as a test pass or a test failure.
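In code terms, the idea looks like this (a self-contained Python sketch with hypothetical names): the same task serves both tests, and only the caller's interpretation of its return value differs.

```python
# Task-library action: reports whether the paste completed, nothing more.
# (Hypothetical; a real implementation would drive the application.)
def paste_file(page_available):
    return page_available  # pasted only if there was an active page to paste onto


# Positive test: the paste should succeed.
def test_paste_succeeds():
    return "PASS" if paste_file(page_available=True) else "FAIL"


# Error-handling test: with no active page, the paste should NOT complete.
def test_paste_rejected_without_page():
    return "PASS" if not paste_file(page_available=False) else "FAIL"
```

The task never calls anything a failure; each test script maps "could not paste" to a pass or a failure depending on what it set out to verify.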

A subtle point, perhaps, but key to having a solid and reusable task library.

Questions, comments, concerns and criticisms always welcome,
John