As you know I test Windows XP Media Center Edition - specifically the area of extensibility (which includes all the public APIs and the SDK) and the OnlineSpotlight area.  I thought I'd write a bit about what a typical day is like for me.  What I test and how I do it...

Typically I'll arrive at the office sometime between 8am and 10am.  That's a pretty big window, but it's not that uncommon around Microsoft; most people get in by 10am, but there always seem to be some people that get in later - they always leave later as well.  The first thing I do when I get in the office is unlock my pitifully slow PC that I use for email and admin stuff and let everything update itself - Outlook displays my email, MSN Messenger logs in, and the few other apps I have running (RSS reader, stock ticker, bug tracking app) update themselves.  Once all that's taken care of I'll start going through my mail - starting with my inbox and anything I might have flagged that needs my attention.  I read my email from home and will flag items that need a response from me that I can't necessarily provide from home - in general I won't have all the info.  Once my inbox is out of the way that leaves all the rest of the overnight mail to deal with.  Everything gets filtered into folders and grouped by conversation so I can easily see what to read and what to just mark as read without actually reading it.

Assuming there's nothing critical in my mail that requires me to actually do something I'll check a few news sources, read http://blogs.msdn.com, then I'll sync up the latest posts to the Media Center newsgroups.  Most of the time there's nothing for me to respond to - the MVPs do a good job of answering the questions - but now and then I'll post an answer or follow up with some more info.

By this time I haven't really done any work, so I'll continue in that trend and have some breakfast while I have a look at the bugs assigned to me - starting with the resolved bugs.  These are bugs that I've opened and that have had action taken against them by someone else - they come back to me resolved as "Fixed", "Not Repro" (the developer or program manager couldn't reproduce the problem), "Duplicate" (there's already a bug on the issue), "External" (the bug is caused by a component outside my group and a bug has been sent to them) or "By Design" (in the view of the developer or program manager the behaviour described isn't a bug, it's supposed to be like that).  I start by looking at the ones that aren't "Fixed" to see if I agree with the resolution; if I don't I'll reactivate the bug with more info or, in the case of "By Design", an argument as to why the design shouldn't be the way it is.  Next I'll check the active bugs - these are bugs that I need to do something about, mostly bugs against my test cases - for the most part caused by a change in the product which makes the test case out of date; I'll update the cases and resolve the bug back to the tester who runs the tests.

I still won't have actually tested anything yet (unless I've checked out a "Not Repro" bug).  If a build has been released I'll install it on my test machine so I can check that the bugs that have been fixed are actually fixed (because they aren't always really fixed...).  I say "if a build has been released" because before I get my hands on a build, it has to pass a set of "build verification tests".  If it fails any one of the tests it doesn't get released and a developer gets paged to fix the bug.  This process saves a lot of time in that the whole organisation doesn't install a build that's horribly broken and all run into the same problems, wasting a lot of time.  Of course it also means that a small bug in one area can delay everyone getting hold of a build that is otherwise OK.

While the build is installing I'll work on some admin stuff - updating test cases, writing new test cases, surfing the web, responding to email, etc...  Once the build is installed I'll go through the resolved bugs and mark them as closed if I'm satisfied that they've been fixed, or re-activate them if I'm not.  Often going through the resolved bugs will result in my finding one or more new bugs, which I'll write up and send to the appropriate program manager to decide what to do with.

Every time a new bit of code or a change to some existing code is checked in to the product the responsible person will send a "check-in mail" that lists the files they changed and a description of what they changed.  I have an Outlook search folder over the folder that all these mails are delivered to - it filters them down to just the developers that work on my area.  I check that folder to see what has changed in my area and what I should look at first that morning.  I'll quickly smoke test the area (no smoke, no fire!) and report any issues I come across.  If it's some new code I might need to write some new test cases to ensure it gets tested during a test pass.  My team stores its test cases in a database with a web-based front end.  A test case will include an outline of the goal of the case - that is, a brief description of the case - the required steps, and then a section that explains the expected outcome and how to verify it - it's a bit like a reverse bug report.
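
To give a rough idea of the shape of one, here's a made-up example for my area - the feature and all the details are hypothetical, but the structure matches what I just described:

    Title:  API - hosted HTML page can reach the Media Center object model
    Goal:   Verify that a page running inside Media Center can get at the
            object model and call into it without script errors.
    Steps:  1. Register the test page with Media Center and launch it from
               More Programs.
            2. Let the page's script run when it loads.
    Verify: The page reports that the object it expects is present and the
            call it makes returns without error; any script error or missing
            object fails the case.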

Throughout the day I can be interrupted by email, meetings and questions from colleagues - sometimes that'll be all I do in a day, and sometimes there'll be no distractions; every day is different.

I'll ad hoc a bit in my area (and other areas if there's something new and exciting to play with - I try not to ad hoc watching TV too much) ;-) and see if I can find a new way to break the system that I hadn't thought about.  Sometimes it's not clear if something is a bug or not, in which case, depending on the severity of the bug, I'll either email the program manager/developers for the area or walk over to their offices to talk to them about it.  If I know it's a bug I'll repro it a few times and be sure I know where the problem is and what caused it - it can often take a while to narrow down the minimum steps to reproduce the problem - but it's critical in writing a good bug report.  Once I've got the steps down to what I think is the minimum required I'll open a bug.  If the bug is an API bug - that is, if I have to call an API from an HTML page in Media Center to hit it - I'll quickly produce a page that reproduces the problem.  That's easy to do as I keep a template with all the required framework pre-written; I just have to plug in the API call that causes the problem.  I'll link this page in the bug report.
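
For the curious, the template looks roughly like this.  It's a sketch rather than the real file - the object model access via window.external and the method name are placeholders for whatever call is actually misbehaving, and the bug number is made up:

    <html>
      <head>
        <title>Repro page for bug 12345 (placeholder number)</title>
        <script type="text/javascript">
          function repro()
          {
            // Get hold of the object model exposed to pages hosted in Media
            // Center.  Exactly which object you grab depends on the bug.
            var mc = window.external.MediaCenter;

            // Plug in the API call that shows the problem and dump whatever
            // comes back so the developer can see it straight away.
            // SomeMethodUnderTest is a placeholder, not a real API.
            var result = mc.SomeMethodUnderTest("input that triggers the bug");
            document.getElementById("output").innerText = "Result: " + result;
          }
        </script>
      </head>
      <body onload="repro()">
        <div id="output">Running repro...</div>
      </body>
    </html>

So what should be in a good bug report?  Well...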

  • A great title - I like to prefix the title with the area so it's easy to see where the bug is, followed by a brief description of the bug and, if it's not obvious, why this is a problem for the user.
  • A summary of the problem - describe the problem, how it occurs, why it occurs and why it's a problem.
  • Clear repro steps - the steps need to be easy to follow and shouldn't assume any knowledge of how things work in the area.
  • A description of what happens when the steps are followed - if it's a crash or a hang then a call stack should be included.
  • A description of what the expected behaviour was.
  • A screenshot if it's a UI bug - a picture speaks a thousand words.
  • Any other information that's related to the bug, for example circumstances under which the bug won't repro.
  • Depending on the area - hardware config.  For example if I were to open a bug about TV playback I'd include the hardware config I was using including the driver versions.
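
To make that concrete, a made-up report (the API, build number and details are all hypothetical) might read something like this:

    Title:    API - SomeMethodUnderTest returns an empty string for UNC paths
    Summary:  Calling SomeMethodUnderTest from a hosted HTML page with a UNC
              path returns an empty string instead of the expected value, so
              a user browsing content on a network share sees a blank page.
    Repro:    1. Install the current build.
              2. Open the attached repro page from More Programs.
              3. Note the result shown on screen.
    Result:   The page shows "Result: " with nothing after it.
    Expected: The page shows the value described in the SDK docs.
    Notes:    Doesn't repro with local paths.  Repro page attached.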

For me that's what makes a good bug report.  For other teams there are different requirements and each person does things slightly differently - this is what works for me though.

With what time I've got left in the day I have a few other things going on.  I could be smoke testing a fix from a developer - they give me a private set of binaries that they believe fixes a bug and I'll test it out, verify it does fix the bug and doesn't appear to introduce any new ones.  If it does introduce some major new ones I'll talk with the developer and see if they can fix those, and the cycle repeats.  Once we've got to a stage where the change hasn't introduced any major new bugs and has fixed the original bug then we're done and we say the smoke has passed.  There's also always some automation that can be worked on - automating tests so they can be run without a tester having to run them by hand.

That's what I manage to cram into a day.  I'll leave the office any time between 4pm and 11pm depending on what happens on that particular day.  Next up, how I test Media Center...