You might be wondering what exactly an "SDET" is. Microsoft has a few core "disciplines" that participate in software development: Program Managers (PMs), Software Design Engineers (SDEs, or Devs), and Software Design Engineers in Test (SDETs, or Testers). There is a fourth discipline, Software Test Engineers (STEs), which is currently being phased out.

Program Managers are the designers who define the features, represent the users, manage progress on the product, and evangelize the product. Software Design Engineers do the detailed design (code design) of the features, write the product code, and fix the defects that are found. Software Design Engineers in Test manage the quality of the product, which means that they find the defects to be fixed and decide when the product is ready to ship.

In the past, manually scouring the product for issues might have been enough, but many software products (especially those Microsoft develops) are too complex these days for that to do the job. At Microsoft, the SDET role is centered on defining, creating, and managing automated tests. I'm sure that doesn't appeal to everyone, but it seemed very interesting to me. There are a few unique benefits to the SDET role that I value highly, and that I expect will keep me in this role for quite some time.

First, the SDET role is highly technical. I spend the majority of my time writing and debugging code, which I don't think I could give up. Automated testing has some unique challenges as well - it's essentially writing code that interacts with the product code, but rather than using a direct programmatic interface, network communication, or another commonly used communication mechanism to work with the other code, you are using the user interface. This is especially interesting because all applications have a user interface, so once you learn how to drive the interface you can work with any application (to some extent).
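To make that concrete, here is a minimal sketch of what driving a product through its UI can look like. It's only an illustration of the idea - it assumes the third-party pywinauto Python library and classic Windows Notepad, not the tooling we actually use internally:

```python
# A rough sketch of UI-driven test automation, using the third-party
# pywinauto library against classic Windows Notepad (my assumptions
# for illustration, not the tools described in this post).
from pywinauto import Application

# Launch the product under test and attach to its main window.
app = Application(backend="win32").start("notepad.exe")
window = app.window(title_re=".*Notepad")

# Interact the way a user would: type into the edit control...
window.Edit.type_keys("hello from an automated test", with_spaces=True)

# ...then verify the product's state through the UI as well.
assert "hello" in window.Edit.window_text()

app.kill()
```

Notice that the test never calls the product's code directly: it launches the application, manipulates the UI the way a user would, and then checks the result through the UI too.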

Second, I like the mix of breadth and depth I get in my job. PMs have a very wide view of the product, and can generally walk you through a quick introduction to almost any feature in the product. Unfortunately, being responsible for so many features means that they don't get to explore them in the same depth that I do. Developers, on the other hand, have incredible depth - complete knowledge of the workings of the features they own. However, each developer can only own a few features, so they often don't know as much about the rest of the product. I'm sure every technical person prefers a different mix of the two, and the mix of depth and breadth I have appeals to me. I feel like an expert user of the product - I'm very familiar with some features, and that familiarity is centered on the way I use them rather than the way they might work underneath. There are also features I know very little about, though I do get a rough overview of them.

Third, playing with the product is a part of my job. We don't spend a huge amount of time this way, but there is formal calendar time in the schedule when testers are asked just to play with the product to come up with something neat (App-Building). For Visual Studio, this means writing some code and playing with the VS features. In addition, there are specific days allocated when we just wander the surface of the application looking for (usually) smaller bugs (Bug Bashes). These days aren't as much like play as App-Building is, but it's really nice to be able to go over the product and scrub out the spots that don't look right, or in some cases don't make sense. The usual result of a Bug Bash is that a few weeks later, we have a much more polished-looking product. (And I didn't have to fix all of those nasty little things.)

Finally, testing is something of an "unexplored frontier". There are many areas of computer science that have been well explored (and documented) by the academic and industry programmers of the past. Testing, and automated testing in particular, is not one of them. Few courses and books talk about testing, especially automated testing, and those that do have much less detail than you'd find in a compilers book, for example. As a result, I have the opportunity to shape the way testing software happens in my job - I can figure out what a good test looks like, how it works, and how it communicates with the outside world. This fourth point, by the way, is one major reason I have this blog. I am working on the problems of defining what good tests are and finding strategies and tools for making them work well, and I intend to publish some of those efforts here, both to share my hard-won experience and to pick up experience from others who stop by and have been where I'm going.

How about a different perspective? Our group recently picked up a few college interns called "Explorers", who spent a day or so job shadowing full-time members of our team in each discipline. Here are some things they had to say about the experience of watching SDETs specifically:

* Most of the internal tools here at Microsoft are written by SDETs. Due to the nature of the SDET job, they sometimes have spare time, which they are encouraged to put toward personal projects. These can range from doing community support on the forums or maintaining an employee blog to writing a whole new tool to make your life (and the lives of others in the company) easier. Often, people will collaborate informally on a tool, and when it goes "public" (i.e., others in the company start using it), other users are usually allowed to make their own changes or implement their own features.

* There are about 55,000 test cases in ASP.NET, with several thousand per feature (more than 4,000 for the database connectivity stuff alone). Although that sounds like an enormous number, it becomes more plausible when you consider the scope of each feature; the data connection stuff, for instance, can connect to a SQL database, an object file, XML, and so on. Then there are sub-features of each, and you have to test the way everything interacts. After a while the numbers start to add up (the sketch after this list shows how quickly the dimensions multiply), and you start wondering whether 55,000 really is enough.

* I need to get comfortable with using a debugger.

* Testers actually do quite a bit of coding! There is far more coding involved than I expected, and the system used to capture errors or defects is very elaborate. There is also much less ‘hands-on’ work with the product being tested than I anticipated – they can run a battery of tests and verify that a feature is not working correctly without manually interacting with the product. Very cool, and I definitely learned a lot.
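On that 55,000 figure: the growth is easy to reproduce. Here is a toy sketch of how a handful of independent options multiplies into a large test count; the dimensions below are invented for illustration, not ASP.NET's actual test matrix:

```python
# A toy illustration of why per-feature test counts climb so quickly:
# every independent dimension multiplies the total. These dimensions
# are made up for illustration, not ASP.NET's real test matrix.
from itertools import product

data_sources = ["SQL Server", "Access", "XML file", "object file"]
operations   = ["select", "insert", "update", "delete"]
bindings     = ["grid", "list", "detail view"]
auth_modes   = ["integrated", "username/password"]

cases = list(product(data_sources, operations, bindings, auth_modes))
print(len(cases))  # 4 * 4 * 3 * 2 = 96 combinations from four small dimensions

# Each combination would back at least one automated test case:
for source, op, binding, auth in cases[:3]:
    print(f"test: {op} against {source}, bound to a {binding}, {auth} auth")
```

Four small dimensions already give 96 combinations; add a few more dimensions per feature, and several thousand tests per feature stops sounding excessive.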