I'm at the Build conference in Anaheim this week, and I was in the platform booth when a customer asked me a question I'd not been asked before: "How do you get started with test-driven development?" My answer was simply: "Just start - it doesn't matter how much existing code you already have; just start writing tests alongside your new code. Get a good unit test framework like the one in Visual Studio, but it really doesn't matter which framework you use - just start writing the tests."
This morning, I realized I ought to elaborate on my answer a bit.
I'm a huge fan of Test Driven Development. Of all the "eXtreme Programming" methodologies, TDD is by far the one that makes the most sense to me. I started using TDD back during the development of Windows 7. I had read about TDD over the years and was intrigued by the concept, but like the customer, I didn't really know where to start. My previous project had extensive unit tests, but we didn't follow any particular methodology when developing them. When it came time to develop a new subsystem for the audio stack in Windows 7 (the feature that eventually became the "capture monitor/listen to" feature), I decided to apply TDD while developing the feature, just to see how well it worked. The results far exceeded my expectations.
To be fair, I don't follow the classic TDD paradigm where you write the tests first, then write the code to make sure the tests pass. Instead I write the tests at the same time I'm writing the code. Sometimes I write the tests before the code, sometimes the code before the tests, but they're really written at the same time.
In my case, I was fortunate because the capture monitor was a fairly separate piece of the audio stack - it is essentially bolted onto the core audio engine. That meant that I could develop it as a stand-alone system. To ensure that the capture monitor could be tested in isolation, I developed it as a library with a set of clean APIs. The interface with the audio engine was just through those clean APIs. By reducing the exposure of the capture monitor APIs, I restricted the public surface I needed to test.
But I still needed to test the internal bits. The good news is that because it was a library, it was easy to add test hooks and enable the ability to drive deep into the capture monitor implementation. I simply made my test classes friends of the implementation classes and then the test code could call into the protected members of the various capture monitor classes. This allowed me to build test cases that had the ability to simulate internal state changes which allowed me to build more thorough tests.
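The friend-class test hook described above can be sketched like this in C++. All the names here are hypothetical stand-ins - this is not the actual capture monitor code, just the pattern:

```cpp
#include <cassert>

// Hypothetical production class (illustrative names, not the real
// capture monitor implementation).
class CaptureMonitor {
    friend class CaptureMonitorTest;  // grant the test class access to internals
public:
    void Start() { _running = true; }
    bool IsRunning() const { return _running; }

protected:
    // Internal state transition that the public API never exposes directly.
    void SimulateDeviceRemoval() {
        _running = false;
        _deviceLost = true;
    }

private:
    bool _running = false;
    bool _deviceLost = false;
};

// Because it is a friend, the test class can call protected members to
// simulate internal state changes - the pattern described above.
class CaptureMonitorTest {
public:
    static bool DeviceRemovalStopsCapture() {
        CaptureMonitor m;
        m.Start();
        m.SimulateDeviceRemoval();  // legal here: we're a friend
        return !m._running && m._deviceLost;
    }
};
```

The nice property of this approach is that the internals are exposed only to the one named test class, so the public API surface stays as small as the production code needs.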
I was really happy with how well the test development went, but the proof of TDD's benefits really showed when the feature shipped as a part of the product.
During the development of Windows 7, extremely few bugs (maybe a half dozen?) were found in the capture monitor that weren't first found by my unit tests. And because I had such an extensive library of tests, I was able to add regression test cases for those externally found bugs.
I've since moved on from the audio team, but I'm still using TDD - I'm currently responsible for two tools in the Windows build system/SDK and both of them have been developed with TDD. One of them (the IDL compiler used by Windows developers for creating Windows 8 APIs) couldn't be developed using the same methodology as I used for the capture monitor, but the other (mdmerge, the metadata composition tool) was. Both have been successful - while there have been more bugs found externally in both the IDL compiler and mdmerge than were found in the capture monitor, the regression rate on both tools has been extremely low thanks to the unit tests.
As I said at the beginning, I'm a huge fan of TDD - while there's some upfront cost associated with creating unit tests as you write the code, it absolutely pays off in the long run with a higher initial quality and a dramatically lower bug rate.
I'm a huge fan of "Working Effectively with Legacy Code" by Michael Feathers for getting started with TDD if you already have an existing code base. Ignore the title - it's a book that teaches you how to go from the code base you have to the code base you want while introducing unit tests all along.
There's a lot to like about the "test early, test often" approach you describe, but it isn't TDD.
Just wanted to pile on with violent agreement. Back in 2007, I went to refresh SafeInt to version 3. We had a fairly good internal test harness for it, and the new work went out verifiably creating no new regressions. It isn't often you can take 5000 LOC, turn it upside down, change most of it, and not create regressions. I also recognized that the 64-bit support in the test harness wasn't all that great, so when I made some big changes to support using intrinsics on x64, I wrote a 64-bit-specific test harness for multiplication, which caught an existing runtime bug. More recently, Jeffrey Wall of OWASP picked it up and extended the test harness to all the operators, which caught one more runtime bug. Two bugs in 4 years out of 5000 LOC is a pretty good record.
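The kind of 64-bit multiplication check such a harness exercises boils down to verifying that the multiply round-trips. Here's a minimal stand-alone sketch of the check itself - this is not SafeInt's actual implementation (which uses compiler intrinsics on x64), just the portable fallback idea:

```cpp
#include <cstdint>

// Overflow-checked unsigned 64-bit multiplication (illustrative sketch,
// not SafeInt's code). Returns true and sets result on success; returns
// false when a * b doesn't fit in 64 bits.
bool SafeMultiply(uint64_t a, uint64_t b, uint64_t& result) {
    result = a * b;                    // wraps on overflow; well-defined for unsigned types
    return a == 0 || result / a == b;  // overflow iff the division doesn't round-trip
}
```

A targeted harness then hammers the boundaries - operands near UINT64_MAX, products that land exactly at 2^63 and 2^64 - which is exactly the sort of case-by-case coverage that catches lurking runtime bugs.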
Then another wrinkle showed up - we wanted to support the Intel compiler, which is seriously hard core about optimizing things. It started throwing out some of our checks in addition and subtraction, and best of all, the test harness caught the problem. When changes were made to force the compiler to do the right thing, the test harness verified that the problem went away on the Intel compiler, and that we hadn't regressed on the Microsoft compiler and gcc.
Without a solid test harness, this class would be a complete nightmare.
Another point - you sometimes want to unit test individual functions. For example, I want to feed UTF-8 from a file to MultiByteToWideChar, and it won't tell me that it didn't use all the input, or where. So I need to detect partial characters, which is a bunch of tedious bit flipping. So I wrote the function, made a debug-only test function to drive it with, and now I can step into the unit test for that one utility function to make sure it works. Like I did with SafeInt, I can also prove that I can go down all the code paths and get 100% coverage of the code. Testing this little function from outside would be a nightmare - we should still do it, but this is more comprehensive.
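The "tedious bit flipping" involved looks roughly like this. The sketch below is a hypothetical version of such a helper (name and exact policy are my own, not the commenter's code): it reports how many bytes at the end of a buffer form an incomplete UTF-8 sequence, so the caller can hold them back before calling MultiByteToWideChar:

```cpp
#include <cstddef>

// Returns the number of bytes at the end of buf that form an incomplete
// UTF-8 character (0 if the buffer ends on a character boundary).
// Illustrative sketch only; malformed input is treated as complete.
size_t TrailingPartialUtf8(const unsigned char* buf, size_t len) {
    if (len == 0) return 0;

    // Walk backwards over up to 3 continuation bytes (10xxxxxx).
    size_t back = 0;
    while (back < len && back < 3 && (buf[len - 1 - back] & 0xC0) == 0x80)
        ++back;
    if (back == len) return 0;  // nothing but continuation bytes: malformed, give up

    // Decode the lead byte to learn how long the sequence should be.
    unsigned char lead = buf[len - 1 - back];
    size_t expected;
    if      ((lead & 0x80) == 0x00) expected = 1;  // ASCII
    else if ((lead & 0xE0) == 0xC0) expected = 2;  // 2-byte sequence
    else if ((lead & 0xF0) == 0xE0) expected = 3;  // 3-byte sequence
    else if ((lead & 0xF8) == 0xF0) expected = 4;  // 4-byte sequence
    else return 0;  // stray continuation/invalid byte: not a partial character

    size_t have = back + 1;
    return have < expected ? have : 0;
}
```

Because every branch above is reachable from a handful of tiny inputs, a debug-only driver really can walk all the code paths and demonstrate 100% coverage, as the comment describes.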
I find the tests almost equally valuable for forcing the code to be testable, which has the automatic effect of making the code much better designed: isolated, componentized, etc. It's much more difficult to write testable spaghetti code. Unfortunately, the flip side is also true: it's much more difficult to test existing spaghetti code.
@Ben Voigt, "Instead I write the tests at the same time I'm writing the code. Sometimes I write the tests before the code, sometimes the code before the tests, but they're really written at the same time."
Sounds a bit more like he's bouncing back and forth, so unless he never lets the tests drive development, he's at least some of the time doing some form of TDD.
A few years back I wrote my own ultra-lightweight plain-old-C unit testing library. I've migrated my web site to Google Sites so I'm not quite sure how to make it easily viewable in pieces, but you can see it on some of my previous hosts where I still have accounts:
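For context, an ultra-lightweight harness of this kind typically amounts to little more than a counting check macro plus plain test functions. Here's a minimal sketch of the idea (my own illustration, not the commenter's library, which is plain C - this version uses only the C-compatible subset of C++):

```cpp
#include <cstdio>

// Running failure count for the whole test binary.
static int g_failures = 0;

// Report a failed expectation with file/line and the expression text,
// but keep running so one failure doesn't hide the rest.
#define CHECK(expr)                                                        \
    do {                                                                   \
        if (!(expr)) {                                                     \
            std::printf("FAIL %s:%d: %s\n", __FILE__, __LINE__, #expr);    \
            ++g_failures;                                                  \
        }                                                                  \
    } while (0)

// A test is just a function that makes CHECK calls.
static void TestArithmetic() {
    CHECK(2 + 2 == 4);
    CHECK(10 / 3 == 3);  // integer division truncates
}
```

The appeal of this style is zero dependencies and zero registration machinery: a test suite is a list of function calls in main, and the process exit code can simply be the failure count.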
I'd like to get your opinions on its usability and usefulness.
Test before development is an important part of the TDD concept, because it allows you to write your tests before you start making decisions about how the code will work. You will, of course, still run into the issue that your development decisions will change the behaviour of your code such that the tests no longer work, but you will be able to modify your tests to handle that, and still maintain the lack of assumptions in your tests.
I'm tempted to suggest that it might be nice to write the documentation first, too.
@Alun: The decisions about how the code will work should have been made when you were writing the dev spec for the feature. IMHO a dev spec should contain enough information that a knowledgeable developer can completely implement the feature, which means that all the algorithmic decisions should have already been made.
+1 on TDD, I've also been using it for a few years now and it certainly reduces "bug production." :)
One challenge I have had, though, is keeping my tests organized in a logical structure, given that adding new tests is really cheap. I've found some guidance around this in Gerard Meszaros's book "xUnit Test Patterns" - a must read once you have a few months of TDD under your belt.