During my recent brownbag on TDD one comment was that having a lot of unit tests makes it harder to refactor code. First of all, the word refactor is in my opinion misused as much as mocking. There is a difference between refactoring and rewriting. Refactoring means you change the code to be "better" without changing any functionality. Rewriting means you potentially change things completely, including changing interfaces in APIs. So first of all, if you refactor your code and that means you have to change your unit tests, your unit tests are too tightly coupled with your code. Lesson learned: somebody screwed up when writing the tests in the first place. This happens all the time when you learn TDD. It is just a matter of biting the bullet and learning from the mistake you made.
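To make the coupling point concrete, here is a minimal sketch with made-up names (PriceCalculator, TAX are hypothetical, not from the brownbag): one test that asserts observable behavior and one that is too coupled to the implementation.

```python
class PriceCalculator:
    """Hypothetical class, only here to illustrate test coupling."""
    TAX = 1.25  # made-up 25% tax factor

    def total(self, items):
        return sum(items) * self.TAX

def test_total_includes_tax():
    # Behavior-focused: refactoring total()'s internals (extracting
    # helpers, replacing sum() with a loop) keeps this test green.
    assert PriceCalculator().total([100, 100]) == 250.0

def test_tax_constant():
    # Too coupled: reaches into an implementation detail. Renaming or
    # inlining TAX during a refactoring breaks this test even though
    # total() still behaves exactly the same.
    assert PriceCalculator.TAX == 1.25
```

Only the first kind of test survives a true refactoring; if the second kind breaks, the code's behavior never changed, so the test was the problem.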
So back to the original statement; it should really be that unit tests make it harder to rewrite code. Think about it. You want to change an API and some functionality, and as a result you have to update hundreds of unit tests. That's a big pain, right? But wait... all these failing tests mean I have to manually verify that the things I didn't want to change still work and that the things I wanted to change were changed to the correct thing. That is all good, isn't it? All the failing unit tests are really like a buddy asking: did you really want to change this? Is it really a good idea to change this? Consider the alternative with no unit tests... Sure, you don't have to "waste time" updating old unit tests, but neither do you know if your code works as expected.
So if you end up changing tests when you refactor code, somebody screwed up. And changing unit tests when you rewrite code is a good thing!
A while ago I was asked how to handle the case where setting up unit tests becomes complex because you want to test at a lower level. Since the question itself confused me, I tried to understand the problem, and it turned out that the person started by writing a test for some high-level abstraction, faking all dependencies. Once that was done, he then tried to fake some lower-level dependency, using a real middle implementation but still calling things from the highest level. When I explained how he should think instead, we came up with what we ended up calling the TDD stairway in our discussions. Take a look at the image (click on it for a larger version).
To the left I have just made up a number of abstraction layers you may have in your application. The blue arrows represent where your test code makes the call, i.e. how you invoke your code. The yellow boxes indicate the parts you need to fake. The green boxes represent the code being executed in each case. As you can see, you first have a three-step stairway of unit tests (UT). When writing unit tests, your test code always pretends to be the abstraction above the code you want to test, and you always fake the abstraction below the code you want to test. This is the kind of test you should be writing when you do TDD/BDD.
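One step of the stairway can be sketched like this (the layer names Service and Repository are invented for illustration, not taken from the image): the test code plays the role of the layer above, and a hand-rolled fake stands in for the layer below.

```python
class Repository:
    """Real lower layer: would talk to a database in production."""
    def find_user(self, user_id):
        raise NotImplementedError("needs a real database")

class Service:
    """Middle layer, the code under test; depends on the layer below."""
    def __init__(self, repository):
        self.repository = repository

    def greeting(self, user_id):
        return f"Hello {self.repository.find_user(user_id)}!"

class FakeRepository:
    """Fake for the abstraction below the code under test."""
    def find_user(self, user_id):
        return "Alice"

def test_service_greets_user():
    # The test acts as the abstraction above (it calls Service
    # directly, as a controller would) and fakes the abstraction
    # below (FakeRepository instead of the real Repository).
    service = Service(FakeRepository())
    assert service.greeting(42) == "Hello Alice!"
```

Each step down the stairway repeats the same pattern one layer lower: the test calls the layer directly and fakes only that layer's immediate dependency.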
At least most of the time. Because sometimes it makes sense to write an integration test that uses the real thing almost all the way. These integration tests may be written by your test team if you have one, but most of the time I think they would be written by the same person who writes the unit tests. The interesting aspect of the integration test is that it uses almost all abstraction layers but may have a fake at the bottom.
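As a sketch of that idea, with invented layer names: real layers wired together top to bottom, and only the bottom-most dependency (a hypothetical data store) replaced with a fake.

```python
class FakeStore:
    """Fake at the very bottom, standing in for e.g. a real database."""
    def __init__(self):
        self.rows = {1: "Alice"}

    def get(self, key):
        return self.rows[key]

class Repository:
    """Real middle layer, used as-is in the integration test."""
    def __init__(self, store):
        self.store = store

    def find_user(self, user_id):
        return self.store.get(user_id)

class Service:
    """Real top layer, also used as-is."""
    def __init__(self, repository):
        self.repository = repository

    def greeting(self, user_id):
        return f"Hello {self.repository.find_user(user_id)}!"

def integration_test_greeting():
    # Real Service and Repository wired together; the fake sits
    # only at the bottom of the stack.
    service = Service(Repository(FakeStore()))
    assert service.greeting(1) == "Hello Alice!"
```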
Then there are also end-to-end tests (or scenario tests or acceptance tests or whatever you want to call them). These use the real thing all the way. No faking and no shortcuts! This is definitely something you'd have a separate test team working on.
And as usual, no matter what type of test you write, you want it to be automated. Anyway, back to the stairway... I think it explains pretty well how to think when deciding what to test, how to test it and what to fake at different levels of abstraction. I've used this analogy multiple times and it seems to work pretty well, so I wanted to share it. What do you think?
Just for the record, I think face-to-face code reviews (or pair programming) are a much better idea than sending off a code review with an electronic tool and getting some feedback back. But the reality is that most teams use a tool where the reviewer and reviewee don't necessarily talk to each other, just exchange comments. A problem with all code reviews is: how do you know if the reviewer understands the code? A good reviewer always asks, and asking is easier done face to face. But what if the reviewer thinks they understand but is wrong? To handle this I learned a neat little trick from a colleague: as a reviewer you should always write a comment explaining what you think the code/change is supposed to do. Just one high-level comment with a few sentences is enough.
If you ever want to know how much a row in your SQL Azure database would cost you if you did not pay a fixed price... You want to read this.
Some banks are really paranoid. Which is good, since it means they want to protect my money. The problem is that the time difference and distance to Sweden make it a little bit cumbersome at times... I wish somebody had told me to give my parents power of attorney before I moved. That would have saved me some trouble. Don't make the same mistake.
Yesterday I had my first coding dojo with my new team. We did FizzBuzz the same way I've done it before. It turned out to work pretty well as a first dojo where nobody had participated in a coding dojo before. If I could do it again I would probably not have the faked dependency in the Printer class. And an interesting idea from the retrospective was to present the different requirements (i.e. adding Fizz, Buzz, lots and Cowabunga) in random order.
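The dojo's actual code (including the Printer class and the "lots" and "Cowabunga" extensions) isn't shown in the post; for reference, the classic FizzBuzz rule the kata starts from looks like this:

```python
def fizzbuzz(n):
    """Return the FizzBuzz word for n, or n itself as a string."""
    if n % 15 == 0:        # divisible by both 3 and 5
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Part of why the kata works well in a dojo is that each new rule (Fizz, then Buzz, then the extensions) arrives as a small, test-driven increment.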