I've attended a few coding dojos where the MineSweeper kata was used, some with experienced TDD practitioners and some with TDD beginners. The interesting thing is that both kinds of dojos have resulted in the same behavior.
The first thing that typically happens is that the group creates a test and an implementation for an empty mine field. From there a natural continuation is to add a test for a one-by-one mine field with no mines. At this point there are a few different ways to go: we can add more mine fields, mine fields of different sizes, or mines to the mine field. Each of these paths adds complexity to the solution. What has happened in all the dojos I've attended is that mines are added before one (or sometimes both) of the other complexities.
Before mines are added the code is very much focused on parsing input, but once mines are added we need a good internal representation and a sweeper algorithm to produce the correct output. What I've seen at this point is huge refactoring steps that are hard to complete, bringing the progress of the solution to a halt. I think this happens because the group is so eager to get a complete algorithm that it forgets to take small steps and to finish one part (i.e. one dimension of complexity) before moving on to the next.
But does it have to be this way? I think not. At the dojos I've attended the group has every time started by writing end-to-end tests: one input should result in one output. And the group has stayed focused on this even after recognizing the need for a separate parser object and so on. The group has forgotten that it can stub certain parts in order to add functionality in smaller steps and make refactoring possible.
So I think the lesson learned is that sometimes the best thing is not to start writing separate tests for separate objects (which might happen when the need for a parser is recognized) but rather to stub the parser in order to continue work on the feature you want to add. This sounds like the obvious thing to do, but I've seen the same behavior in multiple dojos with both experienced practitioners and novices, so I guess it is easy to get carried away. Whenever refactoring feels painful you should probably stop what you're doing, take a step back, look at the situation, and make sure you're doing the right thing. There might be a less painful way forward that you're missing.
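To make the idea concrete, here is a minimal sketch (in Python for brevity; all names are invented for illustration) of stubbing the parser so work on the sweeper algorithm can continue in small steps, without first building a real parser:

```python
# Hypothetical sketch: a stub stands in for the not-yet-written parser,
# so the sweeper algorithm can be driven by tests on its own.

class StubParser:
    """Stands in for the real parser; returns a canned mine field."""
    def __init__(self, field):
        self.field = field

    def parse(self, text):
        return self.field  # ignore the input, return the canned grid


def sweep(field):
    """Replace each free cell with its count of neighbouring mines ('*')."""
    rows, cols = len(field), len(field[0])
    result = []
    for r in range(rows):
        row = ""
        for c in range(cols):
            if field[r][c] == "*":
                row += "*"
            else:
                count = sum(
                    field[rr][cc] == "*"
                    for rr in range(max(0, r - 1), min(rows, r + 2))
                    for cc in range(max(0, c - 1), min(cols, c + 2))
                )
                row += str(count)
        result.append(row)
    return result


# The test drives the sweeper while parsing is still stubbed out.
parser = StubParser([["*", "."], [".", "."]])
field = parser.parse("irrelevant input")
assert sweep(field) == ["*1", "11"]
```

The real parser can then be test-driven separately and swapped in later, keeping each refactoring step small.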
So I thought it was time to write a little about my lightning talk from the last Swedish ALT.NET unconference. First of all we must decide on what we mean when we say we're mocking. Do we mean "using mock objects in our tests" or do we mean "using a mocking framework"? I prefer to use the definition of mock used by Martin Fowler which I've talked about earlier. From that definition it is clear that mock frameworks can be used to create any kind of test double, not only mocks. And you can use mock objects without using any mock framework. So personally I tend to stick with the interpretation that "mocking" means "using mock objects".
Another interesting observation is that stubs according to Martin Fowler may record information about what calls are made. This makes stubs just as powerful in verifying actions as mocks. It also makes stubs very similar to mocks. So anything that can be mocked can also be stubbed. And vice versa.
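As a hypothetical illustration of such a recording stub (sketched in Python, names invented), note how verification happens after the fact by inspecting recorded state, rather than by setting expectations up front as a mock would:

```python
# Hypothetical sketch: a stub that also records the calls made to it,
# which makes it just as capable of verifying actions as a mock.

class RecordingMailerStub:
    """Stubbed mailer: sends nothing real, but remembers every send."""
    def __init__(self):
        self.sent = []

    def send(self, to, subject):
        self.sent.append((to, subject))


def notify_user(mailer, user):
    mailer.send(user, "Welcome!")


mailer = RecordingMailerStub()
notify_user(mailer, "alice@example.com")

# Verification by inspecting the recorded calls afterwards.
assert mailer.sent == [("alice@example.com", "Welcome!")]
```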
Looking at mocking itself, I think it forces me to think about the implementation of the code I'm testing. I think this is bad, since one of the great benefits of BDD/TDD in my opinion is that you don't think about implementation until you really have to. Today's mocking frameworks are also so powerful that they can be used to mock objects and designs that are difficult to test. And if the mocking framework rescues you from a bad design, it is questionable whether you will ever learn to design your code better. Why do it if you don't have to...
When I speak with people who have mocked a lot and then stopped, they all say they stopped because the code became difficult to refactor: so much of the implementation had to be coded into the mocks that refactoring was a pain. Refactoring is one of the most important steps in the TDD cycle, so if you make it difficult I would say you're doing something wrong.
But mocks are not all evil. They are great for protocol testing where you want to verify a certain protocol. Mocks also encourage the "tell, don't ask"-principle which is great.
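A small sketch of protocol testing (using Python's standard unittest.mock here for illustration; the function and collaborators are invented) shows what mocks are good at: asserting which calls happen, in the "tell, don't ask" style, rather than inspecting state:

```python
# Hypothetical sketch: verify the interaction protocol with mocks.
from unittest.mock import Mock, call

def transfer(source, target, amount):
    # Tell, don't ask: we command the collaborators,
    # we never query their state.
    source.withdraw(amount)
    target.deposit(amount)

source, target = Mock(), Mock()
transfer(source, target, 100)

# The test verifies the protocol: exactly these calls, with these arguments.
source.withdraw.assert_called_once_with(100)
target.deposit.assert_called_once_with(100)
assert source.method_calls == [call.withdraw(100)]
assert target.method_calls == [call.deposit(100)]
```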
So should we be mocking? Well, I think a good developer should use the best tools available, and there are times when mocking is the best option. But mocks are also very powerful, and as they said in one of the Spider-Man movies: "With great power comes great responsibility". Personally I believe that everybody who learns BDD/TDD will look at documentation and tools and eventually find mocks. They will try mocks, and since mocking frameworks typically teach mocking in their tutorials, these BDD/TDD novices will start mocking. They will probably mock more than is healthy for them, end up with code that is hard to refactor, and conclude that BDD/TDD sucks. With that in mind, I think those of us who want to convince more people to use BDD/TDD might benefit from not pushing so hard for mocking. We should probably just mention mocks, explain the pros and cons, and encourage people to try BDD/TDD without mocking first, adding mocking as a powerful tool once they've mastered the basics.
Yesterday the Swedish chapter of ALT.NET held its second unconference. It all started with six lightning talks (of which I held one) followed by an open space session. It was a great experience to meet so many dedicated developers at once. And even though an unconference tries to be one long coffee break (since the most interesting conversations at a regular conference typically happen during the breaks), it was still during the actual breaks that I had the most stimulating conversations.
My lightning talk was on "should we not be mocking", where I talked about the good and bad things that might happen if you're using mock objects and/or mock frameworks. Other talks covered MSpec, OpenTK, db4o and continuous integration (CI). A funny quote from the CI session was: "you can argue against the SOLID principles and TDD, but you can't argue against CI".
The open sessions I attended covered TDD of WPF applications, conventions vs. configurations, and working with legacy code, where CloneDetective was mentioned. I also facilitated an open session on BDD without BDD frameworks. The day ended with a session where we discussed what ALT.NET should be doing. I'm looking forward to seeing what happens with the "mentor list" that was mentioned.
When I last wrote about TDD and GUIs I talked quite vaguely about how TDD of GUIs can work. Since then I've thought more about the practical details, and by chance the book I was reading at the time (Test Driven by Lasse Koskela) covered this topic. Inspired by that book I've done a few experiments.
First of all we must decide what we want to test in the GUI. Usability can never be tested just by applying TDD/BDD, so we don't want to test the looks of the GUI; we should test its functionality. There are also two types of graphical components we will be testing: user controls where we want to test that certain pixels turn out the way we want, and all other kinds of graphical components.
Let's first look at graphical components where we actually want to check pixel colors. What could this be? Well, it could be a custom graph component or a progress bar: something where we can easily know, from how we set up the test, that certain pixels will be of a certain color. To test this we should render the component to an in-memory bitmap and check the colors. Using WPF this is pretty easy with the Render method. Using WinForms we have to use different methods depending on whether it is a form or a control we're testing. The reason forms are different is that, depending on the layout, it can be impossible to know the exact offset to use, so the method used for controls cannot be reused. On the other hand, you should probably not test complete forms by drawing them to a bitmap anyway...
public void PaintControl(Control control, Bitmap b)
{
    // GetClientOffset is a helper assumed to be defined elsewhere.
    Point offset = GetClientOffset(control);
    foreach (Control c in control.Controls)
    {
        c.DrawToBitmap(b, new Rectangle(new Point(offset.X + c.Location.X, offset.Y + c.Location.Y), c.Size));
        PaintControl(c, b);
    }
}
The offset code would not be needed if we could use the protected InvokePaint method.
So what about all the controls where an exact pixel match is not needed; what should we test? Well, we can always test relative positioning if it makes sense. Things like "the OK button is always above all other buttons" could be tested if we find that useful. Other than that we should focus on testing that the functionality of the user interface is correct. To do this, and to make our GUI easy to test, I think the MVC (Model-View-Controller) pattern is a great candidate, and probably the best fit is the variant called MVP (Model-View-Presenter) with a passive view. The model is a no-brainer from a TDD perspective, the presenter should be just as easy to test, and with a passive view the view should be pretty easy to test too.
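The MVP-with-passive-view idea can be sketched without any GUI toolkit at all (shown here in Python; the login scenario and all names are invented for illustration). The presenter holds every decision, and the test drives it through a fake view:

```python
# Hypothetical MVP sketch: the view is passive (just setters),
# so the presenter can be fully tested with a fake view.

class FakeLoginView:
    """Passive view: only exposes operations the presenter drives."""
    def __init__(self):
        self.error = None
        self.welcome = None

    def show_error(self, message):
        self.error = message

    def show_welcome(self, name):
        self.welcome = name


class LoginPresenter:
    def __init__(self, view):
        self.view = view

    def login(self, name, password):
        # All decisions live in the presenter, none in the view.
        if password == "secret":  # hypothetical rule, just for the sketch
            self.view.show_welcome(name)
        else:
            self.view.show_error("Invalid credentials")


view = FakeLoginView()
LoginPresenter(view).login("alice", "secret")
assert view.welcome == "alice" and view.error is None
```

In production the same presenter would drive a real form implementing the same view operations; only the thin, passive view remains untested by unit tests.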
Last June I presented a link where P&P showed off their fancy office. Now there is a new film with a budget version from the Visual Studio Team Architect team. It's probably more realistic, so I think it serves as good inspiration too.
I think the most interesting thing about the clip is that almost all team members previously had offices with windows and bad experiences with co-location, but once co-located with the feature team they all liked it.
If you've read your Scrum literature you've definitely heard of pigs and chickens. Ade Miller, however, has identified a few behaviors you might see in your team that are bad or really bad. And just for the fun of it he came up with a number of animals to represent the different behaviors. The "new" animals in the Scrum farm are:
You know you're a geek when you get excited over the fact that the epoch time will pass 1234567890 tonight (when this post is published). What did you do during this event?
Agile (and especially Scrum) is very hyped in the software industry at the moment. And I think it is because of this hype that one of the most common questions I hear from people who want to start being agile is "what tools are there to help me?". First of all, I think many tool seekers forget to ask themselves why they want a tool, because most teams don't need fancy tools. Actually, in my experience teams not using fancy tools tend to perform better: teams with fancy tools tend to spend a lot of time updating stuff in their tool and less time communicating. I guess that is why the agile manifesto says "Individuals and interactions over processes and tools".
This does not mean we should avoid tools; sometimes tools are helpful and even necessary to a team. Personally, I recommend every team I've worked with to try low-tech tools like post-its and a whiteboard for at least one iteration before settling on a tool. Most teams tend to stick with the low-tech... But if you want to try different tools, Mike Cohn has compiled a list of tools to help you out.
A while ago I stumbled over an interesting blog post that showed the evolution from a traditional design to a design using dependency injection and IoC containers. As a great side effect it also shows how dependency injection is a good design pattern for aspects other than testability. And that is the whole point of BDD and TDD: when you let the behavior (or tests) drive your development you write "better code" auto-magically, because many patterns considered good also make the code more testable.
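The core of that evolution can be captured in a few lines (a minimal sketch in Python; the mailer/signup scenario and every name are invented): the service receives its collaborator instead of creating it, which both decouples the design and makes it trivially testable.

```python
# Hypothetical sketch of constructor (dependency) injection.

class SmtpMailer:
    """Production collaborator (would talk to a real SMTP server)."""
    def send(self, to, body):
        raise RuntimeError("would talk to a real SMTP server")


class FakeMailer:
    """Test double injected in place of the real mailer."""
    def __init__(self):
        self.outbox = []

    def send(self, to, body):
        self.outbox.append((to, body))


class SignupService:
    def __init__(self, mailer):  # dependency injected, not created inside
        self.mailer = mailer

    def register(self, email):
        self.mailer.send(email, "Thanks for signing up!")


# Production code would wire in SmtpMailer (or let an IoC container do it);
# the test simply injects a fake.
mailer = FakeMailer()
SignupService(mailer).register("bob@example.com")
assert mailer.outbox == [("bob@example.com", "Thanks for signing up!")]
```

The testability and the looser coupling fall out of the very same design move, which is the point the linked post makes.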
Yesterday I met with a few former colleagues for the last time. We've met for a few hours three times, discussing how to start with TDD and then continue with BDD. We have had several long discussions on why you benefit from using BDD and how to overcome a number of common mistakes made by BDD novices. It has been very interesting and I want to share a few of the things we've talked about.
For example, we talked about the economics of BDD. The very first time you use BDD you'll probably spend a lot of extra time completing your first release compared to not using BDD (and probably not writing any unit tests either). So how do you motivate this extra cost? First of all, I think we can all agree that the behavior-driven code will have fewer defects. This means less time working for free (fixing defects), and you can focus more on your next project, making more money there, since you're less interrupted by bugs in the old code. Defects will also typically be found earlier, and defects found early, while you're still working on the code, are fixed faster (i.e. at less cost). Second, when the customer wants new features you will probably be able to add them more quickly with all the tests in place than without them. But you still ask for the same money as before, so you make a larger profit on all additions. All in all, this should make your software more profitable over time.
Beyond the economic incentives, I think the fact that you as a developer will know whether a change breaks the code, rather than just guess, is a great improvement in working conditions. And knowing that you're releasing high-quality code (instead of hoping you are) should be enough for every developer to jump on the BDD train...
We've also had long discussions on how to test-drive GUIs, but I'll cover that again in a future post in a few days.