A former colleague once said that there are only two types of people: those who want to do a good job and those who want to go home. And it has nothing to do with working overtime. The statement is obviously provocative, but I don't think you should interpret it literally. What it means is that some people go to work, do what they're asked and then go home, without necessarily having a passion for what they do. Others are very passionate about what they do and go to work because they love what they do there.
To me, this is just another way of describing software craftsmanship, which has become more and more popular lately. And I think there are two important aspects of craftsmanship to consider. One is the integrity of being a craftsman. If you want to do a good job, you should refuse to do something that you think is a poor solution. Compare yourself with a great chef: a great chef would never serve you something that is not prepared and presented in an excellent way. If, on the other hand, you just want to go home, you do whatever it takes to get home early.
The other aspect is being proud of what you do. That doesn't mean you have to be proud of code you wrote a year ago, or even a few months ago. I've written code I wasn't proud of even a few weeks later. The important part is that you're proud of what you do at least when you write the code. I'd even say it's a good thing that you're not proud of old things you've done, because it means you've improved, and that is important. If you just want to go home, however, then you probably don't always feel proud of the things you just did, and you're probably proud of things you did ages ago. Because if you just strive to go home, you're probably not improving very fast.
Yesterday I wrote about being proud of what you do, but also about improving yourself. It made me think about a pretty scary thing that happened to me a few months ago. I was attending a training where we were divided into four teams and worked together for a few days to create an application. At the end of the class, during the retrospective, I suggested that in future classes each team could present some part of their solution they were proud of and wanted to show to the rest of the class. I said this because I thought my team had learned something and had done a few smart things that I wanted to share. We had some time to spare, so the trainer suggested I show what I had in mind and then the other teams could do the same.
So I showed our stuff, and then no other team wanted to show anything. Maybe I'm a little extreme in my search for improvements, and maybe I'm overconfident in the excellence of our solution, but in a room of 25 people I would assume at least somebody felt the same way about something. I don't believe they were all shy. Instead the class went silent and everybody went home. What really scares me is that I think there is a good chance the reason for not wanting to show anything actually was that the other teams didn't feel proud of their solution as a whole. There was an element of competition, and it's easy to get carried away because it's not real code in the classroom...
Finally, I must point out that I do not think you have to produce the ultimate solution every time to be proud of your work. I'm sure a master chef sometimes throws together something fast at home. But I think the master chef's quick lunch probably tastes better than my quick lunch. So the effort needed to be proud might differ between situations, but that hunger for improvement is definitely something I think differentiates the good from the great. So if you do not take pride in anything else, at least take pride in striving to be better, and be proud of each improvement.
I was involved in a discussion on what it means when a team commits to something. In this particular case one person thought it meant the team will do whatever it takes to deliver. I, on the other hand, meant the team thinks it can deliver all this and will try to do it. A big difference. Yesterday I read an interesting post on this topic. I like how he talks about soft and hard commitments, and this was exactly the difference in the argument I was involved in: I was thinking soft commitments, the other person hard commitments. And I must agree that to my ears, commitment by default sounds hard.
One thing I don't quite agree with in that blog post, however, is that Kanban is a no-commitment framework. It may look like that on the surface, but I doubt a team doing Kanban with no real team spirit will do a very good job. So even though Kanban does not prescribe any kind of commitment itself, the kind of soft commitment mentioned there, i.e. an emotional commitment to the team, is in my opinion essential to being successful.
And the same kind of argument goes for Scrum, I guess. I do not think Scrum prescribes any kind of hard commitments. Only bad implementations of Scrum have a PO that demands hard commitments, and teams working with hard commitments will not be very successful over time either. This is also mentioned in the blog post linked above, so I'm not really disagreeing. I just think that well-implemented Scrum is already in the "soft commitment only" area of the graph.
So I guess the conclusion you can draw from all this is that you'll get better results with only soft commitments. And that is hardly news. Teams that work well together can perform well under most conditions, and teams with no cooperation will probably fail under all circumstances. So getting the team to work well together should be your absolute top priority. The use of words is also important: choose your words carefully and make sure everybody has the same understanding of what a certain word means.
When I first started using xUnit.net, the only feasible way for me to run the tests was to use the console test runner as a post-build event in Visual Studio. The only annoying thing about the default settings in Visual Studio with this setup is that whenever there is a test failure, Visual Studio switches to the error window, which doesn't tell me more than the number of failed tests (since the return code of the runner is the number of failing tests). But there is a way to tweak your environment to always show the output window, where all the interesting details about the tests are printed.
Take a look at the options dialog shown to the right. By unchecking "Always show Error List if build finishes with errors" and checking "Show Output window when build starts", you make sure you have the console output handy if the build fails because of unit tests. The drawback is that for all compilation errors (and warnings) you'll have to switch to the error list window yourself. But personally I see more test failures than actual build failures when I develop code, so all in all these changes are a win for me.
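As a concrete sketch, the post-build event on the test project can be as simple as invoking the console runner on the freshly built test assembly. The runner path below is an assumption; point it at wherever your copy of xunit.console.exe actually lives:

```shell
rem Post-build event on the test project (runner path is illustrative).
rem The runner's exit code equals the number of failing tests, so any
rem failing test also fails the build step and shows up in the output window.
"$(SolutionDir)tools\xunit\xunit.console.exe" "$(TargetPath)"
```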
This time we deviated from what I think is a basic dojo rule: you start over each time. So we continued with our BankOCR variant from last time and actually finished it, including extending the number of digits and supporting three different checksum variants. This time we made one mistake. Or maybe I did, since I perhaps should have moderated the session harder: we got a little carried away with getting things done rather than refactoring at certain points. In the end the code turned out pretty OK, but the road there was shaky, with pretty complex implementations at times.
In the end it sounded like the participants were pretty happy with the session, and we decided to continue the next dojo session with the same kata and code base, but next time focusing on refactoring the code into something more object-oriented using the nine Object Calisthenics rules.
So I previously mentioned a neat way to set up Visual Studio to work with a console test runner so you don't switch to the error list all the time. There is a potential problem here. Sometimes you introduce a compilation error in the code. In that case the post-build step will still be executed against the old binary, and since we don't switch to the error list and just look at the output window, we might miss the compilation errors. This is not a good thing...
The solution is pretty easy, however. The assembly with all your tests has a post-build step that runs the test runner. What you need to do is add a pre-build step to your production assembly, i.e. the assembly you're testing, looking like this: del $(TargetPath)
This way, if there is a compilation error in your production code, you'll get an error when you try to run the tests, since the production assembly is guaranteed to have been deleted.
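Putting the two build events together, the whole setup might look like this (the runner path is an assumption; the pre-build event goes on the production project and the post-build event on the test project):

```shell
rem Pre-build event on the production project (the assembly under test):
rem delete the old binary so a stale DLL can never be picked up by the
rem test runner after a compilation error.
del "$(TargetPath)"

rem Post-build event on the test project (runner path is illustrative):
"$(SolutionDir)tools\xunit\xunit.console.exe" "$(TargetPath)"
```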
The interoperability connectors for System Center have now been released:
I find this interesting since it was done by one of the other teams under the System Center Cross Platform umbrella.
When talking about Scrum you often hear that the sprint backlog should be broken down into tasks no greater than 16 hours in size. The reasoning behind this, in my opinion, is to force the team to break things down. I know of several teams that decide on even lower limits. And even if you don't estimate size in hours, because you only burn down the number of tasks in the sprint, it is still common to have some kind of guideline for a maximum task size. And there are at least three good reasons to break things down into small pieces.
There are certainly more reasons why you should break things down into small tasks, and probably a few why you should not. Though I think the key when looking at when not to make small tasks is more a question of timing: breaking things down into one-day tasks, then shelving them for half a year and only then starting to work on them, is not a good idea.
Dan North recently wrote about a situation I regretfully recognize all too well. Imagine you're asked to take a number of user stories and estimate them using story points. You spend some time breaking the stories up, start estimating, and then you get the question: so how much can be done in one year? You respond that it's impossible to know since you don't know your velocity yet. And so everybody is frustrated. The customer has no idea and thinks story points are stupid because they don't help him. And you feel frustrated because the customer didn't tell you up front that he wanted to know how much could be done in one year.
I believe story points are useful since people don't estimate well in time. The relative nature of story points and velocity is, in my opinion, a better way of updating release plans over time. So the effort is not wasted, and after a few iterations the customer will be happy with story points. But you have to do something else in the beginning. I favor making really rough estimates in man-months or man-iterations and using those as an indication to the customer of the release plan for the next year. But I also tell the customer that this is just a rough estimate, and as soon as we know our velocity we will have a better prediction of the actual content of the release. Personally, I also throw away the initial (time-based) estimates as soon as they have been presented. And I try to make the customer do the same.