The single responsibility principle is generally considered a good principle for designing software. My experience is that code written with this principle in mind turns out to be easier to understand, test and maintain. But what is a responsibility? Could "making it easy to develop X" be a good responsibility? No, that is not a good responsibility, because almost all design and abstraction in your software has one goal: making it easy to write your application. So if you choose this as your definition of single responsibility, you'll basically end up with a single class with lots of methods, because that single class' responsibility is to make it easy to implement the application.
So what is a good definition of single responsibility? I like the definition that a responsibility is a reason to change. If an object has only one reason to change, it has a single responsibility.
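As a minimal sketch of this idea (in Python, with class names invented purely for illustration), consider a report class that would change both when the layout changes and when the storage mechanism changes. Splitting it gives each class a single reason to change:

```python
class ReportFormatter:
    """Changes only when the report's presentation changes."""

    def format(self, title, lines):
        # Render the title followed by one bullet per line
        return title + "\n" + "\n".join("- " + line for line in lines)


class ReportRepository:
    """Changes only when the storage mechanism changes."""

    def save(self, path, text):
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
```

A combined ReportManager that both formatted and saved reports would have two reasons to change, and therefore two responsibilities.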
Some news about System Center Cross Platform Extensions. It will ship in the box with the next release of System Center, but the biggest news, I think, is that not only will the open source code taken into the product remain open source; the code used to discover and monitor parts of the UNIX operating systems, such as processors, disks and network adapters, will also be open source. That's something I wouldn't have bet on a few years ago.
I was recently watching a webcast that was not related to refactoring at all, but the presenter said something about refactoring that just blew my mind. The code he started with looked something like this:
public ClassD GetD()
{
    ClassA a = new ClassA();
    ClassB b = a.GetB();
    ClassC c = b.GetC();
    ClassD d = c.GetD();
    return d;
}
He then had the nerve to refactor for better performance into this:
public ClassD GetD()
{
    return (new ClassA()).GetB().GetC().GetD();
}
The only thing he did was make the code harder to read. Use a disassembler like this one to actually look at the generated code: yes, there is a difference in the number of lines of IL generated, but I feel confident that those differences will be removed when the code is compiled to native code before execution. So the only thing accomplished by this refactoring is that the code is harder to read (the original example had much longer method names), plus the ability to say "look how easy this is to use, it's only a single line of code". I hate to break this to you, but I can write anything and everything on a single line if that is important to you. That doesn't make it better.
Don't refactor for performance unless you know you have a performance issue. Don't refactor to get fewer lines of code. Refactor to remove duplicate code and to make the code easier to understand, not harder.
A little video to explain pointer basics in C++ to your kids... If you for some weird reason want to do that...
I recently found this interesting article on TDD when using T-SQL. It describes how a new unit test framework for T-SQL, called tsqlunit, works. The framework seems to lack a lot of the fancy asserts, probably because of limitations in T-SQL, but I still welcome it as a new addition to the family of unit test frameworks. Definitely something I'll use next time I have to do T-SQL work.
Using coding katas is a good way to learn and fine-tune TDD/BDD skills. A common way to perform katas is in a coding dojo. But a coding dojo involves a lot of people, and doing katas on your own might feel a little boring. At least I think doing katas alone is boring. So recently I tried out Project Euler, a site where you solve problems of varying difficulty any way you like. I've started solving the problems using BDD-style specs with the xUnit.net framework. Even though most problems are mathematical in nature, I think it is a fun alternative to doing code katas on your own. And you get a chance to dust off a few of the mathematical skills from school that you thought you'd never need again.
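As a sketch of the spec-first style (in Python rather than xUnit.net, using Project Euler's well-known first problem as an assumed example), you write the check from the problem statement first and then make it pass:

```python
def sum_of_multiples(limit, factors=(3, 5)):
    """Sum of all natural numbers below `limit` divisible by any of the factors."""
    return sum(n for n in range(limit) if any(n % f == 0 for f in factors))


def spec_multiples_of_3_and_5_below_10_sum_to_23():
    # The worked example given in the problem statement: 3 + 5 + 6 + 9 = 23
    assert sum_of_multiples(10) == 23


spec_multiples_of_3_and_5_below_10_sum_to_23()
```

In xUnit.net the spec would be a [Fact] method with a descriptive name instead of a bare function, but the workflow is the same.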
If you have a Project Euler account and are logged in, you can use this link to view my stats.
Many (agile) projects use a burn down chart to track remaining time. It is a commonly used visualization "tool" for progress. But there are also teams that track completed time for each task, typically in order to see how the completed time corresponds to the initial estimate. Using this information, the team hopes to improve its ability to estimate correctly.
So even though the purpose is often the same, the driving force is usually one of two things. If the team is lucky, it was the team that decided to track completed effort, because the team believed this was the best way to improve its estimation skills. A much more common scenario, I think, is that management wants the team to track completed effort, because management wants to know how good the team is at estimating and believes that tracking completed effort will help the team improve.
In the past I've had no problem when the team decides to track completed effort, since it is a team decision. But when it is enforced by management, all kinds of alarms go off in my head and I do my best to prevent it. I guess this is how many people see this and many other practices: as long as it is a team decision, it is all right. The team will stop doing it if it turns out to be a bad idea.
So if you're working as an agile coach and the team comes up with the idea of tracking completed effort in order to gather data and improve their estimation skills, should you encourage them to do so? Until recently I would have said yes every time. But I was recently involved in a discussion on this topic and was presented with an argument that made me change my mind completely. Take a look at the following burn down chart.
The chart shows an iteration with 20 working days. The team starts the iteration assuming it will complete 20 story points (or whatever you want to call them), so in essence they believe they will complete one story point per day. If the team's estimates are correct they will complete just that, and there is no need to track completed effort since the estimates already match reality.
Now look at the green (over estimate) line. The team has overestimated the effort and completes the 20 story points in just 14 days. This means their velocity is not 1 story point per day but rather about 1.43 story points per day, so they have overestimated everything by approximately 40%. Without having to look at each task, we can see that the estimates should be reduced by about 30% (multiplied by 14/20) if we want to fill the iteration with 20 completed story points.
Looking at the red (under estimate) line, we see that the team completed 14 story points in 20 days, giving a velocity of 0.7 story points per day. Adjusting the estimates means we have to add roughly 40% (multiply by 20/14) to each existing estimate if the goal is to complete 20 story points per iteration.
When using story points you don't typically change the estimates - you change the velocity for the next iteration. But I think this example shows that tracking completed effort is not really needed if you want to improve estimation skills. Some estimations will always be higher and some lower. And looking at the totals there is no need to track completed effort since the total completed effort is a known value. It is the length of the iteration. Using this technique you'll probably have enough data to improve without adding the overhead of tracking completed effort or risking the team thinking tracking completed effort is just a way of tracking who does a good job and who doesn't.
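The adjustment factors above fall straight out of the burn down chart. A small sketch of the arithmetic (the function and parameter names are mine, for illustration only):

```python
def estimate_adjustment(points_completed, days_elapsed, planned_velocity=1.0):
    """Factor to multiply existing estimates by, derived only from the
    burn down chart -- no per-task tracking of completed effort needed."""
    observed_velocity = points_completed / days_elapsed  # story points per day
    return planned_velocity / observed_velocity


# Over-estimating team: 20 story points completed in 14 days
print(round(estimate_adjustment(20, 14), 2))  # 0.7 -> cut estimates ~30%

# Under-estimating team: 14 story points completed in 20 days
print(round(estimate_adjustment(14, 20), 2))  # 1.43 -> add ~43%
```

The only inputs are the iteration length and the points completed, both of which the chart already shows.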
So now for the advanced part of this dilemma. I still think there is one situation where tracking completed effort may be of value. First of all, it must be a team decision. Second, it must be a team decision. And third, the purpose must be to identify tasks that grow far beyond their estimates, in order to evaluate the reason for such growth during the retrospective. The team might even be good at estimating on average, but have a feeling (or just know) that some things are extremely over- and/or underestimated, and want to make sure they identify those tasks in order to discuss them specifically during the retrospective. But the team must remember the purpose here: the completed effort does not need to be very accurate, just accurate enough to identify the extremes. And to do that, I don't think the team has to track effort on every task.
The day started with Ken Schwaber talking on the topic "It is not Scrum if...". Basically he tried to correct a number of common misunderstandings of his books. I then attended a breakout session on retrospective concepts. Nothing was new, but a few things I've seen very often in retrospectives were brought up as examples of not-so-good retrospective routines. Last for the day was a Q&A session with Ken and Jeff. They answered a number of questions, and a few interesting things were mentioned that I hadn't thought about before.
After the last session I got into a very interesting discussion with Arto Eskelinen from Reaktor Innovations. Soon two of his colleagues turned up, as did a friend of mine, and we all had a very interesting discussion on how to implement Scrum in large organizations. The funniest thing was that one of Arto's recommendations was actually something I'd been thinking about over the last few days as something we needed in our project. Since Lasse Koskela was one of the colleagues who joined us (the other was Jukka Lindström), I took the opportunity to talk a little about TDD and why I don't like mocks.
So the conference has come to an end. I have not only learned a lot and gotten to see some things from a new vantage point; I've also had fun in the process and come away with a lot of new energy and inspiration.
So the first day of the Scrum Gathering has come to an end. I just wanted to check in with a few first impressions. The opening session with Jeff Sutherland did not only cover the announced topic, Secret Sauce for Making Scrums Hyperactive; he also talked quite a lot about distributed Scrum, a topic that is very popular at the moment. I guess it has to do with companies trying to actually profit from outsourcing by using Scrum. Ken Schwaber talked about what the Scrum Alliance is doing to support Scrum practitioners around the world.
When it came to the breakout sessions, I chose topics that were of interest to me, held by speakers I didn't know much about or had never heard before. The reason is that, in my experience, well-known speakers often say more or less the same thing in a new package, so by choosing a less known speaker I gamble a little in the hope of learning something new. And even in the less interesting sessions I learned at least something.
The first breakout session was Agile Transformation: what to do with managers. It was interesting to hear how others work to involve middle managers in the change toward an agile organization and how to handle the fears these managers typically have. After that session I attended The Day After Retrospective. Retrospectives being one of my favorite topics, it was very interesting to hear a few new views on retrospectives and how to handle situations where the Scrum Master turns out to be the actual problem. The last session of the day was a reserve alternative, since the session I intended to attend got canceled. So the last session was Does Scrum Stifle Creative Innovation. I wish I had chosen something else. It was not at all about what the title promised (which actually is an interesting topic). Instead there was a lot of talk about how to survive and adapt when the product owner is doing a really bad job and not understanding what he is supposed to do: order the product backlog from a user experience perspective. So far that is OK, but I got the feeling the speaker did not recognize this as a major problem at all. To him it was normal, since nobody can know everything.
So that was a short summary of the first day.