Being Cellfish

Stuff I wish I'd found in some blog (and sometimes did)

July, 2009

Change of Address
This blog has moved to blog.cellfish.se.
Posts
  • Pair programming in the army

    I've lately been thinking about more things the military does that agile teams either typically do already or could learn from. One thing I've been missing in my day-to-day work lately is how I, as a platoon commander, worked with my platoon sergeant (i.e. the person second in command in the platoon). As platoon commander I gave orders for some activity and the platoon sergeant then led the work and sorted out all the details while I planned our next activity. In combat my platoon sergeant and I worked a little differently. In combat the platoon commander leads the main force, focusing on the most important target. The platoon sergeant leads any supporting missions such as outflanks or suppressive fire to support the main force. This type of cooperation is not seen at platoon level only. The same type of cooperation between the two commanders is seen from squad level all the way up to the supreme commander.

    The same type of cooperation I experienced in the military is what I see when I do pair programming. The only difference is that there is no fixed commander; the commander role switches all the time. There are two ways to see this. Either the person at the keyboard is the platoon sergeant carrying out the last order (i.e. making a failing test pass or refactoring) while the person not at the keyboard is thinking about the next step. Or the person at the keyboard is leading the most important part of the mission (making a failing test pass or refactoring) while the platoon sergeant next to him does supporting tasks.

    Either way you look at it, the way a squad, platoon or company is led by its commanders is very similar to how pair programming works. And if the two are that similar, we should also be able to compare how things work when you're not pair programming. If I lost my platoon sergeant and did not appoint a new one, my work as a platoon commander became much more cumbersome and less fun. I also tended to make more mistakes and wrong decisions when I did not have my platoon sergeant to help me out with practical things or to discuss possible solutions with. And I think the same applies to not doing pair programming. You tend to make worse design decisions and more mistakes if you sit by yourself all day. Naturally you can get some of the synergies without pair programming if you make sure you discuss all design decisions with somebody, but pair programming will give it to you automagically in both big and small decisions.

  • System Center OpsMgr X-Plat Providers source code available

    So today we published some of the code for the cross platform implementation as open source. It's the code that implements the providers that gather everything needed for monitoring Solaris, Linux, AIX, HPUX etc. The fact that we were going to release this code as open source has been communicated for a long time, but I think it kind of got lost in the whole cross platform message since it is rare for Microsoft to release code as open source, especially product code and not just examples and frameworks.
  • Instead of triggers

    I've never been a big fan of triggers in a database since the existence of triggers is easily overlooked. I've also seen triggers used in situations where they were used to fix a bad database design; specifically, triggers were used to enforce constraints on other tables. I've also seen triggers used to move data to another table to keep a history of records. I think that use is debatable since a stored procedure can be used to do that, and the application should use the stored procedure and not do deletes and updates directly. So in this case the trigger was used to protect the system from developers not following the application design. This should not really be needed but...

    Anyway, since I've not been working with triggers for quite a while I've never really thought about the options available, but today I saw this video (requires free registration) which covers INSTEAD OF triggers. I think it's a nice feature which simplifies the implementation of protect-from-developers type triggers since you can basically redefine a delete to not do a delete at all, and vice versa (if you have the urge to delete data instead of updating it).
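    Roughly what I have in mind is something like this minimal sketch; the table and column names are made up for illustration:

        -- Made-up example: redefine DELETE on a table as a soft delete.
        CREATE TRIGGER trg_Customers_InsteadOfDelete
        ON dbo.Customers
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Instead of removing the rows, just mark them as deleted.
            UPDATE c
            SET c.IsDeleted = 1,
                c.DeletedAt = GETDATE()
            FROM dbo.Customers AS c
            INNER JOIN deleted AS d ON d.CustomerId = c.CustomerId;
        END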

    But in my opinion triggers are not the best way to handle things in your application. Use a stored procedure, since a trigger is easily forgotten but a used stored procedure is not. It is also easier to follow the flow in a stored procedure than through trigger execution. And if you have a nice stored procedure everybody should use but your developers keep screwing things up by using inserts, updates and deletes directly in the database, then I would consider adding a trigger preventing direct use. This can be hard since the stored procedure itself will cause the trigger to execute, but in my experience the stored procedure typically updates only one record at a time, so an easy way to try to find developer "abuse" is to prevent updates of more than one row at a time. That has been enough for me so far.
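    A sketch of what such a guard could look like (again with made-up names), assuming the blessed stored procedure never touches more than one row per statement:

        CREATE TRIGGER trg_Customers_PreventBulkChanges
        ON dbo.Customers
        AFTER UPDATE, DELETE
        AS
        BEGIN
            -- More than one row touched in a single statement? Then somebody
            -- probably bypassed the stored procedure, so roll it all back.
            IF (SELECT COUNT(*) FROM deleted) > 1
            BEGIN
                RAISERROR('Use the stored procedure; multi-row changes are not allowed.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END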

    Preemptive comment response: I know INSTEAD OF triggers are nothing new and have been available for some time, but I've just not been interested in triggers for the last nine years, so please forgive me...

  • Coding Dojo 3

    Third MSFTCorpDojo today, and MineSweeper and MicroPairing again. We did almost no design this time and started with only end-to-end tests, which in my opinion is the purest way of executing a kata. We made very good progress in the first hour but then started refactoring our tests quite hard, something I've never seen to this extent in any dojo session I've attended. The result was that we did not complete the kata, but we ended up with eleven really nice tests with a few nice helper methods. And if we had spent another 20 or 30 minutes we would probably have seen classes (or at least one) emerge from the code all by themselves, just as I expect it to be. All in all a great session.

    I especially like the fact that one of the participants did not do any coding in his daily work and had never tried TDD at all before the session. He felt he really learned something and that he could contribute and participate like any other person in the room. I guess that proves how well the dojo format works for all kinds of skill levels.

    The only big setback I experienced today was when we did a 10+ minute refactoring that involved more than one person to complete. The refactoring was in the test code and created a very general method to generate input and output data for our tests. I'm not sure that refactoring really added value in terms of readability. I would personally have stuck with a much simpler, less generic way of implementing my tests, even though it would have left me with a few more hard-coded strings. But I think that is OK in test code if it increases readability. On the other hand, we got a really nice generic method to use in future tests, so I guess it's a trade-off as usual and you should do what you think is best in each situation.

  • Remember why you make estimates

    Dan North recently wrote about a situation I regretfully recognize all too well. Imagine you're asked to take a number of user stories and estimate them using story points. You spend some time breaking the stories up and start estimating, and then you get the question: so how much can be done in one year? You respond that it is impossible to know since you don't know your velocity yet. And so everybody is frustrated. The customer has no idea and thinks story points are stupid because they don't help him. And you feel frustrated because the customer didn't tell you in the first place that he wanted to know how much could be done in one year.

    I believe the use of story points is good since people don't estimate in time very well. The relative nature of story points and velocity is, in my opinion, a better way of updating release plans over time, so the effort is not wasted. And after a few iterations the customer will be happy about story points. But you have to do something else in the beginning. I favor making really rough estimates in man-months or man-iterations and using that as an indicator to the customer about the release plan for the next year. But I also tell the customer that this is just a rough estimate and that as soon as we have our velocity we will have a better prediction of the actual content of the release. Personally I also throw away the initial time-based estimates as soon as they have been presented, and I try to make the customer do the same thing.
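    To put some made-up numbers on it: if the team completes 18, 22 and 20 story points in its first three iterations, the velocity is about 20 points per iteration. With roughly 26 two-week iterations in a year that projects to around 520 points worth of backlog, which is a much better basis for updating the release plan than the initial man-month guess.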

  • Three good reasons to make small tasks

    When talking about Scrum you often hear that the sprint backlog should be broken down into tasks no greater than 16 hours in size. The reasoning behind this, in my opinion, is to force the team to break things down. I know of several teams that decide on even lower limits. And even if you do not estimate the size in hours, because you only burn down the number of tasks in the sprint, it is still common to have some kind of guideline for a maximum task size. And there are at least three good reasons to break things down into small tasks.

    1. In my experience, the more things you break something down into, the larger the total gets. The positive side of this is that the total grows because when you break things down you remember to take all kinds of small things into account. The only real downside is that each estimate will probably include some buffer. On the other hand, people generally underestimate things a lot, so breaking things up often adds the "buffer" needed to get everything completed as a whole. And I think this is a better way than just having a fudge factor (i.e. multiplying everything by some constant), since each task adds value by saying what to do. A fudge factor does not tell you about all the small things to remember.
    2. Jeff Sutherland has pointed out a few times that teams that burn down stories tend to perform better (example). I think the next best thing is burning down the number of remaining tasks. And with smaller tasks you'll see steady progress each day, since each team member will be able to complete one or two tasks per day. This makes the motivation for burning down hours less of an issue, since a task-count burn down will be virtually identical to an hour burn down with slightly bigger tasks. Also, the burn down will not be the only thing revealing the flow of work.
    3. As a team member working with tasks that take around a day to complete, you can find yourself in one of two very stimulating situations. Either you complete a task each day before you go home, which leaves you with a nice feeling of satisfaction every day. I personally like this very much since it means I can start with whatever is necessary the next morning, a new task or something else that has come up. The other situation is that you complete a task during the day and start working on something new. I think this leaves you with the same kind of satisfaction of completion, but also the warm feeling of knowing what to start working on the next morning.

    There are certainly more reasons why you should break things down into small tasks, and probably a few why you should not. Though I think the key, when looking at when not to make small tasks, is more a question of timing: breaking things down into one-day tasks, then saving them for half a year before starting to work on them, is not a good idea.
