So the new blog engine also works very well with FeedBurner, so if you subscribe to this blog you should start using this feed URL instead: http://feeds.feedburner.com/BeingCellfish.
Sometimes a team comes to the conclusion that the daily scrum (a.k.a. the daily stand-up) is a waste of time. I think the most common reason for this is that the team is not having an effective, informative meeting. I've seen a lot of daily scrums turn into a status report where everybody tries to justify what they've done in the last day and especially bring up all the good things they've done. But that's not what the daily scrum is for. It is there to bring up problems and let everybody know what will happen in the next 24 hours. Or at least that's what I think is the most important part. You should view the daily scrum as your daily planning meeting.
The other type of team that wants to skip the daily scrum is typically a team sitting together in one room and performing very well with a lot of communication. If the team is performing and communicating well, why waste a few minutes every day on a daily scrum that will not bring up anything not already communicated? Well, as I mentioned before, different teams (and times) call for different measures. I think it is OK for a team that has embraced agile software development that well to stop having daily scrums if the communication is good without them. And such a mature team will notice when the daily scrums are needed again.
But be warned; most teams who want to skip the daily scrum want to do it for the wrong reason. And even if the team has excellent communication and sits all together, the cost of everybody standing up (no need to even leave the desk) and confirming that there are no big issues that need to be raised is tiny. The potential benefits by far outweigh that cost in my mind. So I would do the safe thing; keep it and just make it really, really short most of the time.
Yesterday I attended a class about Pex and Moles held by the authors of those tools. The first part was about Moles, which is kind of like Stubs (which I've covered before) but more powerful. With Moles you can fake any method in any class. I'll say that again: any method in any class. That is very powerful when working with external dependencies, and especially when trying to add more tests to legacy code. Even though Moles is a very powerful tool that can result in bad code if used in the wrong situations, it is very simple and straightforward to use. There is not really any fancy API to learn as with most mocking frameworks; just simple delegate properties that can be set to override any method's behaviour. I think it's just as neat and clean as it is powerful. So when should you use Moles? Simple; as little as possible. For code you own there is really no need to use Moles (but you may want to use Stubs, which is now part of Moles). And for external dependencies you probably want to wrap them as usual, but with Moles you can get pretty high coverage even on those thin wrappers.
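To give a feel for how simple those delegate properties are, here is a sketch of the classic example: detouring DateTime.Now. This assumes the Moles framework is installed and a mole assembly has been generated for mscorlib (which produces the MDateTime type in System.Moles); the Subscription class is a made-up method under test.

```csharp
using System;
using System.Moles; // generated mole types, e.g. MDateTime for System.DateTime
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical production code under test.
public static class Subscription
{
    public static bool IsExpired(DateTime deadline)
    {
        return DateTime.Now > deadline; // external dependency we want to fake
    }
}

[TestClass]
public class SubscriptionTests
{
    [TestMethod]
    [HostType("Moles")] // moles need the instrumenting test host
    public void IsExpired_WhenNowIsPastDeadline()
    {
        // Override the static DateTime.Now getter with a simple delegate.
        MDateTime.NowGet = () => new DateTime(2010, 12, 31);

        Assert.IsTrue(Subscription.IsExpired(new DateTime(2010, 1, 1)));
    }
}
```

That one delegate assignment is essentially the whole API; there is no record/replay or expectation setup as in most mocking frameworks.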
The second part of the class was about Pex. Simply put, Pex analyses the execution paths of your code and tries to break it by feeding it several different inputs. It can also generate parameterized unit tests from this analysis. While Pex can be used to create a minimalistic test suite that covers all paths of the code, there are a few things that prevent me from using Pex for that. First of all, I don't want to generate my tests after the fact. The minimalistic test suite is not very good as documentation of expected behaviour. And even though such a suite covers all paths, it does not cover all possible conditions in the code, so I end up with one or two tests missing if there are complex conditional statements. However, Pex can be used to analyse a single method without actually generating any tests. That is where I think I'll find Pex useful in the future, since it is a good way of finding out if there are any test cases that should be written.
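For reference, a parameterized unit test that Pex explores looks roughly like this. It is only a sketch using the standard Pex attributes; StringUtil and its Capitalize method are made up for the example.

```csharp
using System;
using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Made-up code under test.
public static class StringUtil
{
    public static string Capitalize(string s)
    {
        if (s.Length == 0) return s;
        return char.ToUpper(s[0]) + s.Substring(1);
    }
}

[TestClass]
[PexClass(typeof(StringUtil))]
public partial class StringUtilTests
{
    // Pex explores this method symbolically, picking concrete values of s
    // that exercise as many execution paths as possible, and can emit one
    // regular [TestMethod] per interesting input it finds.
    [PexMethod]
    public void CapitalizeNeverReturnsNull(string s)
    {
        PexAssume.IsNotNull(s); // constrain the inputs Pex is allowed to try
        string result = StringUtil.Capitalize(s);
        PexAssert.IsNotNull(result); // property that must hold for all inputs
    }
}
```

Note that the test states a property over all inputs rather than a single hard-coded case; the concrete inputs are Pex's job to find.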
Apparently the blog engine got updated last night, and in the process a few of my latest posts disappeared. It turned out I still had the main page open, so I could easily add all the missing blog posts again. We'll see if a backup is restored later, in which case I'll get everything doubled. In the meantime I'll experiment with the look and feel of this blog.
I had a discussion the other day with somebody about the uselessness of measuring the cyclomatic complexity of methods. In short, the discussion went along the lines that a single measurement does not give you anything other than a number, and some methods with high cyclomatic complexity may still be easy to understand. This reminded me of a tool I looked at a long time ago. At that time only crap4j was mature and crap4net was in an early alpha (it looks more mature now). So what do these tools do? They calculate CRAP, which stands for Change Risk Anti-Patterns. The idea is to use both cyclomatic complexity and code coverage to find methods that are likely to be dangerous to change in the future. Complex methods are more likely to be difficult to change without introducing new defects, and the same goes for even simple methods without automated test coverage. Hence CRAP is calculated using the following formula:
CRAP = COMP^2 * (1 - COV)^3 + COMP
COMP is the cyclomatic complexity for a method and COV is the code coverage for that method. Preferably the path (or condition/decision) coverage should be used for code coverage. Only code coverage generated using automated tests should be used (i.e. no manual tests). If CRAP is more than 30 it is considered a high risk method. CRAP of 5 or below is considered good.
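To make the formula concrete, here is a worked example for a method with a cyclomatic complexity of 10. With 50% coverage it scores

CRAP = 10^2 * (1 - 0.5)^3 + 10 = 100 * 0.125 + 10 = 22.5

With no coverage at all it scores 10^2 * 1 + 10 = 110, well past the high-risk limit of 30, while with full coverage the penalty term vanishes and CRAP = COMP = 10. So coverage alone can move the same method from clearly high risk to well below the limit.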
I like how the combination of these two metrics, which are easy to get together, gives you a much better tool to find methods that need some attention. That's what the Uncle is doing. And as he says: these metrics are so easy to get and help you focus on what code needs to be fixed for the future. Just do it!
Recently I set up a project where I wanted to use xUnit.net as the unit test framework, and I also wanted code coverage, so I thought it would be easy using VS2010. Even though the final solution turned out to be fairly simple, there were a few bumps in the road. Basically I had to do a variant of what I did for native coverage before, and I ended up with a batch file looking like this:
del *.dll *.orig *.pdb *.exe *.config *.coverage
copy ..\ProjectUnitTests\bin\Debug\*.dll .
copy ..\ProjectUnitTests\bin\Debug\*.pdb .
copy PATH_TO_XUNIT\xunit.console.exe .
copy PATH_TO_XUNIT\xunit.console.exe.config .
copy PATH_TO_XUNIT\xunit.runner.utility.dll .
vsinstr.exe ProjectAssemblyTested.dll /COVERAGE
start vsperfmon.exe /COVERAGE /OUTPUT:Project.coverage
REM Run the tests under the monitor (test assembly name assumed from the copy step above).
xunit.console.exe ProjectUnitTests.dll
REM Stop the monitor so the .coverage file gets written.
vsperfcmd.exe /SHUTDOWN
The only real bump in the road was that the error message when I used the x86 profiler on an x64 assembly did not say anything meaningful; it just refused to gather any coverage data. The trick was to open a VS2010 x64 command prompt, and then everything worked smoothly. Easy enough to get coverage after all.
Today we continued where the last dojo ended. Since last time we've created a small backlog with user stories using this tool. And we implemented one user story: as a robot developer I want to be able to turn my robot. Apart from a long, interesting design discussion we also covered another interesting topic: if you want to test an algorithm, should you duplicate the algorithm in the test code or just use hard-coded values? Let me give two examples:
1: Assert.AreEqual(Robot.TURN_SIZE * 2, robot.Heading);
2: Assert.AreEqual(180, robot.Heading);
On the first line we use a constant to calculate the expected value, and in the second case we use the value we're actually expecting. The good thing with the first alternative is that if the constant changes the test will still pass. But the danger is that if we cut-n-paste the "algorithm", both the test code and the production code may have the same error. If you write the algorithm twice (once in the test and once in production) it is probably OK, but you don't know. The second variant will break if the constant changes, but on the other hand the test is very clear about what the intention of the developer was. And maybe it is good that some tests fail when you change the constant, since it gives the developer a good chance to revisit the intention of each test.
Parkinson's law, if you have not heard it before, states that "work expands so as to fill the time available for its completion". That means that if you estimate a task to take one day to complete, it will take one day to complete. This is a potential problem with using hours as estimates for your tasks, because if your team is very defensive/pessimistic when doing the estimates, you will get fewer things done than if your team is optimistic. The only upside is that an over-estimating team will feel happy that they always finish things on time, while a team that constantly under-estimates will feel bad for not being able to complete everything planned. But from the outside (and supported by Parkinson's law) the under-estimating team will complete more things than the over-estimating team.
But having an unhappy team because of constant under-estimation is not good either, so what should you do? The obvious alternative is: don't estimate your tasks (at least not in time). Having small tasks, even of different sizes, will be just as good for tracking progress with burn-downs as using hours to estimate remaining time. But even though that may be a little better than using hours, Parkinson is still lurking, bringing down your velocity. And working on a budget is dangerous, since you don't know if your last task will be completed "on budget". Actually, I think that for most teams at least a few tasks take longer than estimated, and if everything else is completed on budget you will never be able to complete everything on time.
I think the only thing that can work against Parkinson's law is a mature team where everybody wants to complete more and more tasks since it is proof of improvement (sounds a lot like a Kaizen mindset, doesn't it?). And with a mature team where there is a hunger to complete as much as possible all the time, it doesn't really matter how you estimate your tasks. And if it doesn't matter, I suggest you make it as easy as possible, so skip the hours.
Now that my team has used Malevich for a little over a month, I thought it was time for an update. On the down side, the diff view is not as good as in Araxis Merge (a tool I use for diff/merge); especially when there are just small changes to a line, Araxis is superior to Malevich. But I guess that is all I can say for things that could be improved. On the positive side we have:
So all in all I'm very pleased with the benefits my team has seen since we started to use Malevich. And the benefits outweigh the fact that the diff view could be better.
Räksmörgås is Swedish for prawn sandwich and is often used when testing computer systems since it contains all three of the "extra" Swedish characters (ÅÄÖ). While that may be an interesting test, I stumbled over this old thing called a Turkey test today and I think it is brilliant. Especially since I've experienced a few interesting limitations after moving to the US: