Recently I set up a project where I wanted to use xUnit.net as a unit test framework, and I also wanted code coverage, so I thought it would be easy using VS2010. Even though the final solution turned out to be fairly simple, there were a few bumps in the road. Basically I had to do a variant of what I did for native coverage before, and I ended up with a batch file looking like this:
REM Clean out artifacts from a previous run
del *.dll *.orig *.pdb *.exe *.config *.coverage
REM Copy the test assemblies and the xUnit.net console runner
copy ..\ProjectUnitTests\bin\Debug\*.dll .
copy ..\ProjectUnitTests\bin\Debug\*.pdb .
copy PATH_TO_XUNIT\xunit.console.exe .
copy PATH_TO_XUNIT\xunit.console.exe.config .
copy PATH_TO_XUNIT\xunit.runner.utility.dll .
REM Instrument the assembly under test and start the coverage monitor
vsinstr.exe ProjectAssemblyTested.dll /COVERAGE
start vsperfmon.exe /COVERAGE /OUTPUT:Project.coverage
REM Run the tests (test assembly name is just an example) and stop the monitor
xunit.console.exe ProjectUnitTests.dll
vsperfcmd /SHUTDOWN
The only real bump in the road was that the error message when I used the x86 profiler on an x64 assembly did not say anything meaningful. It just refused to gather any coverage data. So the trick was to open a VS2010 x64 command prompt, and then everything worked smoothly. Easy enough to get coverage after all.
Sometimes a team comes to the conclusion that the daily scrum (a.k.a. the daily stand-up) is a waste of time. I think the most common reason for this is that the team is not having an effective, informative meeting. I've seen a lot of daily scrums turn into a status report where everybody tries to justify what they've done in the last day, especially bringing up all the good things they've done. But that's not what the daily scrum is for. It is to bring up problems and let everybody know what will happen in the next 24 hours. Or at least that's what I think is the most important part. You should view the daily scrum as your daily planning meeting.
The other type of team that wants to skip the daily scrum is typically a team sitting together in one room and performing very well with a lot of communication. If the team is performing and communicating well, why waste a few minutes every day on a daily scrum that will not bring up anything not already communicated? Well, as I mentioned before: different teams (and times) call for different measures. I think it is OK for a team that has embraced agile software development that well to stop having daily scrums if the communication is good without them. And such a mature team will notice when the daily scrums are needed again.
But be warned; most teams who want to skip the daily scrum want to do it for the wrong reason. And even if the team has excellent communication and sits all together, having everybody stand up (no need to even leave the desk) and confirm that there are no big issues that need to be raised is better than trying to skip it. The potential benefits by far outweigh the cost in my mind. So I would do the safe thing: keep it and just make it really, really short most of the time.
I had a discussion the other day with somebody about the uselessness of measuring the cyclomatic complexity of methods. In short, the discussion went along the lines that using just a single measurement does not give you anything other than a number, and some methods with high cyclomatic complexity may still be easy to understand. This reminded me of a tool I looked at a long time ago. At that time only crap4j was mature and crap4net was in an early alpha (it looks more mature now). So what do these tools do? Well, they calculate CRAP, where CRAP stands for Change Risk Anti-Patterns. The idea is to use both cyclomatic complexity and code coverage to find methods that are likely to be dangerous to change in the future. Complex methods are more likely to be difficult to change without introducing new defects, and the same goes for even simple methods without automated test coverage. Hence CRAP is calculated using the following formula:
CRAP = COMP^2 * (1 - COV)^3 + COMP
COMP is the cyclomatic complexity of a method and COV is the code coverage for that method. Preferably path (or condition/decision) coverage should be used. Only code coverage generated by automated tests should be counted (i.e. no manual tests). If CRAP is more than 30, the method is considered high risk. A CRAP score of 5 or below is considered good.
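To make the thresholds concrete, here is a small sketch of the formula in Python (the function name and the sample values are mine, not from crap4j or crap4net; coverage is expressed as 0.0–1.0):

```python
# CRAP = COMP^2 * (1 - COV)^3 + COMP
def crap(comp, cov):
    """comp: cyclomatic complexity, cov: automated-test coverage in [0, 1]."""
    return comp ** 2 * (1 - cov) ** 3 + comp

print(crap(30, 1.0))  # 30.0 -- fully covered, complexity alone puts it at the limit
print(crap(5, 0.0))   # 30.0 -- even a simple method is high risk when untested
print(crap(5, 1.0))   # 5.0  -- simple and fully covered: "good"
```

Note how full coverage caps the score at the complexity itself, while zero coverage makes even modest complexity blow up; that is exactly the "complex or untested" risk the metric is after.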
I like how the combination of these two easy-to-get metrics gives you a much better tool for finding methods that need some attention. That's what Uncle Bob is doing too. And as he says, these metrics are so easy to get and help you focus on what code needs to be fixed for the future. Just do it!
Now that my team has used Malevich for a little over a month I thought it was time for an update. On the down side, the diff view is not as good as in Araxis Merge (a tool I use for diff/merge); especially when there are just small changes to a line, Araxis is superior to Malevich. But I guess that is all I can say about things that could be improved. On the positive side we have:
So all in all I'm very pleased with the benefits my team has seen since we started to use Malevich. And the benefits outweigh the fact that the diff view could be better.
Yesterday I attended a class about Pex and Moles held by the authors of those tools. The first part was about Moles, which is kind of like Stubs (which I've covered before) but more powerful. With Moles you can fake any method in any class. I'll say that again: any method in any class. That is very powerful when working with external dependencies, and especially when working with legacy code, trying to add some more tests for it. Even though Moles is a very powerful tool that can result in bad code if used in the wrong situations, it is very simple and straightforward to use. There's not really any fancy API to learn as with most mocking frameworks; just simple delegate properties that can be set to override any method's behaviour. I think it's just as neat and clean as it is powerful. So when should you use Moles? Simple: as little as possible. For code you own there is really no need to use Moles (but you may want to use Stubs, which is now part of Moles). And for external dependencies you probably want to wrap them as usual, but with Moles you can get pretty high coverage even on those thin wrappers.
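Moles itself is .NET-only, but as a rough analogue of the idea — detouring any method, even in code you don't own — here is a sketch using Python's standard unittest.mock (the seconds_since function is just a made-up example of code with an external dependency):

```python
import time
from unittest import mock

def seconds_since(start):
    # Depends directly on the "external" time.time() -- normally awkward to test
    return time.time() - start

# Detour time.time for the duration of the block, much like setting a
# Moles delegate property to override a method's behaviour
with mock.patch("time.time", return_value=100.0):
    print(seconds_since(90.0))  # 10.0
```

Just like with Moles, the power to replace anything is also the power to make a mess, so the same advice applies: use it as little as possible.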
The second part of the class was about Pex. Simply put, Pex is a tool that tries to break your code by analysing its execution and trying several different inputs, and it can generate parametrized unit tests from this. While Pex can be used to create a minimalistic test suite that covers all paths of the code, there are a few things that prevent me from using Pex for this. First of all, I don't want to generate my tests after the fact. The minimalistic test suite is not very good as documentation of expected behaviour. And even though such a suite covers all paths, it does not cover all possible conditions in the code, so I end up with one or two tests missing if there are complex conditional statements. However, Pex can also be used to analyze a single method without actually generating any tests. That is where I think I'll find Pex useful in the future, since it is a good way of finding out if there are any test cases that should be written.
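Pex does this with far more sophistication (analysing branches to pick interesting inputs), but the core idea — letting a tool hunt for inputs that break a method — can be illustrated with a deliberately buggy toy function and a brute-force search (both entirely made up for this sketch):

```python
def buggy_abs(x):
    # Hidden defect: one specific input slips through unhandled
    if x == -7:
        return x
    return x if x >= 0 else -x

# Naive stand-in for Pex's input exploration: try a range of inputs and
# report the ones that violate the expected property abs(x) >= 0
failing = [x for x in range(-10, 11) if buggy_abs(x) < 0]
print(failing)  # [-7]
```

The input that falsifies the property is exactly the test case you were missing, which is why I find the "analyze a method" use of Pex so appealing.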
So the new blog engine also works very well with FeedBurner, so if you subscribe to this blog you should start using this feed URL instead: http://feeds.feedburner.com/BeingCellfish.
Apparently the blog engine got updated last night, and in the process a few of my latest posts disappeared. It turned out I had the main page open and could easily add all the missing blog posts again. We'll see if a backup is restored so that I end up with everything doubled. In the meantime I'll experiment with the look and feel of this blog.
Räksmörgås is Swedish for prawn sandwich and is often used when testing computer systems since it contains all three of the "extra" Swedish characters (ÅÄÖ). While that may be an interesting test, I stumbled over this old thing called the Turkey test today and I think it is brilliant, especially since I've experienced a few interesting limitations myself after moving to the US.
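A tiny Python sketch of the kind of bug the Turkey test catches: code that hard-codes a decimal point breaks for users in Sweden or Turkey, where the decimal separator is a comma (parse_decimal is a hypothetical helper I made up, not a real library function):

```python
def parse_decimal(text, decimal_separator="."):
    # Hypothetical locale-aware helper: normalise the separator before parsing
    return float(text.replace(decimal_separator, "."))

try:
    float("1,5")  # naive parsing assumes a US-style decimal point
    print("parsed")
except ValueError:
    print("ValueError")  # this is what actually happens

print(parse_decimal("1,5", ","))  # 1.5
```

The point of the Turkey test is that input like "1,5" should be part of your test data from day one, not something you discover in production after shipping abroad.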
Recently there was a discussion on an internal mailing list about when a sprint was considered a failure. Specifically, the question was whether seven out of nine user stories completed meant that the sprint was a failure. Naturally that should be considered a failure! NOT! As far as I'm concerned there is no such thing as failure in Scrum. Scrum is not a method that can be used to measure success versus failure. Scrum is a method that helps you understand what your real problems are. As long as you learn something and continue to improve, you cannot fail. Only if you ignore all the signals and refuse to learn from your mistakes (and successes) do you fail.
One thing that probably is a source of confusion is the sprint reset. That is, before the end of a sprint the team (or product owner) realizes that the current plan is just wrong and that the best thing to do is to stop the running sprint and plan a new sprint instead. This should be considered an exceptional case, and if it happens a lot you need to understand why. But it should not be considered a failure; it's a great opportunity to learn.