Being Cellfish

Stuff I wish I'd found in some blog (and sometimes did)

Change of Address
This blog has moved to blog.cellfish.se.
Posts
  • Merging coverage reports with Bullseye

    • 22 Comments

    I've previously recommended Bullseye, and there is another nifty feature you should know about: the ability to merge reports. This is pretty useful when you have one report from your unit tests and one from some other type of test run. Use this command to merge reports with Bullseye:

    covmerge.exe -c -fMergedData.cov file1.cov file2.cov
  • Native C++ Code Coverage reports using Visual Studio 2008 Team System

    • 18 Comments

    The code coverage tool in Visual Studio 2008 Team System is quite easy to use from within the IDE unless you want code coverage for your native C++ code. In order to generate a code coverage report for native C++ you have to use the command line tools. This is how you do it:

    1. First of all, your project must be linked with the /PROFILE linker option. If you bring up your project properties it can be found here:
      Configuration Properties -> Linker -> Advanced -> Profile
    2. The profiler tools can then be found in the following directory:
      C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools
    3. You need to add some instrumentation code to your EXE or DLL file and that is done with this command:
      vsinstr.exe <YOUR_EXE_OR_DLL> /COVERAGE
      This will copy the original file to an ".orig"-file and create a new file with the original name that contains instrumentation code needed to gather coverage data.
    4. Now start the listener with this command:
      VSPerfMon.exe /COVERAGE /OUTPUT:<REPORT_FILE_NAME>
    5. Now run your EXE or some test suite that uses the file you want to calculate coverage for.
    6. The listener started in step four (4) will not stop by itself once your test suite is finished, so you have to stop it manually using this command (from a second command prompt):
      VSPerfCmd.exe /SHUTDOWN
    7. When the listener has stopped you just drag-n-drop the created ".coverage"-file into Visual Studio and you can view the results.
  • Beginners should not use LINQ

    • 12 Comments

    When I first heard of LINQ I got really scared. I could see that it was a very powerful tool but also a tool that would be easy to abuse. Most experienced developers tend to agree that putting SQL statements into your logic or even GUI code is bad design. Beginners tend to realize this pretty soon too. But with LINQ we get the opportunity to do the same kind of bad design once again. And since it is ".Net code" rather than an "SQL statement", I feared it would take longer before beginners learned that lesson. And why do I care? I don't know. Guess I'm just one of those guys who want everybody to do a good job.

    Since LINQ was introduced I have not really seen LINQ in action in any real code, so I kind of forgot about this. But then I read this. Looks like I was right. LINQ is being abused just in the way I feared. But I'm happy to see that a person who I've heard speak warmly about LINQ so many times finally points out how LINQ can be your gateway to bad design. And remember to read the comments too when you follow the link above. The topic unfolds...

    UPDATE: Not only did the topic unfold in the linked thread. It unfolded here. If you just found this page I strongly suggest you read all the comments.
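
    To make the design point concrete, here is a minimal sketch of keeping a LINQ query behind a data-access method instead of embedding it in GUI or business-logic code. The Invoice type and repository are hypothetical and used only for illustration; this is not code from the linked post.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical type used only for illustration.
    class Invoice
    {
        public int Id;
        public DateTime DueDate;
        public bool IsPaid;
    }

    class InvoiceRepository
    {
        private readonly List<Invoice> invoices;
        public InvoiceRepository(List<Invoice> invoices) { this.invoices = invoices; }

        // The query lives in one place instead of being scattered through GUI code.
        public IList<Invoice> GetOverdueInvoices(DateTime today)
        {
            return (from i in invoices
                    where i.DueDate < today && !i.IsPaid
                    orderby i.DueDate
                    select i).ToList();
        }
    }

    class Program
    {
        static void Main()
        {
            var repository = new InvoiceRepository(new List<Invoice>
            {
                new Invoice { Id = 1, DueDate = new DateTime(2008, 1, 15), IsPaid = false },
                new Invoice { Id = 2, DueDate = new DateTime(2008, 5, 1), IsPaid = false },
            });

            // GUI or logic code asks for what it needs and never sees the query itself.
            foreach (var invoice in repository.GetOverdueInvoices(new DateTime(2008, 3, 1)))
                Console.WriteLine("Overdue invoice: {0}", invoice.Id);
        }
    }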

  • Recursively delete empty directories

    • 10 Comments

    I recently had to find a neat way to remove all empty directories recursively on a Unix machine. In the world of UNIX you can expect to find a way to do things like this pretty easily. When I started to search for a neat way to do it (rather than reading a bunch of MAN-pages) I came across a really funny story on The Old New Thing. Windows users are so used to having to use an application to do simple things like this that they forget about scripting possibilities. Guess that will change with PowerShell.

    However this was about how to do this on Unix. Well, this is my solution:

    #!/bin/sh
    # Remove empty directories recursively under the directory given as $1.
    # Reverse sorting makes sub-directories appear before their parents.
    find "$1" -type d | sort -r |
    while read D
    do
      # rmdir only succeeds on empty directories; errors are silenced.
      ls -l "$D" | grep -q 'total 0' && rmdir "$D" 2>/dev/null
    done

    That script takes one argument: a directory you want to remove if it and all its sub-directories are empty. Any directories encountered where files exist are preserved.

  • MFC is not dead

    • 8 Comments

    Writing my master's thesis was the first time I came into contact with VC++ and MFC. I worked with MFC and VC++ quite a lot for a number of years, but the last four or five years have not involved much MFC work. When the .Net framework came along with the possibility to write managed C++ applications I thought MFC was dead and didn't give it much more thought.

    Then I read about the new VC++ 2008 feature pack here and watched the Channel 9 video referenced. I noticed how the reporter also questions the MFC developer about the presumed death of MFC. Apparently MFC is not dead as many, including me, thought. The reason I thought so was that I just assumed that with managed C++ no one would bother writing unmanaged MFC applications any more. But when I think a little more about it, I guess there are lots of people still out there who are familiar with the MFC framework and who do not have the time and/or interest to learn the new managed equivalents.

    Some might say that you want to write MFC applications instead of managed ones because the users do not have the .Net framework installed. A fair point, but frankly I do not think the users who will benefit from this MFC update (which for example includes Office 2007 ribbon support) already have the .Net framework installed.

    Update: I noticed that this post was referenced by this one: http://blog.stevienova.com/2008/04/12/why-is-mfc-not-dead/. I realized I was not 100% clear in my post. I must say that I agree with the author of that post - MFC will be used for many years for the reasons mentioned: people want to make applications that work across many different versions of Windows without installing the .Net framework. There was never a doubt in my mind that that was the case. However, I was surprised there was so much new development put into new versions of MFC since I guess all the new stuff will not work very well anyway. And much of the new stuff in MFC is there to mimic things seen in the managed framework and/or in the latest versions of Office/Windows. And if you run Vista or Server 2008 I guess you will have the .Net framework already installed and hence do not need to use MFC.

  • I don't like Mock objects

    • 8 Comments

    When looking at BDD and TDD examples it is very common to see the use of mock objects in the code. This however strikes me as a little bit strange since I think you should use stubs and not mocks when working with BDD/TDD. A mock is basically a stub with the added knowledge that a mock object knows how it is expected to be used. In my opinion this fact disturbs the focus when applying BDD/TDD. Additionally I think maintainability is decreased since the tests will now have a dependency on implementation internals, and you risk having to change your test not because the API has changed but because of how things are done internally.

    Let me explain what I mean with an example. Consider a data object used to handle transactions and the ability to read and modify the amount of money in a given bank account. Now assume this data object is used as a parameter to a transfer method that is used to transfer funds between two accounts. One way to implement the transfer method could be:

    1. Start a transaction.
    2. Get amount from the to-account.
    3. Get amount from the from-account.
    4. Set new account amounts for both involved accounts.
    5. Check that the from-account does not have a negative balance.
    6. End the transaction.

    From a maintainability point of view I think mocking the data object will be risky since the mock will verify that the correct number of calls are made in the correct order. It wouldn't surprise me if we some day saw a new transfer implementation looking something like this:

    1. Start a transaction.
    2. Get amount from the from-account.
    3. Check that the amount is enough.
    4. Get amount from the to-account.
    5. Set new account amounts for both involved accounts.
    6. End the transaction.

    The example may seem a little far-fetched but still I think it points toward a maintainability risk involved when using a mock. But OK, let's assume that maintainability will not be a problem. We're still shifting focus from what is important, I think. As soon as a mock is involved when I write my test/scenario I not only have to think about the behavior I expect. I must also think about how my mock will be used since that may or may not affect my test code. Using stubs instead of mocks will reduce that focus shift since stubs are just doing what they're told without regard to context.

    Sure one might argue that using stubs will also involve having to think about how the stub is used. That is true but without the risk of having order or number of calls to the stub mess up your test/scenario setup. And the less time you spend on thinking about setup and the more you spend on defining behavior, the better I think. And thinking about behavior of internals and/or dependent objects should just be kept to an absolute minimum if you ask me.
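
    As a minimal sketch of what the stub approach looks like (all names are hypothetical, and the transaction handling from the steps above is omitted), the test below only states the expected balances after a transfer, so either call order shown above would pass:

    using System;
    using System.Collections.Generic;

    interface IAccountData
    {
        decimal GetAmount(string account);
        void SetAmount(string account, decimal amount);
    }

    // The stub just does what it is told: it holds balances and answers questions,
    // with no expectations about the number or order of calls.
    class AccountDataStub : IAccountData
    {
        private readonly Dictionary<string, decimal> balances = new Dictionary<string, decimal>();
        public decimal GetAmount(string account) { return balances[account]; }
        public void SetAmount(string account, decimal amount) { balances[account] = amount; }
    }

    class TransferService
    {
        // One possible implementation; the test below does not care which of the
        // two call orders is used, only about the resulting balances.
        public void Transfer(IAccountData data, string from, string to, decimal amount)
        {
            var fromBalance = data.GetAmount(from);
            if (fromBalance - amount < 0)
                throw new InvalidOperationException("Insufficient funds.");
            data.SetAmount(from, fromBalance - amount);
            data.SetAmount(to, data.GetAmount(to) + amount);
        }
    }

    class Program
    {
        static void Main()
        {
            var data = new AccountDataStub();
            data.SetAmount("from", 100m);
            data.SetAmount("to", 10m);

            new TransferService().Transfer(data, "from", "to", 30m);

            // The test describes behavior (balances after the transfer), not internals.
            Console.WriteLine(data.GetAmount("from") == 70m && data.GetAmount("to") == 40m
                ? "PASS" : "FAIL");
        }
    }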

  • Unit tests make it harder to refactor code

    • 8 Comments

    During my recent brownbag on TDD one comment was that having a lot of unit tests makes it harder to refactor code. First of all, the word refactor is in my opinion misused as much as mocking. There is a difference between refactoring and rewriting. Refactoring means you change the code to be "better" without changing any functionality. Rewriting means you potentially change things completely, including changing interfaces in APIs. So first of all, if you refactor your code and that means you have to change your unit tests, your unit tests are too tightly coupled with your code. Lesson learned: somebody screwed up when writing the tests in the first place. This happens all the time when you learn TDD. It is just a matter of biting the bullet and learning from the mistake you made.

    So back to the original statement; it should really be unit tests make it harder to rewrite code. Think about it. You want to change an API and some functionality and as a result of that you have to update hundreds of unit tests. That's a big pain, right? But wait... All these tests that now fail mean I have to manually verify that things I didn't want to change still work and things I wanted to change are changed to the correct thing. That is all good, isn't it? All the failing unit tests are really like a buddy asking: Did you really want to change this? Is it really a good idea to change this? Consider the alternative with no unit tests... Sure, you don't have to "waste time" updating old unit tests, but neither do you know if your code works as expected.

    So if you end up changing tests when you refactor code, somebody screwed up. And changing unit tests when you rewrite code is a good thing!
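
    As a minimal sketch of the distinction (with hypothetical names, not from the brownbag), a test that only states expected behavior through the public API survives any refactoring of the internals, while a rewrite that changes the API breaks it in exactly the "buddy asking" way described above:

    using System;

    class PriceCalculator
    {
        // Refactoring the inside of this method (extracting helpers, renaming private
        // members) changes nothing the test below can observe, so the test keeps passing.
        public decimal TotalWithTax(decimal net, decimal taxRate)
        {
            return net + net * taxRate;
        }
    }

    class Program
    {
        static void Main()
        {
            // The test only states expected behavior: 100 plus 25% tax is 125.
            var total = new PriceCalculator().TotalWithTax(100m, 0.25m);
            Console.WriteLine(total == 125m ? "PASS" : "FAIL");

            // A rewrite that changed the TotalWithTax signature or semantics would
            // break this test, forcing a deliberate decision about the change.
        }
    }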

  • Can the daily scrum be a waste of time?

    • 7 Comments

    Sometimes a team comes to the conclusion that the daily scrum (a.k.a. the daily stand-up) is a waste of time. I think the most common reason for this is that the team is not having an effective, informative meeting. I've seen a lot of daily scrums turn into a status report where everybody tries to justify what they've done in the last day and especially bring up all the good things they've done. But that's not what the daily scrum is for. It is to bring up problems and let everybody know what will happen in the next 24 hours. Or at least that's what I think is the most important part. You should view the daily scrum as your daily planning meeting.

    The other type of team that wants to skip the daily scrum is typically a team sitting together in one room and performing very well with a lot of communication. If the team is doing well and communicating well, why waste a few minutes every day on a daily scrum that will not bring up anything not already communicated? Well, as I mentioned before; different teams (and times) call for different measures. I think it is OK for a team that has embraced agile software development that well to stop having daily scrums if the communication is good without them. And such a mature team will notice when the daily scrums are needed again.

    But be warned; most teams who want to skip the daily scrum want to do it for the wrong reason. And even if the team has excellent communication and sits all together, having everybody stand up (no need to even leave the desk) and confirm that there are no big issues that need to be raised is better than trying to skip the meeting. The potential benefits by far outweigh the cost in my mind. So I would do the safe thing; keep it and just make it really, really short most of the time.

  • MineSweeper Kata reflections

    • 5 Comments

    I've attended a few coding dojos where the MineSweeper Kata was used. These have been dojos with both experienced TDD practitioners and TDD beginners. The interesting thing is that both types of dojos have resulted in the same behavior.

    The first thing that typically happens is that the group creates a test and implementation for an empty mine field. From here a natural continuation is to add a test for a one by one sized mine field with no mines. At this point there are a few different ways we can go. We can add more mine fields, mine fields of different sizes, or we can add mines to the mine field. Each of these paths adds complexity to the solution. What has happened in all the dojos I've attended is that mines are added before one (or sometimes both) of the other complexities.

    Before adding mines the code has been very much focused on parsing input, but once mines are added we need a good internal representation and a sweeper algorithm to produce the correct output. What I've seen at this point is huge refactoring steps that are hard to complete, and the progress of the solution is brought to a halt. I think this happens because the group is so eager to get a complete algorithm that they forget to make small steps and concentrate on finishing one part (i.e. one dimension of complexity) before moving on to the next one.

    But does it have to be this way? I think not. What has happened at the dojos I've attended is that the group every time has started by writing end-to-end tests: one input should result in one output. And the group has stayed focused on this even when recognizing the need for a separate parser object and so on. The group has forgotten that they can stub certain parts in order to add functionality in smaller steps and make refactoring possible.

    So I think the lesson learned is that sometimes the best thing is not to start writing separate tests for separate objects (which might happen when the need for a parser is recognized) but rather to stub the parser in order to continue work on the feature you want to add. I think this sounds like the obvious thing to do, but still I've seen the same behavior in multiple dojos with both experienced practitioners and novices, so I guess it is easy to get carried away. But whenever refactoring feels painful you should probably stop what you're doing, take a step back, look at the situation and make sure you're doing the right thing. There might be a less painful way forward that you're missing.
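
    As a minimal sketch of what stubbing the parser can look like (hypothetical names, not code from any dojo), the stub below returns a hard-coded field so the counting logic can be test-driven before the real parser exists:

    using System;

    // Returns the mine field as a grid where true marks a mine.
    interface IFieldParser
    {
        bool[,] Parse(string input);
    }

    // The stub ignores its input and returns a fixed 2x2 field with one mine.
    class StubFieldParser : IFieldParser
    {
        public bool[,] Parse(string input)
        {
            return new[,] { { true, false }, { false, false } };
        }
    }

    class Sweeper
    {
        private readonly IFieldParser parser;
        public Sweeper(IFieldParser parser) { this.parser = parser; }

        // Counts the mines adjacent to each square; in the full kata, squares
        // containing a mine would be rendered as '*' instead of a count.
        public int[,] Solve(string input)
        {
            var field = parser.Parse(input);
            int rows = field.GetLength(0), cols = field.GetLength(1);
            var counts = new int[rows, cols];
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                    for (int dr = -1; dr <= 1; dr++)
                        for (int dc = -1; dc <= 1; dc++)
                        {
                            if (dr == 0 && dc == 0) continue;
                            int nr = r + dr, nc = c + dc;
                            if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && field[nr, nc])
                                counts[r, c]++;
                        }
            return counts;
        }
    }

    class Program
    {
        static void Main()
        {
            // The test drives the counting behavior; the real parser can be added later.
            var counts = new Sweeper(new StubFieldParser()).Solve("ignored by the stub");
            Console.WriteLine(counts[0, 1] == 1 && counts[1, 1] == 1 && counts[0, 0] == 0
                ? "PASS" : "FAIL");
        }
    }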

  • PAL to NTSC converter

    • 5 Comments

    Finding a good PAL to NTSC converter turned out to be harder than expected. I needed something for my Wii, DVD player and digital video camera. And also something that would work with the TV I got. Turned out I had to dig a little deeper into my pocket than expected, but so far I'm satisfied with my purchase (CMD-HDX98). The only downside is that if we decide to move back to Sweden it cannot be used for NTSC to PAL conversion.

    So, a few pointers if you're looking for a PAL to NTSC converter. First, the cheap ones do not convert sound, so you might notice unsynced audio. Second, the cheap ones might convert between PAL and NTSC but do not change the refresh rate (from 50 Hz to 60 Hz), which is needed unless your TV can handle that for you.
