Being Cellfish

Stuff I wished I've found in some blog (and sometimes did)

November, 2008

Change of Address
This blog has moved to blog.cellfish.se.
Posts
  • Native C++ Code Coverage reports using Visual Studio 2008 Team System

    • 18 Comments

    The code coverage tool in Visual Studio 2008 Team System is quite easy to use from within the IDE unless you want code coverage for your native C++ code. In order to generate a code coverage report for native C++ you have to use the command line tools. This is how you do it:

    1. First of all, your project must be linked with the /PROFILE linker option. If you bring up your project properties it can be found here:
      Configuration Properties -> Linker -> Advanced -> Profile
    2. The profiler tools can then be found in the following directory:
      C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools
    3. You need to add some instrumentation code to your EXE or DLL file and that is done with this command:
      vsinstr.exe <YOUR_EXE_OR_DLL> /COVERAGE
      This will copy the original file to an ".orig"-file and create a new file with the original name that contains instrumentation code needed to gather coverage data.
    4. Now start the listener with this command:
      VSPerfMon.exe /COVERAGE /OUTPUT:<REPORT_FILE_NAME>
    5. Now run your EXE or some test suite that uses the file you want to calculate coverage for.
    6. The listener started in step four (4) will not stop by itself once your test suite is finished, so you have to stop it manually using this command (from a second command prompt):
      VSPerfCmd.exe /SHUTDOWN
    7. When the listener has stopped you just drag-n-drop the created ".coverage"-file into Visual Studio and you can view the results.
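
    Putting it all together, a full run could look something like this (just a sketch - "MyApp.exe" and the output name are placeholders for your own binary and report file):

      rem instrument the binary (assumes the Performance Tools directory is on the PATH)
      vsinstr.exe MyApp.exe /COVERAGE
      rem start the coverage listener in its own window so this prompt stays free
      start VSPerfMon.exe /COVERAGE /OUTPUT:MyApp.coverage
      rem run the instrumented binary (or the test suite that exercises it)
      MyApp.exe
      rem stop the listener; MyApp.coverage can then be opened in Visual Studio
      VSPerfCmd.exe /SHUTDOWN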
  • Dangers of using Visual Studio 2008 Team System Code Coverage Tool for Native C++

    • 2 Comments

    So now you know how to get coverage reports for native C++ using Visual Studio 2008 Team System (if not - read this). There are a few things you need to know before you get excited. First of all, the only metrics you get are line and block coverage. A block is basically a statement and each line typically consists of one or more blocks. Unless you have 100% coverage I think both of these metrics are pretty useless for measuring quality. For example, consider a function consisting of ten lines of code. There is an IF-statement checking for an error and it throws an exception if the error occurs. If the error never occurs during the test run you still get 90% line coverage since the other nine lines are executed. I think this is pretty common in production code. Most of the code is for the common case and fewer lines are used to handle errors. So you get pretty high line coverage even if you do not test any of the error cases.

    Block coverage is even worse. For example consider the following line:

    SimpleClass* o = new SimpleClass();

    That line produces two blocks, of which only one is covered. And there is no reasonable way to test the uncovered block since it probably covers the case where the process runs out of memory.

    Identifying functions that are not called at all is often considered an important part of a code coverage report. Here we have another problem with the Visual Studio tool. Functions never referenced by the code are excluded from the report completely (I suspect this is because the linker removes all unreferenced symbols as part of the optimization at link time). This means the following class will report 100% coverage if it is instantiated and only GetA is called.

    class SimpleClass
    {
    public:
        SimpleClass(int a, int b)
            : m_a(a), m_b(b)
        { }
    
        int GetA() { return m_a; }
        int GetB() { return m_b; }
    
    private:
        int m_a;
        int m_b;
    };

    So with all these potential problems there is another tool I'd recommend you consider. It's called BullsEye. It is a more "puristic" tool, so there is no way to get block or line coverage, basically because those metrics are bad. Instead you get decision and condition/decision coverage. Basically, decision coverage checks that each conditional evaluates to both true and false, and condition/decision coverage additionally requires that each part of a boolean expression evaluates to both true and false. Consider the following line:

    if(a || b)

    There are two different decision outcomes (the whole expression "a || b" is either true or false) but four different condition outcomes (both "a" and "b" must each evaluate to both true and false). BullsEye also adds its instrumentation at compile time, so the GetB method in the example above will not be lost but shows up in the report as an uncovered function even if it is not referenced anywhere in the code. In the initial example (ten lines with 90% line coverage) we would get 50% decision coverage, which is a much better indicator of quality.

    And on using code coverage as a quality metric...I must insist you read one of my previous posts if you haven't done that already...

  • The 2008 Advent Calendar situation

    • 2 Comments

    So for this year's Advent calendar I'll focus on a made-up file utility object. The object is called FileUtil and it implements an interface called IFileUtil which looks like this:

        public interface IFileUtil
        {
            void Create(string content);
            void Delete();
            string Read();
            string Path { get; }
            bool Readable { get; set; }
        }
    
    I think the methods are quite straightforward, but here is a quick walk-through:
    • Create creates a file with given content.
    • Delete deletes the file.
    • Read returns the content of the file.
    • Path returns the path to the file.
    • Readable indicates if the file is readable or not. It is also possible to change permissions (i.e. readable or not) using this property.

    The test I will write (in 24 different ways) is a test where I want to verify that the correct exception is thrown when I try to read a file that is not readable. And before I do that I want to make sure the file actually exists and is readable. So basically the test consists of the following steps:

    • Create a test file.
    • Make sure I can read that file.
    • Change the file from readable to not readable.
    • Make sure I can no longer read the file.
    • Remove the test file.
    All the tests are written using xUnit.net.
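
    To give a feel for what the test does, here is a minimal sketch of one possible version (the FileUtil constructor signature and the exact exception type are my assumptions; the real 24 variations start on December 1st):

        using System;
        using Xunit;

        public class FileUtilTests
        {
            [Fact]
            public void ReadingAnUnreadableFileThrows()
            {
                // Assumption: FileUtil implements IFileUtil and takes the file path in its constructor.
                IFileUtil file = new FileUtil("testfile.txt");
                file.Create("some content");

                // Make sure the file exists and is readable before changing permissions.
                Assert.Equal("some content", file.Read());

                // Make the file unreadable and verify that reading now throws.
                file.Readable = false;
                Assert.Throws<UnauthorizedAccessException>(() => { file.Read(); });

                // Clean up the test file.
                file.Delete();
            }
        }
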
  • Constraints as User Stories

    • 1 Comment

    A few months ago I wrote a little about constraints as an alternative to user stories. Constraints are what many of you know as non-functional requirements. Today I read an interesting post by Mike Cohn where he argues that you should write your constraints in the form of user stories. Writing your constraints as user stories is a great suggestion. It reminds me of when I was writing requirements at Ericsson a few years ago. There each requirement had to end with "because ..." which forced the author of the requirement to actually explain the purpose of the requirement. Much like what user stories tend to do.

    Mr Cohn also writes a little about when these user stories should be added to the project (especially in the comments, and we're promised a future blog post on that topic). From one point of view I think he is correct. Once you add any user story you have committed to deliver it. It does not matter whether it is a constraint or a new button doing something nifty the user wants. In the same way that you cannot forget about performance once you start taking it into consideration, you cannot remove that button once it has been added. The difference is that constraints (regardless of whether they are written as user stories or not) generally take more time to implement each iteration, while a typical user story costs almost nothing once it is completed. So as soon as you start taking more and more constraints into consideration your velocity might drop. Don't see that as a bad thing; it's OK. A drastic velocity drop, however, is probably an alarm signal that something is wrong. Probably you designed your software without taking the constraints into consideration. That is no different from having a number of completely different user stories added to the backlog. The only difference is that since the constraints were probably known from the start, you failed to take them into account and probably decided to implement them too late.

    And that's where I think you must be careful. If constraints are treated as constraints it is obvious that you have to think about them all the time. I also think it is easier for the team to get closure, since user stories added in order to comply with the constraints can be estimated and completed much like any other user story. I'm thinking about user stories created to add automatic performance reports, for example. I still think it is a great idea to write your constraints as user stories. But since they're written as user stories the team might treat them as just another user story and get frustrated when they cannot get closure, because the story just goes on and on for every iteration. That's why you have to be careful. In some teams this is not a problem and in other teams it might be. So make sure the team understands that any story, once it is committed by the team, will affect all their future work - some user stories hardly at all, others a lot. If the team truly understands this I think you have nothing to lose and everything to gain by following Mr Cohn's advice and writing constraints as user stories.

  • System Center Cross Platform and Open Source

    • 0 Comments

    Some news about System Center Cross Platform extensions. It will be in the box with the next release of System Center, but the biggest news I think is that it is not only the open source components taken into the product that will be open source. The code used to discover and monitor parts of the UNIX operating systems, such as processors, disks and network adapters, will also be open source. That's something I wouldn't have bet on a few years ago.

  • Expensive fraud prevention fee

    • 2 Comments

    When you apply for a US visa you have to pay a fee of $131 per applicant. Depending on the type of visa you might also have to pay a $500 fraud prevention fee. I paid all of this in advance and went off happily to the US embassy this morning. It was pretty cold, so standing in line outside for an hour with a one-year-old kid was not fun. When we reached the door to the security checkpoint the guard apologized for not seeing we had a kid with us. If he'd known he'd have let us go in before everybody else. That was the first lesson learned today.

    The second lesson learned was that I should read all instructions carefully... The $500 fee should not be paid in advance; it should be paid at the embassy. And the extra money paid in advance is lost. I don't think that had anything to do with it, but we had a really short wait inside for our visa interview. At least we overtook a couple of people there...

  • Getting your priorities right

    • 0 Comments

    I think an important part of being agile, and of being a great developer, is making sure your backlog is correctly prioritized. This means both that everything should have up-to-date priorities, with business value as the key factor, and that each of your user stories should have a different priority. Two things are never equal in importance. Whenever a customer (or manager) tells you that two things are equally important, you (as a great developer) should be stubborn and ask "but if you could only have one of these, which one would you choose?". Most of the time you get the answer "well if I had to choose I would go with X but we can't ship without Y so they are both top priority". See? The customer is happy because they have communicated that both are top priority, but they have also communicated which one is more important, so you're happy too.

    Sometimes the customer is even more stubborn than you and just refuses to say whether X or Y is more important. Then there are two other questions you can ask that will help you: "Which one of these would you like to test first?" or "Which one do you think the users will like the most?" A really, really smart and stubborn customer might understand what you're trying to do and just refuse to give you an answer. In those cases I generally treat the smaller of the two user stories as the more important one if they are equally complex. Otherwise I tend to treat the more complex one as more important, since complexity and uncertainty (i.e. the risk of the user story growing unexpectedly) tend to go hand in hand.

    Here are two more articles for inspiration:

  • Advent Calendar 2008

    • 1 Comment

    A few weeks ago I was involved in a discussion where we looked at a number of different ways to write the same test. I was amazed at how easy it is to write even a simple test in several different ways, each with its own pros and cons. So I decided to create an Advent calendar on this topic this year... I don't know how Advent calendars work in countries where you celebrate Christmas Day rather than Christmas Eve (celebrating Christmas Eve is what we do in Sweden), but I'll stick to the Swedish tradition.

    So, starting on December 1st I'll present a new way of writing the same test each day, ending on Christmas Eve. That gives you 24 different versions of the same test to look forward to. In my opinion none of the versions is strictly better than the others; actually I'm not sure which one I like the most.

    I will also start the whole thing off on November 30th and present the interface I'll be testing and what the test I'm writing is supposed to do.

  • Refactoring for dummies

    • 0 Comments

    I was recently watching a webcast that was not related to refactoring at all, but the presenter said something about refactoring that just blew my mind. The code he started with looked something like this:

    public ClassD GetD()
    {
        ClassA a = new ClassA();
        ClassB b = a.GetB();
        ClassC c = b.GetC();
        ClassD d = c.GetD();
        return d;
    }

    He then had the nerve to refactor for better performance into this:

    public ClassD GetD()
    {
        return (new ClassA()).GetB().GetC().GetD();
    }

    The only thing he did was make the code harder to read. Use a disassembler like this one to actually look at the generated code. Yes, there is a difference in the number of lines of IL that is generated, but I feel confident that those differences will be removed when the code is compiled to native code before execution. So the only thing accomplished by this refactoring is that the code is harder to read (the original example had much longer method names) and the possibility to say "look how easy this is to use, it's only a single line of code". I hate to break this to you, but I can write anything and everything on a single line if that is important to you. That doesn't make it better.

    Don't refactor for performance unless you know you have a performance issue. Don't refactor to get fewer lines of code. Refactor to remove duplicate code and to make the code easier to understand, not harder.

  • Making it easy is not a single responsibility

    • 0 Comments

    The single responsibility principle is generally considered to be a good principle when designing software. My experience is that code written using this principle turns out to be easier to understand, test and maintain. But what is a responsibility? Could making it easy to develop X be a good responsibility? No, that is not a good responsibility, since almost all design and abstraction in your software has one goal: making it easy to write your application. So if you choose this as your definition of single responsibility you'll basically end up with a single class with lots of methods, because that single class's responsibility is to make it easy to implement the application.

    So what is a good definition of single responsibility? I like the definition that a responsibility is a reason to change. If an object has only one reason to change, it has a single responsibility.
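
    To make that concrete, here is a small hypothetical sketch (the report classes and their names are made up for illustration): a class that both formats a report and writes it to disk has two reasons to change - the format and the storage - so splitting it gives each class a single responsibility.

    using System.IO;

    public class Report
    {
        public string Title { get; set; }
    }

    // Two reasons to change: how the report is formatted and where it is stored.
    public class ReportManager
    {
        public string FormatAsHtml(Report report)
        {
            return "<h1>" + report.Title + "</h1>";
        }

        public void SaveToDisk(string formatted, string path)
        {
            File.WriteAllText(path, formatted);
        }
    }

    // After splitting, each class has a single reason to change.
    public class HtmlReportFormatter
    {
        public string Format(Report report)
        {
            return "<h1>" + report.Title + "</h1>";
        }
    }

    public class ReportFileStore
    {
        public void Save(string formatted, string path)
        {
            File.WriteAllText(path, formatted);
        }
    }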
