Being Cellfish

Stuff I wish I'd found in some blog (and sometimes did)

Change of Address
This blog has moved to
  • Being Cellfish

    Native C++ Code Coverage reports using Visual Studio 2008 Team System


    The code coverage tool in Visual Studio 2008 Team System is quite easy to use from within the IDE, unless you want code coverage for your native C++ code. To generate a code coverage report for native C++ you have to use the command line tools. This is how you do it:

    1. First of all, your project must be compiled using the /PROFILE link option. If you bring up your project properties it can be found here:
      Configuration Properties -> Linker -> Advanced -> Profile
    2. The profiler tools can then be found in the following directory:
      C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Performance Tools
    3. You need to add some instrumentation code to your EXE or DLL file and that is done with this command:
      vsinstr.exe <YOUR_EXE_OR_DLL> /COVERAGE
      This will copy the original file to an ".orig" file and create a new file with the original name that contains the instrumentation code needed to gather coverage data.
    4. Now start the listener with this command:
      VSPerfCmd.exe /START:COVERAGE /OUTPUT:<REPORT_NAME>
    5. Now run your EXE or some test suite that uses the file you want to calculate coverage for.
    6. The listener started in step 4 will not stop by itself once your test suite is finished, so you have to stop it manually using this command (from a second command prompt):
      VSPerfCmd.exe /SHUTDOWN
    7. When the listener has stopped you just drag and drop the created ".coverage" file into Visual Studio to view the results.
  • Being Cellfish

    Merging coverage reports with Bullseye


    I've previously recommended Bullseye. There is another nifty feature in Bullseye you should know about: the ability to merge reports. This is pretty useful when you have one report from your unit tests and one from some other type of test run. Use this command to merge reports with Bullseye:

    covmerge.exe -c -fMergedData.cov file1.cov file2.cov
  • Being Cellfish

    Getting the logged on windows user in your apache server


    I was recently involved in a discussion where a company was developing an intranet site using Apache and PHP on a Windows server. All clients were Windows machines and they wanted to know who was connecting to the intranet site (only accessible inside the company firewall). They also wanted an SSO (single sign-on) experience for the users, but they refused to switch to IIS and use integrated Windows authentication.

    They did not really want to authenticate users, just get a hint of who was connecting, so faking an NTLM authentication request and then parsing the response would be enough. The script for doing so is pretty easy too. Here is one script I copied from here.

    Note that this is nothing you can use to authenticate users, since there is no authentication taking place. A user with a standard browser installation will be prompted for user name and password and can write anything; the script just prints whatever is sent by the user. There is also no SSO feel to this. In order to get the SSO feel you have to do one of two things. Either the user must add the site using this script to his "Local intranet" sites in IE (done via Tools-Internet Options-Security), or the company can add a group policy in the Active Directory enforcing this. For a situation like the described intranet site, the latter is obviously the best solution.


  • Being Cellfish

    Sleep less than one millisecond


    On Windows you have a problem you typically never encounter on Unix: how to get a thread to sleep for less than one millisecond. On Unix you typically have a number of choices (sleep, usleep and nanosleep) to fit your needs. On Windows, however, there is only Sleep, with millisecond granularity. You can however use the select system call to create a microsecond sleep. On Unix this is pretty straightforward:

    #include <sys/select.h>

    int usleep(long usec)
    {
        struct timeval tv;
        tv.tv_sec = usec/1000000L;
        tv.tv_usec = usec%1000000L;
        return select(0, 0, 0, 0, &tv);
    }

    On Windows, however, the use of select forces you to link against the winsock library, which has to be initialized like this in your application:

        WORD wVersionRequested = MAKEWORD(1,0);
        WSADATA wsaData;
        WSAStartup(wVersionRequested, &wsaData);

    And then select won't let you call it without any socket, so you have to do a little more to create a microsleep method:

    int usleep(long usec)
    {
        struct timeval tv;
        fd_set dummy;
        // On Windows, select needs at least one valid socket in a set.
        SOCKET s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
        FD_ZERO(&dummy);
        FD_SET(s, &dummy);
        tv.tv_sec = usec/1000000L;
        tv.tv_usec = usec%1000000L;
        int result = select(0, 0, 0, &dummy, &tv);
        closesocket(s);
        return result;
    }

    All of these usleep implementations return zero when successful and non-zero on error.


  • Being Cellfish

    20 tips to write a good stored procedure (is really just 12)


    A few days ago there was an article with 20 tips to write a good stored procedure (requires free registration to read). The problem is that there are really only 12 good tips (4 are bad and 4 are neither good nor bad). So let me go over the tips one by one and comment on them:

    1. Capital letters for keywords and proper indentation. With today's code editors with syntax highlighting I don't see why you would want to highlight keywords with capital letters. The code editor will do that for you. And suggesting proper indentation is not really a tip for writing a good stored procedure. It's common (coding) sense! So I don't think this one counts... Score (good-not really-bad advice IMHO): 0-1-0.
    2. Use SQL-92 syntax for joins. If MS SQL Server drops support for the old syntax this is good advice. Score: 1-1-0
    3. Use as few variables as possible. The article mentions cache utilization as an argument. Sounds like premature optimization to me. I'd say use as many variables as makes sense to make the code most readable. If that turns out to be a problem, then you optimize. So in general I found this advice to be bad. Score: 1-1-1
    4. Minimize usage of dynamic queries. Kudos to the article for pointing out how to minimize the badness of dynamic queries, and I guess technically minimizing could mean zero, but the only really good advice here is: don't use dynamic queries. So once again bad, or at least misleading, advice IMHO. Score: 1-1-2
    5. Use fully qualified names. If you don't do this you might end up with some weird behavior, so this is good advice. Score: 2-1-2
    6. Set NOCOUNT on. Good advice: Score: 3-1-2
    7. Don't use sp_ prefix. Score: 4-1-2
    8. KEEPFIXED PLAN. Learn from this article and use it correctly. Hard to argue with "learn something and use it right". Score: 5-1-2
    9. Use select instead of set. Once again performance is mentioned as a motivator. However, the potential bad side effects of using select rather than set are more important in my opinion. The problem with select is that the variable might not be set if the query returns no rows, while set gives you an error if the select returns more than one row. Read more about it here. I'd definitely prefer set over select. Score: 5-1-3
    10. Compare the right thing in the where clause. This advice just confuses me. The article talks about what operators are the fastest and then refers to this page talking about preference. Even though the article is confusing on this point the basic idea is correct. For example using IN is generally faster than NOT IN. So I'll call this one a draw. Score: 5-2-3
    11. Avoid OR in WHERE clause. This is good advice for good performance. Score: 6-2-3
    12. Use CAST over Convert. CAST is SQL92 standard. Convert is not. Score: 7-2-3
    13. Avoid distinct and order by. Once again this is common sense. Don't do things you don't need... Score: 7-3-3
    14. Avoid cursors. This falls into the same category as dynamic queries to me. The only good advice is don't use cursors. Score: 7-3-4
    15. Select only the columns you need. Common sense! Score: 7-4-4
    16. Sub queries vs joins. The article lists a few good rules of thumb. I think you should use whatever is most readable. Score: 8-4-4
    17. Create table vs select into. Article points out important differences. Score: 9-4-4
    18. Use variables instead of temporary tables. Score: 10-4-4
    19. Use proper indexes. Score: 11-4-4
    20. Use profiler. Many tips in the article suggest you do things to improve performance. But I think doing so before you know you have a problem is a waste of time and resources. So this advice is actually one of the best advices in the article. Score: 12-4-4
  • Being Cellfish

    Robot competitions

    Even though I never played crobots, I have always been intrigued by the idea. I even played RoboForge for a few months a couple of years ago. And today I stumbled across a new robot game; RoboChamps. RoboChamps does not seem to be like all other classical robot combat games but introduces a number of robot challenges of which only one has been released so far. And navigating a maze avoiding traps does not sound like much action but I guess when the sumo challenge is released things will get a little bit more interesting. I think I just found myself a new spare time project...
  • Being Cellfish

    Kanban vs Scrum

    Yesterday Henrik Kniberg published this draft on Kanban vs Scrum. I think it is a great article describing the differences (and similarities) between Scrum and Kanban, so you should take the time to read it, since there are times when a Kanban approach is better than a non-Kanban approach (and vice versa). I also like the fact that Henrik ends his article by pointing out the importance of retrospectives. And if you wonder when to have retrospectives in a team using the Kanban approach, I suggest reading this.
  • Being Cellfish

    Object Calisthenics: First contact


    A few weeks ago I was introduced to the object calisthenics described by Jeff Bay in the book The ThoughtWorks Anthology. Object calisthenics is a way to practice writing object oriented code. The nine rules are not intended to be used in your everyday work. The rules are intended to be used on a small problem such as a coding Kata. The idea is that by applying these strict rules to a small problem you'll learn to write better code.

    So I decided to try this on the MineSweeper Kata. In the beginning I decided to try to conform to the rules all the time, but pretty soon I changed my mind, wrote a working solution and then started to refactor it to conform to the rules. I think this was a mistake. Some design decisions turned out to require very big refactorings when conforming to the rules and I actually never got all the way. But this doesn't mean I didn't learn anything. First of all I think I experienced a variant of "you can't test in quality, you have to build it in from the start". I should have stuck with the initial strategy and made sure the code followed the rules all the time. I also learned that classes I felt were really small and doing only one thing actually could be split up when I had to in order to conform to the rules. Reminds me of when people thought atoms were the smallest building blocks of the universe and then it turned out there was something smaller...

    So all in all I think doing a coding Kata while applying the object calisthenics rules will improve my ability to write object oriented code. And it will be interesting to see how it works out in a coding dojo. By now you're probably wondering what the nine rules are. They are:

    1. Use one level of indentation per method
    2. Don't use the else keyword
    3. Wrap all primitives and strings
    4. Use only one dot per line
    5. Don't abbreviate
    6. Keep all entities small
    7. Don't use any classes with more than two instance variables
    8. Use first class collections
    9. Don't use any getters, setters or properties

    I'll go into more detail of these nine rules over the next few days.

  • Being Cellfish

    How Spotify Works


    While I wish I could write a long article on how Spotify works technically, that is not what I want to tell you about today. Nor will I tell you how I would build Spotify if I had to, though that would be an interesting blog post. Today I want to tell you about a great article describing how Spotify has organized their teams and how they work, and best of all, they have cool names for it too: Squads, Tribes, Chapters & Guilds!

  • Being Cellfish

    UPSERT in SQL server 2008

    I recently read this article on upsert functionality in SQL Server 2008. I thought that the SQL Server team had finally come to their senses and added functionality similar to replace in MySQL, something that has been available in MySQL for ages. But no. The article describes a way to use the new merge command to perform an "upsert".


    An upsert is an operation where rows in the database are updated if they exist and inserted if they don't. This has been available as the replace statement in MySQL for quite some time. At first glance merge seems much more cumbersome to use than replace, but don't be fooled by this. One problem I've had with the replace command is knowing exactly which rows will be updated and which will be added, especially when updating things that are part of a multi-column primary key. The merge is much clearer about what it actually does, and you have full control over what you update and what you insert.
