July, 2008

Posts
  • Eric Gunnerson's Compendium

    Why does C# always use callvirt?

    • 15 Comments

    This question came up on an internal C# alias, and I thought the answer would be of general interest. That's assuming that the answer is correct - it's been quite a while.

    The .NET IL language provides both a call and a callvirt instruction, with callvirt being used to call virtual functions. But if you look through the code that C# generates, you will see that it generates a "callvirt" even in cases where there is no virtual function involved. Why does it do that?

    I went back through the language design notes that I have, and they state quite clearly that we decided to use callvirt on 12/13/1999. Unfortunately, they don't capture our rationale for doing that, so I'm going to have to go from my memory.

    We had gotten a report from somebody (likely one of the .NET groups using C# (though it wasn't yet named C# at that time)) who had written code that called a method on a null pointer, but they didn't get an exception because the method didn't access any fields (i.e. "this" was null, but nothing in the method used it). That method then called another method which did use the "this" pointer and threw an exception, and a bit of head-scratching ensued. After they figured it out, they sent us a note about it.
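
    To make the scenario concrete, here's a minimal C# sketch of the kind of code involved (my reconstruction, not the original report):

        class Widget
        {
            private int count;

            public void Touch()
            {
                // Doesn't touch any fields, so with a plain IL "call"
                // this runs happily even when "this" is null.
                System.Console.WriteLine("touched");
            }

            public void Increment()
            {
                // Reads a field, so this dereferences "this" and throws
                // NullReferenceException when "this" is null.
                count++;
            }
        }

        class Program
        {
            static void Main()
            {
                Widget w = null;
                w.Touch();       // with "call": no exception here...
                w.Increment();   // ...the crash only shows up later, far from the real bug
            }
        }

    (With the callvirt change described in the next paragraph, w.Touch() now throws immediately, which is the behavior you'd expect.)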

    We thought that being able to call a method on a null instance was a bit weird. Peter Golde did some testing to see what the perf impact was of always using callvirt, and it was small enough that we decided to make the change.

  • Eric Gunnerson's Compendium

    #region. Sliced bread or sliced worms?

    • 23 Comments

    (editor's note: Eric gave me several different options to use instead of sliced worms, but they were all less palatable. So to speak.)

    Jeff Atwood wrote an interesting post on the use of #regions in code.

    So, I thought I’d share my opinion. As somebody who was involved with C# development for a long time – and with development in general for longer – it should be pretty easy to guess my opinion.

    There will be a short intermission for you to write your answer down on a piece of paper. While you’re waiting, here are some kittens dancing.

    Wasn’t that great? No relation, btw.

    So, for the most part, I agree with Jeff.

    The problem that I have with regions is that people seem to be enamored with their use. I was recently reviewing some code that I hadn’t seen before, and it had the exact sort of region use that Jeff describes in the Log4Net project. Before I can read through the class, I have to open them all up and look at them. In that case, regions make it significantly harder for me to understand the code.

    I’m reminded of a nice post written by C# compiler dev Peter Hallam, where he talks about how much time developers spend looking at existing code. I don’t like things that make that harder.

    One other reason I don’t like regions is that they give you a misleading impression about how simple and well structured your code is. The pain of navigating through a large code file is a good reason to force you to do some refactoring.

    When do I like them? Well, I think that if you have some generated code in part of a class, it’s okay to put it in a region. Or, if you have a large static table, you can put it in a region. Though in both cases I’d vote to put it in a separate file.
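
    For what it’s worth, here’s a minimal sketch (the names and table contents are made up) of the two uses I can live with:

        class CountryData
        {
            #region Designer-generated code - do not edit by hand
            // ... tool output lives here ...
            #endregion

            #region Static lookup table
            private static readonly string[] CountryCodes =
            {
                "US", "CA", "MX", "GB", "FR", "DE",
                // ... hundreds more entries ...
            };
            #endregion
        }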

    But, of course, I’m an old dog. Do you think regions are a new trick?

  • Eric Gunnerson's Compendium

    Why does C# always use callvirt? - followup

    • 3 Comments

    I was responding in the comments, but the comment system doesn't allow me to use links, so here's the long version:

     Judah,

    Yes, marking everything as virtual would have little performance impact. It would, however, be a Bad Thing. It's #3 on my list of deadly sins... 

    ShayEr,

    cmp [ecx], ecx is the solution to the problem. It's there because that's what the JIT generates to do the null test.

    Sankar,

    Yes, you are missing something. Methods need to be marked as virtual for them to be virtual methods - this is separate from emitting the callvirt instruction.
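
    A minimal sketch of the distinction:

        class Animal
        {
            // Not virtual: C# still emits callvirt (to get the null check),
            // but the call is bound to Animal.Describe at compile time.
            public void Describe() { System.Console.WriteLine("an animal"); }

            // Virtual: here callvirt actually dispatches through the vtable.
            public virtual void Speak() { System.Console.WriteLine("..."); }
        }

        class Dog : Animal
        {
            public override void Speak() { System.Console.WriteLine("woof"); }
        }

    Given Animal a = new Dog(), a.Speak() prints "woof" because Speak is marked virtual; a.Describe() never dispatches dynamically, callvirt or not.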

    Allesandro,

    I'm not sure I understand your example (why are you calling ToString() on b when b is already a string?). It seems to me that what you want can be handled through a method that tests for null and returns either the result of ToString() or the appropriate default value (String.Empty seems to be the logical one in this case).
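
    Something like this hypothetical helper (the name is mine) is what I have in mind:

        static class StringHelper
        {
            public static string ToStringOrEmpty(object value)
            {
                // Do the null test once, here, rather than hoping that
                // calling ToString() on a null reference does something useful.
                return value == null ? string.Empty : value.ToString();
            }
        }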

  • Eric Gunnerson's Compendium

    Taking on dependencies

    • 5 Comments

    A recent discussion on how to deal with dependencies when you're an agile team got me thinking...

    Whether you are doing agile or waterfall (and whether the team you are dependent on is doing agile or waterfall), you should assume that what the other group delivers will be late, broken, and lacking important functionality.

    Does that sound too pessimistic? Perhaps, but my experience is that the vast majority of teams assume the exact opposite - that the other group will be on time, everything will be there, and everything will do what you need it to do. And then they have to modify their plan based upon the "new information" that they got (it's late/something's been cut/whatever).

    I think groups that plan that way are deluding themselves about the realities of software development. Planning for things to go bad up front not only makes things go more smoothly; you also tend to be happily surprised, since things are often better than you feared.

    A few recommendations:

    First, if at all possible, don't take a dependency on anything until it's in a form that you can evaluate for utility and quality. Taking an incremental approach can be helpful here - if you are coming up with your 18-month development schedule, your management will wonder why you don't list anything about using the work that group <x> is doing. If, on the other hand, you are doing your scheduling on a monthly (or other periodic) basis, it's reasonable to put off the work of integrating the other group's work until it's ready to integrate (based on an agreement of "done" that you have with the other group).

    That helps with the lateness problem, but may put you in a worse position on the quality/utility front. Ideally, the other team is already writing code that will use the component exactly the way you want to use it. If they aren't, you may need to devote some resources towards specifying what it does, writing tests that the team can use, and monitoring the component's progress in intermediate drops. In other words, you are "scouting" the component to determine when you can adopt it.

  • Eric Gunnerson's Compendium

    HealthVault data type design...

    • 3 Comments

    From the "perhaps this might be interesting" file...

    Since the end of the HealthVault Solution Providers conference in early June - and the subsequent required recharging - I've been spending the bulk of my time working on HealthVault data types, along with one of the PMs on the partner team. It's interesting work - there's a little bit of schema design (all our data types are stored as XML on the server), a little bit of data architecture (is this one data type, or four?), a fair amount of domain-specific learning (how many ways are there to measure body fat percentage?), a bit of working directly with partners (exactly what is this type for?), a lot of word-smithing (should this element be "category", "type", or "area"?), and other things thrown in now and then (utility writing, class design, etc.).

    It's a lot like the time I spent on the C# design team - we do all the design work sitting in my office (using whiteboard/web/XMLSpy), and we often end up with fairly large designs before we scope them back to cover the scenarios that we care about. Our flow is somewhat different in that we have external touch points where we block - the most important of which is an external review where we talk about our design and ask for feedback.

    Since I'm more a generalist by preference, this suits me pretty well - I get to do a variety of different things and learn a variety of useful items (did you know that you can get the body fat percentage of your left arm measured?).

    Plus, I learn cool terms like "air displacement plethysmography".

  • Eric Gunnerson's Compendium

    Dr. Horrible...

    • 1 Comment

    Last night I fixed the QuickTime player on my somewhat aging home machine by turning off DirectX acceleration, and the family sat down to watch Dr. Horrible's Sing-Along Blog, which I had purchased through iTunes for the princely sum of $3.99.

    Dr. Horrible, if you aren't "in the know", was created by Joss Whedon, of Firefly fame. Firefly was a sci-fi western, and Dr. Horrible is an internet comic book superhero musical. It stars Felicia Day as the love interest, Nathan Fillion (aka Mal) as Captain Hammer, and Neil Patrick Harris (who will always be "Doogie" to me...) as the title character.

    Both the writing and the music are top-notch, as are the performances by the main characters. I'm hoping there will be more than the initial 3 episodes.

    Recommended.

  • Eric Gunnerson's Compendium

    Progrography

    • 1 Comment

    Back when I first started listening to music - in the days before there were CDs - if you were cool you bought your music on records, and then taped them onto 90-minute cassettes. You did this because it was hard to flip a record while you were driving, records wore out, and pre-recorded cassettes sounded a bit like a 48kbps MP3 stream, when they worked. Sometimes they didn't, and you had an $8 cassette afro.

    Oh, and you had to worry about azimuth (an adjustment that, when wrong, could kill all your high end), Dolby B, and, if you were really cool, Dolby C (twice the Dolby!).

    And you listened to what was called "album rock" in my area, otherwise known as "progressive rock". Those who wish to debate the differences between those two labels are welcome.

    Then I went off to college, and CDs were released, but they cost $900, so nobody had one (actually, one guy in my dorm had one, connected to his $10K high-end system). So, we kept buying albums.

    Then prices came down, we got jobs, and we bought CDs of the groups we had listened to, and reveled in the CD experience. The sound was so much clearer than cassettes that we didn't realize that a lot of the early transfers were awful.

    Over time, many of our albums got remastered - first by Mobile Fidelity Sound Lab (who had made some pretty killer vinyl recordings), and then by the labels as they realized there was some money in it.

    By this point, you're wondering if there is a point, or if it's just an onion belt.

    Anyway, I have a fair number of CDs that are early transfers, and I'd like to replace them, but it can be a bit of a pain to find out what specific albums have been remastered, when they came out, etc.

    Enter Progrography, which has a bunch of information about progressive rock, including re-releases. So, if I want to know what remaster to get for Who's Next, there's a page that gives me all the info. Well, some of the info - some of the pages are a bit out of date - but what's there seems to be pretty good.

    And if not, I can remember listening to AC/DC, which was the style at the time...

  • Eric Gunnerson's Compendium

    Seattle Century 2008 ride report

    • 2 Comments