• Eric Gunnerson's Compendium

    Holiday light project 2008...


    I've been searching for a new project to do for this season's holiday lights. I typically have four or five ideas floating around my head, and this year is no different.

    Lots of choices, so I've had to come up with a "rule" about new projects. The rule is that the project has to be in the neighborhood of effort-neutral. It already takes too long to put up (and worse, take down) the displays we already have, and I don't want to add anything that makes that worse. Oh, and they can't take too much power, because I'm already on a power budget.

    Unless, it's, like, especially cool.

    I had an idea that met all my criteria. It was small - small enough to be battery powered, if I did my power calculations properly, and was going to be pretty cool.

    It was, unfortunately, going to be a fair pain-in-the-butt to build - the fabrication was a bit complex, and the plan was to build a number of identical pieces. Oh, and it required me to choose the perfect LEDs from the 15 thousand that Mouser carries.

    So, I hadn't made much progress.

    Then, one day I was waiting for some paint to be tinted at my local home store, and I came across these.

    They're holiday lights. Jumbo-sized holiday lights.  The bulb part is made of colored plastic, and measures about 7" high. At the bottom there is a large fake lamp socket. Inside of all of it is a genuine C7 bulb of the appropriate color.

    I bought 3 sets, 15 in all.

    To be different, I wanted to build these as self-contained devices, with a separate microcontroller in each of the light bases. The microcontrollers I'm using cost about $1 each, so there isn't too much cost there, but the big challenge is a power supply. Generally, I build a linear power supply, which is simple and performs well, but you need an expensive and bulky transformer.

    There is a way around that, with the reasonably named "transformerless power supply". Realistically, a better name would be the "high-voltage shock-o-matic", because it involves hooking things directly to the AC line, can only supply a small amount of current, is inefficient, and is hard to troubleshoot. Oh, and if one component fails you get 150 volts instead of the 5 volts you were expecting.

    I decided to build one of these, so I ordered up some parts, wired it up, plugged it in, and immediately lost the magic smoke from one of the resistors. Turns out I miscalculated, and I needed a much-more-expensive power resistor.

    Thinking about it some more, I decided that since I still needed power to each bulb - and therefore a wire to each bulb - it was simpler to just build a simple system with one microcontroller.

  • Eric Gunnerson's Compendium


    I've been doing some electronics recently, and perhaps I'm therefore more likely to treat this kindly, but I have to say that I think it's brilliant.
  • Eric Gunnerson's Compendium

    What's going on now...

    If you want to keep track of what's going on right now, you need this site...
  • Eric Gunnerson's Compendium



    Last Saturday, we were invited to a Halloween party at a friend of a friend. I only decided to go Friday night, so I'd put essentially zero effort into thinking about a costume.

    The wife was going as a vampire (we had a long discussion on what the feminine form of "vampire" was. I tended towards "vampress", mostly because of how silly it sounded), and I thought of doing something that fit together with that thematically. A lame costume that fits together thematically with another one is much better than a lame one that sits by itself.

    After a while, something suggested itself, and things came together pretty well. You can see the results here:

    (Like pilots in the 1950s who wore eyepatches to preserve the eyesight in their dominant eye in case of a nuclear explosion, I suggest covering one eye before clicking on the following link.)


    I'm hoping that it's obvious who I am.

    The party itself was pretty good. The hosts hired a magician who walked around and did close-up magic to entertain the crowd. He was talented, though frankly, given the level of sobriety of the majority of the guests, it could have been me doing the tricks.

    While we were there a few of the ladies took it on themselves to vandalize me.

  • Eric Gunnerson's Compendium

    wireless LCD photo frame...


    I want to buy a couple of wireless LCD photo frames for my relatives. Each needs to have a decent display, speak wireless, and hook up to a smugmug gallery.

    Any recommendations?

  • Eric Gunnerson's Compendium

    Versioning in HealthVault


    Download the sample code from MSDN Code Gallery.

    [Note that EncounterOld has been renamed to EncounterV1 in newer versions of the SDK]

    “Versioning” is an interesting feature of the HealthVault platform that allows us to evolve our data types (aka “thing types”) without forcing applications to be rewritten. Consider the following scenario:

    An application is written using a specific version of the Encounter type. To retrieve instances of this type, you write the following:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)
    {
        // ... process each encounter ...
    }

    That returns all instances of the type with the Encounter.TypeId type id. When the XML data comes back into the .NET SDK, it creates instances of the Encounter wrapper type from them, and that’s what’s in the items collection.

    Time passes, cheese and wine age, clothes go in and out of style. The HealthVault data type team decides to revise the Encounter type, and the change is a breaking one in which the new schema is incompatible with the existing schema. We want to deploy that new version out so that people can use it, but because it’s a breaking change, it will (surprise surprise) break existing applications if we release it.

    Looking at our options, we come up with 3:

    1. Replace the existing type, break applications, and force everybody to update their applications.
    2. Leave the existing type in the system and release a new type (which I’ll call EncounterV2). New applications must deal with two types, and existing applications don’t see the instances of the new type.
    3. Update all existing instances in the database to the new type.

    #3 looks like a nice option, were it not for the fact that some instances are digitally signed and we have no way to re-sign the updated items.

    #1 is an obvious non-starter.

    Which leaves us with #2. We ask ourselves, “selves, is there a way to make the existence of two versions of a type easier for applications to deal with?”

    And the answer to that question is “yes, and let’s call it versioning”…


    Very simply, the platform knows that the old and new versions of a type are related to each other, and how to do the best possible conversion between the versions (more on “best possible” in a minute…). It uses this information to let applications pretend that there aren’t multiple versions of a type.

    Down Versioning

    The first scenario is existing applications that were written using the original version of the Encounter type (which we’ll call EncounterV1 for clarity), and what happens when they come across a health record that also has EncounterV2 instances in it. Here’s a graphical indication of what is going on:


    This application is doing queries with the EncounterV1 type id. When a query is executed, the platform knows that the encounter type has multiple versions, and converts the query for the V1 type to a query for all encounter types (both EncounterV1 and EncounterV2).

    The platform finds all the instances of those two types, and looks at each one. If it’s an EncounterV1 instance, it just ships it off to the application.

    But, if it’s an EncounterV2 instance, the platform knows (by looking at the application configuration) that this application doesn’t know what to do with an EncounterV2. It therefore takes the EncounterV2 instance, runs a transform on the EncounterV2 xml to convert it into EncounterV1 xml, and ships that across to the application. The data is “down versioned” to the earlier version.

    The application is therefore insulated from the updated version – it sees the instances using the type that the application was coded against.

    Down version conversions are typically lossy – there are often fields in the V2 version that are missing in the V1 version.  The platform therefore prevents updating instances that are down versioned, and will throw an exception if you try. You can look at the IsDownVersioned property on an instance to check whether updating is possible to avoid the exception.
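    In code, that check might look like this (a sketch; it reuses the record and the encounter instance from the query above, and assumes you have some change to make):

```csharp
// Down-versioned instances are read-only: trying to update one
// throws, so check IsDownVersioned first.
if (!encounter.IsDownVersioned)
{
    // ... modify the instance here ...
    PersonInfo.SelectedRecord.UpdateItem(encounter);
}
```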

    Up Versioning

    The second scenario is an application written to use the new EncounterV2 type:


    This time, the V1 data is transformed to the V2 version. An application can check the aptly-named IsUpVersioned property to tell whether an instance is up versioned.

    Higher versions typically contain a superset of the data in the old version, and the platform therefore allows the instance to be “upgraded” (i.e., updated to the new version).

    However, doing so will prevent an application using the V1 version from being able to update that instance, which may break some scenarios. The application should therefore ask the user for confirmation before upgrading any instances.

    This would be a good time to download the sample applications and run the EncounterV1 and EncounterV2 projects. Because this behavior is controlled by the application configuration, each application has its own certificate, which will need to be imported before the application can be run.

    Add some instances from both V1 and V2, and see how they are displayed in each application. Note that EncounterV1 displays both the instances it created and the down-versioned instances of the EncounterV2 type, and that EncounterV2 displays the instances it created and up-versioned instances of the EncounterV1 type.

    Version-aware applications….

    In the previous scenarios, the application was configured to only use a single version of a type.

    In some cases, an application may want to deal with both versions of a data type simultaneously, and this is known as a “version-aware” application.

    We expect such applications to be relatively uncommon, but there are some cases where versioning doesn’t do everything you want.

    One such case is our upcoming redesign to the aerobic session type. The AerobicSession type contains both summary information and sample information (such as the current heart rate collected every 5 seconds). In the redesign, this information will be split between the new Exercise and ExerciseSamples types. AerobicSession and Exercise will be versioned, but there will be no way to see the samples on AerobicSession through Exercise, nor will you be able to see ExerciseSamples through AerobicSession (there are a number of technical reasons why this is very difficult – let me know if you want more details). Therefore, applications that care about sample data will need to be version-aware.

    Writing a version-aware application adds one level of complexity to querying for data. Revisiting our original code, this time for an application that is configured to access both EncounterV1 and EncounterV2:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)
    {
        // ... process each encounter ...
    }


    When we execute this, items contains a series of Encounter items. The version of those items is constrained to the versions that the application is configured (in the Application Configuration Center) to access. If the version in the health record is a version that the application is configured to access, no conversion is performed, *even if* the version is not the version specified by the filter.

    That means that the items collection may contain instances of different versions of the type – in this case, either Encounter or EncounterOld instances. Any code that assumes that instances are only of the Encounter type won’t work correctly in this situation. You may have run into this if you are using the HelloWorld application, as it has access to all versions of all types.

    Instead, the code needs to look at the instances and determine their types:

    foreach (HealthRecordItem item in items)
    {
        Encounter encounter = item as Encounter;
        if (encounter != null)
        {
            // ... handle the new Encounter version ...
            continue;
        }

        EncounterOld encounterOld = item as EncounterOld;
        if (encounterOld != null)
        {
            // ... handle the original EncounterOld version ...
        }
    }

    This is a good reason to create your own application configuration rather than just writing code against HelloWorld.

    Controlling versioning

    The platform provides a way for the application to override that application configuration and specify the exact version (or versions) that the application can support. For example, if my application wants to deal with the new type of Encounter all the time, it can do the following:
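    One way to express that override (a sketch, assuming the SDK's HealthRecordView.TypeVersionFormat collection, which tells the platform which version to return):

```csharp
HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
filter.View = new HealthRecordView();
filter.View.Sections = HealthRecordItemSections.Core;

// Ask the platform to return every encounter instance as the
// new Encounter version, regardless of the version stored.
filter.View.TypeVersionFormat.Add(Encounter.TypeId);
```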


    which will always return all versions of encounter instances as the Encounter type. 

    EncounterBoth is a version-aware application in the sample code that handles the Encounter and EncounterV1 types. It will also let you set the TypeVersionFormat.

    Asking the platform about versioned types…

    If you ask nicely, the platform will be happy to tell you about the relationship between types. If you fetch the HealthRecordItemTypeDefinition for a type, you can check the Versions property to see if there are other versions of the type.

    The VersionedTypeList project in the sample code shows how to do this.

    Naming Patterns





    There are a couple of different naming patterns in use. The types that were modified before we had versioning use the “Old” suffix (MedicationOld and EncounterOld) to denote the original types. For future types, we will be adopting a version suffix on the previous type (so, if we versioned HeartRate, the new type would be named HeartRate, and the old one HeartRateV1). We are also planning to rename MedicationOld and EncounterOld to follow this convention.

    We will also be modifying the tools (application configuration center and the type schema browser) to show the version information. Until we get around to that, the best way is to ask the platform as outlined in the previous section.

  • Eric Gunnerson's Compendium

    100 skills every one should know...


    Popular Mechanics has a list of "100 skills every one should know". How many do you have?

     (I'll give my count later...)

  • Eric Gunnerson's Compendium

    Triathlon report

    For your "enjoyment", a report on the Triathlon I did last Sunday...
  • Eric Gunnerson's Compendium

    TV Calibration


    A few years ago, I bought one of the last high-end rear-projection TVs based on CRTs - a Pioneer Elite 620HD. I did some basic calibration with the Avia disc and did some other minor adjustments, but never got around to getting a real calibration done.

    Calibration is the process of getting the TV to be as close as possible to NTSC settings - the same settings that were used when the program was created. That means getting colors and gray levels as close as possible to what they should be.

    But, if it's a high-end TV, why isn't it set at the factory to meet NTSC settings? Well, the simple fact is that TV manufacturers play games to make their TVs stand out in showroom settings. That generally means pictures that are far brighter than they should be (with correspondingly poor black levels), colors that are off, and over-sharpened pictures.

    Some newer TVs let you choose a setting that's close to NTSC, but in most cases, calibration can make a big difference. If you have an LCD or plasma set, start with a disc like Avia and see what you get out of it (Avia also has calibration for surround sound, which may also be useful).

    In my case, my set needed cleaning, focus adjustments (because it's a rear-projection set), convergence (because it has 3 CRTs in it, one each for red, green, and blue), and geometry (because it uses CRTs). There are local techs in my area who can do calibration, but because my set is fairly rare these days, I wanted an expert, and hooked up with David Abrams from Avical, who works out of LA, when he was on a trip to the Seattle area.

    David is a really nice guy and did a wonderful job. He spent 90 minutes on the geometry of the set (making sure straight lines are straight in all 3 colors), and about 60 minutes on the convergence (aligning all three colors). I only have about 5% of the patience he has when he's doing those kinds of things.

    So, after cleaning the set, setting the focus, setting the geometry and convergence, he was on to setting the gray levels and colors. With his test pattern generator (running through the TV's component inputs, which is all I use...) and his $18K color analyzer, that part went pretty quickly. He then worked through all my sources (Tivo HD, DVD, XBox 360) and verified that everything was set right. Finally, we looked at some source and he did some final tuning.

    The results were pretty impressive.

    The downside is that the differences between HD feeds are now really obvious - some look great, but others really fall short.


  • Eric Gunnerson's Compendium

    A box of toys...


    I am a box of toys and notions. Among other things, I contain a hard box full of legos and a gross of superballs. On my outside I have a series of labels that tell me where I have lived.

    As a proud corrugated-American, I'd like to share my story.


    My location for the last 24 hours. My owner has moved to this office to be closer to the rest of the HealthVault partner team, which he has joined due to a recent organizational optimization. I like this location because lots of people stop by, but I'm worried it will be loud because of the standby diesel generator right outside.


    I moved back to main campus to this office. It faces east and has a decent view of a parking lot. My owner works on the HealthVault partner team.


    I love this office, which is big and has a nice view to the south. I do worry about degradation of my structure because the sunlight makes it hot and sometimes the A/C fails. My owner works on the HealthVault partner team, and enjoys the "small team" atmosphere.


    This office is on west campus. It's okay, but the building isn't great. My owner works on the Windows Live pictures and video team.


    I moved from one end of the hall to the other, and my corners are getting banged up. My owner is on a newly-reorganized team and isn't sure what he works on right now.


    I moved two offices down to this one - I can look out through the window and watch my owner when he sits outside at lunch, though he does complain about the cafeteria now and then. My owner is on the Windows Movie Maker team working on the DVD Maker user interface.


    Welcome to the Movie Maker team, and to a new area of campus. Building 50 stands alone by itself and is a bit inconvenient to get to, but it seems nice enough, and there are lots of boxes next door that I can spend time with.


    A new office in the same building, this time looking at a wall of plants. My owner is a PM on the C# team, and is doing language design.


    Despair. My owner is happy with his assignment as the test lead on the C# compiler, but this office faces south, gets very hot, and has a lovely view of the top of a cafeteria.


    A pretty good office in a really nice building. I face north towards a set of vegetation. My owner co-owned office assignments on this move and got to choose his one before other people, so he got a nice one. He's a test lead for the C++ compiler front end, though there are rumors that there's something new coming along.


    My first window office, with a beautiful view of a forest. My owner is very happy to get such a nice window office, but he finds it hard not to get lost in the 1-4 building complex. He's a test lead for the C++ compiler.


    I was never in building 25, but my owner often tells me stories about it late at night. He says he had two different offices there, both of them pretty good.

  • Eric Gunnerson's Compendium

    Fun with HealthVault transforms


    It is sometimes useful to be able to display data in an application without having to understand the details of that data.

    To make this easier, HealthVault provides a couple of features: transforms, and the HealthRecordItemDataGrid class.

    Discovering transforms

    To find out what transforms are defined for a specific type, we just ask the platform:

        HealthRecordItemTypeDefinition definition =
                 ItemTypeManager.GetHealthRecordItemTypeDefinition(Height.TypeId, ApplicationConnection);

    In this case, we get back the definition of the Height thing type. In the returned HealthRecordItemTypeDefinition (which I’ll just call “type definition” for simplicity), you can find the schema for the type (in the XmlSchemaDefinition property), and two properties related to transforms.

    SupportedTransformNames lists the names of the transforms. In this case, the list is “form”, “mtt”, and “stt” (more on what these do later).

    TransformSource contains the XSL transforms themselves, keyed with the name of the transform.

    Applying transforms

    If you want to apply the transform, there are two ways to do it.

    The first way is to do it on the client side:

        string transformedXml = definition.TransformItem("mtt", heightInstance);

    That’s pretty straightforward, though you do need to fetch the type definition for the type first. You could also apply the transform using the .NET XML classes, if you wanted to do more work.

    The second option is to ask the platform to do the transform for you. You do this by specifying the transform name as part of the filter definition:

        HealthRecordFilter filter = new HealthRecordFilter(Height.TypeId);
        filter.View = new HealthRecordView();
        filter.View.Sections = HealthRecordItemSections.Core;
        filter.View.TransformsToApply.Add("mtt");

    and then the item will already have the transformed text inside of it:

        XmlDocument mttDocument = heightInstance.TransformedXmlData["mtt"];

    Available transforms

    Each type defines a set of transforms that let you look at a data instance in a more general way. They are named “mtt”, “stt”, and “form”.

    “mtt” transform

    The mtt (which stands for “multi type transform”) generates the most condensed view of data – it returns values for properties that are present on all data types, and a single summary string.

    For example, asking for the mtt transform of a Height instance returns the following:

    <row wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1" 
         wc-date="2008-01-22 11:12:42" 
         wc-type="Height Measurement" 
         summary="1.9405521428867 m" />

    The “wc-“ attributes are the common data items across all instances. The most interesting piece of data is the summary attribute, which gives you the (surprise!) summary string for the instance.
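    Reading those values back out is just XML navigation (a sketch; heightInstance is an instance fetched with the "mtt" transform applied, as in the filter code above):

```csharp
XmlNode row = heightInstance.TransformedXmlData["mtt"].SelectSingleNode("data-xml/row");
string summary = row.Attributes["summary"].Value;   // "1.9405521428867 m"
string typeName = row.Attributes["wc-type"].Value;  // "Height Measurement"
```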

    “stt” transform

    The stt (“single type transform”) is similar to the mtt, but instead of a single summary attribute there are a series of attributes that correspond to the properties on the data type. It will generally contain an attribute for every important property, but if the property is a less important detail and/or the type is very complex, this may not be true.

    For our Height instance, we get this for the stt transform:

      <row wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1"
           wc-date="2008-01-22 11:12:42"
           wc-type="Height Measurement"
           when="2008-01-22 11:12:42"
           display="1.9405521428867 m"
           height-in-m="1.9405521428867" />

    How do we know what attributes are here and what to do with them?

    That information is stored in the ColumnDefinitions property of the type definition. Each of these (an ItemTypeDataColumn instance) corresponds to one of the attributes on the row created by the STT transform.

    The following code can be used to pull out the values:

        XmlNode rowNode = item.TransformedXmlData["stt"].SelectSingleNode("data-xml/row");

        foreach (ItemTypeDataColumn columnDefinition in definition.ColumnDefinitions)
        {
            XmlAttribute columnValue = rowNode.Attributes[columnDefinition.ColumnName];
            // ... use columnValue.Value for display ...
        }

    This is the mechanism that the HealthVault shell uses to display detailed information about an instance. There is additional information in the ItemTypeDataColumn that it uses:

    The Caption property stores a textual name for the column.

    The ColumnTypeName property stores the type of the column.

    The ColumnWidth contains a suggested width to use to display this information.

    The VisibleByDefault property defines whether the column is visible in the shell view by default (the wc-<x> ones typically are not, with the exception of wc-date).
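    Those properties are enough to build a simple header by hand (a sketch, using the type definition fetched earlier):

```csharp
foreach (ItemTypeDataColumn column in definition.ColumnDefinitions)
{
    // Skip columns the shell wouldn't show by default.
    if (!column.VisibleByDefault)
        continue;

    // Caption labels the column; ColumnWidth is the suggested display
    // width; ColumnName keys into the attributes on the stt <row>.
    Console.WriteLine("{0} (width {1})", column.Caption, column.ColumnWidth);
}
```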


    If you don’t want to decode all the column information yourself, you can use the HealthRecordItemDataGrid in your project.

    Put the following after the page directive in your .aspx file:

        <%@ Register TagPrefix="HV" Namespace="Microsoft.Health.Web"  Assembly="Microsoft.Health.Web" %>

    and then put an instance of the grid in the appropriate place:

        <HV:HealthRecordItemDataGrid ID="c_itemDataGrid" runat="server" />

    You then create a filter that defines the data to show in the grid in the page load handler:

        c_itemDataGrid.FilterOverride = new HealthRecordFilter();

    and the grid will be rendered using the STT transform view. If you want it to use the MTT transform view, you can set the TableView property on the grid to MultipleTypeTable, and it will show a summary view. You will also see this view if the filter returns more than one thing type.

    The form transform

    The final transform is the form transform. This transform exists on most, though not all types (we’re working to add form transforms where they’re absent). It provides an HTML view of the type.

    For our Height instance, we get the following from the form transform:

    <div class="xslThingTitle" id="genThingTitle">Height</div>
    <div class="xslThingValue">1.9405521428867 m</div>
    <table class="xslThingTable">
        <tr>
            <td class="xslTitleColumn">Date</td>
            <td class="xslValueColumn">2008-01-22 11:12:42</td>
        </tr>
    </table>

    which, when rendered, looks something like this:

    1.9405521428867 m
    Date 2008-01-22 11:12:42

    Other transforms

    Some thing types will return a list that contain other transforms with names like “wpd-F5E5C661-26F5-46C7-9C6C-7C4E99797E53” or “hvcc-display”. These transforms are used by HealthVault Connection Center, for things like transforming WPD data into the proper xml format for a HealthVault instance.

  • Eric Gunnerson's Compendium

    Fantastic contraption


    I apologize ahead of time

    Fantastic Contraption...

  • Eric Gunnerson's Compendium

    In praise of boredom...


    Raymond wrote an interesting post about the erosion of the car trip experience.

    Along with the desire to shield our kids from any discomfort, I think there's a big desire to shield them from boredom.

    Boredom is part of being an adult, and I think learning to deal with it is an important part of growing up.

    Back when I was a kid, every year or so we took a long trip from Seattle to Boise to visit my grandparents. Though we usually made the trip over at night (to avoid the heat (no AC, of course)), there were lots of hours of watching the "scenery" roll by (as an adult, I find the area along the Columbia River to be striking, but as a kid it's a whole lot a nothin'), and then similar hours while we were there.

    That meant we had to learn how to amuse ourselves and not annoy each other too much.

    But if you have video every time you're in the car, you don't learn how to deal with being bored, and (as a parent) you miss some great opportunities for conversation, not to mention the chance to inflict your musical tastes on your offspring.



  • Eric Gunnerson's Compendium

    Dr. Horrible...


    Last night I fixed the QuickTime player on my somewhat aging home machine by turning off DirectX acceleration, and the family sat down to watch Dr. Horrible's Sing-Along Blog, which I had purchased through iTunes for the princely sum of $3.99.

    Dr. Horrible, if you aren't "in the know", was created by Joss Whedon, of Firefly fame. Firefly was a sci-fi western, and Dr. Horrible is an internet comic book superhero musical. It stars Felicia Day as the love interest, Nathan Fillion (aka Mal) as Captain Hammer, and Neil Patrick Harris (who will always be "Doogie" to me...) as the title character.

    Both the writing and the music are top-notch, as are the performances by the main characters. I'm hoping there will be more than the initial 3 episodes.


  • Eric Gunnerson's Compendium

    HealthVault data type design...


    From the "perhaps this might be interesting" file...

    Since the end of the HealthVault Solution Providers conference in early June - and the subsequent required recharging - I've been spending the bulk of my time working on HealthVault data types, along with one of the PMs on the partner team. It's interesting work - there's a little bit of schema design (all our data types are stored in XML on the server), a little bit of data architecture (is this one data type, or four?), a fair amount of domain-specific learning (how many ways are there to measure body fat percentage?), a bit of working directly with partners (exactly what is this type for?), a lot of word-smithing (should this element be "category", "type", or "area"?), and other things thrown in now and then (utility writing, class design, etc.).

    It's a lot like the time I spent on the C# design team - we do all the design work sitting in my office (using whiteboard/web/XMLSpy), and we often end up with fairly large designs before we scope them back to cover the scenarios that we care about. Our flow is somewhat different in that we have external touch points where we block - the most important of which is an external review where we talk about our design and ask for feedback.

    Since I'm more a generalist by preference, this suits me pretty well - I get to do a variety of different things and learn a variety of useful items (did you know that you can get the body fat percentage of your left arm measured?).

    Plus, I learn cool terms like "air displacement plethysmography".

  • Eric Gunnerson's Compendium

    Seattle Century 2008 ride report

    Seattle Century 2008 ride report
  • Eric Gunnerson's Compendium



    Back when I first started listening to music - in the days before there were CDs - if you were cool you bought your music on records, and then taped them onto 90-minute cassettes. You did this because it was hard to flip a record while you were driving, records wore out, and pre-recorded cassettes sounded a bit like a 48 kbps MP3 stream, when they worked. Sometimes they didn't, and you had an $8 cassette afro.

    Oh, and you had to worry about azimuth (an adjustment that, when wrong, could kill all your high end), Dolby B, and, if you were really cool, Dolby C (twice the Dolby!).

    And you listened to what was called "album rock" in my area, otherwise known as "progressive rock". Those who wish to debate the differences between those two labels are welcome.

    Then, I went off to college, and CDs were released, but they cost $900, so nobody had one (actually, one guy in my dorm had one, connected to his $10K high-end system). So, we kept buying albums.

    Then prices came down, we got jobs, and we bought CDs of the groups we had listened to, and reveled in the CD experience. The sound was so much clearer than cassettes that we didn't realize that a lot of the early transfers were awful.

    Over time, many of our albums got remastered - first by Mobile Fidelity Sound Lab (who had made some pretty killer vinyl recordings), and then by the labels as they realized there was some money in it.

    By this point, you're wondering if there is a point, or if it's just an onion belt.

    Anyway, I have a fair number of CDs that are early transfers, and I'd like to replace them, but it can be a bit of a pain to find out what specific albums have been remastered, when they came out, etc.

    Enter Progrography, which has a bunch of information about progressive rock, including re-releases. So, if I want to know what remaster to get for Who's Next, there's a page that gives me all the info. Well, some of the info - some of the pages are a bit out of date - but what's there seems to be pretty good.

    And if not, I can remember listening to AC/DC, which was the style at the time...



  • Eric Gunnerson's Compendium

    #region: Sliced bread or sliced worms?


    (editor's note: Eric gave me several different options to use instead of sliced worms, but they were all less palatable. So to speak.)

    Jeff Atwood wrote an interesting post on the use of #regions in code.

    So, I thought I’d share my opinion. As somebody who was involved with C# development for a long time – and development in general for longer – it should be pretty easy to guess where I stand.

    There will be a short intermission for you to write your answer down on a piece of paper. While you’re waiting, here are some kittens dancing.

    Wasn’t that great? No relation, btw.

    So, for the most part, I agree with Jeff.

    The problem that I have with regions is that people seem to be enamored with their use. I was recently reviewing some code that I hadn’t seen before, and it had the exact sort of region use that Jeff describes in the Log4Net project. Before I can read through the class, I have to open them all up and look at them. In that case, regions make it significantly harder for me to understand the code.

    I’m reminded of a nice post written by C# compiler dev Peter Hallam, where he talks about how much time developers spend looking at existing code. I don’t like things that make that harder.

    One other reason I don’t like regions is that they give you a misleading impression about how simple and well structured your code is. The pain of navigating through a large code file is a good reason to force you to do some refactoring.
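To make that concrete, here's a hypothetical sketch (the class and members are invented for illustration, not taken from the code I was reviewing) of the style I'm complaining about - regions dressing up a class that never had enough code to need hiding:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical example: a tiny class wrapped in regions. Collapsed
// in the editor it looks tidy and well organized; expanded, there
// are only a few lines of actual code, and the reader had to open
// every region just to find that out.
public class OrderBook
{
    #region Fields

    private readonly List<string> _orders = new List<string>();

    #endregion

    #region Public Members

    public void Add(string order)
    {
        _orders.Add(order);
    }

    public int Count
    {
        get { return _orders.Count; }
    }

    #endregion
}
```

If the class really is big enough that the regions feel necessary, that's the refactoring hint; a class like OrderBook reads just as well with the regions deleted.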

    When do I like them? Well, I think that if you have some generated code in part of a class, it’s okay to put it in a region. Or, if you have a large static table, you can put it in a region. Though in both cases I’d vote to put it in a separate file.

    But, of course, I’m an old dog. Do you think regions are a new trick?

  • Eric Gunnerson's Compendium

    Taking on dependencies


    A recent discussion on how to deal with dependencies when you're an agile team got me thinking...

    Whether you are doing agile or waterfall (and whether the team you are dependent on is doing agile or waterfall), you should assume that what the other group delivers will be late, broken, and lacking important functionality.

    Does that sound too pessimistic? Perhaps, but my experience is that the vast majority of teams assume the exact opposite perspective - that the other group will be on time, everything will be there, and everything will do what you need it to do. And then they have to modify their plan based upon the "new information" that they got (it's late/something's been cut/whatever).

    I think groups that plan that way are deluding themselves about the realities of software development. Planning for things to go bad up front not only makes things go more smoothly, but you also tend to be happily surprised, as things are often better than you feared.

    A few recommendations:

    First, if at all possible, don't take a dependency on anything until it's in a form that you can evaluate for utility and quality. Taking an incremental approach can be helpful here - if you are coming up with your 18-month development schedule, your management will wonder why you don't list anything about using the work that group <x> is doing. If, on the other hand, you are doing your scheduling on a monthly (or other periodic) basis, it's reasonable to put off the work of integrating the other group's work until it's ready to integrate (based on an agreement of "done" you have with the other group).

    That helps the lateness problem, but may put you in a worse position on the quality/utility perspective. Ideally, the other team is already writing code that will use the component exactly the way you want to use it.  If they aren't, you may need to devote some resources towards specifying what it does, writing tests that the team can use, and monitoring the component's progress in intermediate drops. In other words, you are "scouting" the component to determine when you can adopt it.


  • Eric Gunnerson's Compendium

    Why does C# always use callvirt? - followup


    I was responding in comments, but it doesn't allow me to use links, so here's the long version:


    Yes, marking everything as virtual would have little performance impact. It would, however, be a Bad Thing. It's #3 on my list of deadly sins... 


    cmp [ecx], ecx is the solution to the problem. It's there because that's what the JIT generates to do the null test.


    Yes, you are missing something. Methods need to be marked as virtual for them to be virtual methods - this is separate from emitting the callvirt instruction.


    I'm not sure I understand your example (why are you calling ToString() on b when b is already a string?). It seems to me that what you want can be handled through a method that tests for null and returns either the result of ToString() or the appropriate default value (String.Empty seems to be the logical one in this case).
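A minimal sketch of the helper I'm describing (the name ToStringOrEmpty is mine, invented for this example - it's not a framework method):

```csharp
using System;

public static class Formatting
{
    // Returns value.ToString() for a non-null reference, and a default
    // (String.Empty) instead of throwing when the reference is null.
    public static string ToStringOrEmpty(object value)
    {
        return value == null ? String.Empty : value.ToString();
    }
}
```

With that, Formatting.ToStringOrEmpty(someObject) gives you "" for a null reference rather than a NullReferenceException at the call site.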


  • Eric Gunnerson's Compendium

    Why does C# always use callvirt?


    This question came up on an internal C# alias, and I thought the answer would be of general interest. That's assuming that the answer is correct - it's been quite a while.

    The .NET IL language provides both a call and a callvirt instruction, with callvirt being used to call virtual functions. But if you look through the code that C# generates, you will see that it generates a "callvirt" even in cases where there is no virtual function involved. Why does it do that?

    I went back through the language design notes that I have, and they state quite clearly that we decided to use callvirt on 12/13/1999. Unfortunately, they don't capture our rationale for doing that, so I'm going to have to go from my memory.

    We had gotten a report from somebody (likely one of the .NET groups using C# (though it wasn't yet named C# at that time)) who had written code that called a method on a null pointer, but they didn’t get an exception because the method didn’t access any fields (i.e. “this” was null, but nothing in the method used it). That method then called another method which did use the “this” pointer and threw an exception, and a bit of head-scratching ensued. After they figured it out, they sent us a note about it.

    We thought that being able to call a method on a null instance was a bit weird. Peter Golde did some testing to see what the perf impact was of always using callvirt, and it was small enough that we decided to make the change.
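Here's a small sketch of the behavior in question (the class is invented for illustration). Because C# emits callvirt, the null check happens at the call site, so the call throws even though the method never touches any instance state:

```csharp
using System;

class Widget
{
    // A non-virtual method that never reads "this". With a plain "call"
    // instruction this could execute on a null reference; because C#
    // emits "callvirt", the runtime null-checks the reference before
    // the method ever runs.
    public string Greet()
    {
        return "hello";
    }
}

static class Demo
{
    public static string TryGreet(Widget w)
    {
        try
        {
            return w.Greet();
        }
        catch (NullReferenceException)
        {
            return "NullReferenceException at the call site";
        }
    }
}
```

Demo.TryGreet(new Widget()) returns "hello", while Demo.TryGreet(null) reports the NullReferenceException - the exception the pre-callvirt behavior would have deferred until "this" was actually dereferenced.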

  • Eric Gunnerson's Compendium

    Looj Review


    As many of you know, we have lived with a Roomba for a few years. He's been a faithful servant, though our cleaning people sometimes unplug the charger and don't plug it back in.

    A while back I got an email that said "Looj on Woot!". Woot!, as many of you know, is a website that specializes in selling something different every day. And Looj is a gutter-cleaning robot from iRobot, the maker of the Roomba. A week or so later, the Looj showed up at my door, but I only got around to reviewing it today.

    Unlike the Roomba, the Looj is not autonomous. So, strictly speaking, it's a radio-controlled gutter cleaner rather than a robotic one, but I'm willing to cut them some slack on this. It has two parts - there's the main body of the robot, which is about 12" long, 2" wide, and perhaps an inch and a half tall. On the front is the cleaning part, which has rubber flaps and some strong brushes.

    Attached to the top is the handle/remote control.

    I got up on my roof today to try it out. I have a few leaves, a few needles, a bit of moss, and a whole lot of maple tree seeds (aka "helicopters"). It's all fairly dry because of the time of the year.

    To use it, you set it in the gutter, turn it on, and detach the handle. Turn on the auger, and then you just drive it forwards and backwards like an RC car. In most cases, it will clear all the debris in the first pass, but sometimes there's enough that it climbs on top a bit, and you have to reverse and then move forward to clean it all out. Or, you can drive forwards in spurts when you hit a lot of debris.

    It works really well. I only ran into two problems. In one of my gutters, the Looj rolled over on its back, but since the tread design is symmetrical, it works fine on the back as well. The second problem I had was when I drove it over a short maple tree which got tangled in the auger and the auger clutch released (this doesn't damage the Looj). I untangled it, pulled out the tree, and finished that section of the gutter (the Looj manual tells you not to try to remove trees...)

    So, I did all the gutters - perhaps 120' - in about 15 minutes. Now, I did it from the roof, so I didn't have to move the ladder, but it was still remarkably painless. And it's pretty cheap for what it does - about $100 for the basic model.


  • Eric Gunnerson's Compendium



    I was really just trying to get people to smile when they read about it.

    Last Sunday night (the 9th), I played another game of indoor. I felt pretty good, and though I got run into fairly hard at one point, I thought that the guy who ran into me came away worse than I did. At the end of the game my right ankle was sore, but everything else was fine.

    Tuesday morning, I woke up, and my side hurt. I tried to deny it, but by Thursday morning, it was clear. The guy who hit me must have run into me with a knee, because I have another hurt rib (on the lower-left quadrant - one of the short ribs, another new spot for me). I'm not sure whether it's bruised or cracked, but I do know that it's pretty painful. The 30 miles I did on the bike on Sunday were about as pleasant as 7 hills was (i.e. not much), and I skipped this Sunday's soccer game (it's better to spread the injuries out rather than enjoy them all at once). So, it's another 2-3 weeks of pain, though the only really bad time is when I first get up in the morning and stretch.

    The only upside for this is that I've been looking for a good excuse to skip RAMROD this year, and I figure the double-rib qualifies.

  • Eric Gunnerson's Compendium

    Conference wrap-up...


    After a bit of time to recover from my back-to-back conferences (TechEd 2008 in Orlando and the 2008 HealthVault Solutions Conference in Bellevue), I have a few thoughts to share.

    I haven't been to TechEd for a few years, but TechEd is still TechEd. The developer division, however, has seen a lot of turnover, and I was surprised to find how few Microsoft people I knew were in attendance. I did two "lunch talks", which is code for "we don't know what track to put you in...". You only get 45 minutes for your talk (after which they come in and tell you to get off the stage), but on the plus side, you don't have to go through any slide review process. I did a HealthVault introduction that went relatively well, and a "write lots of code" talk that went well except for some demo slowness (more on that later). There were about 50 attendees in each session, which is pretty good for a lunch session because of the hassle of attending them.

    I'm disappointed that TechEd no longer devotes a night to "ask the experts". Instead, the MS people have to do "booth duty", which means you stand at your designated section for hours and hope that somebody will come by to talk to you. From watching and talking with a few MS people, that meant a lot of hours where you just stand around, and even when people come by to talk, the MS person who is best equipped to answer the question may not be there.  

    I talked with developers at all the meals, and had some good conversations. I ran into 3 developers who worked in the Health area but had never heard of HealthVault. It means we have some work to do to find out why those developers don't know about us, but it also means that we have some nice opportunities to reach a new audience.

    Splitting the conference into two sections (dev for 4 days, IT for 4 days) was a positive move for the people I talked to, and made it a bit more intimate (if you can properly apply that term in a conference that big).

    Two things I suggest not doing at a conference:

    First, don't try to write a presentation for a second conference while you are at a conference. It's really hard to do well.

    Second, don't check your bag at the conference center, unless you want to spend 35 minutes to do what will take you 5 minutes at the airport. The shuttles to the airport were nice, however.

    After the week at TechEd, I headed back for the

    HealthVault Solutions Conference

    Which was held Mon/Tue of the next week in picturesque downtown Bellevue (come see our construction) at the Hyatt Hotel. This was a great conference - everybody was uniformly friendly, and because of the partner approach the conference is as much about partner <-> partner interaction as it is about Microsoft <-> partner interaction. Monday featured a keynote and then a very complex demo involving live code from lots of different partners working together, put on by my team (but with very little effort on my part). The demo was nearly flawless, and if anything, they made it look a little too effortless. It was very compelling.

    Tuesday started with a keynote by Dr. Oz, a cardiac surgeon and gifted speaker. After a product roadmap talk (that I skipped to get set up for our technical track), we had the following talks:

    • An architecture talk by Bert and Sean
    • A data type talk by Tim and Eric
    • (lunch)
    • A development talk (same talk I did at TechEd) by Eric
    • A third-party-library and other topics talk by Chris
    • A Patient Connect talk by Kalpita

    The talks all went well, with the exception of Eric's. Apparently he forgot to modify his proxy settings to be used outside the firewall, so every time he made a request it had to time out finding the proxy server before it completed. He is disappointed that he didn't figure this out before the talk, and apologizes to all those who put up with the slowness.

    Our goal is to re-use the slides and get them on MSDN in some form. I'm probably going to take the development talk and make a tutorial out of it.

  • Eric Gunnerson's Compendium

    Answering a question nobody asked...


    I'm on a break between sessions at TechEd, downstairs in one of the cavernous halls. Something like 400 yards from front to back (yes, I paced it off). I was looking for a comfortable place to sit for a few minutes, and found that the MSDN Zone has a big space with about 20 bean-bag chairs in it. I walk around the side, and am surprised to find that there are a few empty ones. I gracefully lower myself - as gracefully as my middle-aged body will let me right now - and settle in.


    Apparently, somebody thought it was important to answer the question "what if I made something that looked like a bean-bag chair but stuffed it with cheap fiberfill instead?", and somebody else thought it was a good idea to answer the question "will people like these better than bean-bag chairs?"

    Questions that nobody had ever really asked before, but for good reason. The whole point of a bean-bag chair is that the user gets to modify the chair to their own personal support requirements. If you want to flop out, you mush it out flat. If you want to sit up - and perhaps use your laptop - you smush it so that it provides good support.

    This "chair" is okay for flopping on, but despite a bit of swelling at one end, provides little in the way of support. After trying a few positions, I'm writing this while I'm in "street luge" position - my legs stretched out in front of me and my head just slightly raised up - which manages to look fairly relaxing without being relaxing at all.
