• Eric Gunnerson's Compendium

    The Omnificent English Dictionary in Limerick Form


    A very nice collection of definitions, all done in limerick form.  


  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #4 – Storing and retrieving weights


    We are now ready to make our WeightTracker application do something useful.

    That something useful is composed of two stories:

    • The user enters a weight, and it’s stored to the user’s HealthVault record
    • The weight measurements in the user’s record are displayed

    We’ll do them in order.

    Entering and storing a weight

    Looking at default.aspx, you will find that it’s prepopulated with some controls that we’ll be using in the tutorial. Some of the ones that we don’t need yet are marked as invisible. For now, just pretend that the ones that aren’t visible aren’t there, and nobody will get hurt.

    For this story, we’ll be using the c_textboxWeight for the user to enter the weight, and the c_buttonSave for the user to press to save the weight.

    Switching to the code file (Default.aspx.cs), we see that there’s a click handler c_buttonSave_Click() where we’ll write the code to save the weight value to the user’s record.

    First, we’ll get the weight value that the user entered and convert it to an integer:

    int weightInPounds = Int32.Parse(c_textboxWeight.Text);
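    As written, Int32.Parse will throw if the user types something that isn't a number. A more defensive variant (my addition, not part of the tutorial code) might look like:

```csharp
int weightInPounds;
if (!Int32.TryParse(c_textboxWeight.Text, out weightInPounds))
{
    // Bad input; a real application would show a validation
    // message here rather than silently returning.
    return;
}
```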

    Then we’ll create an instance of the Weight type, which is (surprise) the type you use to store weights.

    Weight weight = new Weight();

    Now, we need to put the weight value from the user into the weight instance. But we have a problem.

    To make weight values interchangeable between applications, the Weight data type stores weight values measured in kilograms. But most users in the US would prefer to see their weight displayed in pounds.

    To fix this, we could just tell applications to perform the appropriate conversions to and from whatever unit they want to use, but that may cause problems with round-trip conversions:

    3 miles -> 4828.031 meters

    4828.031 meters -> 2.99999938 miles
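    The same effect shows up with the pounds/kilograms conversion we're about to use. Here's a standalone sketch (plain C#, nothing HealthVault-specific) that rounds the stored kilogram value to two decimal places and converts back:

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        const double PoundsPerKilogram = 2.204;

        double pounds = 180.0;
        // Store in standard units, rounded as a fixed-precision field might.
        double kilograms = Math.Round(pounds / PoundsPerKilogram, 2);  // 81.67
        // Converting back no longer yields exactly 180 pounds.
        double poundsAgain = kilograms * PoundsPerKilogram;            // ~180.00068

        Console.WriteLine("{0} kg -> {1} lb", kilograms, poundsAgain);
    }
}
```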

    HealthVault solves the problem by storing measurements in two separate formats – the exact way the user entered the value, and the value stored in the standard units. The first one is used when an application needs to display a value, and the second one is used to perform calculations.

    Back to our code. We’ll start by storing the weight using our standard measure:

    weight.Value.Kilograms = weightInPounds / 2.204;

    And then we’ll set what we call the DisplayValue, which is the value the user entered.

    weight.Value.DisplayValue = new DisplayValue(weightInPounds, "pounds");

    Now that we’ve stored the value in the weight instance, we need to save it to the appropriate HealthVault record. We do that with the following code:
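    In the HealthVault .NET SDK, a new item is stored with the record's NewItem() method – a one-line sketch:

```csharp
// Store the new Weight item in the user's selected record.
// (NewItem() sends the item off to the HealthVault service.)
PersonInfo.SelectedRecord.NewItem(weight);
```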


    HealthVault organizes data into records, where each record holds data for a specific person. Data access is always performed through a specific person’s login. So…

    PersonInfo corresponds to the person who is currently logged in. When they chose to authorize an application, they selected a specific record to authorize, and that is returned to the application in SelectedRecord. So PersonInfo.SelectedRecord is shorthand for “the record the user chose”.

    It is possible for the application to redirect to the HealthVault shell to switch to a different record, or for an application to deal with more than one record at the same time, but that’s beyond what we’re trying to do right now.

    Querying and displaying weights

    Now that we’ve figured out how to save weights, we need to write some code that will fetch the weights that we’ve entered and display them in a table. We’ll put that code in the Page_PreRender() method.


    We use Page_PreRender rather than Page_Load because any events – such as the one fired when the user enters a new weight – occur after Page_Load but before Page_PreRender. So, if we put our code in Page_Load, newly entered weights would not show up.
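    In outline, the ordering looks like this (the DisplayWeights helper name is mine, standing in for the query code we're about to write):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Runs first - a weight saved on this postback
    // hasn't been processed yet.
}

protected void c_buttonSave_Click(object sender, EventArgs e)
{
    // Runs second - the new weight is saved here.
}

protected void Page_PreRender(object sender, EventArgs e)
{
    // Runs last, so the query sees the weight saved above.
    DisplayWeights();
}
```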


    First, we need a way to query the current record. This is done through the HealthRecordSearcher class, which we can get from the selected record:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    Next, we need a way to specify which items we want to read from the record. That is done using the HealthRecordFilter class. It has a number of options that can be used to control what items it returns – we will use it in the simplest way by specifying the type that we want returned:


    HealthRecordFilter filter = new HealthRecordFilter(Weight.TypeId);
    searcher.Filters.Add(filter);


    Now we want to send that request over to the HealthVault server, and get the weight items back:


    HealthRecordItemCollection weights = searcher.GetMatchingItems()[0];



    You may have noticed that HealthRecordSearcher.Filters is a collection of filters rather than a single filter. If you want to do more than one query in a single request, you can add multiple filters to the searcher object, and they will all get executed in one batch.

    When you call GetMatchingItems(), you get a collection of result collections, with one result collection for each filter. Since there’s only one filter here, we look at the first collection only.

    Executing multiple filters at once may improve the performance of querying.
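    For example, a two-filter batch might look like this – a sketch, using the Height item type that we requested read access to when configuring the application:

```csharp
// ReadOnlyCollection lives in System.Collections.ObjectModel.
HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();
searcher.Filters.Add(new HealthRecordFilter(Weight.TypeId));
searcher.Filters.Add(new HealthRecordFilter(Height.TypeId));

// One request to the server; one result collection per filter,
// in the same order the filters were added.
ReadOnlyCollection<HealthRecordItemCollection> results =
    searcher.GetMatchingItems();
HealthRecordItemCollection weights = results[0];
HealthRecordItemCollection heights = results[1];
```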


    Finally, we need to walk through the returned weight items and display them. We do that with the following code (including some canned code to do the ASP.NET table manipulation):

    AddHeaderCells(c_tableWeight, "Date", "Weight");
    foreach (Weight weight in weights)
    {
        AddCellsToTable(c_tableWeight, weight.When.ToString(),
            weight.Value.DisplayValue.ToString());
    }

    The two helper methods make adding items to the table a little easier. Note that we are using the weight DisplayValue to show to the user.

    Next Time

    In the next episode, we’ll add some calculations to our application…


  • Eric Gunnerson's Compendium

    New computer


    I've shot a fair number of pictures this last year - mostly of my daughter's sports. My camera is a Canon 40D, and I generally use my 70-200mm F4L lens. The camera works well and is better than I am.

    I shoot all my photos in RAW format, which basically means you get the information before the camera does any post-processing (white balance, exposure, sharpening, etc.) I then use Lightroom 1.4 to make the adjustments that I want, and then export them to jpeg and upload them to smugmug.

    I'll typically shoot a full card of about 300 exposures at a game, and then when I get back, I need to delete the shots I don't like, crop all the keepers, and then do the adjustments I want.

    My current laptop is pretty ancient and is slow at doing this, and I've wanted to start using Lightroom 2.2, which is nicer but takes more resources.

    After a bit of thought, I decided that I'd put together a new system for our office and do photography on that rather than getting a new laptop. My plan was to build a new office system and then get a mid-range laptop rather than trying to get a pricey laptop that did everything I wanted.

    I don't keep up on processors as much as I used to, so I set off to do some research. I started at Ars Technica, and looked through their current system recommendations as a starting point.

    The first choice point was the processor. I've traditionally gone with AMD on price/performance, and though I considered the Intel dual or quad cores, I ended up settling on an AMD Athlon dual core running at 3.1 GHz ($72.99). The quad-core Phenom looks interesting - and Lightroom does a good job with multiple cores - but it isn't that fast yet. I bought the retail processor, which came with a nice quiet fan to make things easier.

    For a motherboard, I got the ASUS M3A78-EM ($78.99). It has pretty much everything I want onboard, with decent but not great graphics, and it has the AM2+ socket, so when the Phast Phenoms are available, I'll just be able to slot one right in. The motherboard supports 8G of main memory, and it's full of DDR2 1066 memory from Kingston ($91.98).

    For storage, there are two Samsung Spinpoint F1 750G drives ($159.98). I like having multiple disks so I can put system/swap on one and application data on another, and for Lightroom you can put the catalog on one drive and the pictures on another, spreading your I/O out.

    Add in a DVD burner ($26) and a nice Inwin case ($65).

    In the old days, I'd talk about all the cards I'd put in it, but the motherboard has a ton of stuff built in. I'll likely pull my Soundblaster Audigy out of the old office system and put it in the new one.

    Oh, and a combination floppy disk and card reader, so that I can pull the pictures off the card as fast as possible ($25).

    Putting the system together was pretty simple - a bit tight around the drives but not bad. The Inwin case came with a really nice adapter for all the front panel controls. You plug in the separate cables for power light, HDD light, power switch, reset switch into this little header that's labeled, and then the header plugs into the motherboard. Same for the USB cable from the card reader.

    And, of course, SATA is more than a little easier to put together than IDE.

    The system started fine, and was treated to an install of Vista Ultimate, 64-bit version. That went fine, though I needed to use the motherboard driver disk to get everything up and running.

    Lightroom 2.2 went on tonight, and the system works very nicely, even with the adjustment brush feature (which is known to be pretty slow), and even though I haven't yet gotten around to putting the catalog on a separate disk.

    Total is somewhere around $600, which is pretty good. I still need to get a calibrator to calibrate my monitor so I'm seeing the right colors, and then it's time to move over all the other apps and data.

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #3: Configuring our application


    Now that we have HelloWorld set up and running, we want to move on to developing the real application. We’ll start with a shell application and add to it as we go.

    Download the WeightTracker shell, and open the project in Visual Studio. Register the certificate using the same process you used in the previous tutorial. Add a reference to the following HealthVault assemblies in c:\program files\microsoft healthvault\sdk\dotnet\assemblies:

    • Microsoft.Health.dll
    • Microsoft.Health.ItemTypes.dll
    • Microsoft.Health.Web.dll

    Run it, and you will be taken to the HealthVault authorization page for the application. Note that it’s specific to this application – it has its own name, logo, description, and set of requested types.

    It is possible to continue in the tutorial using the configuration that comes with WeightTracker, but since configuration is an important part of HealthVault applications, I recommend that you create your own configuration. That’s what we’ll be doing in this part of the tutorial.

    Creating a new certificate

    When we were working with Hello World, we had to register the application certificate on our system, and we did that through the MMC utility.

    This time, we need to create a certificate, register it on our system, then register it with the HealthVault platform. It’s possible to do the work by hand, but we’re going to use a utility that ships with the SDK to do this.

    Go to Start –> All Programs –> Microsoft HealthVault –> SDK –> HealthVault Application Manager. If you are running Vista, you’ll need to run as administrator.

    You should see the initial application manager screen, and it should list the HelloWorld Sample certificate that you registered in the last part of the tutorial, and it may list other sample applications you have used.

    The Application Name isn’t stored as part of the certificate – it is only stored by the ApplicationManager application. The application knows the names of some of the sample applications, which is why they show up.

    If you check the “Show Unnamed applications” checkbox, you will see the WeightTracker certificate pop up. You can verify which one it is by matching the application ID (defined in web.config) with the certificate name.

    We will create a new certificate by clicking the appropriately-named button. This will bring up a dialog that asks you to enter a name for the application.

    I chose to call mine “Weight Tracker Tutorial”, since that’s the name of the application we’ll be writing. You can use the same name or something else, like “Pizza Joe’s spicy vegetarian”.

    The main list should refresh and you should see the new certificate listed. When we generated a certificate, it was created with both a private and a public key, and they’re both in the certificate store. The public key is something that we’ll send off to the HealthVault platform so that it can use it to verify items signed with the private key. Public keys are typically passed around as .cer files.

    The private key isn’t something that we should be emailing around, because anybody who has it can pretend to be us. Application manager does allow you to export both the private and public keys in a .pfx file, in case you want to run the application on another server.

    The wikipedia articles on public key cryptography and digital signatures are good introductions to this area if you would like to know more. There is also the cryptography overview on MSDN.

    Registering a certificate with the HealthVault platform

    To register a certificate, right-click on it and choose “Upload certificate”.

    This will package the certificate’s public key and the application name (if present) into a request, and send it off to the HealthVault application configuration center. It will then launch your browser to the app config center.

    The app config center is a HealthVault application, and access to the configuration of a specific application is limited to a single HealthVault account. When you authenticate, you should use an appropriate account that works for what you are doing – for example, a shared account when multiple people need to be able to access and modify an application’s configuration.

    Once you have authenticated, you will see a list that contains your new application, with the name that you chose, and the generated application id. Click on your application.

    You will see the Information page for your application. Here, you set the name, description, and other items that show up on the authorization page, including the logo.

    Configuring an application’s data access

    We’ll start by configuring the data access that we’ll need to get started.

    There are two kinds of access that an application can request. Online access provides access when the user is running your web application, and is what we’ll use in the tutorial. Offline access provides access whenever the application wants access, and is typically used to synchronize information between HealthVault and the application.

    Offline access is something users are more careful about granting, especially to groups that they don’t trust, so your application should try to use online access whenever possible.

    Click on the “OnlineAccess” tab. To make organization easier, data access is grouped into rules. Choose “add rule” to bring up the rule editor.

    Here you need to give a name for the rule (which isn’t user-visible, so you can name it whatever you want), and a “why string”, which is the justification that is given to the user on the page where they decide whether to grant access to your application. Good why strings are required before you can go live, and if you put some thought into it at this point, things are much easier later on.

    I’ll call this rule “Weight”, because that’s what we’re going to ask for. For the why string, I thought of a few possibilities that we could use:

    1. Because we need to access your data.
    2. Because the application needs access to work correctly.
    3. WeightTracker uses access to your Weight measurements to help you effectively manage your weight.
    4. All your weight are belong to us.

    Which one is best? Please write 500 words explaining why, and have it for me on Monday.

    The key here is that the why string is displayed to the user, and it needs to be something that a) the user understands and b) explains the benefit of granting access.

    For permissions, I choose “All”, because I want to be able to perform all operations on Weight measurements (create/read/update/delete). Then I choose the “Weight Measurement” data type from the list and pick “Save”. That gives me my first rule.

    I add a second rule:

    Rule name: Height
    Why string: Weight Tracker uses your height to help determine whether your weight is appropriate.
    Permissions: Read
    Data types: Height Measurement

    Note that I only specified Read access, rather than asking for access that I don’t need. Your application’s access should always be as minimal as possible – don’t ask for access that you don’t need.

    You will also need to delete the rule for “Personal Image”.

    Now that we’ve done that, the platform configuration is complete. All that is left is to configure the application to use the new application ID.

    Configuring the application to use a new certificate

    To modify the application, we need to update the application id in web.config file.

    In application manager, right-click on your new certificate, and choose “copy certificate name to clipboard”. Switch over to Visual studio, open the web.config file, and look for the “ApplicationId” entry. You will find:

    <add key="ApplicationId" value="f36debe2-c2a3-434a-b822-f8c294fdecf9" />

    That’s the one that the application is currently using. On startup, the application finds this string, prepends “WildcatApp-“ to it, and then uses that string to search for a certificate to use.
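    Conceptually, that lookup is something like the following sketch (my illustration of the behavior just described, not the SDK's actual source):

```csharp
using System.Configuration;
using System.Security.Cryptography.X509Certificates;

// Read the app id from web.config and build the certificate subject name.
string appId = ConfigurationManager.AppSettings["ApplicationId"];
string subjectName = "WildcatApp-" + appId;

// Search the local machine store for a certificate with that subject.
X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
X509Certificate2Collection matches = store.Certificates.Find(
    X509FindType.FindBySubjectName, subjectName, false);
store.Close();
```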

    Duplicate the app id entry, and comment one of them out. Paste in your new application ID for the value in the one that isn’t commented out, remove the “WildcatApp-“ part, and save the file.

    Here’s what I typically do:

    <add key="ApplicationId" value="1b8cbb19-a9ed-4ebc-b498-6ae3d0ed44d7" />

    <!-- application ids
        my key
    <add key="ApplicationId" value="1b8cbb19-a9ed-4ebc-b498-6ae3d0ed44d7" />
        original key
    <add key="ApplicationId" value="f36debe2-c2a3-434a-b822-f8c294fdecf9" />
    -->

    At this point, you should be able to run your application, and get the authorization screen that matches the information that you entered in application configuration center.

    I like to keep another application ID around (usually the HelloWorld one) so that I can easily switch to it to see if I’m running into problems due to my application configuration.

    Next Time

    Next time, we’ll be writing some code.


  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #2: Hello World


    In this post, we’ll create an account on the HealthVault system, open up the HelloWorld sample application, and verify that it works on our system.

    There are two live HealthVault platforms.

    • A developer platform. This platform is intended to store test data rather than real consumer data, and is the one that should be used to develop an application. When an application is all done, there is a process to deploy (aka “go live”) with the application, and it will then be able to run against…
    • The consumer platform (which we sometimes confusingly refer to as the “live” platform).

    All of our tutorial examples will be talking to the developer platform.

    Examine the source code

    Start Visual Studio, then open the HelloWorld application at C:\Program Files\Microsoft HealthVault\SDK\DotNet\WebSamples\HelloWorld\website.

    In the solution explorer, expand Default.aspx and double-click on default.aspx.cs. This shows the code for the main page. Notice that the page class is derived from HealthServicePage – that class handles the initial communication handshake between your application and HealthVault, sending the user to the right page to log into the system, etc. That all happens behind the scenes before the Page_Load handler is called.

    Open web.config, and browse through it. If you find the following line:

        <sessionState mode="InProc" cookieless="true"/>

    change it to:

        <sessionState mode="InProc"/>

    ApplicationId specifies a GUID that uniquely identifies a specific application to the platform. ShellUrl and HealthServiceUrl define which instance of the platform to talk to.

    There’s also a proxy setting at the end of the file – if you are running on a network with a proxy server, you will need to change this so that the application can get outside the firewall.

    Run the application

    Hit <F5> to start the program in the debugger.

    That will build the solution, start up the development web server, and start debugging default.aspx. A browser session will open up, and you’ll find yourself on the login page for the HelloWorld application.

    All authentication and authorization in HealthVault applications is performed by a set of web pages that live on the HealthVault server. These web pages are collectively known as the “HealthVault Shell”.

    When you ran the application, the startup code in HelloWorld realized that it didn’t know who the current user was, and redirected off to the appropriate HealthVault shell page.

    At this point, you will need to create a test HealthVault account. For authentication, you can use a Windows Live ID. If you need to create an authentication account – and for test purposes it’s probably a good idea not to use an account you use for something else – go do that now.

    Once you’ve created that account, enter the credentials on the login screen. You will be prompted to create a HealthVault account, and then (when you click continue), will be prompted to authorize the Hello World application to access the information in your record.

    Before an application can run against the HealthVault platform, it must be configured on that platform. That configuration stores some descriptive information about the application (name, etc.), and also the precise data type access that the application requires. For example, an application that tracks a person’s weight might need to be able to store and retrieve weight measurements, but only read height measurements.

    The authorization page that you are currently looking at details this information for the user, who can then make a decision about whether to grant the application that access. This page is atypical because the Hello World application asks for access to all types to make things more convenient, but real applications will only specify the subset of access required.

    Choose “approve and continue”, and you will be redirected back to a page on your system.

    This will be a page that says “Server error in ‘/website’ application”. If you dig a little more, in the exception text you will find:

    [SecurityException: The specified certificate, CN=WildcatApp-05a059c9-c309-46af-9b86-b06d42510550, could not be found in the LocalMachine certificate store, or the certificate does not have a private key.]

    Every time an application runs, it needs to prove its identity to the platform through a cryptographic signature. To do this, it needs a private key on the machine where the application is running. It will use that key to sign the data, and the platform will then verify the signature using the public key that was registered as part of the application configuration process.
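    The mechanics are ordinary public-key signing. This standalone sketch (plain .NET crypto, not the actual SDK code) shows the sign-then-verify pattern:

```csharp
using System.Security.Cryptography;
using System.Text;

class SignatureSketch
{
    static void Main()
    {
        byte[] request = Encoding.UTF8.GetBytes("request body to protect");

        // The application signs the request with its private key...
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        byte[] signature = rsa.SignData(request, new SHA1CryptoServiceProvider());

        // ...and the platform verifies it with the registered public key.
        // (Here both halves share one key object for brevity.)
        bool valid = rsa.VerifyData(request, new SHA1CryptoServiceProvider(), signature);
    }
}
```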

    The Hello World application is already configured on developer platform, so we just need to register it on the client.

    Register the certificate

    To do this, we’ll need to get the certificate into the local machine’s certificate store. Go to the location on disk where HelloWorld lives, go to the cert directory, and you’ll find a .pfx file, which contains both the public and private keys.

    Start up the certificate manager using the following shortcut:

    C:\Program Files\Microsoft HealthVault\SDK\Tools\ComputerCertificates.msc

    Right click on certificates, choose “All tasks”, then “Import”. Specify the .pfx file at:

    C:\Program Files\Microsoft HealthVault\SDK\DotNet\WebSamples\HelloWorld\cert\HelloWorld-SDK_ID-05a059c9-c309-46af-9b86-b06d42510550.pfx

    And then hit Next repeatedly, and Finish. That completes the import of the certificate.

    If you use a proxy to get to the internet and there is a password associated with it, you may need to modify the config file for it. In the sdk/tools directory, find ApplicationManager.exe.config, and add the following:

    <defaultProxy enabled="true" useDefaultCredentials="true">
        <proxy usesystemdefault="True"/>
    </defaultProxy>

    At this point, you should be able to re-run the application (or just hit F5 in the browser), and the HelloWorld application should then work. Note that the certificate is only accessible for the user who imported the certificate – access for other accounts (such as IIS) can be granted through the winhttpcertcfg utility (also in the tools directory), or through a utility that we’ll discuss in the future.

    Next time, we’ll start on our application itself.

    Introduction to HealthVault Development #3: Configuring our Application

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #1: Introduction


    Welcome to the tutorial. In this tutorial, we will develop a HealthVault application from scratch.

    My intention is to add new tutorial chapters on a roughly weekly basis, though I have a few queued up and ready to go.

    If you haven’t located the HealthVault Developer Center on MSDN, start by spending some time there. You can find a link to the SDK on the left side of the page. Download the SDK and install it.

    You will also need to have Microsoft Visual Studio installed on your machine (either 2005 or 2008). IIS is optional for development but may be useful for testing.

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #0: Background


    Coding without a net

    Over the years, I’ve attended a number of talks that show you how easy it is to use a new component, using canned projects, pre-written code that is cut-and-pasted in, and contrived scenarios.

    In the early years of .NET, I saw a different sort of talk (which I believe was done either by Scott Guthrie or Rob Howard) – one that started with File->New project, and just involved coding from scratch. It was very impressive, because you could see exactly what was going on, and you knew it was as easy (or as hard) as it looked. I’ve come to call this approach “coding without a net”, which I will gladly take credit for coining despite the fact that I’m sure I stole it from somebody.

    In the spring of 2008, I set out to write such a talk for HealthVault, to be presented at TechEd 2008 in late May and the HealthVault solutions conference a week later. I wasn’t totally successful at avoiding pre-written code, partly because I didn’t want to write UI code, and partly because there are a few parts of the HealthVault API that still need a little polishing, but overall, I was pleased with the result.

    This is the written version of that talk. My goal is to use the same progression that I did in the talk, and perhaps expand on a few topics that had to be limited due to time constraints.

    Installments will appear on my blog periodically, though those who ever read my C# column on MSDN may remember that I have an unconventional definition for “periodically”.

    Introduction to HealthVault Development #1: Introduction

  • Eric Gunnerson's Compendium

    Retro gaming at its best...


    Back when I was in high school, in the early 1980s, I was first introduced to computer games – what we called "arcade games" at the time.

    There were three common systems for this.

    First of all, there was the TRS-80. You can read about all the exciting details in the link, but the system could display text at 64 x 16, and graphics at a resolution of 128 x 48 if you used the special 2x3 grid characters. Each pixel was either black or an interesting blue/white (the phosphor used in the monitor was not pure white).

    In addition to not having any storage built in, it also had no sound output whatsoever. However, it was well-renowned for putting out tremendous amounts of RF interference, and somebody discovered that if you put an AM radio next to it, you could, through use of timing loops, generate interference that was somewhat related to what was going on in the game.

    But it was cheap. Not cheap enough for me to afford one, but cheap.

    The second system was the Apple II, sporting 280x192 high-resolution color graphics. Well, 'kindof-color' graphics - any pixel could be white or black, but only odd ones could be green or orange, and only even ones could be violet or blue.

    The sound system was a great improvement over the TRS-80, with a speaker that you could toggle to off or on. Not off or on with a tone, just off or on - any tone had to be done in software synthesis.

    Finally, the third system was the Atari 800 and 400. It was far more sophisticated - not only did it have the 6502 as the Apple II did, it had a separate graphics processor (named Antic) that worked its way through a display list, a television interface processor that implemented player-missile graphics (aka "sprites") and collision detection in hardware, and a third custom chip that handled the keyboard, 4-channel sound output, and controllers (we called them "paddles" and "joysticks" back then).

    It was light-years beyond the Apple in sophistication, which only shows you the importance of cheapness over elegance of design and implementation.

    Oh, and you could plug *cartridges* into it, so you didn't have to wait for your game to load from the cassette (or later, floppy disk) before you played it.

    My brother-in-law bought an Atari 400 (the younger sibling of the 800), and of course he had a copy of Star Raiders, arguably one of the first good first-person shooters.  He also had a copy of Miner 2049er, a 2-D walker/jumper that's a little bit like donkey kong and a bit like pac man.

    It was very addictive, and put 10 levels into a small cartridge.

    It was followed in 1984 by "Bounty Bob Strikes Back", featuring 30 levels.

    I played both a fair bit until we finally broke down and sold our Atari 800XL in the early 1990s.

    And now, Big Five software has released both games in emulator form, so you can run them on your PC.

    Marvel at the advanced graphics and wonderful sound. Note that the gameplay and addiction are still there.

    I have to run it at 640x480 size or the keys aren't sensed correctly. Play Miner first - the controls are slightly different between the two games, and you'll get confused otherwise.

    Highly recommended.

  • Eric Gunnerson's Compendium

    Holiday Lights 2008


    Today, I took a break in the snow and finished the installation of the new light display. It's functional, except for one light that isn't working.  I've been extra busy this year, so while the main displays are up, there aren't as many additional lights as I would like to have.


    Our recent snowstorm has changed the look quite a bit - normally you only get a little light from the streetlight on the left, but now there's a ton.

    On the left, there are 8 strings of multicolored LEDs in a circle around the light pole. To the right in front of the truck are some other lights. Hiding behind the truck is the first animated display, the "tree of lights". The big tree (about 40' tall) has red LEDs around the trunk, and features two animated displays: at the top is the second animated display, the "ring of fire", and arrayed on the tree is the new display. To the right you can see the original animated display, santa in the sleigh and on the roof. Finally, outlining the house is a red/green/blue/white string, the last animated display.

    Tree of Lights

    16 channel sequenced controller, about 1500 lights total. From base of tree to top is about 14'.

    The controller is 68HC11 based.














    Ring of Fire

    Ring of Fire is 16 high-output red LEDs driven by a custom 16 channel controller, supporting 16 dim levels per LED.

    The controller is Atmel AVR based.

    I wrote a fair bit about it last year.








    The display that started it all. It animates as follows:

    1. Blue lights in foreground strobe towards santa.
    2. Reindeer, sleigh, and santa appear.
    3. Santa leaves the sleigh and hops up on the roof edge.
    4. Santa goes up to the peak near the chimney.
    5. Santa disappears, and then his hat disappears soon after.

    Then the whole thing reverses itself.

    The display itself is painted plywood, with about 800 lights in total. After 12 years the lights had gotten a bit dim, so this year we replaced all of them. The Santa at the top of the roof is usually a bit more distinct, but he has a big snow beard this year.

    The controller is based on the Motorola 68HC11, running 8 channels.

    House Lights

    The house lights are 4 individual strands in red, green, blue, and white, with a 4-channel controller that dims between the colors, so the house slowly changes from one color to another.

    The controller is based on the Motorola 68HC11, with 4 channels, this time dimmable.

    Tree Lights

    The tree lights are the new display for this year.

    These are jumbo lights lit up with C7 (7 watt) bulbs inside of a colored plastic housing. They really don't show up that well in the picture because of all the light coming off the snow, but even so, I think I will likely need to upgrade the bulbs in them to something brighter (say, in the 25 watt range). And I think I will go with clear bulbs - having both colored bulbs and colored lenses works well for yellow and orange, but the blues and greens are really dark.

    The controller can support up to about 100 watts per channel, though I'm not sure my power budget can support it.

    The controller is Atmel AVR based (my new platform of choice), and the code is written in C. There are 15 channels, and each of them has 32 dimming levels. 

    You can find a lot more boring info here.

  • Eric Gunnerson's Compendium

    Holiday light project 2008 in pictures


    A trip through the new project in pictures:

    I was late in getting started on the project due to putting finishing touches on the office, but I ended up with this wonderful new workbench. Quite the step up from the old door on top of shelf units that I've used for the last 35 years or so (really).

    Left to right, we have the Tektronix dual-channel 20MHz oscilloscope (thank you eBay), a bench power supply, a perfboard with sockets on it in front of my venerable blue toolbox (also 35+ years old), an outlet strip with a power supply plugged into it, a perfboard, an STK500 microcontroller programmer, a Weller soldering station, and a Fluke voltmeter.

    This is the project in its first working version. On the far left, partially clipped, is the power supply. The upper-left part of the prototype board (the white part) has the zero-crossing circuit, and the upper-right has a solid-state relay. A brown wire takes the zero-crossing signal to the microcontroller on the development board, and another brown wire takes the signal back to the relay. The Atmel AVR microcontroller that I use comes in a lot of different sizes, so the development board has many different sockets to support them. On the far right is a white serial line which leads to my laptop - the AVR is programmed over a serial link.

    Back to the zero-crossing circuit. To switch AC power, you use a semiconductor device known as a triac. The triac has a weird characteristic - once you turn it on, it stays on until the voltage goes back to zero. That happens 120 times per second, so to implement dimming you need to control when you turn on the power for each half-cycle.

    Here's a picture that should make it easier to understand.

    The wavy part is the AC sine wave, and the nice square pulse is the zero-crossing signal, which goes high whenever the AC voltage is low enough. The microcontroller gets interrupted when the zero-crossing signal goes high, and then waits a little time until just after the real zero crossing happens.

    If it turned the output on right at that point, it would stay on for the whole half-cycle, which would mean the light was on full bright. If it never turned it on, it would mean the light was (does anybody know the answer? class?) off. If it turned it on halfway in between, the light would be at half brightness. To implement the 32 levels of brightness means dividing the half-cycle into 32 divisions, corresponding to areas of equal power.

    (To be better, I should take into account the power/luminance curve of the incandescent bulb that I'm using and use that to figure out what the delays are. Perhaps the next version).

    To do this for multiple channels, you end up with code that does the following:

    1. Check all channels to see which ones should be turned on at the current time.
    2. Figure out when the next time is to check.
    3. Set an interrupt to that time.
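    In rough C (my own sketch with made-up names, not the actual firmware), the interrupt-time portion of that loop might look like this:

```c
#define NUM_CHANNELS 15
#define NUM_LEVELS   32

unsigned char  dimLevel[NUM_CHANNELS];   /* 0 = off .. 32 = full on     */
unsigned short fireTick[NUM_LEVELS + 1]; /* tick at which a level fires */

/* Called when the timer fires: turn on every channel whose time has
   come (step 1), find the earliest remaining event (step 2), and return
   it so the caller can arm the timer for it (step 3). 0xFFFF means
   nothing is left for this half-cycle. */
unsigned short serviceTick(unsigned short now, unsigned short *outputs)
{
    unsigned short next = 0xFFFF;

    for (int c = 0; c < NUM_CHANNELS; c++) {
        if (dimLevel[c] == 0)
            continue;                  /* off for this half-cycle */

        unsigned short t = fireTick[dimLevel[c]];
        if (t <= now)
            *outputs |= (1u << c);     /* due now - turn it on    */
        else if (t < next)
            next = t;                  /* earliest future event   */
    }
    return next;
}
```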

    That gives us a set of lights stuck at a given dim level. To animate, you need to change that dim level over time. That is handled at two levels.

    The interrupt-level code handles what I call a dim transition. At least, that's what I call it now, since I didn't have a name for it before. We have a vector of current dim levels, one for each channel, and a vector of increments that are added to the current dim vector during each cycle.

    So, if we want to slowly dim up channel 1 while keeping all the others constant, we would set dimIncrement[0] to 1 and set the count to 31. 31 cycles later, channel 1 would be at full brightness.

    If we want to do a cross-fade, we set two values in the increment vector.
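    A minimal sketch of that interrupt-level update (again, my names rather than the firmware's):

```c
#define NUM_CHANNELS 15

signed char   dimIncrement[NUM_CHANNELS]; /* added per half-cycle      */
signed char   dimLevel[NUM_CHANNELS];     /* current level, 0..32      */
unsigned char cyclesLeft;                 /* cycles left in transition */

/* Called once per half-cycle (120 times a second) from the interrupt
   code. Returns 1 when the transition is complete and the main loop
   can set up the next one. */
int dimTransitionStep(void)
{
    if (cyclesLeft == 0)
        return 1;

    for (int c = 0; c < NUM_CHANNELS; c++)
        dimLevel[c] += dimIncrement[c];

    return --cyclesLeft == 0;
}
```

    For the example above - dimming channel 1 up from level 1 - you'd set dimLevel[0] = 1, dimIncrement[0] = 1, and cyclesLeft = 31; 31 cycles later the channel is at 32, full brightness. A cross-fade is just a positive increment on one channel and a negative one on another.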

    That all happens underneath the covers - the main program loop doesn't know about it. The main program loop figures out what will happen next after the current dim transition, and then blocks.

    My early controllers were all table-based, with the tables pre-computed. This was because I was writing in assembler. The current system could also use that approach, but with only 2K of program memory, the procedural approach is more compact, though it is considerably harder to debug. I have a C# program I use to create and test the animations, but I need to rewrite it to use DirectX because I need a 120Hz frame rate to match what the dimming hardware does.

    To get back to the zero-crossing circuit, I first built this circuit using a wall wart with a switching power supply. Such power supplies are small and efficient, but put a lot of noise into the transformer side. I wasted a lot of time on this, and ultimately switched back to a conventional wall wart (from an old Sony discman I used to have) with a linear power supply. Problem solved.

    Back to pictures:

    Here's the completed controller board.

    In the center is the AVR 2313V microcontroller. The U shape is the solid-state relays that switch the AC for the lights. These are nice little Panasonic AQH2223 relays, which switch 600mA (about 75 watts) (though you can get them in versions that do 150 watts), are tiny, and, most importantly, they're less than $2 each.

    Note that these do not have zero-crossing circuits built in. Most solid-state relays do, but you can't use those to do dimming.

    The top has the one transistor used to generate the zero-crossing circuit, a 7805 voltage regulator to provide +5V to the circuit, and a few passive components.

    Careful viewers will notice that the upper-right socket is empty. That's because it's only a 15-channel controller, but I used 16-pin sockets.  The blue wire that holds the AC socket wires on is wire-wrap wire that I had lying around - these are hot-glued down later on. The two black wires provide the rectified voltage (about 15V IIRC) from the wall-wart.

    The controller board is in a small plastic box designed to hold 4x6 photos, and then that's inside of a larger box. This lets me keep all of the electrical connections inside of the box. It's not really required for safety, but if you have a lot of exposed plugs and some water on them, you can get enough leakage from them to trip the GFI on your circuit. So having them all inside is better.

    The box will be enclosed in a white kitchen garbage bag for weather protection when it's outside. That seems low-tech, but it has worked well in all of my controllers over the years.


    Projects like this often come down to cabling. Each light needs a wire that goes from the controller out to the light. I did a random layout of lights on the tree, and put them on 5 different ropes so they could easily be pulled up the tree on a pulley.

    Here are the 15 lights and all the extension cords required to hook them up. In this case, I used 24 15' extension cords because it was cheaper and easier than building the cables up from scratch.

    That's all for now.

  • Eric Gunnerson's Compendium

    Benchmarking, C++, and C# Micro-optimizations


    Two posts (1 2) on C# loop optimization got me thinking recently.

    Thinking about what I did when I first joined Microsoft.

    Way back in the spring of 1995 or so (yes, we did have computers back then, but the Internet of the time really *was* just a series of tubes), I was on the C++ compiler test team, and had just picked up the responsibility for running benchmark tests on various C++ compilers. I would run compilation speed and execution speed tests in controlled environments, so that we could always know where we were.

    We used a series of “standard” benchmarks – such as Spec – and a few of our own.

    Because execution speed was one of the few ways (other than boxes with lots of checkmarks) that you could differentiate your compiler from the other guy’s, all the compiler companies invested resources in being faster at the benchmarks.

    The starting point was to look at the benchmark source, the resultant IL, and the final machine code, and see if you could see any opportunity for improvement. Were you missing any optimization opportunities?

    Sometimes, that wasn’t enough, so some compiler writers (*not* the ones I worked with) sometimes got creative.

    You could, for example, identify the presence of a specific expression tree that just “happened to show up” in the hot part of a benchmark, and bypass your usual code generation with a bit of hand-tuned assembly that did things a lot faster.

    Or, with a little more work, you could identify the entire benchmark, and substitute another bit of hand-tuned assembly.

    Or, perhaps that hand-tuned assembly didn’t really do *all* the work it needed to, but took a few shortcuts and still managed to return the correct answer.

    For some interesting accounts, please text “compiler benchmark cheating” to your preferred search engine.

    As part of that work, I got involved a bit in the writing and evaluation of benchmarks, and I thought I’d share a few rules around writing and interpreting micro-benchmarks. I’ll speak a bit about the two posts – which are about looping optimizations in C# – along the way. Just be sure to listen closely, as I will be speaking softly (though not in the Rooseveltian sense…)

    Rule 0: Don’t

    There has always been a widespread assumption that the speed of individual language constructs matters. It doesn’t.

    Okay, it does, but only in limited cases, and frankly people devote more time to it than it deserves.

    The more productive thing is to follow the agile guideline and write the simplest thing that works. And note that “works” is a bit of a weasely word here – if you write scientific computing software, you may have foreknowledge about what operations need to be fast and can safely choose something more complicated, but for most development that is assuredly not true.

    Rule 1: Do something useful

    Consider the following:

    void DoLoop()
    {
        for (int x = 0; x < XMAX; x++)
            for (int y = 0; y < YMAX; y++)
                ;
    }

    void TimeLoop()
    {
        // start timer
        for (int count = 0; count < 1000; count++)
            DoLoop();
        // stop timer
    }

    If XMAX is 1000, YMAX is 1000, and the total execution time is 0.01 seconds, what is the time spent per iteration?

    Answer: Unknown.

    The average C++ optimizer is smarter than this. That nested loop has no effect on the result of the program, so the compiler is free to optimize it out (the .NET JIT may not have time to do this).

    So, you modify the loop to be something like:

    void DoLoop()
    {
        int sum = 0;

        for (int x = 0; x < XMAX; x++)
            for (int y = 0; y < YMAX; y++)
                sum += y;
    }

    The loop now has some work done inside of it, so the loop can’t be eliminated.

    Rule 2: No, really. Do something useful

    However, the numbers won’t change. The call to DoLoop() has no side effects, so the entire call can be safely eliminated.

    To make sure your loop is really a loop, there needs to be a side effect. The best bet is to have a value returned from the method and write it out to the console. This has the added benefit of giving you a way of checking whether things are working correctly.
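    Putting both rules together might look like this (a sketch in C; the timer comments stand in for whatever timing mechanism you actually use):

```c
#include <stdio.h>

#define XMAX 1000
#define YMAX 1000

/* Returning the sum gives the loop a result the compiler can't discard. */
long DoLoop(void)
{
    long sum = 0;
    for (int x = 0; x < XMAX; x++)
        for (int y = 0; y < YMAX; y++)
            sum += y;
    return sum;
}

/* Accumulating and returning the total keeps the calls from being
   eliminated; the caller should print it to the console. */
long long TimeLoop(void)
{
    long long total = 0;
    /* start timer */
    for (int count = 0; count < 1000; count++)
        total += DoLoop();
    /* stop timer */
    return total;
}
```

    Printing the returned total also gives you a sanity check: if the number changes between runs of the "same" benchmark, something is wrong.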

    Rule 3: Benchmark != Real world

    There are lurking effects that invalidate your results. Your benchmark is likely tiny and places very different memory demands on the system than your real program does.

    Rule 4: Profile, don’t benchmark


    C# loop optimization

    If you are writing code that needs the utmost in speed, there is an improvement to be had using for rather than foreach. There is also improvement to be had using arrays rather than lists, and unsafe code and pointers rather than array indexing.

    Whether this is worthwhile in a specific case depends exactly on what the code is doing. I don’t see a lot of point in spending time measuring loops when you could spend time measuring the actual code.

  • Eric Gunnerson's Compendium

    Holiday light project 2008...


    I've been searching for a new project to do for this season's holiday lights. I typically have four or five ideas floating around my head, and this year is no different.

    Lots of choices, so I've had to come up with a "rule" about new projects. The rule is that the project has to be in the neighborhood of effort-neutral. It already takes too long to put up (and worse, take down) the displays we already have, and I don't want to add anything that makes that worse. Oh, and they can't take too much power, because I'm already on a power budget.

    Unless, it's, like, especially cool.

    I had an idea that met all my criteria. It was small - small enough to be battery powered, if I did my power calculations properly, and was going to be pretty cool.

    It was, unfortunately, going to be a fair pain-in-the-butt to build - the fabrication was a bit complex, and the plan was to build a number of identical pieces. Oh, and it required me to choose the perfect LEDs from the 15 thousand that Mouser carries.

    So, I hadn't made much progress.

    Then, one day I was waiting for some paint to be tinted at my local home store, and I came across these.

    They're holiday lights. Jumbo-sized holiday lights.  The bulb part is made of colored plastic, and measures about 7" high. At the bottom there is a large fake lamp socket. Inside of all of it is a genuine C7 bulb of the appropriate color.

    I bought 3 sets, 15 in all.

    To be different, I wanted to build these as self-contained devices, with a separate microcontroller in each of the light bases. The microcontrollers I'm using cost about $1 each, so there isn't too much cost there, but the big challenge is a power supply. Generally, I build a linear power supply, which is simple and performs well, but you need an expensive and bulky transformer.

    There is a way around that, with the reasonably named "transformerless power supply". Realistically, a better name would be the "high-voltage shock-o-matic", because it involves hooking things directly to the AC line, can only supply a small amount of current, is inefficient, and is hard to troubleshoot. Oh, and if one component fails you get 150 volts instead of the 5 volts you were expecting.

    I decided to build one of these, so I ordered up some parts, wired it up, plugged it in, and immediately lost the magic smoke from one of the resistors. Turns out I miscalculated, and I needed a much-more-expensive power resistor.

    Thinking about it some more, I decided that since I still needed power to each bulb - and therefore a wire to each bulb - it was simpler to just build a simple system with one microcontroller.

  • Eric Gunnerson's Compendium


    I've been doing some electronics recently, and perhaps I'm therefore more likely to treat this kindly, but I have to say that I think it's brilliant.
  • Eric Gunnerson's Compendium

    What's going on now...

    If you want to keep track of what's going on right now, you need this site...
  • Eric Gunnerson's Compendium



    Last Saturday, we were invited to a Halloween party at a friend of a friend. I only decided to go Friday night, so I'd put essentially zero effort into thinking about a costume.

    The wife was going as a vampire (we had a long discussion on what the feminine form of "vampire" was. I tended towards "vampress", mostly because of how silly it sounded), and I thought of doing something that fit together with that thematically. A lame costume that fits together thematically with another one is much better than a lame one that sits by itself.

    After a while, something suggested itself, and things came together pretty well. You can see the results here:

    (Like pilots in the 1950s who wore eyepatches to preserve eyesight in their dominant eye in case of a nuclear explosion, I suggest covering one eye before clicking on the following link).


    I'm hoping that it's obvious who I am.

    The party itself was pretty good. The hosts hired a magician who walked around and did close-up magic to entertain the crowd. He was talented, though frankly given the sobriety of the majority of the guests it could have been me doing the tricks.

    While we were there a few of the ladies took it on themselves to vandalize me.

  • Eric Gunnerson's Compendium

    wireless LCD photo frame...


    I want to buy a couple of wireless LCD photo frames for my relatives. It needs to have a decent display, speak wireless, and hook up to a smugmug gallery.

    Any recommendations?

  • Eric Gunnerson's Compendium

    Versioning in HealthVault


    Download the sample code from MSDN Code Gallery.

    [Note that EncounterOld has been renamed to EncounterV1 in newer versions of the SDK]

    “Versioning” is an interesting feature of the HealthVault platform that allows us to evolve our data types (aka “thing types”) without forcing applications to be rewritten. Consider the following scenario:

    An application is written using a specific version of the Encounter type. To retrieve instances of this type, you write the following:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)


    That returns all instances of the type with the Encounter.TypeId type id. When the XML data comes back into the .NET SDK, it creates instances of the Encounter wrapper type from them, and that’s what’s in the items collection.

    Time passes, cheese and wine age, clothes go in and out of style. The HealthVault data type team decides to revise the Encounter type, and the change is a breaking one in which the new schema is incompatible with the existing schema. We want to deploy that new version out so that people can use it, but because it’s a breaking change, it will (surprise surprise) break existing applications if we release it.

    Looking at our options, we come up with 3:

    1. Replace the existing type, break applications, and force everybody to update their applications.
    2. Leave the existing type in the system and release a new type (which I’ll call EncounterV2). New applications must deal with two types, and existing applications don’t see the instances of the new type.
    3. Update all existing instances in the database to the new type.

    #3 looks like a nice option, were it not for the fact that some instances are digitally signed and we have no way to re-sign the updated items.

    #1 is an obvious non-starter.

    Which leaves us with #2. We ask ourselves, “selves, is there a way to make the existence of two versions of a type easier for applications to deal with?”

    And the answer to that question is “yes, and let’s call it versioning”…


    Very simply, the platform knows that the old and new versions of a type are related to each other, and how to do the best possible conversion between the versions (more on “best possible” in a minute…). It uses this information to let applications pretend that there aren’t multiple versions of a type.

    Down Versioning

    The first scenario is existing applications that were written using the original version of the Encounter type (which we’ll call EncounterV1 for clarity), and what happens when they come across a health record that also has EncounterV2 instances in it. Here’s a graphical indication of what is going on:


    This application is doing queries with the EncounterV1 type id. When a query is executed, the platform knows that the encounter type has multiple versions, and converts the query for the V1 type to a query for all encounter types (both EncounterV1 and EncounterV2).

    The platform finds all the instances of those two types, and looks at each one. If it’s an EncounterV1 instance, it just ships it off to the application.

    But, if it’s an EncounterV2 instance, the platform knows (by looking at the application configuration) that this application doesn’t know what to do with an EncounterV2. It therefore takes the EncounterV2 instance, runs a transform on the EncounterV2 xml to convert it into EncounterV1 xml, and ships that across to the application. The data is “down versioned” to the earlier version.

    The application is therefore insulated from the updated version – it sees the instances using the type that the application was coded against.

    Down version conversions are typically lossy – there are often fields in the V2 version that are missing in the V1 version.  The platform therefore prevents updating instances that are down versioned, and will throw an exception if you try. You can look at the IsDownVersioned property on an instance to check whether updating is possible to avoid the exception.

    Up Versioning

    The second scenario is an application written to use the new EncounterV2 type:


    This time, the V1 data is transformed to the V2 version. An application can check the aptly-named IsUpVersioned property to tell whether an instance is up versioned.

    Higher versions typically contain a superset of the data in the old version, and the platform therefore allows the instance to be “upgraded” (i.e. updated to the new version).

    However, doing so will prevent an application using the V1 version from being able to update that instance, which may break some scenarios. The application should therefore ask the user for confirmation before upgrading any instances.

    This would be a good time to download the sample applications and run the EncounterV1 and EncounterV2 projects. Because this behavior is controlled by the application configuration, each application has its own certificate, which will need to be imported before the application can be run.

    Add some instances from both V1 and V2, and see how they are displayed in each application. Note that EncounterV1 displays both the instances it created and the down-versioned instances of the EncounterV2 type, and that EncounterV2 displays the instances it created and up-versioned instances of the EncounterV1 type.

    Version-aware applications….

    In the previous scenarios, the application was configured to only use a single version of a type.

    In some cases, an application may want to deal with both versions of a data type simultaneously, and this is known as a “version-aware” application.

    We expect such applications to be relatively uncommon, but there are some cases where versioning doesn’t do everything you want.

    One such case is our upcoming redesign of the aerobic session type. The AerobicSession type contains both summary information and sample information (such as the current heart rate collected every 5 seconds). In the redesign, this information will be split between the new Exercise and ExerciseSamples types. AerobicSession and Exercise will be versioned, but there will be no way to see the samples on AerobicSession through Exercise, nor will you be able to see ExerciseSamples through AerobicSession (there are a number of technical reasons why this is very difficult – let me know if you want more details). Therefore, applications that care about sample data will need to be version-aware.

    Writing a version-aware application adds one level of complexity to querying for data. Revisiting our original code, this time for an application that is configured to access both EncounterV1 and EncounterV2:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)


    When we execute this, items contains a series of Encounter items. The version of those items is constrained to the versions that the application is configured (in the Application Configuration Center) to access. If the version in the health record is a version that the application is configured to access, no conversion is performed, *even if* the version is not the version specified by the filter.

    That means that the items collection may contain instances of different versions of the type – in this case, either Encounter or EncounterOld instances. Any code that assumes that instances are only of the Encounter type won’t work correctly in this situation. You may have run into this if you are using the HelloWorld application, as it has access to all versions of all types.

    Instead, the code needs to look at the instances and determine their types:

    foreach (HealthRecordItem item in items)
    {
        Encounter encounter = item as Encounter;
        if (encounter != null)
        {
            // handle the current Encounter version
        }

        EncounterOld encounterOld = item as EncounterOld;
        if (encounterOld != null)
        {
            // handle the original EncounterOld version
        }
    }

    This is a good reason to create your own application configuration rather than just writing code against HelloWorld.

    Controlling versioning

    The platform provides a way for the application to override that application configuration and specify the exact version (or versions) that the application can support. For example, if my application wants to deal with the new type of Encounter all the time, it can do the following:


    which will always return all versions of encounter instances as the Encounter type. 

    EncounterBoth is a version-aware application in the sample code that handles the Encounter and EncounterV1 types. It will also let you set the TypeVersionFormat.

    Asking the platform about versioned types…

    If you ask nicely, the platform will be happy to tell you about the relationship between types. If you fetch the HealthRecordItemTypeDefinition for a type, you can check the Versions property to see if there are other versions of the type.

    The VersionedTypeList project in the sample code shows how to do this.

    Naming Patterns





    There are a couple of different naming patterns in use. The types that were modified before we had versioning use the “Old” suffix (MedicationOld and EncounterOld) to denote the original types. For future types, we will be adopting a version suffix on the previous type (so, if we versioned HeartRate, the new type would be named HeartRate, and the old one HeartRateV1). We are also planning to rename MedicationOld and EncounterOld to follow this convention.

    We will also be modifying the tools (application configuration center and the type schema browser) to show the version information. Until we get around to that, the best way is to ask the platform as outlined in the previous section.

  • Eric Gunnerson's Compendium

    100 skills every one should know...


    Popular Mechanics has a list of "100 skills every one should know". How many do you have?

     (I'll give my count later...)

  • Eric Gunnerson's Compendium

    Triathlon report

    For your "enjoyment", a report on the Triathlon I did last Sunday...
  • Eric Gunnerson's Compendium

    TV Calibration


    A few years ago, I bought one of the last high-end rear-projection TVs based on CRTs - a Pioneer Elite 620HD. I did some basic calibration with the Avia disc and did some other minor adjustments, but never got around to getting a real calibration done.

    Calibration is the process of getting the TV to be as close as possible to NTSC settings - the same settings that were used when the program was created. That means getting colors and gray levels as close as possible to what they should be.

    But, if it's a high-end TV, why isn't it set from the factory to meet NTSC settings? Well, the simple fact is that TV manufacturers play games to make their TVs stand out in showroom settings. That generally means pictures that are far brighter than they should be (with corresponding poor black levels), colors are off, and the picture is over-sharpened.

    Some newer TVs let you choose a setting that's close to NTSC, but in most cases, calibration can make a big difference. If you have an LCD or plasma set, start with a disc like Avia and see what you get out of it (Avia also has calibration for surround sound, which may also be useful).

    In my case, my set needed cleaning, focus adjustments (because it's a rear-projection set), convergence (because it has 3 CRTs in it, one each for red, green, and blue), and geometry (because it uses CRTs). There are local techs in my area who can do calibration, but because my set is fairly rare these days, I wanted an expert, and hooked up with David Abrams from Avical, working out of LA, when he was on a trip to the Seattle area.

    David is a really nice guy and did a wonderful job. He spent 90 minutes on the geometry of the set (making sure straight lines are straight in all 3 colors), and about 60 minutes on the convergence (aligning all three colors). I only have about 5% of the patience he has when he's doing those kinds of things.

    So, after cleaning the set, setting the focus, setting the geometry and convergence, he was on to setting the gray levels and colors. With his test pattern generator (running through the TV's component inputs, which is all I use...) and his $18K color analyzer, that part went pretty quickly. He then worked through all my sources (Tivo HD, DVD, XBox 360) and verified that everything was set right. Finally, we looked at some source and he did some final tuning.

    The results were pretty impressive.

    The downside is that the differences between HD feeds are now really obvious - some look great, but others really fall short.


  • Eric Gunnerson's Compendium

    A box of toys...


    I am a box of toys and notions. Among other things, I contain a hard box full of legos and a gross of superballs. On my outside I have a series of labels that tell me where I have lived.

    As a proud corrugated-American, I'd like to share my story.


    My location for the last 24 hours. My owner has moved to this office to be closer to the rest of the HealthVault partner team, which he has joined due to a recent organizational optimization. I like this location because lots of people stop by, but I'm worried it will be loud because of the standby diesel generator right outside.


    I moved back to main campus to this office. It faces east and has a decent view of a parking lot. My owner works on the HealthVault partner team.


    I love this office, which is big and has a nice view to the south. I do worry about degradation of my structure because the sunlight makes it hot and sometimes the A/C fails. My owner works on the HealthVault partner team, and enjoys the "small team" atmosphere.


    This office is on west campus. It's okay, but the building isn't great. My owner works on the Windows Live pictures and video team.


    I moved from one end of the hall to the other, and my corners are getting banged up. My owner is on a newly-reorganized team and isn't sure what he works on right now.


    I moved two offices down to this one - I can look out through the window and watch my owner when he sits outside at lunch, though he does complain about the cafeteria now and then. My owner is on the Windows Movie Maker team working on the DVD Maker user interface.


    Welcome to the Movie Maker team, and to a new area of campus. Building 50 stands alone by itself and is a bit inconvenient to get to, but it seems nice enough, and there are lots of boxes next door that I can spend time with.


    A new office in the same building, this time looking at a wall of plants. My owner is a PM on the C# team, and is doing language design.


    Despair. My owner is happy with his assignment as the test lead on the C# compiler, but this office faces south, gets very hot, and has a lovely view of the top of a cafeteria.


    A pretty good office in a really nice building. I face north towards a set of vegetation. My owner co-owned office assignments on this move and got to choose before other people did, so he got a nice one. He's a test lead for the C++ compiler front end, though there are rumors that there's something new coming along.


    My first window office, with a beautiful view of a forest. My owner is very happy to get such a nice window office, but he finds it hard not to get lost in the 1-4 building complex. He's a test lead for the C++ compiler.


    I was never in building 25, but my owner often tells me stories about it late at night. He says he had two different offices there, both of them pretty good.

  • Eric Gunnerson's Compendium

    Fun with HealthVault transforms


    It is sometimes useful to be able to display data in an application without having to understand the details of that data.

    To make this easier, HealthVault provides a couple of features: transforms, and the HealthRecordItemDataGrid class.

    Discovering transforms

    To find out what transforms are defined for a specific type, we just ask the platform:

        HealthRecordItemTypeDefinition definition =
                 ItemTypeManager.GetHealthRecordItemTypeDefinition(Height.TypeId, ApplicationConnection);

    In this case, we get back the definition of the Height thing type. In the returned HealthRecordItemTypeDefinition (which I’ll just call “type definition” for simplicity), you can find the schema for the type (in the XmlSchemaDefinition property), and two properties related to transforms.

    SupportedTransformNames lists the names of the transforms. In this case, the list is “form”, “mtt”, and “stt” (more on what these do later).

    TransformSource contains the XSL transforms themselves, keyed with the name of the transform.
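    Putting those two properties together, a minimal sketch that lists the transforms available for the Height type might look like this (it assumes the same ApplicationConnection used in the snippet above):

        // Sketch: enumerate the transforms defined for the Height type.
        // Assumes an authenticated ApplicationConnection, as above.
        HealthRecordItemTypeDefinition definition =
            ItemTypeManager.GetHealthRecordItemTypeDefinition(Height.TypeId, ApplicationConnection);

        foreach (string transformName in definition.SupportedTransformNames)
        {
            // For Height, this writes out "form", "mtt", and "stt".
            Console.WriteLine(transformName);
        }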

    Applying transforms

    If you want to apply the transform, there are two ways to do it.

    The first way is to do it on the client side:

        string transformedXml = definition.TransformItem("mtt", heightInstance);

    That’s pretty straightforward, though you do need to fetch the type definition for the type first. You could also apply the transform using the .NET XML classes, if you wanted to do more work.
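    As a rough sketch of that "more work" route, here's one way to run a transform from TransformSource through the standard .NET XSLT classes. The transformXsl and itemXml variables are hypothetical stand-ins for the XSL text and the item's XML:

        using System.IO;
        using System.Text;
        using System.Xml;
        using System.Xml.Xsl;

        // Sketch: apply a transform manually with the .NET XML classes.
        // "transformXsl" and "itemXml" are hypothetical stand-ins for the
        // XSL text (from definition.TransformSource) and the item's XML.
        XslCompiledTransform transform = new XslCompiledTransform();
        using (XmlReader xslReader = XmlReader.Create(new StringReader(transformXsl)))
        {
            transform.Load(xslReader);
        }

        StringBuilder output = new StringBuilder();
        using (XmlReader itemReader = XmlReader.Create(new StringReader(itemXml)))
        using (XmlWriter writer = XmlWriter.Create(output, transform.OutputSettings))
        {
            transform.Transform(itemReader, writer);
        }

        string transformedXml = output.ToString();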

    The second option is to ask the platform to do the transform for you. You do this by specifying the transform name as part of the filter definition:

        HealthRecordFilter filter = new HealthRecordFilter(Height.TypeId);
        filter.View = new HealthRecordView();
        filter.View.Sections = HealthRecordItemSections.Core;
        filter.View.TransformsToApply.Add("mtt");

    and then the item will already have the transformed text inside of it:

        XmlDocument mttDocument = heightInstance.TransformedXmlData["mtt"];

    Available transforms

    Each type defines a set of transforms that let you look at a data instance in a more general way. They are named “mtt”, “stt”, and “form”.

    “mtt” transform

    The mtt (which stands for “multi type transform”) generates the most condensed view of data – it returns values for properties that are present on all data types, and a single summary string.

    For example, asking for the mtt transform of a Height instance returns the following:

    <row wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1" 
         wc-date="2008-01-22 11:12:42" 
         wc-type="Height Measurement" 
         summary="1.9405521428867 m" />

    The “wc-“ attributes are the common data items across all instances. The most interesting piece of data is the summary attribute, which gives you the (surprise!) summary string for the instance.

    “stt” transform

    The stt (“single type transform”) is similar to the mtt, but instead of a single summary attribute there are a series of attributes that correspond to the properties on the data type. It will generally contain an attribute for every important property, but if the property is a less important detail and/or the type is very complex, this may not be true.

    For our Height instance, we get this for the stt transform:

    <row wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1"
         wc-date="2008-01-22 11:12:42"
         wc-type="Height Measurement"
         when="2008-01-22 11:12:42"
         display="1.9405521428867 m"
         height-in-m="1.9405521428867" />

    How do we know what attributes are here and what to do with them?

    That information is stored in the ColumnDefinitions property of the type definition. Each of these (an ItemTypeDataColumn instance) corresponds to one of the attributes on the row created by the STT transform.

    The following code can be used to pull out the values:

        XmlNode rowNode = item.TransformedXmlData["stt"].SelectSingleNode("data-xml/row");

        foreach (ItemTypeDataColumn columnDefinition in definition.ColumnDefinitions)
        {
            // Look up the attribute that holds this column's value.
            XmlAttribute columnValue = rowNode.Attributes[columnDefinition.ColumnName];
        }

    This is the mechanism that the HealthVault shell uses to display detailed information about an instance. There is additional information in the ItemTypeDataColumn that it uses:

    The Caption property stores a textual name for the column.

    The ColumnTypeName property stores the type of the column.

    The ColumnWidth contains a suggested width to use to display this information.

    The VisibleByDefault property defines whether the column is visible in the shell view by default (the wc-<x> ones typically are not, with the exception of wc-date).
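    For example, a sketch that dumps the column metadata for the default-visible columns, using only the properties described above, might look like:

        // Sketch: print header information for the columns that the
        // shell would show by default.
        foreach (ItemTypeDataColumn column in definition.ColumnDefinitions)
        {
            if (!column.VisibleByDefault)
                continue;

            Console.WriteLine("{0} ({1}, suggested width {2})",
                column.Caption, column.ColumnTypeName, column.ColumnWidth);
        }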


    If you don’t want to decode all the column information yourself, you can use the HealthRecordItemDataGrid in your project.

    Put the following after the page directive in your .aspx file:

        <%@ Register TagPrefix="HV" Namespace="Microsoft.Health.Web"  Assembly="Microsoft.Health.Web" %>

    and then put an instance of the grid in the appropriate place:

        <HV:HealthRecordItemDataGrid ID="c_itemDataGrid" runat="server" />

    You then create a filter that defines the data to show in the grid in the page load handler:

        c_itemDataGrid.FilterOverride = new HealthRecordFilter();

    and the grid will be rendered using the STT transform view. If you want it to use the MTT transform view, you can set the TableView property on the grid to MultipleTypeTable, and it will show a summary view. You will also see this view if the filter returns more than one thing type.

    The form transform

    The final transform is the form transform. This transform exists on most, though not all types (we’re working to add form transforms where they’re absent). It provides an HTML view of the type.

    For our Height instance, we get the following from the form transform:

    <div class="xslThingTitle" id="genThingTitle">Height</div>
    <div class="xslThingValue">1.9405521428867 m</div>
    <table class="xslThingTable">
        <tr>
            <td class="xslTitleColumn">Date</td>
            <td class="xslValueColumn">2008-01-22 11:12:42</td>
        </tr>
    </table>

    which, when rendered, looks something like this:

    1.9405521428867 m
    Date 2008-01-22 11:12:42

    Other transforms

    Some thing types will return a list that contains other transforms with names like “wpd-F5E5C661-26F5-46C7-9C6C-7C4E99797E53” or “hvcc-display”. These transforms are used by HealthVault Connection Center, for things like transforming WPD data into the proper XML format for a HealthVault instance.

  • Eric Gunnerson's Compendium

    Fantastic contraption


    I apologize ahead of time

    Fantastic Contraption...

  • Eric Gunnerson's Compendium

    In praise of boredom...


    Raymond wrote an interesting post about the erosion of the car trip experience.

    Along with the desire to shield our kids from any discomfort, I think there's a big desire to shield them from boredom.

    Boredom is part of being an adult, and I think learning to deal with it is an important part of growing up.

    Back when I was a kid, every year or so we took a long trip from Seattle to Boise to visit my grandparents. Though we usually made the trip over in the night (to avoid the heat (no AC, of course)), there were lots of hours of watching the "scenery" roll by (as an adult, I find the area along the Columbia River to be striking, but as a kid it's a whole lot a nothin'), and then similar hours while we were there.

    That meant we had to learn how to amuse ourselves and not annoy each other too much.

    But if you have video every time you're in the car, you don't learn how to deal with being bored, and (as a parent) you miss some great opportunities for conversation, not to mention the chance to inflict your musical tastes on your offspring.



  • Eric Gunnerson's Compendium

    Dr. Horrible...


    Last night I fixed the QuickTime player on my somewhat aging home machine by turning off DirectX acceleration, and the family sat down to watch Dr. Horrible's Sing-Along Blog, which I had purchased through iTunes for the princely sum of $3.99.

    Dr. Horrible, if you aren't "in the know", was created by Joss Whedon, of Firefly fame. Firefly was a sci-fi western, and Dr. Horrible is an internet comic book superhero musical. It stars Felicia Day as the love interest, Nathan Fillion (aka Mal) as Captain Hammer, and Neil Patrick Harris (who will always be "Doogie" to me...) as the title character.

    Both the writing and the music are top-notch, as are the performances by the main characters. I'm hoping there will be more than the initial 3 episodes.

