Posts
  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #2: Hello World

    • 1 Comments

    In this post, we’ll create an account on the HealthVault system, open up the HelloWorld sample application, and verify that it works on our system.

    There are two live HealthVault platforms.

    • A developer platform at http://www.healthvault-ppe.com. This platform is intended to store test data rather than real consumer data, and is the one that should be used to develop an application. When an application is all done, there is a process to deploy (aka “go live”) with the application, and it will then be able to run against…
    • The consumer platform (which we sometimes confusingly refer to as the “live” platform) lives at http://www.healthvault.com.

    All of our tutorial examples will be talking to the developer platform.

    Examine the source code

    Start Visual Studio, then open the HelloWorld application at C:\Program Files\Microsoft HealthVault\SDK\DotNet\WebSamples\HelloWorld\website.

    In the solution explorer, expand Default.aspx and double-click on default.aspx.cs. This shows the code for the main page. Notice that the page class is derived from HealthServicePage – that class handles the initial communication handshake between your application and HealthVault, sending the user to the right page to log into the system, etc. That all happens behind the scenes before the Page_Load handler is called.
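
    As a rough sketch, a page along those lines looks like this (the class name and the c_greeting label are made-up names for illustration; the real HelloWorld page does more):

        using System;
        using Microsoft.Health.Web;

        public partial class HelloWorldPage : HealthServicePage
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // By the time we get here, the handshake and the login redirect
                // have already happened, so PersonInfo describes the signed-in user.
                c_greeting.Text = "Hello, " + PersonInfo.Name;
            }
        }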

    Open web.config, and browse through it. If you find the following line:

        <sessionState mode="InProc" cookieless="true"/>

    change it to:

        <sessionState mode="InProc"/>

    ApplicationId specifies a GUID that uniquely identifies a specific application to the platform. ShellUrl and HealthServiceUrl define which instance of the platform to talk to.
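
    The interesting entries are in the appSettings section and look roughly like this (the GUID is the Hello World application id – the same one that shows up in the certificate name later – and the URL values here are only illustrative; keep whatever shipped in the sample’s web.config):

        <appSettings>
            <add key="ApplicationId" value="05a059c9-c309-46af-9b86-b06d42510550"/>
            <add key="ShellUrl" value="https://account.healthvault-ppe.com/"/>
            <add key="HealthServiceUrl" value="https://platform.healthvault-ppe.com/platform/"/>
        </appSettings>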

    There’s also a proxy setting at the end of the file – if you are running on a network with a proxy server, you will need to change this so that the application can get outside the firewall.

    Run the application

    Hit <F5> to start the program in the debugger.

    That will build the solution, start up the ASP.NET development web server, and start debugging default.aspx. A browser session will open up, and you’ll find yourself on the login page for the HelloWorld application.

    All authentication and authorization in HealthVault applications is performed by a set of web pages that live on the HealthVault server. These web pages are collectively known as the “HealthVault Shell”.

    When you ran the application, the startup code in HelloWorld realized that it didn’t know who the current user was, and redirected off to the appropriate HealthVault shell page.

    At this point, you will need to create a test HealthVault account. Authentication is done with a Windows Live ID. If you need to create one – and for test purposes it’s probably a good idea not to use an account you use for something else – go do that now.

    Once you’ve created that account, enter the credentials on the login screen. You will be prompted to create a HealthVault account, and then (when you click continue), will be prompted to authorize the Hello World application to access the information in your record.

    Before an application can run against the HealthVault platform, it must be configured on that platform. That configuration stores some descriptive information about the application (name, etc.), and also the precise data type access that the application requires. For example, an application that tracks a person’s weight might need to be able to store and retrieve weight measurements, but only read height measurements.

    The authorization page that you are currently looking at details this information for the user, who can then make a decision about whether to grant the application that access. This page is atypical because the Hello World application asks for access to all types to make things more convenient, but real applications will only specify the subset of access required.

    Choose “approve and continue”, and you will be redirected back to a page on your system.

    This will be a page that says “Server error in ‘/website’ application”. If you dig a little more, you will find the following in the exception text:

    SecurityException: The specified certificate, CN=WildcatApp-05a059c9-c309-46af-9b86-b06d42510550, could not be found in the LocalMachine certificate store, or the certificate does not have a private key.

    Every time an application runs, it needs to prove its identity to the platform through a cryptographic signature. To do this, it needs a private key on the machine where the application is running. It will use that key to sign the data, and the platform will then verify the signature using the public key that was registered as part of the application configuration process.

    The Hello World application is already configured on the developer platform, so we just need to register its certificate on the client.

    Register the certificate

    To do this, we’ll need to get the certificate into the local machine’s certificate store. Go to the location on disk where HelloWorld lives, go to the cert directory, and you’ll find a .pfx file, which contains both the public and private keys.

    Start up the certificate manager using the following shortcut:

    C:\Program Files\Microsoft HealthVault\SDK\Tools\ComputerCertificates.msc

    Right click on certificates, choose “All tasks”, then “Import”. Specify the .pfx file at:

    C:\Program Files\Microsoft HealthVault\SDK\DotNet\WebSamples\HelloWorld\cert\HelloWorld-SDK_ID-05a059c9-c309-46af-9b86-b06d42510550.pfx

    Then hit Next repeatedly and finish; that completes the import of the certificate.

    If you use a proxy to get to the internet and there is a password associated with it, you may need to modify the config file for it. In the sdk/tools directory, find ApplicationManager.exe.config, and add the following:

    <system.net>
        <defaultProxy enabled="true" useDefaultCredentials="true">
            <proxy usesystemdefault="True"/>
        </defaultProxy>
    </system.net>

    At this point, you should be able to re-run the application (or just hit F5 in the browser), and the HelloWorld application should then work. Note that the certificate is only accessible to the user who imported it – access for other accounts (such as the one IIS runs under) can be granted through the winhttpcertcfg utility (also in the tools directory), or through a utility that we’ll discuss in the future.
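
    For example, to grant the account your web server runs under access to the Hello World key, something along these lines should work (the account name is whatever your IIS application pool actually uses – often NETWORK SERVICE):

        winhttpcertcfg -g -c LOCAL_MACHINE\MY -s "WildcatApp-05a059c9-c309-46af-9b86-b06d42510550" -a "NETWORK SERVICE"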

    Next time, we’ll start on our application itself.

    Introduction to HealthVault Development #3: Configuring our Application

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #1: Introduction

    • 2 Comments

    Welcome to the tutorial. In this tutorial, we will develop a HealthVault application from scratch.

    My intention is to add new tutorial chapters on a roughly weekly basis, though I have a few queued up and ready to go.

    If you haven’t located the HealthVault Developer Center on MSDN, start by spending some time there. You can find a link to the SDK on the left side of the page. Download the SDK and install it.

    You will also need to have Microsoft Visual Studio installed on your machine (either 2005 or 2008). IIS is optional for development but may be useful for testing.

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #0: Background

    • 2 Comments

    Coding without a net

    Over the years, I’ve attended a number of talks that show you how easy it is to use a new component, using canned projects, pre-written code that is cut-and-pasted in, and contrived scenarios.

    In the early years of .NET, I saw a different sort of talk (which I believe was done either by Scott Guthrie or Rob Howard) – one that started with File->New project, and just involved coding from scratch. It was very impressive, because you could see exactly what was going on, and you knew it was as easy (or as hard) as it looked. I’ve come to call this approach “coding without a net”, which I will gladly take credit for coining despite the fact that I’m sure I stole it from somebody.

    In the spring of 2008, I set out to write such a talk for HealthVault, to be presented at TechEd 2008 in late May and the HealthVault solutions conference a week later. I wasn’t totally successful at avoiding pre-written code, partly because I didn’t want to write UI code, and partly because there are a few parts of the HealthVault API that still need a little polishing, but overall, I was pleased with the result.

    This is the written version of that talk. My goal is to use the same progression that I did in the talk, and perhaps expand on a few topics that had to be limited due to time constraints.

    Installments will appear on my blog periodically, though those who ever read my C# column on MSDN may remember that I have an unconventional definition for “periodically”.

    Introduction to HealthVault Development #1: Introduction

  • Eric Gunnerson's Compendium

    Retro gaming at its best...

    • 1 Comments

    Back when I was in high school, in the early 1980s, I was first introduced to computer games – what we called "arcade games" at the time.

    There were three common systems for this.

    First of all, there was the TRS-80. You can read about all the exciting details in the link, but the system could display text at 64 x 16, and graphics at a resolution of 128 x 48 if you used the special 2x3 grid characters. Each pixel was either black or an interesting blue/white (the phosphor used in the monitor was not pure white).

    In addition to not having any storage built in, it also had no sound output whatsoever. However, it was well-renowned for putting out tremendous amounts of RF interference, and somebody discovered that if you put an AM radio next to it, you could, through use of timing loops, generate interference that was somewhat related to what was going on in the game.

    But it was cheap. Not cheap enough for me to afford one, but cheap.

    The second system was the Apple II, sporting 280x192 high-resolution color graphics. Well, 'kindof-color' graphics - any pixel could be white or black, but only odd ones could be green or orange, and only even ones could be violet or blue.

    The sound system was a great improvement over the TRS-80, with a speaker that you could toggle to off or on. Not off or on with a tone, just off or on - any tone had to be done in software synthesis.

    Finally, the third system was the Atari 800 and 400. It was far more sophisticated - not only did it have the 6502 as the Apple II did, it had a separate graphics processor (named Antic) that worked its way through a display list, a television interface processor that implemented player-missile graphics (aka "sprites") and collision detection in hardware, and a third custom chip that handled the keyboard, 4-channel sound output, and controllers (we called them "paddles" and "joysticks" back then).

    It was light-years beyond the Apple in sophistication, which only shows you the importance of cheapness over elegance of design and implementation.

    Oh, and you could plug *cartridges* into it, so you didn't have to wait for your game to load from the cassette (or later, floppy disk) before you played it.

    My brother-in-law bought an Atari 400 (the younger sibling of the 800), and of course he had a copy of Star Raiders, arguably one of the first good first-person shooters. He also had a copy of Miner 2049er, a 2-D walker/jumper that's a little bit like Donkey Kong and a bit like Pac-Man.

    It was very addictive, and put 10 levels into a small cartridge.

    It was followed in 1984 by "Bounty Bob Strikes Back", featuring 30 levels.

    I played both a fair bit until we finally broke down and sold our Atari 800XL in the early 1990s.

    And now, Big Five software has released both games in emulator form, so you can run them on your PC.

    Marvel at the advanced graphics and the wonderful sound. Note that the gameplay and the addictiveness are still there.

    I have to run it at 640x480 or the keys aren't sensed correctly. Play Miner first - the controls are slightly different between the two, and you'll get confused otherwise.

    Highly recommended.

  • Eric Gunnerson's Compendium

    Holiday Lights 2008

    • 1 Comments

    Today, I took a break in the snow and finished the installation of the new light display. It's functional, except for one light that isn't working.  I've been extra busy this year, so while the main displays are up, there aren't as many additional lights as I would like to have.


    Our recent snowstorm has changed the look quite a bit - normally you only get a little light from the streetlight on the left, but now there's a ton.

    On the left, there are 8 strings of multicolored LEDs in a circle around the light pole. To the right, in front of the truck, are some other lights. Hiding behind the truck is the first animated display, the "tree of lights". The big tree (about 40' tall) has red LEDs around the trunk and features two animated displays: at the top is the second animated display, the "ring of fire", and arrayed on the tree is the new display. To the right you can see the original animated display, Santa in the sleigh and on the roof. Finally, outlining the house is a red/green/blue/white string, the last animated display.

    Tree of Lights

    16 channel sequenced controller, about 1500 lights total. From base of tree to top is about 14'.

    The controller is 68HC11 based.


    Ring of Fire

    Ring of Fire is 16 high-output red LEDs driven by a custom 16 channel controller, supporting 16 dim levels per LED.

    The controller is Atmel AVR based.

    I wrote a fair bit about it last year.


    Santa

    The display that started it all. It animates as follows:

    1. Blue lights in foreground strobe towards santa.
    2. Reindeer, sleigh, and santa appear.
    3. Santa leaves the sleigh and hops up on the roof edge.
    4. Santa goes up to the peak near the chimney.
    5. Santa disappears, and then his hat disappears soon after.

    Then the whole thing reverses itself.

    The display itself is painted plywood, with about 800 lights in total. After 12 years the lights had gotten a bit dim, so this year we replaced all of them. The Santa at the top of the roof is usually a bit more distinct, but he has a big snow beard this year.

    The controller is based on the Motorola 68HC11, running 8 channels.

    House Lights

    The house lights are 4 individual strands in red, green, blue, and white, with a 4-channel controller that dims between the colors. So, the house changes between colors.

    The controller is based on the Motorola 68HC11, with 4 channels, this time dimmable.

    Tree Lights

    The tree lights are the new display for this year.

    These are jumbo lights lit up with C7 (7 watt) bulbs inside of a colored plastic housing. They really don't show up that well in the picture because of all the light coming off the snow, but even so, I think I will likely need to upgrade the bulbs in them to something brighter (say, in the 25 watt range). And I think I will go with clear bulbs - having both colored bulbs and colored lenses works well for yellow and orange, but the blues and greens are really dark.

    The controller can support up to about 100 watts per channel, though I'm not sure my power budget can support it.

    The controller is Atmel AVR based (my new platform of choice), and the code is written in C. There are 15 channels, and each of them has 32 dimming levels. 

    You can find a lot more boring info here.

  • Eric Gunnerson's Compendium

    Holiday light project 2008 in pictures

    • 1 Comments

    A trip through the new project in pictures:

    I was late in getting started on the project due to putting the finishing touches on the office, but I ended up with this wonderful new workbench. Quite the step up from the old door on top of shelf units that I've used for the last 35 years or so (really).

    Left to right, we have the Tektronix dual-channel 20MHz oscilloscope (thank you eBay), a bench power supply, a perfboard with sockets on it in front of my venerable blue toolbox (also 35+ years old), an outlet strip with a power supply plugged into it, a perfboard, an STK500 microcontroller programmer, a Weller soldering station, and a Fluke voltmeter.

    This is the project in its first working version. On the far left, partially clipped, is the power supply. The upper-left part of the prototype board (the white part) has the zero crossing circuit, and the upper-right has a solid-state relay. A brown wire takes the zero-crossing signal to the microcontroller on the development board, and another brown wire takes the control signal back to the relay. The Atmel AVR microcontroller that I use comes in a lot of different sizes, so the development board has many different sockets to support them. On the far right is a white serial line which leads to my laptop - the AVR is programmed over a serial link.

    Back to the zero-crossing circuit. To switch AC power, you use a semiconductor device known as a triac. The triac has a weird characteristic - once you turn it on, it stays on until the voltage goes back to zero. That happens 120 times per second, so to implement dimming you need to control when you turn on the power for each half-cycle.

    Here's a picture that should make it easier to understand.

    The wavy part is the AC sine wave, and the nice square pulse is the zero-crossing signal, which goes high whenever the AC voltage is low enough. The microcontroller gets interrupted when the zero-crossing signal goes high, and then waits a little time until just after the real zero crossing happens.

    If it turned the output on right at that point, it would stay on for the whole half-cycle, which would mean the light was on full bright. If it never turned it on, it would mean the light was (does anybody know the answer? class?) off. If it turned it on halfway in between, the light would be at half brightness. Implementing the 32 levels of brightness means dividing the half-cycle into 32 divisions, corresponding to areas of equal power.

    (To be better, I should take into account the power/luminance curve of the incandescent bulb that I'm using and use that to figure out what the delays are. Perhaps the next version).
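
    If you're curious how those 32 delays work out, here's a quick back-of-the-envelope sketch (mine, not the controller code - the real thing runs in C on the AVR):

        // Just a sketch of the math: find the 32 firing delays, measured from the
        // zero crossing, that divide each half-cycle into steps of equal power.
        // Instantaneous power goes as sin^2, so firing at angle a delivers a
        // fraction (pi - a + sin(a)*cos(a)) / pi of the full half-cycle's power.
        static double[] ComputeFiringDelaysMicroseconds(int levels)
        {
            const double halfCycleUs = 1e6 / 120.0;          // ~8333 us per half-cycle at 60Hz
            double[] delays = new double[levels];

            for (int level = 1; level <= levels; level++)
            {
                double target = (double)level / levels;      // desired fraction of full power

                // bisect for the firing angle that delivers the target fraction
                double lo = 0.0, hi = Math.PI;
                for (int i = 0; i < 40; i++)
                {
                    double a = (lo + hi) / 2;
                    double fraction = (Math.PI - a + Math.Sin(a) * Math.Cos(a)) / Math.PI;
                    if (fraction > target) lo = a; else hi = a;
                }

                // delays[0] is the dimmest level (fires late); the last entry is ~0
                delays[level - 1] = (lo + hi) / 2 / Math.PI * halfCycleUs;
            }
            return delays;
        }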

    To do this for multiple channels, you end up with code that does the following:

    1. Check all channels to see which ones should be turned on at the current time.
    2. Figure out when the next time is to check.
    3. Set an interrupt to that time.

    That gives us a set of lights stuck at a given dim level. To animate, you need to change that dim level over time. That is handled at two levels.

    The interrupt-level code handles what I call a dim transition. At least, that's what I call it now, since I didn't have a name for it before. We have a vector of current dim levels, one for each channel, and a vector of increments that are added to the current dim vector during each cycle.

    So, if we want to slowly dim up channel 1 while keeping all the others constant, we would set dimIncrement[0] to 1 and set the count to 31. 31 cycles later, channel 1 would be at full brightness.

    If we want to do a cross-fade, we set two values in the increment vector.
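
    Here's a toy C# model of that idea (the names are mine; the real code is C on the controller, but it follows the same description):

        // A toy model of the dim transition: each zero-crossing interrupt adds the
        // increment vector to the current dim vector until the count runs out.
        class DimTransition
        {
            public int[] Current = new int[15];      // dim level per channel, 0..31
            public int[] Increment = new int[15];    // added to Current each half-cycle
            public int Count;                        // half-cycles left in this transition

            public void Tick()                       // called 120 times a second
            {
                if (Count == 0)
                    return;
                for (int channel = 0; channel < Current.Length; channel++)
                    Current[channel] += Increment[channel];
                Count--;
            }
        }

        // slowly bring channel 1 up to full brightness over 31 half-cycles
        DimTransition transition = new DimTransition();
        transition.Increment[0] = 1;
        transition.Count = 31;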

    That all happens underneath the covers - the main program loop doesn't know about it. The main program loop figures out what will happen next after the current dim transition, and then blocks.

    My early controllers were all table-based, with the tables pre-computed. This was because I was writing in assembler. The current system could also use that approach, but with only 2K of program memory, the procedural approach is more compact, though it is considerably harder to debug. I have a C# program I use to create and test the animations, but I need to rewrite it to use DirectX because I need a 120Hz frame rate to match what the dimming hardware does.

    To get back to the zero-crossing circuit, I first built this circuit using a wall wart with a switching power supply. Such power supplies are small and efficient, but put a lot of noise into the transformer side. I wasted a lot of time on this, and ultimately switched back to a conventional wall wart (from an old Sony discman I used to have) with a linear power supply. Problem solved.

    Back to pictures:

    Here's the completed controller board.

    In the center is the AVR 2313V microcontroller. The U-shape is the solid-state relays that switch the AC for the lights. These are nice little Panasonic AQH2223 relays, which switch 600mA (about 75 watts) (though you can get them in versions that do 150 watts), are tiny, and, most importantly, are less than $2 each.

    Note that these do not have zero-crossing circuits built in. Most solid-state relays do, but you can't use those to do dimming.

    The top has the one transistor used in the zero-crossing circuit, a 7805 voltage regulator to provide +5V to the circuit, and a few passive components.

    Careful viewers will notice that the upper-right socket is empty. That's because it's only a 15-channel controller, but I used 16-pin sockets.  The blue wire that holds the AC socket wires on is wire-wrap wire that I had lying around - these are hot-glued down later on. The two black wires provide the rectified voltage (about 15V IIRC) from the wall-wart.

    The controller board is in a small plastic box designed to hold 4x6 photos, and then that's inside of a larger box. This lets me keep all of the electrical connections inside of the box. It's not really required for safety, but if you have a lot of exposed plugs and some water on them, you can get enough leakage from them to trip the GFI on your circuit. So having them all inside is better.

    The box will be enclosed in a white kitchen garbage bag for weather protection when it's outside. That seems low-tech, but it has worked well in all of my controllers over the years.

    Cabling

    Projects like this often come down to cabling. Each light needs a wire that goes from the controller out to the light. I did a random layout of lights on the tree, and put them on 5 different ropes so they could easily be pulled up the tree on a pulley.

    Here are the 15 lights and all the extension cords required to hook them up. In this case, I used 24 15' extension cords because it was cheaper and easier than building the cables up from scratch.

    That's all for now.

  • Eric Gunnerson's Compendium

    Benchmarking, C++, and C# Micro-optimizations

    • 6 Comments

    Two posts (1 2) on C# loop optimization got me thinking recently.

    Thinking about what I did when I first joined Microsoft.

    Way back in the spring of 1995 or so (yes, we did have computers back then, but the Internet of the time really *was* just a series of tubes), I was on the C++ compiler test team, and had just picked up the responsibility for running benchmark tests on various C++ compilers. I would run compilation speed and execution speed tests in controlled environments, so that we could always know where we were.

    We used a series of “standard” benchmarks – such as Spec – and a few of our own.

    Because execution speed was one of the few ways (other than boxes with lots of checkmarks) that you could differentiate your compiler from the other guy’s, all the compiler companies invested resources in being faster at the benchmarks.

    The starting point was to look at the benchmark source, the resultant IL, and the final machine code, and see if you could see any opportunity for improvement. Were you missing any optimization opportunities?

    Sometimes, that wasn’t enough, so some compiler writers (*not* the ones I worked with) sometimes got creative.

    You could, for example, identify the presence of a specific expression tree that just “happened to show up” in the hot part of a benchmark, and bypass your usual code generation with a bit of hand-tuned assembly that did things a lot faster.

    Or, with a little more work, you could identify the entire benchmark, and substitute another bit of hand-tuned assembly.

    Or, perhaps that hand-tuned assembly didn’t really do *all* the work it needed to, but took a few shortcuts and still managed to return the correct answer.

    For some interesting accounts, please text “compiler benchmark cheating” to your preferred search engine.

    As part of that work, I got involved a bit in the writing and evaluation of benchmarks, and I thought I’d share a few rules around writing and interpreting micro-benchmarks. I’ll speak a bit about the two posts – which are about looping optimizations in C# – along the way. Just be sure to listen closely, as I will be speaking softly (though not in the Rooseveltian sense…)

    Rule 0: Don’t

    There has always been a widespread assumption that the speed of individual language constructs matters. It doesn’t.

    Okay, it does, but only in limited cases, and frankly people devote more time to it than it deserves.

    The more productive thing is to follow the agile guideline and write the simplest thing that works. And note that “works” is a bit of a weasely word here – if you write scientific computing software, you may have foreknowledge about what operations need to be fast and can safely choose something more complicated, but for most development that is assuredly not true.

    Rule 1: Do something useful

    Consider the following:

    void DoLoop()
    {
        for (int x = 0; x < XMAX; x++)
        {
            for (int y = 0; y < YMAX; y++)
            {
            }
        }

    }

    void TimeLoop()
    {
        // start timer
        for (int count = 0; count < 1000; count++)
        {
            DoLoop();
        }
        // stop timer
    }

    If XMAX is 1000, YMAX is 1000, and the total execution time is 0.01 seconds, what is the time spent per iteration?

    Answer: Unknown.

    The average C++ optimizer is smarter than this. That nested loop has no effect on the result of the program, so the compiler is free to optimize it out (the .NET JIT may not have time to do this).

    So, you modify the loop to be something like:

    void DoLoop()
    {
        int sum = 0;

        for (int x = 0; x < XMAX; x++)
        {
            for (int y = 0; y < YMAX; y++)
            {
                sum += y;
            }
        }
    }

    The loop now has some work done inside of it, so the loop can’t be eliminated.

    Rule 2: No, really. Do something useful

    However, the numbers won’t change. The call to DoLoop() has no side effects, so the entire call can be safely eliminated.

    To make sure your loop is really a loop, there needs to be a side effect. The best bet is to have a value returned from the method and write it out to the console. This has the added benefit of giving you a way of checking whether things are working correctly.
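
    Something like this (a sketch, not the code from either post; XMAX and YMAX as before) is enough to keep the optimizer honest:

    static int DoLoop()
    {
        int sum = 0;
        for (int x = 0; x < XMAX; x++)
        {
            for (int y = 0; y < YMAX; y++)
            {
                sum += y;
            }
        }
        return sum;                      // the result now escapes the method
    }

    static void TimeLoop()
    {
        // start timer
        int total = 0;
        for (int count = 0; count < 1000; count++)
        {
            total += DoLoop();
        }
        // stop timer
        Console.WriteLine(total);        // a side effect the compiler can't remove,
                                         // and a sanity check that the answer is right
    }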

    Rule 3: Benchmark != Real world

    There are lurking effects that invalidate your results. Your benchmark is likely tiny and places very different memory demands on the system than your real program does.

    Rule 4: Profile, don’t benchmark


    C# loop optimization

    If you are writing code that needs the utmost in speed, there is an improvement to be had using for rather than foreach. There is also improvement to be had using arrays rather than lists, and unsafe code and pointers rather than array indexing.

    Whether this is worthwhile in a specific case depends exactly on what the code is doing. I don’t see a lot of point in spending time measuring loops when you could spend time measuring the actual code.

  • Eric Gunnerson's Compendium

    Holiday light project 2008...

    • 1 Comments

    I've been searching for a new project to do for this season's holiday lights. I typically have four or five ideas floating around my head, and this year is no different.

    Lots of choices, so I've had to come up with a "rule" about new projects. The rule is that the project has to be in the neighborhood of effort-neutral. It already takes too long to put up (and worse, take down) the displays we already have, and I don't want to add anything that makes that worse. Oh, and they can't take too much power, because I'm already on a power budget.

    Unless, it's, like, especially cool.

    I had an idea that met all my criteria. It was small - small enough to be battery powered, if I did my power calculations properly, and was going to be pretty cool.

    It was, unfortunately, going to be a fair pain-in-the-butt to build - the fabrication was a bit complex, and the plan was to build a number of identical pieces. Oh, and it required me to choose the perfect LEDs from the 15 thousand that Mouser carries.

    So, I hadn't made much progress.

    Then, one day I was waiting for some paint to be tinted at my local home store, and I came across these.

    They're holiday lights. Jumbo-sized holiday lights.  The bulb part is made of colored plastic, and measures about 7" high. At the bottom there is a large fake lamp socket. Inside of all of it is a genuine C7 bulb of the appropriate color.

    I bought 3 sets, 15 in all.

    To be different, I wanted to build these as self-contained devices, with a separate microcontroller in each of the light bases. The microcontrollers I'm using cost about $1 each, so there isn't too much cost there, but the big challenge is a power supply. Generally, I build a linear power supply, which is simple and performs well, but you need an expensive and bulky transformer.

    There is a way around that, with the reasonably named "transformerless power supply". Realistically, a better name would be the "high-voltage shock-o-matic", because it involves hooking things directly to the AC line, can only supply a small amount of current, is inefficient, and is hard to troubleshoot. Oh, and if one component fails you get 150 volts instead of the 5 volts you were expecting.

    I decided to build one of these, so I ordered up some parts, wired it up, plugged it in, and immediately lost the magic smoke from one of the resistors. Turns out I miscalculated, and I needed a much-more-expensive power resistor.

    Thinking about it some more, I decided that since I still needed power to each bulb - and therefore a wire to each bulb - it was simpler to just build a simple system with one microcontroller.

  • Eric Gunnerson's Compendium

    Brilliant...

    • 2 Comments
    I've been doing some electronics recently, and perhaps I'm therefore more likely to treat this kindly, but I have to say that I think it's brilliant.
  • Eric Gunnerson's Compendium

    What's going on now...

    • 1 Comments
    If you want to keep track of what's going on right now, you need this site...
  • Eric Gunnerson's Compendium

    Scary...

    • 3 Comments

    Last Saturday, we were invited to a Halloween party at a friend of a friend. I only decided to go Friday night, so I'd put essentially zero effort into thinking about a costume.

    The wife was going as a vampire (we had a long discussion on what the feminine form of "vampire" was. I tended towards "vampress", mostly because of how silly it sounded), and I thought of doing something that fit together with that thematically. A lame costume that fits together thematically with another one is much better than a lame one that sits by itself.

    After a while, something suggested itself, and things came together pretty well. You can see the results here:

    (Like pilots in the 1950s who had eyepatches to preserve eyesight in their dominant eye in case of a nuclear explosion, I suggest covering one eye before clicking on the following link).

    Pictures

    I'm hoping that it's obvious who I am.

    The party itself was pretty good. The hosts hired a magician who walked around and did close-up magic to entertain the crowd. He was talented, though frankly, given the sobriety of the majority of the guests, it could have been me doing the tricks.

    While we were there a few of the ladies took it on themselves to vandalize me.

  • Eric Gunnerson's Compendium

    wireless LCD photo frame...

    • 5 Comments

    I want to buy a couple of wireless LCD photo frames for my relatives. It needs to have a decent display, speak wireless, and hook up to a smugmug gallery.

    Any recommendations?

  • Eric Gunnerson's Compendium

    Versioning in HealthVault

    • 2 Comments

    Download the sample code from MSDN Code Gallery.

    [Note that EncounterOld has been renamed to EncounterV1 in newer versions of the SDK]

    “Versioning” is an interesting feature of the HealthVault platform that allows us to evolve our data types (aka “thing types”) without forcing applications to be rewritten. Consider the following scenario:

    An application is written using a specific version of the Encounter type. To retrieve instances of this type, you write the following:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)
    {

    }

    That returns all instances of the type with the Encounter.TypeId type id. When the XML data comes back into the .NET SDK, it creates instances of the Encounter wrapper type from them, and that’s what’s in the items collection.

    Time passes, cheese and wine age, clothes go in and out of style. The HealthVault data type team decides to revise the Encounter type, and the change is a breaking one in which the new schema is incompatible with the existing schema. We want to deploy that new version out so that people can use it, but because it’s a breaking change, it will (surprise surprise) break existing applications if we release it.

    Looking at our options, we come up with 3:

    1. Replace the existing type, break applications, and force everybody to update their applications.
    2. Leave the existing type in the system and release a new type (which I’ll call EncounterV2). New applications must deal with two types, and existing applications don’t see the instances of the new type.
    3. Update all existing instances in the database to the new type.

    #3 looks like a nice option, were it not for the fact that some instances are digitally signed and we have no way to re-sign the updated items.

    #1 is an obvious non-starter.

    Which leaves us with #2. We ask ourselves, “selves, is there a way to make the existence of two versions of a type easier for applications to deal with?”

    And the answer to that question is “yes, and let’s call it versioning”…

    Versioning

    Very simply, the platform knows that the old and new versions of a type are related to each other, and how to do the best possible conversion between the versions (more on “best possible” in a minute…). It uses this information to let applications pretend that there aren’t multiple versions of a type.

    Down Versioning

    The first scenario is existing applications that were written using the original version of the Encounter type (which we’ll call EncounterV1 for clarity), and what happens when they come across a health record that also has EncounterV2 instances in it. Here’s a graphical indication of what is going on:

    [diagram: a V1 application querying a record that contains both EncounterV1 and EncounterV2 instances]

    This application is doing queries with the EncounterV1 type id. When a query is executed, the platform knows that the encounter type has multiple versions, and converts the query for the V1 type to a query for all encounter types (both EncounterV1 and EncounterV2).

    The platform finds all the instances of those two types, and looks at each one. If it’s an EncounterV1 instance, it just ships it off to the application.

    But, if it’s an EncounterV2 instance, the platform knows (by looking at the application configuration) that this application doesn’t know what to do with an EncounterV2. It therefore takes the EncounterV2 instance, runs a transform on the EncounterV2 xml to convert it into EncounterV1 xml, and ships that across to the application. The data is “down versioned” to the earlier version.

    The application is therefore insulated from the updated version – it sees the instances using the type that the application was coded against.

    Down version conversions are typically lossy – there are often fields in the V2 version that are missing in the V1 version. The platform therefore prevents updating instances that are down versioned, and will throw an exception if you try. You can look at the IsDownVersioned property on an instance to check whether updating is possible and avoid the exception.
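
    In code, the check looks something like this (assuming the usual UpdateItem call on the selected record):

    if (!encounter.IsDownVersioned)
    {
        // safe to modify and save; a down-versioned instance would throw here
        PersonInfo.SelectedRecord.UpdateItem(encounter);
    }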

    Up Versioning

    The second scenario is an application written to use the new EncounterV2 type:

    [diagram: a V2 application querying a record that contains both EncounterV1 and EncounterV2 instances]

    This time, the V1 data is transformed to the V2 version. An application can check the aptly-named IsUpVersioned property to tell whether an instance is up versioned.

    Higher versions typically contain a superset of the data in the old version, and the platform therefore allows the instance to be “upgraded” (ie updated to the new version).

    However, doing so will prevent an application using the V1 version from being able to update that instance, which may break some scenarios. The application should therefore ask the user for confirmation before upgrading any instances.

    This would be a good time to download the sample applications and run the EncounterV1 and EncounterV2 projects. Because this behavior is controlled by the application configuration, each application has its own certificate, which will need to be imported before the application can be run.

    Add some instances from both V1 and V2, and see how they are displayed in each application. Note that EncounterV1 displays both the instances it created and the down-versioned instances of the EncounterV2 type, and that EncounterV2 displays the instances it created and up-versioned instances of the EncounterV1 type.

    Version-aware applications….

    In the previous scenarios, the application was configured to only use a single version of a type.

    In some cases, an application may want to deal with both versions of a data type simultaneously, and this is known as a “version-aware” application.

    We expect such applications to be relatively uncommon, but there are some cases where versioning doesn’t do everything you want.

    One such case is our upcoming redesign to the aerobic session type. The AerobicSession type contains both summary information and sample information (such as the current heart rate collected every 5 seconds). In the redesign, this information will be split between the new Exercise and ExerciseSamples types. AerobicSession and Exercise will be versioned, but there will be no way to see the samples on AerobicSession through Exercise, nor will you be able to see ExerciseSamples through AerobicSession (there are a number of technical reasons why this is very difficult – let me know if you want more details). Therefore, applications that care about sample data will need to be version-aware.

    Writing a version-aware application adds one level of complexity to querying for data. Revisiting our original code, this time for an application that is configured to access both EncounterV1 and EncounterV2:

    HealthRecordSearcher searcher = PersonInfo.SelectedRecord.CreateSearcher();

    HealthRecordFilter filter = new HealthRecordFilter(Encounter.TypeId);
    searcher.Filters.Add(filter);

    HealthRecordItemCollection items = searcher.GetMatchingItems()[0];

    foreach (Encounter encounter in items)
    {

    }

    When we execute this, items contains a series of Encounter items. The version of those items is constrained to the versions that the application is configured (in the Application Configuration Center) to access. If the version in the health record is a version that the application is configured to access, no conversion is performed, *even if* the version is not the version specified by the filter.

    That means that the items collection may contain instances of different versions of the type – in this case, either Encounter or EncounterOld instances. Any code that assumes that instances are only of the Encounter type won’t work correctly in this situation. You may have run into this if you are using the HelloWorld application, as it has access to all versions of all types.

    Instead, the code needs to look at the instances and determine their types:

    foreach (HealthRecordItem item in items)
    {
        Encounter encounter = item as Encounter;
        if (encounter != null)
        {
            // this instance came back as the new Encounter version
        }

        EncounterOld encounterOld = item as EncounterOld;
        if (encounterOld != null)
        {
            // this instance came back as the original (old) version
        }
    }

    This is a good reason to create your own application configuration rather than just writing code against HelloWorld.

    Controlling versioning

    The platform provides a way for the application to override that application configuration and specify the exact version (or versions) that the application can support. For example, if my application wants to deal with the new type of Encounter all the time, it can do the following:

    filter.View.TypeVersionFormat.Add(Encounter.TypeId);

    which will always return all versions of encounter instances as the Encounter type. 

    EncounterBoth is a version-aware application in the sample code that handles the Encounter and EncounterV1 types. It will also let you set the TypeVersionFormat.

    Asking the platform about versioned types…

    If you ask nicely, the platform will be happy to tell you about the relationship between types. If you fetch the HealthRecordItemTypeDefinition for a type, you can check the Versions property to see if there are other versions of the type.
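
    For example, something along these lines (using the ApplicationConnection available on the page):

    HealthRecordItemTypeDefinition definition =
        ItemTypeManager.GetHealthRecordItemTypeDefinition(Encounter.TypeId, ApplicationConnection);

    foreach (var versionInfo in definition.Versions)
    {
        // each entry describes one version of the Encounter type
    }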

    The VersionedTypeList project in the sample code shows how to do this.

    Naming Patterns


    There are a couple of different naming patterns in use. The types that were modified before we had versioning use the “Old” suffix (MedicationOld and EncounterOld) to denote the original types. For future types, we will be adopting a version suffix on the previous type (so, if we versioned HeartRate, the new type would be named HeartRate, and the old one HeartRateV1). We are also planning to rename MedicationOld and EncounterOld to follow this convention.

    We will also be modifying the tools (application configuration center and the type schema browser) to show the version information. Until we get around to that, the best way is to ask the platform as outlined in the previous section.

  • Eric Gunnerson's Compendium

    100 skills every one should know...

    • 8 Comments

    Popular Mechanics has a list of "100 skills every one should know". How many do you have?

     (I'll give my count later...)

  • Eric Gunnerson's Compendium

    Triathlon report

    • 2 Comments
    For your "enjoyment", a report on the Triathlon I did last Sunday...
  • Eric Gunnerson's Compendium

    TV Calibration

    • 4 Comments

    A few years ago, I bought one of the last high-end rear-projection TVs based on CRTs - a Pioneer Elite 620HD. I did some basic calibration with the Avia disc and some other minor adjustments, but never got around to getting a real calibration done.

    Calibration is the process of getting the TV to be as close as possible to NTSC settings - the same settings that were used when the program was created. That means getting colors and gray levels as close as possible to what they should be.

    But, if it's a high-end TV, why isn't it set from the factory to meet NTSC settings? Well, the simple fact is that TV manufacturers play games to make their TVs stand out in showroom settings. That generally means a picture that is far brighter than it should be (with correspondingly poor black levels), colors that are off, and over-sharpening.

    Some newer TVs let you choose a setting that's close to NTSC, but in most cases, calibration can make a big difference. If you have an LCD or plasma set, start with a disc like Avia and see what you get out of it (Avia also has calibration for surround sound, which may also be useful).

    In my case, my set needed cleaning, focus adjustments (because it's a rear-projection set), convergence (because it has 3 CRTs in it, one each for red, green, and blue), and geometry (because it uses CRTs). You can find local techs in my area who can do calibration, but because my set is fairly rare these days, I wanted an expert, and hooked up with David Abrams from Avical, working out of LA, when he was on a trip in the Seattle area.

    David is a really nice guy and did a wonderful job. He spent 90 minutes on the geometry of the set (making sure straight lines are straight in all 3 colors), and about 60 minutes on the convergence (aligning all three colors). I only have about 5% of the patience he has when he's doing those kinds of things.

    So, after cleaning the set, setting the focus, setting the geometry and convergence, he was on to setting the gray levels and colors. With his test pattern generator (running through the TV's component inputs, which is all I use...) and his $18K color analyzer, that part went pretty quickly. He then worked through all my sources (Tivo HD, DVD, XBox 360) and verified that everything was set right. Finally, we looked at some source and he did some final tuning.

    The results were pretty impressive.

    The downside is that the differences between HD feeds are now really obvious - some look great, but others really fall short.

    Recommended.

  • Eric Gunnerson's Compendium

    A box of toys...

    • 3 Comments

    I am a box of toys and notions. Among other things, I contain a hard box full of legos and a gross of superballs. On my outside I have a series of labels that tell me where I have lived.

    As a proud corrugated-American, I'd like to share my story.

    113/1136

    My location for the last 24 hours. My owner has moved to this office to be closer to the rest of the HealthVault partner team, which he has joined due to a recent organizational optimization. I like this location because lots of people stop by, but I'm worried it will be loud because of the standby diesel generator right outside.

    113/2172

    I moved back to main campus to this office. It faces east and has a decent view of a parking lot. My owner works on the HealthVault partner team.

    Northup/2036

    I love this office, which is big and has a nice view to the south. I do worry about degradation of my structure because the sunlight makes it hot and sometimes the A/C fails. My owner works on the HealthVault partner team, and enjoys the "small team" atmosphere.

    119/2262

    This office is on west campus. It's okay, but the building isn't great. My owner works on the Windows Live pictures and video team.

    50/3602

    I moved from one end of the hall to the other, and my corners are getting banged up. My owner is on a newly-reorganized team and isn't sure what he works on right now.

    50/3430 

    I moved two offices down to this one - I can look out through the window and watch my owner when he sits outside at lunch, though he does complain about the cafeteria now and then. My owner is on the Windows Movie Maker team working on the DVD Maker user interface.

    50/3501

    Welcome to the Movie Maker team, and to a new area of campus. Building 50 stands alone by itself and is a bit inconvenient to get to, but it seems nice enough, and there are lots of boxes next door that I can spend time with.

    41/1722

    A new office in the same building, this time looking at a wall of plants. My owner is a PM on the C# team, and is doing language design.

    41/2788

    Despair. My owner is happy with his assignment as the test lead on the C# compiler, but this office faces south, gets very hot, and has a lovely view of the top of a cafeteria.

    42/1816

    A pretty good office in a really nice building. I face north towards a set of vegetation. My owner co-owned office assignments on this move and got to choose his one before other people, so he got a nice one. He's a test lead for the C++ compiler front end, though there are rumors that there's something new coming along.

    4/2238

    My first window office, with a beautiful view of a forest. My owner is very happy to get such a nice window office, but he finds it hard not to get lost in the 1-4 building complex. He's a test lead for the C++ compiler.

    25/????

    I was never in building 25, but my owner often tells me stories about it late at night. He says he had two different offices there, both of them pretty good.

  • Eric Gunnerson's Compendium

    Fun with HealthVault transforms

    • 2 Comments

    It is sometimes useful to be able to display data in an application without having to understand the details of that data.

    To make this easier, HealthVault provides a couple of features: transforms, and the HealthRecordItemDataGrid class.

    Discovering transforms

    To find out what transforms are defined for a specific type, we just ask the platform:

        HealthRecordItemTypeDefinition definition =
                 ItemTypeManager.GetHealthRecordItemTypeDefinition(Height.TypeId, ApplicationConnection);

    In this case, we get back the definition of the Height thing type. In the returned HealthRecordItemTypeDefinition (which I’ll just call “type definition” for simplicity), you can find the schema for the type (in the XmlSchemaDefinition property), and two properties related to transforms.

    SupportedTransformNames lists the names of the transforms. In this case, the list is “form”, “mtt”, and “stt” (more on what these do later).

    TransformSource contains the XSL transforms themselves, keyed with the name of the transform.
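
    For Height, for example, dumping the names is something like this:

        foreach (string transformName in definition.SupportedTransformNames)
        {
            Console.WriteLine(transformName);      // form, mtt, stt
        }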

    Applying transforms

    If you want to apply the transform, there are two ways to do it.

    The first way is to do it on the client side:

        string transformedXml = definition.TransformItem("mtt", heightInstance);

    That’s pretty straightforward, though you do need to fetch the type definition for the type first. You could also apply the transform using the .NET XML classes, if you wanted to do more work.
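
    If you go that route, it's the standard System.Xml.Xsl machinery – roughly this, assuming you already have the transform XSL and the item XML as strings:

        using System.IO;
        using System.Xml;
        using System.Xml.XPath;
        using System.Xml.Xsl;

        static string ApplyTransform(string transformXsl, string itemXml)
        {
            XslCompiledTransform transform = new XslCompiledTransform();
            transform.Load(XmlReader.Create(new StringReader(transformXsl)));

            StringWriter output = new StringWriter();
            transform.Transform(new XPathDocument(new StringReader(itemXml)), null, output);
            return output.ToString();
        }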

    The second option is to ask the platform to do the transform for you. You do this by specifying the transform name as part of the filter definition:

        HealthRecordFilter filter = new HealthRecordFilter(Height.TypeId);
        filter.View = new HealthRecordView();
        filter.View.TransformsToApply.Add("mtt");
        filter.View.Sections = HealthRecordItemSections.Core;

    and then the item will already have the transformed text inside of it:

        XmlDocument mttDocument = heightInstance.TransformedXmlData["mtt"];

    Available transforms

    Each type defines a set of transforms that let you look at a data instance in a more general way. They are named “mtt”, “stt”, and “form”.

    “mtt” transform

    The mtt (which stands for “multi type transform”) generates the most condensed view of data – it returns values for properties that are present on all data types, and a single summary string.

    For example, asking for the mtt transform of a Height instance returns the following:

    <row wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1" 
         wc-version="1e715849-43d7-4a72-9c65-8163649c0f84" 
         wc-note="" 
         wc-tags="" 
         wc-date="2008-01-22 11:12:42" 
         wc-type="Height Measurement" 
         wc-typeid="40750a6a-89b2-455c-bd8d-b420a4cb500b" 
         wc-source="" 
         wc-brands="" 
         wc-issigned="False" 
         wc-flags="" 
         wc-ispersonal="false" 
         wc-relatedthings="" 
         summary="1.9405521428867 m" />

    The “wc-“ attributes are the common data items across all instances. The most interesting piece of data is the summary attribute, which gives you the (surprise!) summary string for the instance.

    “stt” transform

    The stt (“single type transform”) is similar to the mtt, but instead of a single summary attribute there is a series of attributes that correspond to the properties on the data type. It will generally contain an attribute for every important property, but if the property is a less important detail and/or the type is very complex, this may not be true.

    For our Height instance, we get this from the stt transform:

    <row
      wc-id="8608696a-94b3-41a8-b9a6-219ecbbc87d1"
      wc-version="1e715849-43d7-4a72-9c65-8163649c0f84"
      wc-note=""
      wc-tags=""
      wc-date="2008-01-22 11:12:42"
      wc-type="Height Measurement"
      wc-typeid="40750a6a-89b2-455c-bd8d-b420a4cb500b"
      wc-source=""
      wc-brands=""
      wc-issigned="False"
      wc-flags=""
      wc-ispersonal="false"
      wc-relatedthings=""
      when="2008-01-22 11:12:42"
      display="1.9405521428867 m"
      height-in-m="1.9405521428867" />

    How do we know what attributes are here and what to do with them?

    That information is stored in the ColumnDefinitions property of the type definition. Each of these (an ItemTypeDataColumn instance) corresponds to one of the attributes on the row created by the STT transform.

    The following code can be used to pull out the values:

        XmlNode rowNode = item.TransformedXmlData["stt"].SelectSingleNode("data-xml/row");

        foreach (ItemTypeDataColumn columnDefinition in definition.ColumnDefinitions)
        {
            XmlAttribute columnValue = rowNode.Attributes[columnDefinition.ColumnName];
        }

    This is the mechanism that the HealthVault shell uses to display detailed information about an instance. There is additional information in the ItemTypeDataColumn that it uses:

    The Caption property stores a textual name for the column.

    The ColumnTypeName property stores the type of the column.

    The ColumnWidth contains a suggested width to use to display this information.

    The VisibleByDefault property defines whether the column is visible in the shell view by default (the wc-<x> ones typically are not, with the exception of wc-date).
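
    Putting those together, building a simple one-row display yourself looks something like this (reusing the rowNode and definition from the snippet above):

        foreach (ItemTypeDataColumn columnDefinition in definition.ColumnDefinitions)
        {
            if (!columnDefinition.VisibleByDefault)
                continue;

            XmlAttribute columnValue = rowNode.Attributes[columnDefinition.ColumnName];
            Console.WriteLine("{0}: {1}", columnDefinition.Caption,
                              columnValue != null ? columnValue.Value : "");
        }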

    HealthRecordItemDataGrid

    If you don’t want to decode all the column information yourself, you can use the HealthRecordItemDataGrid in your project.

    Put the following after the page directive in your .aspx file:

        <%@ Register TagPrefix="HV" Namespace="Microsoft.Health.Web"  Assembly="Microsoft.Health.Web" %>

    and then put an instance of the grid in the appropriate place:

        <HV:HealthRecordItemDataGrid ID="c_itemDataGrid" runat="server" />

    You then create a filter that defines the data to show in the grid in the page load handler:

        c_itemDataGrid.FilterOverride = new HealthRecordFilter();
        c_itemDataGrid.FilterOverride.TypeIds.Add(AerobicSession.TypeId);

    and the grid will be rendered using the STT transform view. If you want it to use the MTT transform view, you can set the TableView property on the grid to MultipleTypeTable, and it will show a summary view. You will also see this view if the filter returns more than one thing type.

    The form transform

    The final transform is the form transform. This transform exists on most, though not all types (we’re working to add form transforms where they’re absent). It provides an HTML view of the type.

    For our Height instance, we get the following from the form transform:

    <div class="xslThingTitle" id="genThingTitle">Height</div>
    <div class="xslThingValue">1.9405521428867 m</div>
    <table class="xslThingTable">
      <tr>
        <td class="xslTitleColumn">Date</td>
        <td class="xslValueColumn">2008-01-22 11:12:42</td>
      </tr>
    </table>

    which, when rendered, looks something like this:

    Height
    1.9405521428867 m
    Date 2008-01-22 11:12:42

    Other transforms

    Some thing types will return a list that contain other transforms with names like “wpd-F5E5C661-26F5-46C7-9C6C-7C4E99797E53” or “hvcc-display”. These transforms are used by HealthVault Connection Center, for things like transforming WPD data into the proper xml format for a HealthVault instance.

  • Eric Gunnerson's Compendium

    Fantastic contraption

    • 3 Comments

    I apologize ahead of time

    Fantastic Contraption...

  • Eric Gunnerson's Compendium

    In praise of boredom...

    • 5 Comments

    Raymond wrote an interesting post about the erosion of the car trip experience.

    Along with the desire to shield our kids from any discomfort, I think there's a big desire to shield them from boredom.

    Boredom is part of being an adult, and I think learning to deal with it is an important part of growing up.

    Back when I was a kid, every year or so we took a long trip from Seattle to Boise to visit my grandparents. Though we usually made the trip over at night (to avoid the heat (no AC, of course)), there were lots of hours of watching the "scenery" roll by (as an adult, I find the area along the Columbia river to be striking, but as a kid it's a whole lot a nothin'), and then similar hours while we were there.

    That meant we had to learn how to amuse ourselves and not annoy each other too much.

    But if you have video every time you're in the car, you don't learn how to deal with being bored, and (as a parent) you miss some great opportunities for conversation, not to mention the chance to inflict your musical tastes on your offspring.


  • Eric Gunnerson's Compendium

    Dr. Horrible...

    • 1 Comments

    Last night I fixed the QuickTime player on my somewhat aging home machine by turning off DirectX acceleration, and the family sat down to watch Dr. Horrible's Sing-Along Blog, which I had purchased through iTunes for the princely sum of $3.99.

    Dr. Horrible, if you aren't "in the know", was created by Joss Whedon, of Firefly fame. Firefly was a sci-fi western, and Dr. Horrible is an internet comic book superhero musical. It stars Felicia Day as the love interest, Nathan Fillion (aka Mal) as Captain Hammer, and Neil Patrick Harris (who will always be "Doogie" to me...) as the title character.

    Both the writing and the music are top-notch, as are the performances by the main characters. I'm hoping there will be more than the initial 3 episodes.

    Recommended.

  • Eric Gunnerson's Compendium

    HealthVault data type design...

    • 3 Comments

    From the "perhaps this might be interesting" file...

    Since the end of the HealthVault Solution Providers conference in early June - and the subsequent required recharging - I've been spending the bulk of my time working on HealthVault data types, along with one of the PMs on the partner team. It's interesting work - there's a little bit of schema design (all our data types are stored in XML on the server), a little bit of data architecture (is this one data type, or four?), a fair amount of domain-specific learning (how many ways are there to measure body fat percentage?), a bit of working directly with partners (exactly what is this type for?), a lot of word-smithing (should this element be "category", "type", or "area"?), and other things thrown in now and then (utility writing, class design, etc.).

    It's a lot like the time I spent on the C# design team - we do all the design work sitting in my office (using whiteboard/web/XMLSpy), and we often end up with fairly large designs before we scope them back to cover the scenarios that we care about. Our flow is somewhat different in that we have external touch points where we block - the most important of which is an external review where we talk about our design and ask for feedback.

    Since I'm more a generalist by preference, this suits me pretty well - I get to do a variety of different things and learn a variety of useful items (did you know that you can get the body fat percentage of your left arm measured?).

    Plus, I learn cool terms like "air displacement plethysmography"

  • Eric Gunnerson's Compendium

    Seattle Century 2008 ride report

    • 2 Comments

    Seattle Century 2008 ride report

  • Eric Gunnerson's Compendium

    Progrography

    • 1 Comments

    Back when I first started listening to music - in the days before there were CDs - if you were cool you bought your music on records, and then taped them onto 90-minute cassettes. You did this because it was hard to flip a record while you were driving, records wore out, and pre-recorded cassettes sounded a bit like a 48 kbps MP3 stream, when they worked. Sometimes they didn't, and you had an $8 cassette afro.

    Oh, and you had to worry about azimuth (an adjustment that, when wrong, could kill all your high end), Dolby B, and, if you were really cool, Dolby C (twice the Dolby!).

    And you listened to what was called "album rock" in my area, otherwise known as "progressive rock". Those who wish to debate the differences between those two labels are welcome.

    Then, I went off to college, and CDs were released, but they cost $900, so nobody had one (actually, one guy in my dorm had one, connected to his $10K high-end system). So, we kept buying albums.

    Then prices came down, we got jobs, and we bought CDs of the groups we had listened to, and reveled in the CD experience. The sound was so much clearer than cassettes that we didn't realize that a lot of the early transfers were awful.

    Over time, many of our albums got remastered - first by Mobile Fidelity Sound Lab (who had made some pretty killer vinyl recordings), and then by the labels as they realized there was some money in it.

    By this point, you're wondering if there is a point, or if it's just an onion belt.

    Anyway, I have a fair number of CDs that are early transfers, and I'd like to replace them, but it can be a bit of a pain to find out what specific albums have been remastered, when they came out, etc.

    Enter Progrography, which has a bunch of information about progressive rock, including re-releases. So, if I want to know what remaster to get for Who's Next, there's a page that gives me all the info. Well, some of the info - some of the pages are a bit out of date - but what's there seems to be pretty good.

    And if not, I can remember listening to AC/DC, which was the style at the time...


  • Eric Gunnerson's Compendium

    #region. Sliced bread or sliced worms?

    • 23 Comments

    (editor's note: Eric gave me several different options to use instead of sliced worms, but they were all less palatable. So to speak.)

    Jeff Atwood wrote an interesting post on the use of #regions in code.

    So, I thought I’d share my opinion. As somebody who was involved with C# development for a long time – and development in general for longer – it should be pretty easy to guess my opinion.

    There will be a short intermission for you to write your answer down on a piece of paper. While you’re waiting, here are some kittens dancing

    Wasn’t that great? No relation, btw.

    So, for the most part, I agree with Jeff.

    The problem that I have with regions is that people seem to be enamored with their use. I was recently reviewing some code that I hadn’t seen before, and it had the exact sort of region use that Jeff describes in the Log4Net project. Before I can read through the class, I have to open them all up and look at them. In that case, regions make it significantly harder for me to understand the code.

    I’m reminded of a nice post written by C# compiler dev Peter Hallam, where he talks about how much time developers spend looking at existing code. I don’t like things that make that harder.

    One other reason I don’t like regions is that they give you a misleading impression about how simple and well structured your code is. The pain of navigating through a large code file is a good reason to force you to do some refactoring.

    When do I like them? Well, I think that if you have some generated code in part of a class, it’s okay to put it in a region. Or, if you have a large static table, you can put it in a region. Though in both cases I’d vote to put it in a separate file.

    But, of course, I’m an old dog. Do you think regions are a new trick?
