Posts
  • Eric Gunnerson's Compendium

    Default parameters in C#

    • 4 Comments

    From an internal discussion we're having on the advisability of using default parameters in C#:

    Currently, the pain and limitation of doing overloads forces you to rethink how a method should work. Consider the following:

     Process(int a);

    Process(int a, float b);

    Process(int a, float b, string c);

     

    If I now need to change how that works in some situations, I could add a boolean to control that behavior, but it’s not obvious how to add it to the current overload scheme. I can do something like:

     

    Process(int a);

    Process(int a, bool doBackground);

    Process(int a, float b);

    Process(int a, float b, bool doBackground);

    Process(int a, float b, string c);

    Process(int a, float b, string c, bool doBackground);

     

    But not only is that a bit hard to write, it’s a bit confusing, and if I need to add another parameter in the future, I’m pretty much SOL. So, that forces me to consider the alternatives; should I go with a “Settings” class (like XmlWriterSettings), should I live with it, or should I think about refactoring the process to simplify things? That forced stop gives me the chance to think about better ways to approach things.
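    As a concrete (if minimal) sketch of the settings-class option – ProcessSettings and its members are made-up names here, just to illustrate the shape, not part of any real API:

    public class ProcessSettings
    {
        public bool DoBackground { get; set; }   // defaults to false
        public bool WriteToLog { get; set; }
    }

    public void Process(int a, float b, string c, ProcessSettings settings)
    {
        // adding an option later means adding a property to ProcessSettings,
        // not another overload
    }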

     

    With default parameters, it’s going to be really tempting to just add a default, and it’s more likely you’ll end up with methods like this:

     

    Process(int a, float b, string c, bool doBackground = false, bool writeToLog = true, string database = null, string method = "jumble");

     

    That is bad not only for the caller of the API, but it also suggests that the Process method is pretty complex internally.

     

    On the other hand, I can’t count how many times I’ve written a well-constrained series of overloads that purely added in default values and had to write nearly duplicate xml docs for each of them, and I’d be really happy to save that time and not have those methods clutter up the code. 
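    For that well-constrained case, a single default-parameter method does the job – here’s a sketch, where the 0f and null defaults are assumptions on my part:

    // Replaces three overloads that existed only to supply default values
    // (each of which needed its own XML docs):
    //
    //     Process(int a);
    //     Process(int a, float b);
    //     Process(int a, float b, string c);
    //
    public void Process(int a, float b = 0f, string c = null)
    {
        // real work here
    }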

     

    Or, to put it another way, default parameters are great if you use them to simplify scenarios that you would have written with overloads. If you start doing things that would be hard to express in overloads, I’d look harder at the overall design.

     

  • Eric Gunnerson's Compendium

    Writing Tests for HealthVault Applications

    • 0 Comments

    We have added some useful new functionality designed to make it easier to test a HealthVault application.

    The existing HealthVault SDKs didn’t make it easy if you wanted to write isolated (or method-level) unit tests for features that talked to the HealthVault Platform. You could usually do it by designing your own layer that could return simulated results (sometimes called “mocking” in agile methodologies), but that was sometimes difficult because of the way the SDK worked.

    In this release, we’ve made some changes that should make this a whole lot easier.

    For the sake of discussion, consider a bit of application code that fetches medications from HealthVault and filters them:

        HealthRecordItemCollection GetNewMedications(HealthRecordAccessor record)
        {
            HealthRecordSearcher searcher = record.CreateSearcher(Medication.TypeId);
            HealthRecordItemCollection items = searcher.GetMatchingItems()[0];
            // filter items here...

            return items;
        }


    We want to write a test that verifies that the “filter items here” part of the method works correctly, and we’d like the test to run without talking to the HealthVault platform. That’s not easy to do in the current SDK, because there’s no way to control what comes back from GetMatchingItems().

    In the new SDK, there is a way to do that. If we debug down we will find that the call to GetMatchingItems ends up in the following method in a new class named HealthVaultPlatform.

     

        public static ReadOnlyCollection<HealthRecordItemCollection> GetMatchingItems(
            ApplicationConnection connection,
            HealthRecordAccessor accessor,
            HealthRecordSearcher searcher)
        {
            return HealthVaultPlatformItem.Current.GetMatchingItems(connection, accessor, searcher);
        }


    The HealthVaultPlatform class centralizes all operations (with one exception I’ll cover later) in a single class – if the SDK needs to talk to HealthVault, it goes through that class. You can call into that class directly if you wish, or just troll through it to see what operations can be performed.

    To create our test, we are going to be hooking in underneath that level. The method above just forwards into a method in the HealthVaultPlatformItem class, and that class provides a way for us to override the behavior.

    To get started, we need to create a class that derives from HealthVaultPlatformItem and overrides the GetMatchingItems() method. The first version looks like this:

    public class HealthVaultPlatformItemMock : HealthVaultPlatformItem
    {
        HealthRecordItemCollection _itemsToReturn;

        public HealthVaultPlatformItemMock(params HealthRecordItem[] items)
        {
            _itemsToReturn = new HealthRecordItemCollection(items);
        }

        public override ReadOnlyCollection<HealthRecordItemCollection> GetMatchingItems(
            ApplicationConnection connection,
            HealthRecordAccessor accessor,
            HealthRecordSearcher searcher)
        {
            List<HealthRecordItemCollection> collections =
                new List<HealthRecordItemCollection>();
            collections.Add(_itemsToReturn);

            return new ReadOnlyCollection<HealthRecordItemCollection>(collections);
        }
    }


    We can use it like this (this is an NUnit test):

        [Test]
        public void GetMatchingItems()
        {
            Medication medication = new Medication(new CodableValue("Ibuprofen"));
            Medication medication2 = new Medication(new CodableValue("Vitamin C"));

            HealthRecordItemCollection newItems = null;
            HealthVaultPlatformItemMock mock = new HealthVaultPlatformItemMock(medication, medication2);
            HealthVaultPlatformItem.EnableMock(mock);
            ApplicationConnection connection = new ApplicationConnection(Guid.NewGuid());
            HealthRecordAccessor accessor = new HealthRecordAccessor(connection, Guid.NewGuid());
            newItems = GetNewMedications(accessor);
            HealthVaultPlatformItem.DisableMock(mock);

            Assert.AreEqual(2, newItems.Count);
            Assert.AreEqual("Ibuprofen", ((Medication)newItems[0]).Name.Text);
            Assert.AreEqual("Vitamin C", ((Medication)newItems[1]).Name.Text);
        }


    When the call to GetMatchingItems() gets down to HealthVaultPlatformItem, it will end up calling our mocked method rather than the built-in one.

    The code requires us to do a few things:

    1. Create an instance of the mock class.
    2. Enable the mock.
    3. Disable the mock.

    We can make it nicer by having the mock class itself handle enabling and disabling the mock, using the following:

    public class HealthVaultPlatformItemMock : HealthVaultPlatformItem, IDisposable
    {
        HealthRecordItemCollection _itemsToReturn;

        public HealthVaultPlatformItemMock(params HealthRecordItem[] items)
        {
            _itemsToReturn = new HealthRecordItemCollection(items);
            HealthVaultPlatformItem.EnableMock(this);
        }

        public override ReadOnlyCollection<HealthRecordItemCollection> GetMatchingItems(
            ApplicationConnection connection,
            HealthRecordAccessor accessor,
            HealthRecordSearcher searcher)
        {
            List<HealthRecordItemCollection> collections = new List<HealthRecordItemCollection>();
            collections.Add(_itemsToReturn);

            return new ReadOnlyCollection<HealthRecordItemCollection>(collections);
        }

        #region IDisposable
        ~HealthVaultPlatformItemMock()
        {
            Dispose(false);
        }

        /// <summary>
        /// Disposes the mock and re-enables the default behavior.
        /// </summary>
        public void Dispose()
        {
            Dispose(true);
            GC.SuppressFinalize(this);
        }

        /// <summary>
        /// Disables the mocking.
        /// </summary>
        /// <param name="disposing"></param>
        protected void Dispose(bool disposing)
        {
            HealthVaultPlatformItem.DisableMock();
        }

        #endregion IDisposable
    }

    That allows us to simplify our test code to this:
     
        [Test]
        public void GetMatchingItems()
        {
            Medication medication = new Medication(new CodableValue("Ibuprofen"));
            Medication medication2 = new Medication(new CodableValue("Vitamin C"));

            HealthRecordItemCollection newItems = null;
            using (HealthVaultPlatformItemMock mock = new HealthVaultPlatformItemMock(medication, medication2))
            {
                ApplicationConnection connection = new ApplicationConnection(Guid.NewGuid());
                HealthRecordAccessor accessor = new HealthRecordAccessor(connection, Guid.NewGuid());
                newItems = GetNewMedications(accessor);
            }

            Assert.AreEqual(2, newItems.Count);
            Assert.AreEqual("Ibuprofen", ((Medication)newItems[0]).Name.Text);
            Assert.AreEqual("Vitamin C", ((Medication)newItems[1]).Name.Text);
        }


    Special Classes

    There are a few classes where it’s not as straightforward to create a test instance. For classes such as ServiceInfo, we don’t provide a way to create and modify them directly. To create an instance of those classes, you need to derive a new class and use that instead:

    public class ServiceInfoTest : ServiceInfo
    {
        public ServiceInfoTest(string version)
        {
            Version = version;
        }
    }


    and the associated test uses this class instead of ServiceInfo:

        [Test]
        public void GetServiceInfoTest()
        {
            ApplicationConnection connection = new ApplicationConnection(Guid.NewGuid());

            ServiceInfoTest serviceInfo = new ServiceInfoTest("V2.x");

            ServiceInfo serviceInfoBack = null;
            using (HealthVaultPlatformInformationMock mock = new HealthVaultPlatformInformationMock(serviceInfo))
            {
                serviceInfoBack = connection.GetServiceDefinition();
            }

            Assert.AreEqual("V2.x", serviceInfoBack.Version);
        }


     

    Limitations

    We currently don’t have mockable interfaces for blob operations. We hope to do that in a future release.

  • Eric Gunnerson's Compendium

    HealthVault Event Notifications

    • 0 Comments

    The HealthVault platform now provides the ability to notify applications when specific conditions are met.

    A scenario

    A blood-pressure-tracking application wants to be notified whenever a new blood pressure measurement is added to any of the user records that the application has access to, so it can perform some operation with the data.

    With previous releases, the only way to do this was for the application to periodically call GetUpdatedRecordsForApplication(), and then look at each record that was updated to see if the update was a new blood pressure instance.

    The solution

    Each application can now create a series of subscriptions, where each subscription specifies the event to detect and how to notify the application when the event occurs.

    The BloodPressureTracker application creates a new subscription, specifies that it wants to be notified when a blood pressure measurement is added, updated or removed, and that the notification should be sent to www.example.com/notificationBloodPressurePage.ashx. The subscription is persistent until the application deletes it.

    The notification page must be in a location that is accessible to HealthVault, which means it is accessible to other internet programs. To allow the application to verify that a notification came from the HealthVault platform, the application registers a key with the subscription, and when the notification arrives the application can verify that the HMAC in the message is identical to one computed by the application.

    The life and times of a notification

    Notification Dispatch

    The dispatching of a notification happens on the HealthVault Platform.

    • An operation such as PutThings is performed on the HealthVault Platform.
    • The HealthVault platform finds subscriptions that match the event.
    • The HealthVault platform notifies the application using the following steps:
    • The key registered with the matching subscription is used to create an HMAC of the notification payload.
    • That hash, a version id that was specified with the key, and the subscription id are included in the Authentication header of a request.
    • The request is sent to the URL defined in the subscription as a POST with the XML notification text as the POST payload.
    • The server waits for a response.
    • If it gets a “200 OK” response, it considers the notification to be delivered.
    • If it gets any other response or does not receive a response, it will hold onto the notification and try again later.
    • If the notification cannot be delivered after a period of days (currently set to 10 days but subject to change), the notification is abandoned.

      Notification Processing

      The notification processing happens in the HealthVault application.

    • The notification handler reads the XML notification text into a string.
    • The key version id, subscription id, and HMAC of the notification payload are extracted from the authentication header.
    • The notification handler determines which subscription was notified based on the key version id that was passed.
    • The expected key is determined based on the key version id that was passed. This allows keys to be updated to new versions while not breaking the handling using the old keys.
    • An HMAC of the XML notification text is calculated and compared to the one passed in the header. If the HMACs do not match, the notification should be ignored and discarded, as it did not originate from the HealthVault service (a sketch of this check follows below).
    • The notification handler returns a status of “200 OK” so that the HealthVault platform knows that the delivery was successful.
    • The XML notification text is processed.

      The processing of the notification should be performed on a separate thread to prevent the possibility of taking so much time that the timeout is reached.
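
      As a rough sketch of that verification step – the helper’s name, the text encoding, and the choice of HMAC-SHA256 are my assumptions for illustration; check the eventing documentation and the sample application for the actual algorithm and header format. (HMACSHA256 lives in System.Security.Cryptography and Encoding in System.Text.)

          // Sketch only: hmacFromHeader and subscriptionKey come from the
          // Authentication header and the key registered with the subscription.
          static bool IsNotificationAuthentic(
              string notificationXml,
              string hmacFromHeader,
              byte[] subscriptionKey)
          {
              using (HMACSHA256 hmac = new HMACSHA256(subscriptionKey))
              {
                  byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(notificationXml));
                  string computedBase64 = Convert.ToBase64String(computed);

                  // If the values don't match, the notification did not come from
                  // HealthVault and should be discarded.
                  return computedBase64 == hmacFromHeader;
              }
          }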

      Event types and notification methods

      For this release, the platform supports one event type – a change (add/update/delete) of an instance of a specific set of data types in a user’s record – and one delivery method – over an https: connection. We are planning to extend support in future releases – if you would like to influence which events and delivery methods we consider, please send us feedback.

      Health Record Item Changed Event details

      The health record item changed event passes the following information in the notification:

      • The person id and record id that specify the record in which the change was made.
      • A list of the health record item ids (aka “thing ids”) that were changed.

      After the notification is received the application will need to fetch the item to determine what change was made. If the object was deleted, it will not be returned from the fetch operation.

      Limitations

      Notification URL

      The notification URL must be on the same domain as the action url that is registered with the application.

      Authorization

      The user must have granted the application offline read access to the data type that the subscription refers to.

      Number of subscriptions

      An application can only register 25 subscriptions at a time. This number is subject to change.

      Delivery timeliness and guarantee

      The HealthVault platform makes a “best effort” to deliver each notification in a timely manner, but does not guarantee delivery. It is not designed for real-time monitoring scenarios.

      Notification of changes only

      Notifications are delivered only for changes that are detected in records – the platform does not notify for items that are already existing in a record when the user first authorizes the application, nor does it notify for deletion if a user de-authorizes the application.

      Eventing sample and test application

      We have created a sample application which serves three purposes:

      1. It demonstrates how to use the subscription manager API calls to create, modify, and delete subscriptions.
      2. It provides a sample implementation of a subscription notification handler that processes incoming notifications.
      3. It can send simulated notifications to a notification handler, which makes it easier to develop and debug handlers.

      Getting started

      The first time that you run the application, it will generate an application id and a key for you to use. The sample will tell you how to properly define these in the web.config file.

      Managing subscriptions

      The management part of the application is pretty simple – you merely add a new subscription and then list the data type ids that you want to be monitored.

      Testing notification handlers

      In many cases, developer machines are not directly reachable from the internet and therefore there is no address that can be used in a subscription. To make it easier to develop notification handlers, the sample application can send simulated notifications to a notification handler for debugging purposes. It provides the following options:

      Notification Destination

      Choose between the URL defined in the subscription, the test notification handler defined in the project, or a URL that you enter.

      Authentication

      Choose Normal to have correct authentication headers, send bad HMAC to send an HMAC that is incorrect, or send bad key version to send a key version that is different than the one in the subscription.

      Instances

      The sample application can generate fake instance ids (if you just want to check that the notification handler is set up correctly), send an empty instance list, or select actual instance IDs from the current record.

  • Eric Gunnerson's Compendium

    Naked came the null delegate

    • 0 Comments

    A few weeks ago James Curran came up with the idea of having a number of .NET bloggers (or, in my case, bloggers who remember vaguely what .NET is about) write a serial story. I, who am easily flattered by the smallest of attentions to my previous brush with semi-fame, signed on.

    And then when it came around to me, I procrastinated for a few days, wrote something I didn’t like and threw it away, wrote something I liked that I couldn’t figure out how to fit into the existing story, then finally wrote something that I’m somewhat fond of that fits into the story, more or less. Which is kind of the point.

    My contribution is here. I recommend reading the first two chapters so that you won’t be lost. You can also read James' explanation for the title.

    If you have questions about the obscure parts (ie – what is he writing about) feel free to ask in comments, and I’ll try to answer when I get a few seconds away from my adoring fans.

  • Eric Gunnerson's Compendium

    HealthVault SDK and Visual Studio 2005

    • 4 Comments

    The HealthVault SDK is currently built on top of the .NET 2.0/Visual Studio 2005 toolset. We are thinking about moving forward a few years and switching to the .NET 3.5/Visual Studio 2008 toolset, but would like some feedback from customers on what they are using first.

    If you are building HealthVault applications, can you reply with the version of VS that you are using? Thanks.

  • Eric Gunnerson's Compendium

    Photographers at Microsoft Fundraiser

    • 0 Comments

    There’s a fairly active photography alias at Microsoft, and last year during October – the annual Microsoft Giving Campaign – about 200 photographers got together and produced a Blurb book titled “Photographers @ Microsoft”. They put it on sale for a price that would raise $25 per copy.

    They ended up raising $50,000.

    This year there has been more participation, and the book is printed on an offset press. I’ve seen advance copies and they’re as nice as any coffee table book you’ve seen (well, perhaps not as nice as Kramer’s…). They are slightly cheaper than the previous year’s books (offset printing is cheaper if you print enough), and still raise $25 per copy.

    You can preview and order the book here (if you’re at MS, enter your alias and employee number and matching will happen to make it $50 a copy).

    If you click on the image above, you can see small versions of the photos – they’re stunning. If you look carefully, you might find my contribution.

    Bubo bubo

  • Eric Gunnerson's Compendium

    Bohemian Rhapsody… on the slide whistle…

    • 0 Comments

    Just wonderful.

    I love how he went to the trouble to overdub it the way the original video was done, and matched the cheesy video.

  • Eric Gunnerson's Compendium

    Powerpoint, audio, and packaging…

    • 2 Comments

    This is a post about how to take conference audio and add it to a powerpoint presentation to give you a self-contained package you can give out. It took me a while to figure out how to do this effectively, so I’m hoping this will help others.

    But, since I’m all about the story, there’s a bit of background first. If you just want to understand the mechanics of how to do it, scroll down until you see “Mechanics”.

    Earlier this year I did a presentation (with Lowell Meyer) for the Microsoft Connected Health Conference, entitled “Practical HealthVault: Challenges and Opportunities”.

    The goal of the deck was to cover the things that are different and/or confusing about HealthVault – the things that we’ve answered over and over in the past couple of years. We wanted to come out with something that we could give to people who were new to HealthVault (anybody from the technical side, be they developer or manager) and therefore make our lives easier. And – which will come as no surprise to those who have seen me speak – I was interested in the adulation of my fans.

    In both writing and presenting, I’m a big fan of progressive revelation, where you start simple and build things up. For this talk, that meant a lot of custom animations in powerpoint. A *lot* of custom animations in powerpoint. Five of my slides have more custom animation steps than will fit in the custom animation box on the side.

    We had a company doing video production for the conference, and a few weeks after our presentation (which was a lot of fun – I always forget how much I enjoy presenting), the video showed up.

    It consists of a lot of mostly-dark shots of us presenting, cutting back and forth with the slides. If they happen to be showing the slides when the animation happens, things generally work okay, but at times they showed the design view. And the resolution is what you get with video, which isn’t great.

    What I want is a narrated presentation, so I set out to do that.

    Mechanics

    1. Pulling the audio off of the video

    For this I used AoA Audio Extractor, which gave me a 155 MB MP3 file.

    2. Editing the audio

    For editing the audio, I used Audacity, which is pretty darn nice. My one caveat is that with the version I have, you can’t perform any operations on the audio if the audio is paused. So, you have to hit “stop” on the transport controls and then do a trim/cut/export/whatever.

    If you just took that whole audio track and put it on a powerpoint presentation, it would work fine at the beginning but would get out of sync on some machines. To prevent this from happening, we need to use per-slide audio.

    It’s also true that working with a 90-minute track is not a lot of fun, so breaking it up will help in that realm as well.

    I started by going through the audio track and putting labels on all of the slide breaks. Label support is there to break albums up into songs, but it works very well for this as well. I had the presentation open on another monitor so I could reference it during the audio. Name the labels “Slide<x>”, where <x> is the number of the slide. Make sure to put one at the beginning for the start of the presentation.

    Once you have the labels, you choose “File->Export multiple”, and it creates individual .wav files for each slide.

    At that point I walked through all the slides and did some judicious editing. This is a bottomless pit of time consumption if you let it, so I tried to just pull out the things that were distracting where it was simple to do so. I also highly recommend adding 5 seconds of silence at the end of each slide’s audio – I didn’t do this initially and had to go back and re-edit all of them.

    3. Conversion to lossy format

    Audacity gave me files in .wav format, which are a bit wasteful in size. I downloaded LAME and converted them into .mp3s. You can use the encoder of your choice (the list of formats that PP supports is here); I used LAME because it’s what my home system runs on and it’s very simple to use from the command line.

    4. Adding the audio to your presentation

    The following steps are all manual – if you have powerpoint 2010 (which has a macro recorder) and/or want to write some macros, you may be able to automate it. I just suffered doing it by hand in PP 2007.

    1. Select the slide
    2. Pick the insert tab
    3. Click on the sound icon
    4. Pick the appropriate audio file for the slide.
    5. Choose “automatic” when it asks you if it should start playing automatically.
    6. Drag the sound icon someplace that isn’t too annoying.
    7. In the custom-animation tab, drag the audio to the top of the list
    8. Right click on the audio, choose “Timing”, and then set “repeat” to “until end of slide”. If you don’t do this, the audio will only play until you hit the spacebar to start your animation.
    9. Repeat this 4000 times for the rest of your slides.

    At this point, you should be able to start the slideshow and have your audio start playing.

    5. Recording the animation timings

    For some reason that is not apparent to me, powerpoint calls this “rehearsing” the timings. And by default, it only supports it for the whole presentation, and there’s no (obvious) way to do it for a single slide.

    However, my mad search-fu led to a workaround. When you rehearse the timings, it only does it for the slides that are visible, so you can hide all the slides but one and rehearse the timings only for that slide.

    Here’s the progression:

    1. Hide all the slides
    2. Unhide the one you want to record the timings for.
    3. On the slideshow page, make sure “use rehearsed timings” is not checked.
    4. Choose “rehearse timings”
    5. Listen to your audio and hit the spacebar at the appropriate point to run the animation.
    6. When your audio is done, wait a couple of seconds and hit pause on the rehearse controls, then close the window.
    7. PP should ask you if you want to save the timings. You do.
    8. Repeat for each slide.

    You may find that you need to slightly modify your animations – some of mine seemed to change the “1” elements to “0” elements so they showed up too early. I think this happened when I added the audio to the slides.

    The 5 seconds of silence is critical here. Without it, by the time you get to step 6 your audio will have started to repeat.

    We had a few slides with 2 minutes of presentation and 10 minutes of questions. If you have this, stop the recording when the animation is done, and accept the timings. Then go to the Animations tab – on the right side you will see “advance slide automatically after” and a time. Look at the length of the audio for that slide (any player will tell you that), and set the value to 2 seconds shorter than that. That will put the advance right in the middle of the silent section you added.

    6. Polish

    At this point, you may need to polish the animation on certain slides. You may have to go back and redo some audio.

    7. Publish

    At this point you will want to publish the presentation. My plan is to just use PP’s publish to folder functionality, though there are also ways to publish to video if you are willing to deal with the loss in resolution. Publish to HD video might be worthwhile, but it would be pretty big, and right now the publish to folder is about 80MB in size.

  • Eric Gunnerson's Compendium

    Metricism

    • 4 Comments

    Your neologism for the day...

    Metricism

    A devotion to creating and implementing metrics for a system or process while losing sight of the real goal.

    Examples:

    • Evaluating software developers on how many bugs they fix.
    • Evaluating newsgroup interaction quality based on the percentage of posts answered in the first day.
  • Eric Gunnerson's Compendium

    Floating point numbers and string representations

    • 0 Comments

    I’ve recently been doing some work on the HealthVault SDK to improve the consistency of how it deals with floating-point numbers, and thought that the information would be of general interest.

    First off, I’ll note that if you’re new to the often-surprising world of floating-point arithmetic, a few minutes reading What Every Computer Scientist Should Know About Floating-Point Arithmetic would be a good introduction.

    While my code examples are C#, this is a general issue, not just a .NET one.

    Anyway, off to the issue. Consider the following:

    static void Main(string[] args)
    {
        double original = 3.1;
        string stringRepresentation = original.ToString();
        double parsed = Double.Parse(stringRepresentation);

        Console.WriteLine("Equal: " + (original == parsed).ToString());
    }

    What is the output of this program?

    If you said “Equal: true”, you are correct. You might try a few more numbers before deciding that this is a general solution. Or maybe you don’t even think about it.

    But what if you choose a different number:

    double original = 667345.67232599994;

    What is the output this time?

    It is “false”.

    I hope that there was sufficient foreshadowing earlier in the post so that you are not uncomfortably disturbed by this turn of events.

    We could modify our code to check whether the two numbers are equal within an epsilon value. Or, we could dig a bit deeper…

    If we look at Wikipedia, we’ll find that under the IEEE 754 standard for floating-point arithmetic, there are 53 significant bits in the significand of a double. That is not quite 16 decimal digits, so to be correct, we treat the numbers as if they only have 15 digits of precision when we convert them to strings. The last few bits are ignored.

    Another way of saying that is to say that it is possible to find two numbers that have the same ToString() representation but are different in those last few bits. Which is what is going on here.

    If we are printing out numbers for somebody to look at, the ToString() behavior is what we want, because the numbers really only have 15 digits of precision.

    In this scenario, however, what we want is a string format that will ensure that we get the exact same number back.  We can do that by using the “R” numeric format:

    string stringRepresentation = original.ToString("R");

    That gets us the behavior that we want. We could also have called XmlConvert.ToString(), which has the same behavior.
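
    Putting it all together, here’s a small sketch of the round trip with the same number as above (XmlConvert lives in System.Xml); the expected results follow from the discussion above:

    static void Main(string[] args)
    {
        double original = 667345.67232599994;

        // Default ToString() keeps ~15 digits, so the last few bits are lost.
        double parsedDefault = Double.Parse(original.ToString());

        // The "R" (round-trip) format preserves the exact value...
        double parsedRoundTrip = Double.Parse(original.ToString("R"));

        // ...as does XmlConvert.ToString().
        double parsedXml = XmlConvert.ToDouble(XmlConvert.ToString(original));

        Console.WriteLine("Default:    " + (original == parsedDefault));   // False
        Console.WriteLine("Round-trip: " + (original == parsedRoundTrip)); // True
        Console.WriteLine("XmlConvert: " + (original == parsedXml));       // True
    }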

  • Eric Gunnerson's Compendium

    Uncyclopedia and Llamas

    • 2 Comments

    For some unfathomable reason, I didn't know about Uncyclopedia until recently. Like many things internet-related, there's a lot of junk there, but there is also some good stuff.

    Last night I finished my first contribution, an article about llamas.

  • Eric Gunnerson's Compendium

    A brief, incomplete, and mostly wrong history of programming languages

    • 0 Comments

    A brief, incomplete, and mostly wrong history of programming languages

  • Eric Gunnerson's Compendium

    Thoughts on “Thoughts on TDD”…

    • 0 Comments

    Brian Harry wrote a post entitled “Thoughts on TDD” that I thought I was going to let lie, but I find that I need to write a response.

    I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion.

    Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my “Seven Deadly Sins of Programming” series.

    Now, on to what he doesn’t like.

    He says that he finds it inefficient to have tests that he has to change every time he refactors.

    Here is where we part company.

    If you are having to do a lot of test rewriting (say, more than a couple of minutes work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don’t need to test (internal implementation details rather than external behavior), your code perhaps isn’t as decoupled as it could be, or maybe you need a visit to refactorers anonymous.

    I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky.

    *Unless* we have a set of tests that have great coverage. And TDD (or “Example-based Design”, which I prefer as a term) gives those to us. Now, I don’t know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I’ve worked with – and I count myself in that bucket – the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.

    For me, it all comes down to the answer to the following question:

    How do you ensure that your code works now and will continue to work in the future?

    I’m willing to put up with a little inefficiency on the front side to get that benefit later. It’s not the writing of the code that’s the expensive part, it’s everything else that comes after.

    I don’t think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don’t help the guy in the future who doesn’t know what conditions were important if he has to change your code.

    His second point is that he doesn’t like backing into an architecture (go read his post to see what he means).

    I’ve certainly had to work with code that was like this before, and it’s a nightmare – the code that nobody wants to touch. But that’s not at all the kind of code that you get with TDD, because – if you’re doing it right – you’re doing the “write a failing test, make it pass, refactor” approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor.

    I also think Brian is missing an important point.

    We aren’t all as smart as he is.

    I’m reminded a bit of the lesson of Intentional Programming, Charles Simonyi’s paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.

    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don’t have somebody of Brian’s talents to help you out. And that’s a good thing.

  • Eric Gunnerson's Compendium

    101110.11

    • 9 Comments
    101110.11
  • Eric Gunnerson's Compendium

    The power of "ouch"...

    • 0 Comments

    From my bicycle blog...

    Three of them

  • Eric Gunnerson's Compendium

    The power of "no"...

    • 3 Comments

    Eric Brechner wrote an interesting post titled "Don't panic", about how to deal with requests. I sometimes agree and sometimes disagree with what Eric writes, but it's usually a pretty good read.

    In this case, I agree with his approach, but disagree with his advice.

    He advocates that when anybody comes to you with a request, your first response should always be "Yes, I'd be happy to help". Which is wrong.

    Way back when my daughter was 4, I was sitting on the couch and she came and asked me for something. My memory is a bit hazy on what she asked for, but I think it was some sort of snack. I didn't think deeply about it, and made a quick decision and answered "no". She got pretty upset and started crying and asked again.

    At that point I had a quandary. I thought about it a bit more, and realized that her request was a reasonable one (lunch was a long time ago) and that there was really no reason not to grant it, but I knew that I couldn't, because at that point it wasn't about the request but instead about the way it was asked and the pattern that changing my mind would set. I didn't want to set up a behavior where getting upset and crying is expected to make your dad change his mind. So I held firm.

    And felt really bad about it, because I made the wrong initial call.

    The whole point of the story - assuming that there is a point - is that you need to look not at the current interaction but instead at the meta-level. What message is your response sending? What pattern of interaction is it reinforcing?

    If somebody comes and asks you to do something and you say, "yes", once that word leaves your lips you can assume that the person making the request will think that they are getting everything they want from you. They have in mind a big feature delivered on an impossible date, and your job then becomes trying to scope their expectations down to something manageable, but even if you succeed they'll still be stuck on what they originally had in mind, and will be disappointed with you.

    Not to mention the fact that if you immediately say "yes", it seems like you're either working on unimportant stuff and/or not working hard enough.

    The right thing to do is to say "no", but in the right way. Something like "I'm/we're currently booked and have a lot of high-priority work planned, so I think that would be hard to fit in, but let me understand exactly what you're asking". That puts the requester in the position of having to convince you of the importance of what they want to do and be able to explain the details coherently.

    At that point, you can start the nuanced discussions that Eric talks about - talking about the details of what they want, why they think it's important, etc. It becomes very clear very fast whether the requesters have done their homework, and if they haven't you can point out the additional things they need to figure out before you talk more. If they have done their homework, you can discuss where you think the feature ranks (always subject to the approval of whoever approves features), and how it might be broken apart/modified to bring it in earlier.

    At that point, the answer usually becomes, "yes, we can do <x> in timeframe <y>", which makes the requester happy - you've spent the time to explain to them why you put a specific priority on the request and you've made your life harder by rearranging things to slot their request into your current schedule and (probably) taking on the task of explaining to everybody who's below the new feature's priority why they aren't getting what they expected.

    And, you've set up a healthy pattern of behavior. Requesters are likely to do a bit of homework before they talk to you, and you are busy but willing to take on new work if it makes sense to do so.

    And that's the power of "no".

  • Eric Gunnerson's Compendium

    Lucky

    • 0 Comments
    Sometimes you're in the right place at the right time...
  • Eric Gunnerson's Compendium

    HealthVault 0908 SDK Highlights…

    • 0 Comments

    The 0908 SDK has dropped, and I’d like to talk about some of the highlights of this release.

    SODA

    The big one is SODA, our name for the ability to write non-web-based HealthVault applications. SODA leverages our master/child application infrastructure in a different way, and will provide some nice additional capabilities (I have an app or two that I want to write using it). More details will follow in the near future.

    I should note that our current plan is to require SODA apps to deploy with a HealthVault redist package, so that it will be possible for us to service the SDK assemblies through Microsoft Update. We’re working on that but it’s not ready quite yet.

    Application Creation

    In previous releases, it was a bit cumbersome to create a new application – you had to create the certificate, upload it, copy helloworld, and update the web.config appropriately. We have extended the HealthVault Application Manager to make this easier.

    If you just want to clone HelloWorld to try something out, you can choose “Create New HelloWorld Sample”, select VB or C# as your language, decide where your project should live, and you’ll be ready to hit F5 in VS to run the project.

    Or, if you want to get fancy, choose “Create New Application”. This does the same thing as the HelloWorld approach, except that it will also create a new application certificate and start the registration process with the Application Configuration Center. Just hit F5, and you’re up and running with your new application.

    Certificate Storage

    Both of these approaches leverage the new ability to put an application’s certificate on the file store instead of in the certificate store. This should be more convenient in some situations. This is done by adding a key in the web.config file:

    <add key="ApplicationCertificateFileName" value="g:\mshealth\Nice App\cert\WildcatApp-3cdc0cea-6008-4c76-9169-36d44c3d63b4.pfx" />

    If the certificate has a password, that can be specified with the ApplicationCertificatePassword key.
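
    For example (the password value here is just a placeholder):

    <add key="ApplicationCertificateFileName" value="g:\mshealth\Nice App\cert\WildcatApp-3cdc0cea-6008-4c76-9169-36d44c3d63b4.pfx" />
    <add key="ApplicationCertificatePassword" value="your-certificate-password" />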

    We do caution that applications should take care not to put the certificate in a web-accessible directory.

  • Eric Gunnerson's Compendium

    Blog refactoring 2.0

    • 0 Comments

    In the beginning, I started this blog just to write. I wrote a lot of C# stuff, some regex stuff, some HealthVault stuff, and a bunch of irrelevant stuff.

    Then, at some point, I started the RiderX blog to write about some more cycling-specific stuff that I – in a rare moment of discernment – didn’t want to put on this blog.

    Since that time, I picked up a HealthVault team blog (well, two team blogs, actually), and most of the time I wanted to write, I didn’t want to write anything work related, and wanted a place to do a bit less self-editing than I do here.

    Yes, I know, it does boggle the mind a bit, that what I post here is self-edited…

    So, anyway, to make things short(er), I went domain name hunting, found to my surprise that “thegunnersons.com” was available, and spun up a personal blog there.

  • Eric Gunnerson's Compendium

    Question of the day

    • 1 Comments
    Is Cinderella related to Mozzarella?
  • Eric Gunnerson's Compendium

    Ratings from Eric's Vacation

    • 0 Comments

    I've decided to be totally derivative of (and likely considerably less good than) "The Book of Ratings". A great book.

    State highway signs

    California

    It's really a bit sad. They spend all that time putting up "enforced by radar" or "enforced by aircraft" signs, but deep down they know that nobody is going to pay any attention.

    To distract themselves, they put "80" at the end of all the freeway names around San Francisco.

    Rating: C

    Washington

    Boring signs, though the state highway signs do have a nice outline of George's head on them.

    Or perhaps I'm just jaded from my long association, and am desperately seeking my midlife crisis of signs.

    Rating: C

    Oregon

    Perhaps the Oregon highway department was beaten up by the other highway departments when it was little, but for whatever reason, Oregon likes to do things big, with an outsize "55" telling you that they really mean business, and colossal "do not enter" signs at least 10' on a side. Definitely signs with an attitude.

    On the minus side, they insist on putting up "end of speed zone" signs, forcing me to try to remember how fast I was allowed to go at some point in the past.

    Rating: B

    Vacation Vehicles

    Mom's Old Car

    Mom's old car - chosen so the offspring could drive in WA and OR - is a sweet nicely-preserved 1998 Honda Accord, returning a bit over 30 MPG for the trip.

    Rating: B-

    ATVs

    5 minutes of recorded safety lecture, and we're out in the largest sandbox on the west coast, 32000 acres of duney bliss. The last off-road experience I had was a 50cc Honda when I was 12, but I do remember the most important rules - a) You get it stuck, you dig it out, b) don't cross over the unmarked boundaries, and c) you break it, you pay us.

    Not really useful for getting from here to there, but a hecka lot of fun (as the kids would say).

    Rating: A

    Sand Rail

    First off, "Sand Rail" is a killer name, beating up on "Dune Buggy", and taking the lunch money of "ATV" ("Hey, I know! People love names that are acronyms").

    Four point racing harnesses, goggles, and a suggestion to keep our hands off the top rail "in case we roll", and the first pass is a 30 mph trip down a 45 degree slope and up another. Great fun once I pried my eyes open.

    Sand Dunes Frontier

    A++

    Aquatic Creatures

    Hippocampinae

    Sometimes confused with the hippocampus. Yeah, we get it, they're shaped like horses, and there are tiny fish who act as jockeys, riding them around the tanks in daily races.

    Sure, the dudes are the ones that give birth, guaranteed to make all the guys in the audience wince a bit.

    Rating: C

    Cephalopods

    The 007 of the water, with adaptive camo, water-jet propulsion, super grippers, and a built-in smoke screen.

    Rating: B

    Pinnipeds

    Cute and cuddly, intelligent, mischievous - everything you want in an aquatic animal. Except for the fact that they mostly swim under the water and when outside the water spend their time doing impressions of jumbo furry sausages.

    Rating: B+

    Missing mountains

    Mount St. Helens

    A bit less known than its larger and better-behaved brother to the north, St. Helens is widely held up as the epitome of a missing mountain.

    2/3 of a cubic mile of mountain disappeared in 30 seconds, killing 57 humans, 7,000 big game animals and 12 million hatchery salmon. It inconvenienced people throughout the state, causing millions of people to have to wash their cars ahead of schedule.

    The glowing growing dome is a nice touch if you can see it at night, but it could really use some daytime pizzazz.

    Rating: B

    Mazama

    First of all, erupting 3000 years before the advent of mass media is a bad career choice, and the PR is largely non-existent. You still see Jordan's name all over the place, so I think Mazama needs a new agent.

    But it's hard to fault it on execution. 25 cubic miles of mountain vanish. What do you put in its place? A 2000' deep lake.

    Brilliant.

    Rating: A

  • Eric Gunnerson's Compendium

    Introduction to HealthVault Development #13: More more than one person

    • 1 Comments

    In the last installment, we modified our application so that it could switch between family members for data display and entry. This time, we’re going to add a table at the top that shows the current weight for all family members.

    We add the table right after the <h1> title:

    Family Summary <br />
    <asp:Table ID="c_tableSummary" runat="server" BorderWidth="1px" CellPadding="2" CellSpacing="2" GridLines="Both"/>
    <br />

    Then we need a method to walk through the records and fetch the current weight from each of them. The following will do that:

    void GenerateSummaryTable()
    {
        TableHeaderRow headerRow = new TableHeaderRow();
        TableHeaderCell headerCell = new TableHeaderCell();
        headerCell.Text = "Name";
        headerRow.Cells.Add(headerCell);

        headerCell = new TableHeaderCell();
        headerCell.Text = "Weight";
        headerRow.Cells.Add(headerCell);

        c_tableSummary.Rows.Add(headerRow);

        foreach (HealthRecordInfo record in PersonInfo.AuthorizedRecords.Values)
        {
            HealthRecordSearcher searcher = record.CreateSearcher();

            HealthRecordFilter filter = new HealthRecordFilter(Weight.TypeId);
            filter.MaxItemsReturned = 1;

            searcher.Filters.Add(filter);

            HealthRecordItemCollection weights = searcher.GetMatchingItems()[0];

            if (weights.Count == 1)
            {
                TableRow row = new TableRow();

                TableCell nameCell = new TableCell();
                nameCell.Text = record.Name;
                row.Cells.Add(nameCell);

                Weight weight = weights[0] as Weight;

                TableCell weightCell = new TableCell();
                weightCell.Text = weight.Value.DisplayValue.ToString();
                row.Cells.Add(weightCell);

                c_tableSummary.Rows.Add(row);
            }
        }
    }

    Currently, there is no way to make a single request that fetches data for more than one record, so we need to create and execute a separate query for each one.

    Next Time

    With all the operations that we are performing, the application is running a little bit slowly. We’ll do some investigation into what’s going on, and see if we can’t make some improvements.
  • Eric Gunnerson's Compendium

    Thoughts on agile design and platforms

    • 3 Comments

    I started by writing a broad post about design, and it got away from me (apparently I should have spent some time designing the post first…), so I deleted it all and decided to write something shorter, and, with any luck, more understandable and useful.

    I’ve been reading some discussions about how design relates to Agile. Some teams get into trouble because they think that agile means “no design”, when in fact it means “right design”.

    The whole point of design in my mind is to save you work later on. There’s a sweet spot between no design and big design that makes sense in a particular situation. I don’t think that’s a unique insight at all, though I have seen groups who use the “design document” approach where there’s a document with 18 sections in it that everybody has to use.

    The two areas I would like to talk about are about what you are building and the scope of what you are doing right now. I’ll talk about scope first.

    The amount of design you should do depends on the scope of what you are doing, and scope in this situation doesn’t mean “amount of code change” (though it often correlates somewhat with that), it means “impact of code change on the customer”. This is obviously different for every change you make to the code, and my general guideline is that spending 5 minutes bouncing your thoughts off of somebody else generally gives you a good enough conclusion about how much design is required. Notice that I said “guideline” – I expect that developers can use their best judgement about when they can make changes without consulting with others.

    Some people would say that not having the opportunity to make a wrong choice about when to consult with others is a strength of pair programming. I think that’s probably true, but obviously only works for teams that do pair, and that’s not that common in my neck of the woods (and, I suspect in others).

    So, anyway, that’s what I think about scope, and, once again, I don’t think it’s a unique insight.

    My second point – that the amount of design depends on what you are building – is something I haven’t heard talked about much, especially in agile circles. Because most software developers deliver applications, the agile processes are described in that context.

    And in that context, I think that many developers do far too much design up front. You can spend 3 days writing something up that covers how you will do something, or you can spend 2 days doing early implementations and then know which one works better, and have real code to show people. Trying to figure out how things should work before you write them is often less efficient than just writing them when you need them.

    Given that I consider premature generalization to be the #1 sin of developers, no surprises there.

    But – and I think I may be finally getting to the point – that perspective comes from applications, where you own the code that you’re building, and refactorings can be done when you learn more.

    Platforms are different.

    If you are building a platform, things get a bit schizophrenic. Internally, you are an application – you can refactor the internals without a lot of impact elsewhere, and therefore the amount of design you do should keep that in mind.

    But externally, people depend on your APIs to stay the same. This means that, for a given feature, you need to get it right (or as close to right as you can) on your first release. It also means that you need to think about how the feature that you’re doing right now might be extended for things you might do in the future.

    Which is exactly the thing that you shouldn’t be doing if you’re building an app, because it’s a pain, it’s expensive, you’ll make the wrong choices, and you’ll have to throw work away.

    In my current team, we own both applications and platform API, so we get to spend time in both of these areas.

    And now it’s lunchtime. I may write a future post on how I think you should do platform design.

  • Eric Gunnerson's Compendium

    RAMROD 2009 ride report

    • 0 Comments
    http://riderx.info/blogs/riderx/archive/2009/08/06/ramrod-2009.aspx
  • Eric Gunnerson's Compendium

    NASA images lunar descent stage and astronaut path on moon...

    • 0 Comments

    Just in time for the 40th anniversary of Apollo 11, a NASA orbiter images Apollo landing sites.

    The Apollo 14 site is very impressive.
