November, 2005

  • Eric Gunnerson's Compendium

    C# Content Strategist...


    I got an email from Elden Nelson (the alter-ego of this guy), who is a muckety-muck (well, perhaps only a single muckety) in MSDN, and he let me know that there is an opening at MSDN for the C# Content Strategist, a position that was at one time held (very ably) by Duncan.

    If being the primary owner for the C# dev center sounds appealing, this might be the position for you. If you want to find out more, let me know, and I'll hook you up with Elden.

  • Eric Gunnerson's Compendium

    debugging tips


    VS debugger developer Jim Griesmer has been blogging a series of debugger tips.

    I'm pretty excited about tracepoints - the ability to add logging without having to modify your code seems really useful.

  • Eric Gunnerson's Compendium

    Dave Zabriskie website...


    Monday, I came across a reference to a website by Team CSC racer Dave Zabriskie in, of all places, a copy of Bicycling magazine. Bicycling is not known for its race coverage, in the same way that Seattleites are not known for their winter driving prowess.

    Apparently, Zabriskie has become known for doing one-sentence interviews with riders in the peloton. Here's a nice one:

    I know the Tour is well over but I do have one more interview to share. It is with Thor Hushovd of Credit Agricole:

    DZ: Thor, what does it feel like to have the coolest name in the peloton?

    TH: I didn’t know it was a cool name.

    DZ: Trust me it is.

    TH: O.k. then. It feels pretty cool then.

    DZ: Thanks for the interview.

    You have to like somebody who does that.

  • Eric Gunnerson's Compendium

    Guide for snowboarders


    A while back, I wrote a "tongue-in-cheek" reference comparing snowboarders to drivers with cell phones. I got some negative feedback, including from my good friend Nick (reader #5 out of 20 total readers).

    On further reflection, I think that I may have been a bit unfair. Most snowboarders aren't bad people, they are just a bit misguided. So, I thought I'd try to improve the situation by giving some advice to those of you who prefer a single plank for your snow-riding activities:

    1. At most ski areas - with the possible exception of the Cascades during the winter of 2004-2005 - loose snow accumulates on the slopes. Unfortunately, it tends to accumulate more on the upper slopes than on the lower slopes, leading to a distinct snow deficit lower on the mountain. Do your best to scrape any loose snow off the upper slopes towards the lower slopes by sliding sideways down the slopes. This will make the mountain experience better for everyone.
    2. You don't get better by staying within your limits. You paid full price for your lift ticket (well, you might have paid $10 to that guy in the parking lot), so you deserve to use the whole mountain. Even if you have to slide sideways for 1200 vertical feet.
    3. You're good enough that you don't need to know what's behind you.
    4. Most ski areas have an ongoing peek-a-boo contest. To get the most points, find a slight rise on a busy slope, go just beyond the edge, and sit down. Say "peek-a-boo" as skiers appear at the top of the rise. Double your points by getting your friends to play along.
    5. Lift maze barriers are there for your comfort and convenience. If the line is slow, sit on the barrier.
    6. The skiers in your party are more tired than they let on. Take extra time to hook up to your bindings so that they have a chance to get a nice long rest.
    7. Congregate with other boarders wherever it's convenient. I suggest the top of the lift, right after it unloads, or at the entrance to the lift maze, but I'm sure you can come up with your own ideas.
    8. Come up with your own idea.


  • Eric Gunnerson's Compendium

    Regex 101 Exercise S6 - Change the extension on a file


    Regex 101 Exercise S6 - Change the extension on a file

    Given a filename including path, change the extension to .out.

    Input string example: 



    1. The best answer to this is really to use System.IO.Path.ChangeExtension(), but that wouldn't be much of a Regex exercise, now would it?
    2. It's not as simple as it looks
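    For comparison, here's a quick sketch of the non-regex route the first note mentions (the input path below is made up, since the exercise's sample string isn't shown):

```csharp
using System;
using System.IO;

class ChangeExtensionDemo
{
    static void Main()
    {
        // Hypothetical input path; the exercise's sample string is elided.
        string path = @"c:\temp\myfile.txt";

        // Path.ChangeExtension handles the cases (dots in directory names,
        // files with no extension) that make the regex version "not as
        // simple as it looks".
        Console.WriteLine(Path.ChangeExtension(path, ".out"));   // c:\temp\myfile.out
    }
}
```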
  • Eric Gunnerson's Compendium

    Background processing in ASP.NET


    As part of my bicycle climbs website, I need to spend some time calling a web service to fetch the elevation of 250 different points. Each call takes a few seconds.

    Ideally, what I would have is a way to start the processing but not have it block my normal page processing operation.

    Ideas? I looked at the MSDN docs here, but that requires me to continually refresh the page. I could do that if necessary if there's no other way.
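    For what it's worth, one possible approach (a sketch only - ASP.NET threading has its own pitfalls, and the method name and result storage here are invented) is to hand the work to a ThreadPool thread so the page request isn't blocked:

```csharp
using System;
using System.Threading;

class BackgroundFetchSketch
{
    static void Main()
    {
        // An event so this console demo can wait for the background work;
        // a real page would instead poll a cache or database for results.
        ManualResetEvent done = new ManualResetEvent(false);

        ThreadPool.QueueUserWorkItem(delegate(object state)
        {
            FetchElevations();   // the slow web-service calls
            done.Set();
        });

        Console.WriteLine("page processing continues immediately");
        done.WaitOne();
    }

    static void FetchElevations()
    {
        // Here the real site would loop over the 250 points, calling the
        // elevation web service and storing each result as it arrives.
    }
}
```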

  • Eric Gunnerson's Compendium



    Last night, I went on a bike ride with Richard Feynman (1918-1988). This was his first long ride, and like many less experienced riders, he was unprepared and got dehydrated, ultimately running off the edge of a road on a descent. The group retrieved him and his bike, got some fluid into him, and finished the ride. Maybe he was still recovering at the post-ride picnic, but he seemed unable to locate a table with the proper number of free spaces for our part of four (the two of us and two unnamed young females who he said would "be along shortly").

    Perhaps I'm being unfair, but I expected more from a legendary iconoclast and one of the greatest minds of the 20th century. Just because you win a Nobel prize for quantum electrodynamics it doesn't mean you get to hang out in the middle of the paceline and never take a pull at the front. And he didn't even thank me for the Camelback that I gave him...

  • Eric Gunnerson's Compendium

    Regex 101 Answer S5 - Strip out any non-letter, non-digit characters


    Remove any characters that are not alphanumeric.


    To remove these characters, we will first need to match them. We know that to match all alphanumeric characters, we could write:

    [a-zA-Z0-9]

    To match all characters except these, we can negate the character class:

    [^a-zA-Z0-9]
    It's then simple to use Regex.Replace():

    string data = ...;

    Regex regex = new Regex("[^a-zA-Z0-9]");

    data = regex.Replace(data, "");

    Another way of doing this would be to use the pattern:

    [^a-z0-9]

    and then create the regex using RegexOptions.IgnoreCase.

    Note: I've seen a few comments referring to Unicode and international characters. I haven't delved into that because I don't want to complicate the discussion, and, frankly, Unicode scares me. If you want the details, you can find them in the docs. For example, you can find out that \W is really equivalent to the Unicode categories [^\p{Ll}\p{Lu}\p{Lt}\p{Lo}\p{Nd}\p{Pc}].
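    Putting the pieces together into something runnable (the input string here is my own, since the post's sample isn't shown):

```csharp
using System;
using System.Text.RegularExpressions;

class StripDemo
{
    static void Main()
    {
        string data = "a1-b2_c3!";   // hypothetical input

        // Explicit character class...
        string result1 = Regex.Replace(data, "[^a-zA-Z0-9]", "");

        // ...or the shorter class plus the case-insensitive option.
        string result2 = Regex.Replace(data, "[^a-z0-9]", "", RegexOptions.IgnoreCase);

        Console.WriteLine(result1);   // a1b2c3
        Console.WriteLine(result2);   // a1b2c3
    }
}
```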

  • Eric Gunnerson's Compendium

    Peanut butter and software


    Brad wrote an interesting post about peanut buttering software, in which he asks if I've ever seen that.

    Yes, I've seen it. Numerous times.

    Within a given team, it doesn't happen that often. The problem is when you get to multiple teams. When you start a new development cycle, the teams all get asked "what are you going to do", and they come up with features to fill whatever amount of time is available (well, they overfill it, but you know what I mean). The problem is that there is sometimes no evaluation of the features that team X is spending their time on versus the features that team Y is spending their time on, so even if team X has some really high-value work to do and team Y has some low-value work to do, there is no load-balancing, and team X can't get done what needs to be done. Some teams are good enough to shift people around, but it's hard to do in many cultures.

    As you move to larger orgs, the problem only gets worse. Team A1 may have customers absolutely screaming to address an important area of the product while team B2 has plans to write something that nobody really asked for, but because the planning is done individually in the orgs, the right thing does not happen. I've seen this happen repeatedly. I've had customers who have said, "You're Microsoft, why can't you fix <x>?", where fixing <x> is absolutely the right thing to do, but can't be done because the resources are out doing less important stuff.

    This behavior is not to be confused with what I'll call "new stuff bias" - the bias towards adding new features rather than understanding what is wrong with the existing features and fixing them. This also drives customers - and anybody who talks with customers - crazy.



  • Eric Gunnerson's Compendium

    Ad Homonym

    Ad Homonym: The practice of attacking a person rather than the argument itself merely because it is expressed in words that sound alike.
  • Eric Gunnerson's Compendium

    Regex 101 Exercise S5 - Strip out any non-letter, non-digit characters


    An easy one for this holiday week...

    S5 - Strip out any non-letter, non-digit characters

    Remove any characters that are not alphanumeric from a string.



  • Eric Gunnerson's Compendium

    Regex 101 Exercise S4 - Extract load average from a string - Discussion


    Exercise S4 - Extract load average from a string

    The shop that you work with has a server that writes a log entry every hour in the following format:

    8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13

    You need to write a utility that lets you track the load average on an hourly basis. Write a regex that extracts the time and the load average from the string.


    This is pretty close to the first thing I ever did with regular expressions. I had some logfile information I needed to process. I started writing in C++, and if you've ever tried to do lots of character manipulation in C++, you know how much fun that can be.

    For this sort of thing, I like to look for good delimiters. To get the time, I'll use "up" as the delimiter, which means I can match with:

    .+\s*up

    The \s is something new; it means "any whitespace character". I next need to pull out the load average. I'll use "load average:" as the delimiter, so the regex to pull that out is:

    load average:\s*[0-9.]+

    and I can string them together to get:

    .+\s*up                       # match time
    .+?                           # skip middle section
    load\ average:\s*[0-9.]+      # match load average

    I added the middle clause to skip the characters in the middle that I don't care about. I also switched to writing the pattern across multiple lines with comments, which means that I need to use RegexOptions.IgnorePatternWhitespace, and that required me to change "load average" to "load\ average" so that the regex engine wouldn't ignore the space (after I stared at it for a minute, wondering why it wasn't working...).

    If I run this in regex workbench, it will report:

        0 => 8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13

    That tells me that the match worked, but not much else. What I need is a way to extract certain parts of the string, which is done with a "capture" in the regex language. The simplest form of a capture is done by enclosing part of the regex in parentheses:

    (.+)\s*up                       # match time
    .+?                             # skip middle section
    load\ average:\s*([0-9.]+)      # match load average

    Executing that gives:

        0 => 8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13
        1 => 8:00 am
        2 => 0.13

    The first capture (index 0) is always the entire match, and subsequent captures correspond to the portions of the match enclosed in parentheses. In code, if I wanted to pull the time out, I would write something like:

    string time = match.Groups[1].Value;

    That works fine. I could declare victory, but I don't really like the "Groups[1]" part - it doesn't tell me much. Nicely, the .NET regex variant provides (as do some others) a way to name captures. That allows me to write:

    (?<Time>.+)\s*up                              # match time
    .+?                                           # skip middle section
    load\ average:\s*(?<LoadAverage>[0-9.]+)      # match load average

    Running that gives me:

        0 => 8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13
        Time => 8:00 am
        LoadAverage => 0.13

    and I could now write code that looks like:

    string time = match.Groups["Time"].Value;

    which is very clear - clear enough that I often will not bother with the local variable.

    That gets us to where I wanted to get. You may have noticed that I didn't try to validate the time, nor did I use anchors for the beginning and end of the string. In this example, I'm dealing with well-formed text - the server log is always going to look the way that it does - and it's not worth the effort or complexity to do more than what I did.
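    The whole thing, assembled into a small program:

```csharp
using System;
using System.Text.RegularExpressions;

class LoadAverageDemo
{
    static void Main()
    {
        string line = "8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13";

        Regex regex = new Regex(
            @"(?<Time>.+)\s*up                              # match time
              .+?                                           # skip middle section
              load\ average:\s*(?<LoadAverage>[0-9.]+)      # match load average",
            RegexOptions.IgnorePatternWhitespace);

        Match match = regex.Match(line);

        // Trim() because the greedy .+ picks up the space before "up".
        Console.WriteLine(match.Groups["Time"].Value.Trim());   // 8:00 am
        Console.WriteLine(match.Groups["LoadAverage"].Value);   // 0.13
    }
}
```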

  • Eric Gunnerson's Compendium

    The mystery of Indian bread


    This week at a team dinner - which we sometimes do if people are staying late - we had Indian food. Rice, various curries, and a package of the Indian bread known as "Nan" (pronounced "nahn").

    I like rice, and I'm open to a nice vindaloo now and then, but I must confess a particular weakness for the bread. When I was younger, I would sometimes eat endless pieces of this bread, and it ultimately got so bad that I had to go to "Nan-anon" for treatment...

    I've always wondered about that experience. I like bread, but I only have this compulsion around one kind of bread. But yesterday, I was doing some research, and finally figured out why I can never put enough slices on my plate. It's like adding a slice has no effect at all. And now I know why:

    Nan + Nan = Nan

    See the last bullet point in the remarks section.

    (I do love the bread, especially the garlic version)



  • Eric Gunnerson's Compendium

    Piano recital

  • Eric Gunnerson's Compendium

    Regex 101 Exercise S4 - Extract load average from a string


    Exercise S4 - Extract load average from a string

    The shop that you work with has a server that writes a log entry every hour in the following format:

    8:00 am up 23 day(s), 21:34, 7 users, load average: 0.13

    You need to write a utility that lets you track the load average on an hourly basis. Write a regex that extracts the time and the load average from the string.

    [Update: The time you need to extract is the "8:00 am" part...]

  • Eric Gunnerson's Compendium

    Skiing and Cycling


    We've had unseasonably cold and wet conditions for the last few weeks, and that's meant that our local ski areas - which had a disastrous ski year last year - have opened earlier than they have in the past 20 seasons, so we headed up to Stevens Pass yesterday. Warren Miller has long said that any day on skis beats any day sitting at a desk, but as I was sitting on the Skyline lift trying to shield myself from the 30 MPH blast of freezing rain scouring my face, I had moments of doubt. But with decent equipment, you can enjoy that sort of thing - in the same sense that one enjoys climbing the zoo.

    Anyway, that got me thinking about the similarities between skiing and cycling. And by "skiing", I mean "alpine skiing". Here's a list I came up with:

    • Rain bike = rock skis. Rather than postponing their activity, both cyclists and skiers maintain dedicated equipment to keep participating when conditions are less than desirable.
    • Snowboarders = drivers with cell phones. Neither of these groups are looking for you, they aren't very attentive in general, and there is ample reason to suspect that they don't know that you exist.
    • 40 mph descents
    • Quads
    • Titanium and carbon fiber. Improvements in both are driven by racing, where small changes can make big differences. And not only can amateurs buy gear that's as good as the pros, they can afford to do so. (Note that "afford" is a relative term)
    • Rigidity and flexibility, the yin and yang of both sports.
    • Tradition. The Tour started in 1903. The rules for the downhill race were developed in 1921.
    • Hahnenkamm = Alpe d'Huez
    • Mountains (and Alps!)
    • Beer
    • 35 degrees and raining sucks
  • Eric Gunnerson's Compendium

    Regex 101 Exercise S3 - Validate a zip+4 zip code - Discussion


    Exercise S3 - Validate a zip+4 zip code.

    The US has a 5 digit zip code with an optional 4 digit suffix. Write a regex to validate that the input is in the proper format:

    Sample strings



    This one is fairly similar to what we've done in the past. The most obvious approach is to match the first chunk of digits ("chunk" is a regex "term of art" that refers to a section of characters that you want to match (not really...)). We can do that with:

    \d{5}

    And we can easily match the second version with:

    \d{5}-\d{4}
    I got an email recently where the writer asked, "I've heard that it sometimes makes more sense to use two regexes rather than a single more complex one". Though crafting a single regex that covers all the cases can be an interesting intellectual exercise (a good idea if you want to avoid the heartbreak of flabby neurons), it sometimes makes more sense to cut your losses and simply use several regexes in sequence, and get out of work before happy hour is over.

    Which is a long-winded way to say that I could just declare victory at this point, but that wouldn't be very educational (I desperately hoped to link to a .wav file of Daryl Hannah saying "edu cational" from Splash, but alas, repeated web searches proved fruitless). So, onward.

    Regex provides an "or" option where you can match one of several things. To do that in this case, we would write:

    \d{5}-\d{4}     # zip + 4 format
    |               # or
    \d{5}           # standard zip format

    which would match either of these. This is a reasonable way to write this match.

    The final way is to use one of the quantifiers I discussed before. If we use the "?" quantifier, we can write:

    \d{5}           # 5 character zip code
    (-\d{4})?       # optional "+4" suffix

    I think this would be my preferred solution.
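    Here's that preferred pattern in action, with anchors added so partial matches are rejected (the test strings below are mine, since the post's samples aren't shown):

```csharp
using System;
using System.Text.RegularExpressions;

class ZipDemo
{
    static void Main()
    {
        // \d{5} followed by an optional "-dddd" group, anchored at both ends.
        Regex zip = new Regex(@"^\d{5}(-\d{4})?$");

        Console.WriteLine(zip.IsMatch("98052"));        // True
        Console.WriteLine(zip.IsMatch("98052-1234"));   // True
        Console.WriteLine(zip.IsMatch("9805"));         // False
        Console.WriteLine(zip.IsMatch("98052-12"));     // False
    }
}
```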

    Though we used parentheses for grouping, they actually have other uses as well in regex. Tune in next week, where the word for the day will be "capture". Or, perhaps, "spongiferous".

    What do you think of the series so far? If you've used regex before, it should seem simple to you. What would you change? What would you leave the same?


  • Eric Gunnerson's Compendium

    C# Express for a very low price...


    Cyrus breaks the news.  Well, not really - it was actually Dan, but he succumbed to his marketing tendencies, and used "Announcing the release of Visual Studio 2005 Express Editions" as his title.

    So, for the next year, the express versions will be freely downloadable. And if you register a downloaded version, you can get a few goodies, explained in Dan's post.

    Props to all the MS people who worked so hard to make the express editions a reality.



  • Eric Gunnerson's Compendium

    Regex 101 Exercise S3 - Validate a zip+4 zip code


    Exercise S3 - Validate a zip+4 zip code.

    The US has a 5 digit zip code with an optional 4 digit suffix. Write a regex to validate that the input is in the proper format:

    Sample strings


    I'm going to keep comments enabled in case people have questions about the exercise, but please don't post answers in the comments.

  • Eric Gunnerson's Compendium

    Paying attention in physics class pays off BIG!


    After vacillating for months, Canon came up with the Kurosawa of rebate offers (i.e. compelling but really hard for me to understand), and I ordered a new Digital Rebel 350 XT from, of all places, Amazon. Amazon usually tends to be more expensive, especially when you factor in sales tax, but there were two factors that swayed me. First, they had the lens that I wanted (the Canon EF 28-135mm f/3.5-5.6 IS USM (EF being Canon's mainline series of lenses (EF-S is their series for APS sensors), 28-135mm being the focal length (which you can multiply by 1.6 because of the APS sensor size, making this a 45mm-216mm lens on the Rebel), f/3.5-5.6 (the aperture measured in footcandles/joule), IS meaning image stabilization (a neat gyro-stabilized lens element), and USM being the better kind of focusing motor)) in stock, and second, unlike the other companies that had that lens in stock, they had few online reviews that said, "I'd rather cuddle with an enraged grizzly than order something from these guys again".

    I added the camera, lens, a new flash card, and a skylight filter to my shopping cart, removed the skylight filter (anybody who thinks I'm going to pay $15 to ship a $20 filter is delusional), and checked the "ship all the items together".

    And, of course, the camera showed up Tuesday, without the lens. So, what do you do with a camera without a lens? Well, remembering your high school physics (or, perhaps, your college physics. How should I know when you took physics?), you should recall the camera obscura (literally "who hid my camera? C'mon guys, that's not funny..."). So, I got a piece of aluminum foil, poked a pinhole in it, put it in front of the lens mount, and captured a couple of images with 20 second exposures. Recognizable images, at least in the "guy who can pretty much navigate his room without his glasses" sense. So, your physics is good for something.

    Oh, and I got the black version of the camera.

    Comedy question: Is it funnier to cuddle with a grizzly or a tiger? Inquiring minds want to know...

  • Eric Gunnerson's Compendium

    Regex 101 Exercise S2 - Verify a string is a hex number - Discussion


    [Update: got rid of the 0x in front of the sample string...]

    Our task was the following:

    S2 - Verify a string is a hex number

    Given a string, verify that it contains only the digits 0-9 and the letters a through f (either in uppercase or lowercase).

    Sample string:



    We talked about character classes last time. The character class to match the valid characters is:

    [0-9a-fA-F]

    We now need to get a string of those. The simplest way to do this is to use one of the predefined quantifiers:

    [0-9a-fA-F]+

    where "+" means "one or more matches". There are two other predefined quantifiers - "*" means "zero or more matches", and "?" means "zero or one match". All three of these quantifiers are what is known in regex circles as "greedy" - they match as many characters as possible. In other words, if you use "+", the engine will choose a match of 100 characters over a match of 1 character if given the freedom to match. In this case, that means that if you match against:


    the expression will match that whole string, not just the first "0". We will talk about greediness at length in later exercises, so don't fret about it now.

    As we did last week, we would add the anchors to make sure we're matching the whole string:

    ^[0-9a-fA-F]+$

    and we've satisfied the goal of the exercise. Note that because of the anchors, there is only one possible match, and greediness doesn't enter into the picture.

    Are we done? Well, mostly. It might be that what you wanted was to limit the hex number to 8 characters (so it would fit into 4 bytes). Doing that is left as an exercise to the reader...

    All these shortcut quantifiers - "*", "+", and "?" - are really just simpler ways of writing quantifiers using the full version of the "{<n>}" syntax that I discussed last week.
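    A runnable version, including one way to do the "fit into 4 bytes" variant using that {<n>} syntax (the test strings are mine):

```csharp
using System;
using System.Text.RegularExpressions;

class HexDemo
{
    static void Main()
    {
        Regex hex = new Regex("^[0-9a-fA-F]+$");
        Console.WriteLine(hex.IsMatch("deadBEEF01"));   // True
        Console.WriteLine(hex.IsMatch("xyz"));          // False

        // The 4-byte variant: {1,8} allows one to eight hex digits.
        Regex hex32 = new Regex("^[0-9a-fA-F]{1,8}$");
        Console.WriteLine(hex32.IsMatch("deadBEEF01"));  // False (10 digits)
        Console.WriteLine(hex32.IsMatch("cafe"));        // True
    }
}
```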


  • Eric Gunnerson's Compendium

    Persisting complex objects...


    Q: I am on a large C# project with a Fortune 500 company, and one of our first design challenges is how to persist large, complex objects. We have thousands of users, so bandwidth is an issue. When a user requests this object to edit, they may only edit one field within one of the many aggregate objects. Do you know of any efficient design patterns for storing only what changed, instead of resending/restoring the entire object?


    This is an interesting question. I'll give you my opinion, and then readers can chime in if they have any other suggestions (you should probably give their responses more credence than mine...)

    I like using serialization when I need to send live objects over a wire, or when the objects are tiny, or when I don't care about performance. But I'm an old database guy (in both senses of the word "old"), and this is the sort of situation that is tailor-made for using a database, with a separate column for each field. That gives you the ability to update a single value quickly and easily. To get a minimal update, you will need some sort of change tracking...

    You can get that through the .NET DataSet class. I'm not a big fan of DataSet, as I like to write much closer to the metal, but it is convenient to use and it has built-in change tracking. Another option is to write it yourself for each object - setting a property also sets a modified bit, and when you update you put all those bits together into a SQL query. It's tedious code to write but not very tough. You could also do something where each object has a reference to a "Modifications" class. When a property is updated, you tell the class that the column updated is "Name" and its new value is "Fred", and it stores all the update information away; when you go to commit the update, it strings it together into proper SQL. That's probably better than the custom way.
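    A rough sketch of that last idea - all the names here (Modifications, Record, BuildUpdate, the Person class) are invented for illustration, and real code would bind the recorded values to the parameters of a parameterized command:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

class Modifications
{
    readonly Dictionary<string, object> changes = new Dictionary<string, object>();

    // Called by property setters to note which column changed.
    public void Record(string column, object newValue)
    {
        changes[column] = newValue;
    }

    // Builds an UPDATE covering only the columns that actually changed.
    public string BuildUpdate(string table, string keyColumn)
    {
        StringBuilder sql = new StringBuilder("UPDATE " + table + " SET ");
        bool first = true;
        foreach (string column in changes.Keys)
        {
            if (!first) sql.Append(", ");
            sql.Append(column + " = @" + column);
            first = false;
        }
        sql.Append(" WHERE " + keyColumn + " = @" + keyColumn);
        return sql.ToString();
    }
}

class Person
{
    readonly Modifications mods = new Modifications();
    string name;

    public string Name
    {
        get { return name; }
        set { name = value; mods.Record("Name", value); }
    }

    public Modifications Mods { get { return mods; } }
}

class Demo
{
    static void Main()
    {
        Person p = new Person();
        p.Name = "Fred";
        Console.WriteLine(p.Mods.BuildUpdate("People", "Id"));
        // UPDATE People SET Name = @Name WHERE Id = @Id
    }
}
```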

    Those are my quick thoughts. What do others think?


  • Eric Gunnerson's Compendium

    Over the weekend...


    There I was, minding my own business, without a care in the world. And then it happened...

    Well, to be truthful, it was at least partly my fault. Perhaps I should back up a bit.

    I was at home, at about 5:15, gluing gravel to my face. It was going pretty well, and as I was adding the gel blood, I knocked my helmet off, and the moss fell out of my helmet. What a mess.

    Perhaps some details are in order:

    1. Embedded in my chest is a 30-tooth Shimano 105-series chainring. Well, part of a chainring - the rest is coming out of my helmet.
    2. There's a spoke coming out next to the wound near my right eye. Another coming out of my bike shorts.
    3. Wounds on my face, right arm, left wrist, and right knee.
    4. Chain (Shimano Ultegra 10-speed)
    5. Remnants of an inner tube coming out my left leg.
    6. Moss in the helmet

    Important safety tip: If you decide to do something like this, I suggest taping the aforementioned cycling parts to your shirt, if you're interested in the easy way out. If, however, you wish some assistance in projecting your injured state, adhesive tape works well.
