I'm looking for a recommendation for a good Avalon book.
Which one do you like, and why?
For a while now, you've been able to buy digital levels. Amazingly, they will display angles to within 0.1 degree.
As some of you know, I've been on a quest for a good way of mapping hills, and I found that you can order an inclinometer from the same company that makes the digital levels. Which I did.
It's a pretty cool device - you give it 5 volts, and then it gives you the current angle back through a serial port. In the simple mode, it will send you this data every 5/8 of a second.
So, this weekend I hooked it up to a level converter (the inclinometer speaks TTL-level RS-232, so you need something like a Maxim MAX232 to convert that to real RS-232), connected it to my laptop, and exercised the new Whidbey SerialPort object. I wrote a little logger, and headed out to gather some data on the hills around my house. Thirty minutes later, I had my data, and I imported it into Excel.
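The logging side is simple enough to sketch. Here's a minimal version in Python rather than the C# SerialPort code I actually wrote; it assumes the inclinometer sends one angle per line, and it turns a captured stream into a timestamped CSV that Excel can import (the function and file names are hypothetical):

```python
import csv
import io

def log_to_csv(stream, out, interval=0.625):
    """Convert a raw inclinometer stream (one angle per line) into
    (seconds, angle) rows.  The device reports every 5/8 second, so
    we synthesize timestamps at that interval."""
    writer = csv.writer(out)
    writer.writerow(["seconds", "angle_degrees"])
    for i, line in enumerate(stream):
        line = line.strip()
        if not line:
            continue  # skip blank lines caused by line-ending quirks
        writer.writerow([round(i * interval, 3), float(line)])

# Example with a captured snippet instead of a live serial port:
captured = io.StringIO("0.1\n0.3\n1.2\n")
out = io.StringIO()
log_to_csv(captured, out)
print(out.getvalue())
```

The real logger reads from the serial port instead of a string, but the shape of the loop is the same.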
And it looked weird. Some of the data looked really nice, and some of it had big spikes in it - in both the positive and negative directions. It took me about twenty minutes to figure out the problem.
Care to guess what I forgot, and how I'm going to fix it?
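While you ponder: whatever the root cause turns out to be, isolated spikes in sensor data are usually knocked out with a median filter, which removes single-sample outliers without smearing the real signal. A sketch of that idea (not necessarily the fix I used):

```python
def median3(samples):
    """Replace each interior sample with the median of itself and its
    two neighbors - isolated spikes vanish, while ramps pass through."""
    if len(samples) < 3:
        return list(samples)
    out = [samples[0]]
    for i in range(1, len(samples) - 1):
        out.append(sorted(samples[i - 1:i + 2])[1])
    out.append(samples[-1])
    return out

# Two spikes (9.8 and -7.5) buried in a gentle ramp of angle readings:
readings = [1.0, 1.1, 9.8, 1.2, 1.3, -7.5, 1.4]
print(median3(readings))
```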
Back in 1996, I was a QA lead for the C++ compiler, and our group wanted to incent people to fix and close bugs.
One of the other QA leads had the brilliant insight that lego blocks make excellent currency amongst development teams, and I - because of my demonstrated aptitude in generating reports from our bug-tracking system - became the "Lego Sheriff" for the group, handing out blocks. I believe the going rate was three blocks per bug.
Not surprisingly, some people started to game the system to increase their number of blocks. Those of you who are surprised that somebody would go to extra effort to get blocks that retail at about a penny per block have never seen a millionaire fight to get a free $10 T-shirt.
But I digress.
That there was a system to game was due to a very simple fact. Our goal wasn't really to get people to fix and close bugs; our goal was to get the product closer to shipping. But we didn't have a good way to measure the individual contribution to that, so we chose active and resolved bug counts as a surrogate measure - a measure that (we hoped) was well correlated with the actual one.
This was a pretty harmless example, but I've seen lots of them in my time at Microsoft.
The first one I encountered was "bugs per tester per week". A lead in charge of testing part of the UI of Visual Studio ranked his reports on the number of bugs they entered per week, and if you didn't hit at least <n> (where <n> was something like 3 or 5), you were told that you had to do better.
You've probably figured out what happened. Nobody ever dropped below the level of <n> bugs per week, and the lead was happy that his team was working well.
The reality of the situation was that the testers were spending time looking for trivial bugs to keep their counts high, rather than digging for the harder-to-find but more important bugs that were in there. They were also keeping a few bugs "in the queue" by writing them down but not entering them, so they could make sure they hit their limit.
Both of those behaviors had a negative impact, but the lead liked the system, so it stayed.
Another time I hit this was when we were starting the community effort in DevDiv. For a couple of months, we were tracked on things like "newsgroup post age", "number of unanswered posts", or "number of posts replied to by person <x>".
Those are horrible measures. Some newsgroups have tons of off-topic messages that you wouldn't want to answer. Some have great MVPs working them who answer so fast that there isn't much left for you to say. Some have such low traffic that there really aren't that many issues to address.
Luckily, sharper heads prevailed, and we stopped collecting that data. The sad part is that this is one situation where you *can* measure the real measure directly - if you have a customer interaction, you can *ask* the customer at the end of the interaction how it went. You don't *need* a surrogate.
I've also seen this applied to blogging. Things like number of hits, number of comments, things like that. Just today somebody on our internal bloggers alias was asking for ways to measure "the goodness" of blogs.
But there aren't any. Good blogs are good blogs because people like to read them - they find utility in them.
After the most recent incident of this phenomenon presented itself, I was musing over why this is such a common problem at Microsoft. And I remembered SMART.
SMART is the acronym you use to remember the attributes of a good goal. The M means measurable (at least for the purposes of this post. I might be wrong, and in fact I've forgotten what all the other letters mean, though I think T might mean Timely. Or perhaps Terrible...).
So, if you're going to have a "SMART goal", it needs to be *measurable*, regardless of whether what you're trying to do is measurable.
So, what happens is you pick a surrogate, and that's what you measure. And, in a lot of cases, you forget that it's a surrogate and people start managing to the surrogate, and you get the result that you deserve rather than the one you want.
If you can measure something for real, that's great. If you have to use a surrogate, try to be very up-front about it, track how well it's working, don't compare people with it, and please, please, please, don't base their review on it.
Somebody on an internal alias asked, "why is an HDMI cable $60 when the DVD player is also only $60?", and here's what I wrote:
McDonald's sells you a hamburger for $0.69 (hypothetical price - I haven't bought a burger from them in quite a while). They charge you $0.89 for the Coke that goes with it. The margin on the burger is minuscule (the cheap burger may even be a loss leader), while the margin on the Coke is about $0.80. People are price-sensitive to the burger price, but they're not price-sensitive to the price of the drinks. The DVD player is the burger; the HDMI cable is the Coke.
Today, I rode the Mountain Populaire 100K, put on by the Seattle International Randonneurs.
Randonneuring is a long-distance cycling discipline that originated in France (hence the name) way back in the 1800s. It's organized around a series of rides known as "brevets", which are pronounced exactly the way you would expect if you speak French. The goal is to finish a ride of a specific distance within a specific time limit. For example, the 200 km brevet typically has an overall time limit of 13:30, the 300 km a time limit of 20:00, and so on - all the way up to 75 hours for a 1000 km ride.
Given the length of most of the brevets, some clubs host "populaires" - shorter events designed for "introducing new riders to the ways of randonneuring".
These rides are different from most organized rides in the following ways:
Instead of hosting a typical introductory event, the folks at SIR decided to put on a "Mountain Populaire" - a course with as much climbing as possible. (Note that I'm assuming SIR is different in this regard - it may be that all populaires are like this.)
In this case, the course packs 5480 feet of climbing into 110 km. The climbs are:
There is a claimed 8th hill, but I don't recall exactly where.
So, how does this compare to the Summits of Bothell or 7 Hills? Well, 7 Hills has a lot of climbing, but only Seminary Hill and Winery Hill are really challenging. Summits of Bothell has a lot of steep climbs, but most of them aren't very long. And they're both in the 40ish-mile range.
This ride is 69 miles, and while it does have a couple of fairly easy climbs - 164th and Tiger Mountain - it starts out with a 1000' climb and finishes with a 700' climb, both of which have slopes in excess of 15%. My legs were certainly tired when I got to Mountain Park, and I had to tack back and forth to make it to the top (I was not the only one).
Definitely the hardest ride I've been on, and a nice way to end the season. Beautiful day, and a nice group to ride with.
Yellow Sticky Exercise
Take one pack of yellow stickies (aka "Post-it" brand sticky notes). Place them strategically on your hands and arms, and wave them around for 10 minutes.
Wait... That's the wrong version.
The yellow sticky exercise is a tool that is used to collect feedback from a group in a manner that encourages everybody to give feedback, doesn't waste time, and lets people give the feedback (mostly) anonymously.
Microsoft has a tradition of doing "post mortems" on our projects, which are designed to figure out what went wrong, decide what should be done about it, and assign an owner. What typically happens is that the group complains for an hour, three people dominate the conversation, several rat holes are encountered, a few things get written down as action items, and nothing ever happens with the results.
The yellow sticky exercise is an alternative. It works well whenever you want to figure out the combined opinion of a group. It was taught to me by a very sharp usability engineer.
Get everybody together in a room. Each person gets a pad of stickies and a pen. Both pens and stickies should be the same color, so things are anonymous.
In the first segment, each person writes down as many issues/problems as they can, one per sticky note. It's useful to tell people about the exercise ahead of time so they can come with lists already prepared, but that's not necessary. This generally takes about 10 minutes; you continue until most of the people have run out of things to write down.
Ground rules for issues:
If you're doing this to collect problems, it's good to ask people not to put down anything personal, and to include enough detail that each issue can be understood on its own.
At this point you have a lot of diverse feedback items, and you need to get them placed in groups. That is done by the whole room. You ask everyone to put the stickies up on your wall/whiteboard in meaningful groups, and just let them go at it. When a group reaches 5 or more stickies, ask somebody to come up with a label for it and write it on the whiteboard or on a sticky. You should also ask people to move stickies around if they belong in a different group.
When everything is on the wall and people are happy with the groups, the group part is done. Somebody will need to stick the stickies on paper and own typing them all up.
If you do this, I'm confident that the breadth and the depth of the feedback will be much better than other methods.
White and Nerdy
I will not comment on how many of these apply to me...
Last week, Paul Thurrott wrote a post entitled, "The Dark Side of Windows Vista RC1", in which he discusses a few things that he doesn't like about Vista.
And the first item in his article talks about the DVD Maker UI. Paul's complaint is that the "back" button for wizards has migrated from the previous location at the bottom of the wizard to the upper-left corner of the window, and morphed into an "IE-style button".
This part of the DVD Maker UI comes from the new Aero Wizard framework (for those of you using wizards, you add PSH_AEROWIZARD to your wizard flags to use the new framework...). I agree with Paul about the location and style of the back button, and gave feedback to the Wizard UI folks about that issue.
The second point that Paul brings up is the behavior of ALT + Right Arrow. As Paul notes, doing this will start the burn process, which I agree may be a bit of a surprise - though unless you're really slow in hitting cancel, you won't make a coaster (i.e., a junk disc), as there's a lot of work to do before we start writing the disc.
This behavior came from our goal of creating a clean user experience.
There are two required steps for creating a DVD with DVD Maker. First, you need to add some content, and then you need to select a style (you can skip selecting if you like "Full Screen"). At that point, you can pick burn, and you're done.
Then there are the optional actions: customization and previewing. One choice would have been to put the customization pages inline after the style-selection page (in traditional wizard fashion), but that adds two or three pages to every DVD and implies to the user that they need to do something on each of those pages. It also would have left us trying to fit our interactive preview somewhere in the UI.
Instead, we chose to implement the customization and preview pages the way that dialog boxes work - you go to them, make your changes, and then either accept or roll back the changes you made.
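The accept-or-roll-back behavior is the same pattern dialog boxes use: work on a copy of the settings, and commit only if the user accepts. A minimal sketch of that pattern (hypothetical names in Python, not the DVD Maker code):

```python
import copy

class Settings:
    """Stand-in for a page's editable state."""
    def __init__(self):
        self.style = "Full Screen"
        self.title = ""

def edit_with_rollback(settings, edit, accept):
    """Run `edit` on a working copy; commit only if the user accepts,
    otherwise return the original untouched settings."""
    working = copy.deepcopy(settings)
    edit(working)
    return working if accept else settings

s = Settings()
# User opens the customization page, changes the style, then cancels:
s2 = edit_with_rollback(s, lambda w: setattr(w, "style", "Scrapbook"), accept=False)
print(s2.style)  # the cancel rolled the change back
```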
I think that we made the right tradeoff in this case.
It's true, however, that DVD Maker is bigger and more complex than the other wizards I've seen, and the constraints of the wizard framework therefore had more impact. If we had to do it again, we might choose to make DVD Maker a full application rather than a wizard.
When I started in the Video Memories group about 24 months ago, I got two new "dev boxes" from Dell.
A "dev box" is a configuration that is good enough to develop on, which at that time meant a dual-proc 3.4 Gig Pentium 4 with 2 gig of ram and about 60 gig of disk.
A nice box. One of them was dedicated to dev duties, and the other to running tests.
A couple of weeks ago, my dev box started misbehaving. It would reboot unexpectedly (which sounds like an oxymoron - aren't all reboots expected? Well, then you aren't running our internal "keep things up to date" software...).
A tech visited, diagnosed swollen capacitors, and recommended a full motherboard transplant. After the surgery, the patient was fine.
And I looked with a bit of a cautious eye towards my test machine, purring happily under my desk.
Yesterday, I needed to update my installation of Vista. This is usually done through a nice automated system - I boot into 2003 Server, run the utility, and it goes off and installs Vista. I rebooted into 2003 Server and found that it had expired (how I ended up with an evaluation copy on my system, I don't know...). I installed XP, put on the automated software, started the Vista install, and left to go home. I came back in today, hit the power button to bring the system out of suspend, and the fans turned on. I'm used to that - the fans power up when there's lots of CPU or GPU load, and when the system is first started - but this time they kept powering up. Past "loud fan". Past "hairdryer". All the way to "jet engine". I power-cycled a couple of times, got the same result, and called up our helpdesk again.
While I had the tech on the phone, I turned the system on, and asked him if he could hear it at "loud fan". Which he could. When it got to "hairdryer" he said, "that's loud", and when it got to "turbine", he said, "turn it off! turn it off".
My best guess is another motherboard issue, and that something is sending a bit more voltage than expected to the case fan. Or perhaps the power supply.
Kudos to whoever built the case fan - I wouldn't have thought it could take that much power.
For a *long* time, whenever I spent time with C# customers, I would invariably get asked, "How do I create docs for my assemblies?".
And my best answer was either "there's an XSLT file that makes the XML look a little better" or "you might want to consider NDoc".
Neither of which was a great answer. The XSLT was quite inferior to what people wanted, and NDoc also had limitations.
That issue did not go unnoticed inside MS, but it's taken a while to address, and I was otherwise occupied, so I didn't notice the progress.
But anyway, if you want to have MSDN-style docs, take a look at Sandcastle. Here's the post on the August CTP.
Kudos to the Sandcastle folks.
I did my time with the liberal arts, and I've got the papers to prove it.
I'm down with Plato and Epicurus, and I've rocked the Socratic method.
And yet, ever since I was a child, despite numerous hours of thought, I have remained perplexed by Marmaduke.
But no more, thanks to:
Joe Mathlete Explains Today's Marmaduke
A webcomic of romance, sarcasm, math, and language.
I really like this Venn diagram...
The Windows shell team has started a blog at ShellRevealed.com.
There's a blog, and some forums. There isn't a ton of content yet, but there is a short tutorial about the task dialog.