Random Disconnected Diatribes of a p&p Documentation Engineer
A couple of initially unconnected events last week conspired to nudge my brain into some kind of half-awake state where it combined them into a surreal view of "automatic" stuff. One of the events was the return from Tina, our editor and proof-reader, of my article about the Team System Management Model Designer Power Tool (a product that, thankfully, I'm legally permitted to refer to as just "TSMMD" - and will do so from now on). The second event was deciding that I ought to get a laptop sorted ready for an upcoming trip to Redmond. The combined result is some manic ravings on the meanings of stupid words, and the fact that Windows Vista obviously hates me.
TSMMD is a new add-in for Visual Studio Team System that I have been documenting for the previous few CTP releases. It's a really neat tool that allows you to build management models that describe the health states and instrumentation of an application, and then generate the appropriate instrumentation code as part of your VS project (see http://www.codeplex.com/dfo/ for details). The article is one of those "About..." and "Getting Started" things that compares what the product does to some commonplace everyday situation - in this case the way repair shops can do computerized diagnosis of faults in a modern motor car. So the article came back with editorial comments such as "Err...what does this mean?" where I had written stuff like "...without having to look under the bonnet" (Tina asked if I was taking part in an Easter parade), "...hatchback or saloon car" (is this one that has a drinks cabinet built in?), and "...look for some tools in the boot" (surely that's where you keep your feet?). And, of course, "When you say 'motor car' do you mean 'automobile'?"
Some time ago I rambled on about the way that the culture and language in the US and UK are so very similar, and yet so subtly different (see Two Nations Divided by Light Switches). But the one area where almost everything seems to be different is in the realm of motoring. I mean, to me, a car starts with a bonnet and ends with a boot. Just like a person (and not necessarily one in an Easter parade). Makes perfect sense. As I said to Tina, here in England we just keep our wellingtons in our boots... except when going to a car boot sale. Why on earth would a car start with a hood and end with a trunk? Sounds more like an elephant going backwards. And what do you call the bit of an open-top sports car that keeps the passengers dry when it rains? Surely that's the hood (as in "It looks like rain, better put the hood up"). Still, I suppose cars have plenty of bits with stupid names anyway. I know that "dashboard" comes from the days of carts and wagons where they nailed a plank across the front to stop galloping (dashing?) horses' hooves splashing mud onto the driver's new breeches.
Notice how I avoided saying "trousers" there. I once heard a conference organizer ask all the speakers to wear black pants for their presentations as part of a consistent dress code. I wondered how attendees would know what color underwear I had on. But that's a whole different topic area.
Anyway, getting back to cars, I suppose we now refer to the bit with the speedometer on it as the fascia. However, we still have a "fan belt" even though the radiator fan is electric and the belt drives the alternator and the pumps for the power steering and the air conditioning instead. And it isn't just the car itself. What about how, here in the UK, we have "slip roads" on our motorways? It's not like they are surfaced with super-smooth tarmac (asphalt) so you slide around. I guess the idea is that you use them to slip into the traffic stream (in which case, the way most people drive here, they should be called "barge-in roads"). The US "on-ramp" and "off-ramp" make more sense, even when they don't go uphill or downhill.
And why "freeway" in the US? My experience of driving in Florida is that you have to carry $20 in loose change for the toll booths that they planted every two miles. Although, around where I live, when they build a new bypass round a town or village, everyone refers to it as "the fast road". Even when there's traffic lights every 20 yards and a half-mile tailback most of the day. Again, the words conspire to confuse. "Traffic lights"? A nice sensible term I reckon. Yet when we were working on the Unity Application Block they wrote a sample they called "Stoplight". As the UI was just three oblong colored boxes it took me a while to figure that this was the US equivalent. Is it still a "stop light" when it's showing green?
But enough motoring invectives. The other conspiring event this week was battling, after a few months away from it, with Vista. I have to say that there are lots of things I really love about Vista, but it seems to have been designed to annoy the more technical and experienced computer user. A lot of the aggro is, I know, attributable to my long-established familiarity with XP and its internal workings. Vista is no doubt ideal for the less experienced user, as it hides lots of complexity and presents functionality that works automatically.
Yes, I've finally given up and turned off UAC so I can poke about as required and use weird scripts and macros required for my daily tasks. But it would be really nice to have an "expert" mode that lets you see (and change) all the hidden settings without having to go through several "inexperienced user" screens. I mean, it keeps complaining that my connection to the outside world through my proxy server is "not authenticated" even though it works fine, and I can't find any way to change this. And it won't let my FTP client list files, even though it works fine on the XP machine sitting next to it.
What finally got me going this week, however, was backing up the machine. I carry a small USB disk drive around with a full image of the C: drive that I can restore if it all falls over. For years I've been using TrueImage (see diary entries passim) and it works well. However, when I bought this laptop I paid extra for Vista Ultimate so I'd get the proper built-in backup software to do disk imaging. I imaged the machine when it was new, but the configuration is much changed since then, so I thought I'd do a complete new backup. As there isn't a lot of space on the USB drive, I deleted the existing backup first. But there's obviously some secret setting somewhere in the O/S (it's not in the Registry, 'cos I searched there) that makes Vista think I still have an existing backup. So it will only do an incremental backup - there is no option to say I want a whole new one.
And it also insists on backing up the drive D: restore partition, even though I don't want that backed up. So I ran it anyway, but afterwards all it said was "the backup is complete". Did it do an incremental one or a full one? Did it skip stuff that it thinks is in the backup image I deleted? Will it actually restore to give me a working machine? In the end I deleted the backup and used TrueImage (I've got version 10 and it works fine with Vista). It asks you everything it needs to know to create the kind of image you want, and then just does it. And I've restored machines in the past using it, so I feel comfortable that I can get back to where I was when the sky falls in.
You see, this is where I worry about "automatic" stuff. For some things it seems like a really good idea, and often it "just works". Drifting back to cars, my latest acquisition has automatic climate control that just works. You can, if you wish, dive into the settings and specify hundreds of individual parts of the process, but why bother? Just set the temperature you want and let it get on with it. The car also has automatic window wipers (notice I avoided saying windscreen or windshield), which is great. They wipe the window when it rains.
But it also has automatic headlights that come on when it's dark. And this feature is turned off because I always worry that they'll come on just as I get to a junction and someone will think I'm flashing them and pull out right in front of me. Notice the important point. You can turn off the automation if you don't want it...
Here in our quiet little corner of the People's Republic of Europe, our Government decided a while ago to flog off the radio spectrum in order to pay for their countless spin doctors, pointless focus groups, endless ministerial jaunts, never-ending quangos, and failed experiments with Socialism. In return, they gave us the opportunity to enter the brave new world of Digital Broadcasting. And, rumor has it, they will eventually build enough transmitters so that those of us who don't live in London will actually be able to receive it. Last I heard, the target date is 2013. Meanwhile, I've had to fill the entire attic of our house with bits of bent aluminium to try and drag some scraps of DAB (Digital Audio Broadcasting) out of the airwaves and down to the kitchen so my wife can have rock music on loud enough to drown out the sound of me washing the dishes.
Anyone brave enough to have tackled the diary entries from my previous life will know that, up until now, we've been using a rather nice stand-alone Soundbridge Internet Radio to get a constant stream of rock music that generally smothers my unfortunate domestic noises. However, since the BBC released their iPlayer, the fragility of the copper-wired Internet in our part of the country has been exposed for all to see. Now all we get from Virgin Classic Rock in the evenings and weekends is "It looks like you can't get our digital stream..." followed by several seconds of rebuffering and then another five minutes of music. So the boss gets to hear me clanking her best plates together.
Those distant diaretic ramblings also documented the problems of trying to get Microsoft Media Center working with the exciting new digital technologies in our forgotten little corner of Merry Olde England (well, actually almost the geographical center, but still a long way from London). Suffice to say that it basically involved turning the roof of our house into a miniature version of Jodrell Bank, but at least we now get (depending on weather conditions) around 80 channels of digital stuff on the TV. Which, apart from 30 TV channels that seem to still be showing programmes from 1973, includes loads of radio channels. As I couldn't figure a way to drag the 42" wall-mounted screen into the kitchen every time I did the dishes, I suggested to my wife that she just turn one of these up really loud and pretend we live next door to a rock festival, but she wouldn't go for that.
So, for her birthday the other week, I treated her to a shiny new DAB radio. It's a really neat thing that consists of five different lumps of plastic - two speakers, a control unit, a combined bass woofer, and a separate tiny little matrix display thingy that you stick on the wall. This means that I can hide everything but the display thingy on top of the cupboards out of the way of the soapy fountain that is me doing the dishes. And combined with some low-loss cable and the aluminium-filled attic, we can actually get Planet Rock and a couple of dozen other stations. In fact, there's even one that just plays birdsong all day!
One really neat thing with DAB is that you can view the secondary information stream. Saves loads of arguments about which band it was that recorded the track you're listening to. It even tells you the name of the program you're tuned to, and what's coming next. I guess this is pretty much underwhelming to those who have already had DAB for ages, but - as late arrivals to the digital scene - we got really excited about it. Shows how interesting my life is most of the time. However, after playing with this for a while, I suddenly realised that the people who make the hardware obviously don't do any field testing of their products. I mean, the options for the "extra info" on the neat little display thingy are the MUX channel name (such as "DigitalNetwork1"), the time and date (in case you don't own a clock), the signal strength, the bit rate, and the rather nice scrolling text messages.
Now, as a developer, what would you do? Have it remember what you selected last time and go back to that option automatically? Have it default to the rather nice scrolling text messages? Allow the user to select which they want as the default in the setup menu? All, in my opinion, obvious options. But no, they decided that it should always default to the MUX channel name every time, and you can't change this behavior. You have to press "Info/Display" twice every time you turn it on or change channel. Imagine if Windows started with a DOS prompt every time and you had to type "WIN" and click "Yes" to get to your desktop. Err... a bit like Windows 3.0 in fact. Maybe the radio's O/S developers were still using that.
And here's another thing with digital radio. It can't tell the time. With the old-fashioned steam radios of the FM and AM variety, the time signal was pretty much accurate. OK, so if you lived a long way from the transmitter you were maybe a picosecond or two behind as the radio waves fought their way through the clouds and trees, but it was near enough. Now it's a second or two behind because it has to go through some magic process to get converted to digital and back again. How do I know? Because the kitchen clock is one of those radio-controlled things. Supposedly it uses proper radio waves so as to be accurate to a fraction of a second. Even when those waves have to come all the way from Rugby, which is nearly 60 miles away. And, of course, the same happens with digital TV. I recently read a letter in the paper from someone who has three DAB radios as well as digital TV, and they said they all vary by several seconds. So in our brave new digital world, you never actually know what time it is. Maybe that's what they mean by Internet time - everyone has their own version.
I suppose I could just use the fancy radio-controlled watch that my wife bought me for Christmas instead. Except it has 97 functions and only four buttons. And one of those just turns the backlight on. Every time I put it on it tells me the time in Hong Kong. I have to carry the instruction book around with me so I can reconfigure it - possibly another good example of lack of field testing. And they say that software is hard for "ordinary people" to understand. Just imagine how much fun we'll have once they get Word and Outlook to run on a wristwatch. Not only will you need to carry a box of instruction manuals around (which I guess is good for us here in the documentation team), you'll probably miss your train because you won't know what time it is, or if your time actually is the real one...
OK, so we don't actually make cheese sandwiches here at p&p. Well, as far as I know we don't (but if we did, they'd probably be the best cheese sandwiches in the world...). When I'm over in Redmond I have to stroll across the bridge to Building 4 and buy one from the canteen, though it's worth the effort because you get four different kinds of cheese in it - as well as some salad stuff. Only in the USA could someone decide that you need four different cheeses in a sandwich. Here in England a cheese sandwich is basically a chunk of Cheddar slapped between two slices of bread. Take it or leave it. Maybe it's because there is always so much choice over there, and people can't make up their mind which cheese to have.
And why is it so hard to order stuff in the States? I usually find it takes ten minutes just to order a coffee in Starbucks 'cos I have to answer endless questions. Do you want 2% milk sir? No, fill it up to the top please. Any syrup in it? No thanks, I want coffee not a cocktail. What about topping? Some froth would be nice. Am I going to "go" with it? No, I'll just leave it behind on the counter. In fact, when we go out for a meal I like to play the "No Questions" game. Basically, this involves waiting till last to place your order, and specifying all the details of your required repast in one go so the waiter doesn't have any questions left to ask. I've only ever won once, and that was in a pizza takeaway. I think they dream up extra imaginary questions just to make sure they get the last word.
Anyway, as usual, I'm wandering off-topic. We really need to get back to the cheese sandwiches. So, as a developer, how would you go about making a cheese sandwich? My guess would be something like this:
Looks like a good plan. So how would we do the same in the documentation department? How about:
Yep, it seems to be a completely stupid approach. But that's pretty much what we have to do to get documentation out of the door in the multitude of required formats. HTML and HxS files for MSDN, CHM files for HTML Help, merge modules for DocExplorer, and PDF for printing and direct publication. Oh, and occasionally Word documents, videos, and PowerPoint slide decks as well. Maybe you haven't noticed how complicated the doc sets in Visual Studio's DocExplorer tool and HTML Help actually are? There's fancy formatting, collapsible sections, selectable code languages (and it remembers your preference), multiple nesting of topics, inter-topic and intra-topic links, a table of contents, an index, and search capabilities. It even opens on the appropriate topic when you hit F1 on a keyword in the VS editor, click a link in a sample application, or click a Start menu or desktop shortcut. Yet it all starts off as a set of separate Word documents based on the special p&p DocTools template.
Yes, we have tools. We have a tool that converts multiple Word docs into a set of HTML files, one that generates a CHM file, and one that generates an HxS. But they don't do indexes, which have to be created by hand and then the CHM and HxS files recompiled to include the index. Then it needs a Visual Studio project to compile the HxS into a DocExplorer merge module, and another to create a setup routine to test the merge module. But if you suddenly decide you need to include topic keywords for help links, you have to edit the original Word documents, generate and then post-process the individual HTML files, and start over with assembly and compilation.
We have a tool (Sandcastle) that can create an API reference doc set as HTML files from compiled assemblies. But you need to modify all of them (often several hundred) if you want them indexed. But we have another (home-made and somewhat dodgy) custom tool for that. And then you have to recompile it all again. And then rebuild the merge module and the setup routine.
What about PDF? The starting point is the set of multiple Word docs that contain special custom content controls to define the links between topics, and there appears to be no suitable tool to assemble these. So you run the tool that converts them to a set of HTML files, then another dodgy home-built custom tool to process the HTML files and strip out all the gunk PDF can't cope with. Then you build a CHM file and compile in a table of contents and a much-tweaked style sheet. Finally, run it all through another tool that turns the CHM into a PDF document.
Need to make a minor change to the content because you added a late feature addition to the software? Found a formatting error (that word really should be in bold font)? Got a broken link because someone moved a page on their Web site? Found a rude word in the code comments that Sandcastle helpfully copied into the API reference section? For consistency, the change has to be made in the original Word document or the source code. So you start again from scratch. And it seems that there are only two people in the world who know how to do all this stuff!
Well, at least we've got a process that copes with changing demands and the unpredictability of software development. But it sure would be nice to have it all in a single IDE like Visual Studio. Even really good sandwiches with four different cheeses don't fully soothe the pain. Mind you, I hear from RoAnn that they have cheese sandwiches on flatbread in the fancy new building 37 cafeteria that they toast in a Panini grill. Being foreign (English), I'm not sure what "flatbread" actually is - surely if they used any other shape the cheese would fall out? Reminds me of the old story about the motorist who turns up at a repair shop and is told that the problem is a flat battery. "Well", says the customer, "What shape should it be?"...
A couple of years ago I (somewhat inadvertently) got involved in learning more about software design patterns than I really wanted to. It sounded like fun in the beginning, in a geeky kind of way, but soon - like so many of my "I wonder" ideas - spiralled out of control.
I was daft enough to propose a session about design patterns to several conference organizers, and to my surprise they actually went for it. In a big way. So I ended up having to keep doing it, even though I soon realized that I was digging myself into the proverbial hole. Best way to get flamed at a conference or when writing articles? Do security or design patterns. And, since I suffered first degree burns with the first topic some years ago, I can't imagine how I drifted so aimlessly into the second one.
Mind you, people seemed to like the story about when you are all sitting in a design meeting for a new product, figuring out the architecture and development plan, and some guy at the back with a pony tail, beard, and sandals suggests using the Atkinson-Miller Reverse Osmosis pattern for the business layer. Everyone smiles and nods knowingly; then they sneak back to their desk to search for it on the Web and see what this guy was talking about. And, of course, discover that there is no such pattern. Thing is, you never know. There are so many out there, and they even seem to keep creating new ones. Besides which, design patterns are scary things to many people (including me). Especially if UML seems to be just an Unfathomable Mixture of Lines.
Of course, design patterns are at the core of best practice software development, and part of most things we do at p&p. So I now find myself on a project that involves documenting architectural best practice. And, since somebody accidentally tripped over one of my online articles, they decided I should get the job of documenting the most common and useful software patterns. No problem, I know most of the names and there is plenty of material out there I can use for research. And we have a team of incredibly bright dev and architect guys to advise me, so it's just a matter of applying the usual documentation engineering process to the problem. Gather the material, cross reference it, analyze it, review it, and document the outcome.
Ha! Would that it were that easy. I mean, take a simple pattern like Singleton. GOF (the Gang of Four in their book Design Patterns: Elements of Reusable Object-Oriented Software) define the intent of the pattern and the intended outcome. To quote: "Ensure a class only has one instance, and provide a global point of access to it." So documenting the intentions of the pattern is easy. The problem comes when you try and document the implementation.
It starts with the simple approach of making the constructor private and exposing a method that creates an instance on the first call, then returns it every time afterwards. But what about thread safety when creating the instance? No problem; put a lock around the bit that checks for an instance and creates one if there is no existing instance. But then you lock the thread every time you access the method. So start with the test for existence, and only lock if you need to create the instance. But then you need to check for an existing instance again after you lock the thread in case you weren't quick enough and another thread snuck in while you weren't looking. OK so far, but some compilers (including C# but not Visual Basic) optimise the code and may remove that second check, as there is nothing visible in the routine that can change the value of the instance variable between the two tests. So you need to mark the variable as volatile.
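The progression described above ends up at the classic double-checked locking idiom. Here's a minimal sketch in Java rather than C# (the volatile and lock-twice reasoning is the same in both languages); treat it as an illustration of the idea, not the definitive implementation:

```java
public final class Singleton {
    // volatile prevents the compiler/runtime reordering that defeats
    // the second null check described above
    private static volatile Singleton instance;

    // private constructor: no one else can create an instance
    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                   // first check: skip locking on the common path
            synchronized (Singleton.class) {
                if (instance == null) {           // second check: another thread may have won the race
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

Calling `getInstance()` from any number of threads returns the same object, and the lock is only ever taken before the instance exists.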
Now that you've actually got the "best" implementation of the pattern, you discover that the recommended approach in .NET is to allow the framework to create the instance as a shared variable automatically, which they say is guaranteed to be safe in a multithreaded environment. However, this means that you don't get lazy initialization - the framework creates it when the program starts. But you can wrap the instance in a nested class and let .NET create it only on demand. So which is best? What should I recommend? I guess the trick is to document both (as several Web sites already do). Problem solved? Not quite. Now you need to explain where and when use of the pattern is most appropriate.
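That nested-class trick has a well-known Java equivalent too, the "initialization-on-demand holder", where the class loader does all the locking for you. Again, a sketch of the idiom rather than a recommendation:

```java
public final class LazySingleton {
    private LazySingleton() { }

    // The Holder class isn't initialized until getInstance() first touches it,
    // and class initialization is guaranteed thread-safe and once-only,
    // so we get lazy initialization with no explicit locks or volatile at all.
    private static final class Holder {
        static final LazySingleton INSTANCE = new LazySingleton();
    }

    public static LazySingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```

No fiddly double checking, and you still only pay the construction cost if somebody actually asks for the instance.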
At this point, I discovered I'd proverbially put my head into a hornet's nest. Out there in the real world, there seems to be a 50:50 split between people saying using Singleton is a worse sin than GOTO, and those who swear by it as a useful tool in the programmer's arsenal. In fact, I spent an hour reading more than 100 posts in one thread that (between flamings) never did provide any useful resolution. Instead of Singleton, they say, use a shared or global variable. As much of the online stuff seems to describe Singleton only as an ideal way to implement a global counter, I can see that reasoning. However, I've used it for things like exposing read-only data from an XML disk file, and it worked fine. The application only instantiates it on the few occasions that the data is required, but it's available quickly and repeatedly afterwards to all the code that needs it. I suppose that's one of the joys of the lazy initialization approach.
And then, having fought my way through all this stuff, I remembered the last project I worked on. If you use a dependency injection framework or container (such as the Unity Application Block) you have a built-in mechanism for creating and managing singletons. It even allows you to manage singletons using weak references where another process originally generated the instance. And you automatically get lazy initialization for your own classes as well as singleton behavior - even if you didn't originally define the class to include singleton capabilities. So I guess I need to document that approach as well.
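Unity's real API is C#, of course, but the idea of letting a container own singleton lifetime can be sketched in a few lines of Java. This is a toy container invented purely for illustration (the names `registerSingleton` and `resolve` are mine, not Unity's): the registered class needs no singleton plumbing of its own, yet gets lazy, once-only construction.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// A toy DI-style container: any registered type gets singleton lifetime
// and lazy initialization, without the class itself knowing anything
// about the Singleton pattern.
public final class TinyContainer {
    private final Map<Class<?>, Supplier<?>> factories = new ConcurrentHashMap<>();
    private final Map<Class<?>, Object> singletons = new ConcurrentHashMap<>();

    public <T> void registerSingleton(Class<T> type, Supplier<T> factory) {
        factories.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    public <T> T resolve(Class<T> type) {
        // computeIfAbsent gives us thread-safe, create-on-first-request behavior
        return (T) singletons.computeIfAbsent(type, t -> factories.get(t).get());
    }
}
```

Resolve the same type twice and you get the same instance back, created only on the first request - which is exactly the behavior the pattern wanted, supplied by the container instead of the class.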
And then there are sixty or so other patterns to tackle. And some of them might actually be complicated...
After the "hot stuff" article of a few weeks ago, I thought I might as well shift focus towards another similarly inane topic, like showers. You see, one thing they seem really good at in the U.S. is doing showers (the bathroom type, not the weather type - though Redmond does seem to have an equal share of both). Even when I stay in relatively down-market hotels, the rooms always seem to have a good shower. In fact, in one I used a while ago, I actually got a wet room; though my wife would probably suggest that any bathroom I use is a wet room after I'm finished.
So how difficult is it to provide a good shower? Let's face it, you only need three components and one remote service: a pipe in the wall for the water to come out of, a hole in the floor for it to run away, and a knob that controls the temperature. As for a remote service, someone has to provide hot and cold water, but I guess if you intend to install a shower that's a given anyway. Yet, here in Ye Olde England, we don't seem to have grasped the technology quite so well. Our house has what the builder referred to as "a top quality shower" installed. What this means is that the tray doesn't sag and crack when you stand on it (even if you do need to climb a 10 inch high step to get in), the cabinet panels are real toughened glass (instead of plastic), and the shower itself is a top of the range electric thing from one of the major manufacturers.
So why is it so useless? My wife always knows when I'm in the shower from the swearing and clattering noises because it's so small I keep banging my elbows against the side. And if you drop the soap, you have to turn off the shower and get out to pick it up because there isn't room to bend down. Not only that, the flow in wintertime is so poor you have to run around to get wet (or you would have to if there were room). But the worst part is that you spend the first ten minutes of your shower trying to get the temperature right, even if it was perfect yesterday, and then you get scalded when someone turns a tap on elsewhere in the house.
Now, I'm not a professional UI designer or an expert on domestic plumbing, but I reckon the only thing you are really interested in with a shower is the temperature of the water coming out of the pipe in the wall (I'm assuming here it's not that hard to provide a hole in the floor for the water to run away). Yet the "top quality" thing installed in our bathroom has two user input devices: a three-position switch marked "high", "medium", and "low", and a knob labelled "flow" that goes from "low" to "high". Notice no mention of the important requirement "temperature". The idea is that you randomly fiddle with these two controls until the temperature is about right, and then hope nobody turns on a tap. Of course, there is a built-in delay while the two controls mutually interact with each other, so that any adjustment you make takes a minute or two to affect the water temperature. And the final temperature depends on the current water pressure and the incoming water temperature, so you can guarantee it will be different every time.
Just think if we built our software applications like this. Instead of Windows having a volume control in the taskbar, it would have a selector for choosing the particular integrated circuit on the sound card you want to use, and a slider for changing the voltage you send to it; and any changes you make would have no effect until the next song started playing. Or your enterprise application would have two controls where you entered the price of an order: a set of option buttons where you specify the number of zeros in the amount, and a button you click repeatedly until it randomly shows the actual value you want.
So, getting back to the wet stuff, we decided to have the shower replaced by something more usable. Of course, the main problem here is that this process involves a plumber - especially as we had to have the heating radiator moved to make room for a bigger shower cabinet. I don't know what it's like in other places, but here it tends to resemble the Flanders & Swann "The Gasman Cometh" affair. The process basically involves:
Mind you, what really amazed me is that the original builder and the plumber who is installing the new one seem to have come from different centuries. To save making holes in the wall, he suggested just having a single pipe hanging down from the ceiling and a "remote management console" fixed to the wall. While I initially had visions of an MMC snap-in running on my domain controller, it seems that all it needs is some magic box hidden up in the attic and a neat little keypad thing with a built-in LCD display stuck on the wall inside the shower.
Wow, is this the 21st century or what? Turns out that it's all wireless and automatic. You just dial the temperature you want, press the green button, wait until it beeps, and jump into the cubicle. Of course, being a confirmed technophobe, I wasn't fully convinced. Will it have Ctrl-Alt-Del buttons? Will I need to edit the registry when it goes wrong? And what happens when my wireless router finds it - will I get scalded every time my wife sends an email? Somehow, you just know that the reality will be different from the advertised nirvana.
However, the plumber then mentioned to my wife that it comes with a second "slave" remote unit that she can put by the bed, so she can turn on the shower before she gets up in the morning. Or even keep it in the car so she can have the shower running and ready when she pulls up in the driveway after a hard day at work. At this point, any influence I may have had in the decision-making process was lost.
Wouldn't it be great if persuading clients to buy your latest and greatest software creation were that easy...?
Due to a combination of wild assumption and striking incompetence, I recently ended up repeating a long and pointless journey and overnight stay in the following week. I'm pleased to say that only the wild assumption was on my part - I assumed that an email containing details of a definite appointment meant that I was supposed to turn up at the specified time and place - whereas the striking incompetence became apparent when there was nobody else there. I knew that things were turning fruit-dimensional (pear-shaped) when the receptionist searched in vain for my name in three folders and a ring binder, then started making random phone calls.
I guess you've been through this experience yourself at some time and will recognize the symptoms. However, besides the usual pondering on what shape pears are when they haven't gone wrong, what really got me thinking is how the costing policy of the (rather less than salubrious) hotel works. For the first trip, I booked a Sunday night stay about three weeks ahead and it cost around 40 US dollars room-only for one night. For the second trip, I booked four days ahead and it cost something nearer to 100 US dollars. OK, so maybe I'm going to get a penthouse with wall-to-wall grand pianos, hot and cold running servants, and silk sheets. Or perhaps they included a banquet meal and a West End show.
Turns out that I got the same room, the same level of non-service, the same single and very small towel (though they had washed it), and the same view of the same brick wall out of the window. There was approximately the same number of guests and the same number of cars in the car park. It was the same day of the week, and even the weather was about the same. It just cost two and a half times more.
Now, I can understand that prices change based on factors such as demand, the time of year, the day of the week, the general occupancy level trends, the cost of maintenance and bank loans, the number of months left before the owner needs to change their Rolls Royce for a new one, and hundreds of other variable factors. But it still seems a bit steep, just because I booked less than a week ahead.
I suppose this is one of the problems with the Web, online booking systems, and technology in general. Instead of printing a price list that people can see (and so has to at least appear to be relatively reasonable in the way charges are calculated), you can hide it all in the business logic behind a flashy Web page and make semi-random (usually upwards) movements in the price on a whim. Ashtrays full in the Roller? Just change a configuration setting so everyone pays twice as much for the same thing for a couple of days.
So can I adopt this approach in my charging scheme? Maybe the documentation team here at p&p can figure out a way to base our chargebacks on some crafty business logic that combines essential factors such as the weather (we could be out in the garden), the day of the week (the pool hall charges half price on Wednesdays), or the time of year (we could be on a beach somewhere on vacation). Combined, of course, with the complexity of the task (need to find the appropriate reference book), the urgency (chance of keyboard friction burns), and the topic (might have to learn new stuff). In addition, to prevent unwelcome charge calculation transparency, we'll take the ANSI code of the first letter of the requester's name and add that on to the hourly rate.
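Entirely tongue-in-cheek, of course, but the scheme above is simple enough to sketch in code. Everything here - the class name, the factors, and the rates - is invented purely for the joke; a sketch, not a real billing policy:

```csharp
using System;

class ChargeBackCalculator
{
    // All rates and factors below are made up for illustration.
    const decimal BaseHourlyRate = 50m;

    public static decimal RateFor(string requesterName, DayOfWeek day, bool mightLearnNewStuff)
    {
        decimal rate = BaseHourlyRate;

        if (day == DayOfWeek.Wednesday)
            rate *= 0.5m;                 // the pool hall is half price on Wednesdays

        if (mightLearnNewStuff)
            rate += 20m;                  // surcharge for having to learn new stuff

        // The transparency-prevention factor: add the character code of the
        // first letter of the requester's name to the hourly rate.
        rate += (int)requesterName[0];

        return rate;
    }

    static void Main()
    {
        // 'A' is 65, so a Monday job for Alex comes out at 50 + 65 = 115.
        Console.WriteLine(RateFor("Alex", DayOfWeek.Monday, false));
    }
}
```

The beauty of the scheme, naturally, is that nobody billed under it could ever work out why two identical jobs cost different amounts.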
Wow, sounds like a plan. So, do you need any documentation work doing this week...?
I suppose most people have a "natural" language. I pride myself on the fact that I speak three languages: English, American, and Shouting (used in all other situations). However, while the majority of us geeks are probably mono-dialectic or bi-dialectic in terms of spoken languages, we do tend to be multi-syntactic in terms of computer languages. In fact, there can't be many older members of the geek fraternity who don't have a passing knowledge of some dialect of BASIC. It might be GW Basic, Commodore Basic, or some variety of Visual Basic. Of course, these days, many refrain from admitting this, especially if they spent time working with what they see as "proper" languages (and I'm thinking C++ here).
Over time, the languages we've all used have changed. Some that looked especially promising in an "I've just got to learn that one" sort of way, such as Objective Caml, have dropped by the wayside. Others flourish and improve with each release. In the Microsoft world, the language that seems to be growing faster than any other is, of course, C#. I guess the acceptance as an ISO standard and the availability of the platform-agnostic CLI has helped. And, in most circumstances, the syntax is sufficiently simple compared to C++ that it is relatively easy to learn...
I often wonder where I stand in this changing world of languages. I learnt BASIC, FORTH, and assembly language programming on several home computers. I learnt Pascal, Ada, a little LISP, some COBOL, and a smattering of FORTRAN during university courses. I learnt the rudiments of RPG2 at one of my employers (even though I didn't work in the IT department, and had to sneak in when the boss was away). Then I took up the Microsoft cause and learnt Visual Basic, Access Basic, WordBasic, and myriad other variations.
I remember reading somewhere a long while ago that any competent programmer can learn how to use a new language in a day, and master it in a week. There were many arguments on that bulletin board thread (see, I said it was a while ago), but I tend to believe that it's true. Of course, few languages are really "new". Knowledge of Pascal, Ada, or JScript makes learning C# and Java relatively easy. VBScript makes Visual Basic easier, and vice versa. The core programming concepts are mostly the same, and it's just a matter of learning the syntax and keywords.
So where am I going with this rambling visit to computing language history? Well, it comes about because I increasingly fall out with the dev teams about how they write code. Yes, I know it's a wild assumption that a writer might know anything about real programming, but I don’t want to change how they write code - I just want to change how they decide on names and how they think about users of other languages.
A perfect example of the battles I seem to fight over and over again is the new Unity Application Block. The development team write in C#, and the original name of the method to retrieve object instances was "Get", which is - of course - a Visual Basic keyword. Thankfully, after I grumbled at them, it was changed to "Resolve". However, the real issues are things like variable names used in the code. I've struggled with this problem for many years, in articles, books, and documentation where the equivalent code is listed in more than one language; yet it seems I am getting no nearer to changing attitudes.
The technique I call "C++ naming" that many C# developers adopt is to simply lower-case or camel-case the class name, which means that Visual Basic has to prefix the variable name with something else for the code to remain remotely similar. And generally that "something" is the hated underscore (which doesn’t show up well in listings). In fact, the patterns & practices coding guidelines actually say "it is important to differentiate member variables from properties when the only difference in C# is capitalization". I hate to think how many hours I waste renaming variables in listings just to get the code to look the same in all the languages. And is it actually important? Do people ever read more than one language listing?
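To make the clash concrete, here's a minimal side-by-side sketch (the class and member names are invented for illustration, not taken from any p&p code) of the "C++ naming" style in C#, and what case-insensitive Visual Basic is forced to do with it:

```
// C#: the field and the property differ only by capitalization.
public class Order
{
    private decimal totalPrice;        // "C++ naming": camel-cased member
    public decimal TotalPrice
    {
        get { return totalPrice; }
        set { totalPrice = value; }
    }
}

' Visual Basic is case-insensitive, so totalPrice and TotalPrice would
' collide. The field needs a different name - usually the hated underscore.
Public Class Order
    Private _totalPrice As Decimal
    Public Property TotalPrice() As Decimal
        Get
            Return _totalPrice
        End Get
        Set(ByVal value As Decimal)
            _totalPrice = value
        End Set
    End Property
End Class
```

So either the Visual Basic listing grows underscores the C# listing doesn't have, or somebody (usually me) renames the C# variables so the two listings can match.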
I don’t know about you, but I hate to see listings that differ between languages where this is not actually necessary. We tend to list only C# and Visual Basic, and in a few cases (such as anonymous delegates and methods), you do need to change the code between C# and Visual Basic. Plus, in cases where the developers use "C# best practice" or take advantage of features available only in C# syntax (which thankfully are few), I have to accept that I'll need to do some work to get to a Visual Basic translation - though automated tools do help a great deal.
Last week I mused about how some instruction manuals ("guidance documents" in p&p-speak) are wonderfully accurate, really useful, and may even have helpful pictures. I guess the quality of the documentation depends to some extent on how much you pay for the product; and, hopefully, on how dangerous it can be if you get it wrong. But, in terms of "can be dangerous", a colleague recently reported that she had an example of just the opposite.
She's had the builders in for what seems like the past year, building a "deck" that measures about 800 square something-or-others. I asked if she was building a scale model of the Titanic, but it turns out that it's a garden-feature type of deck that goes twice round the house, and probably fills her garden and next door's as well. Anyway, she decided to top it off by purchasing a nice big gas barbeque so she can throw a deck-warming party. However, it seems she was somewhat perturbed to discover that the instructions are totally incomprehensible, except for the parts that are completely wrong. As she said, "A bit worrying for something that's supposed to reach 800 degrees".
But best of all, according to the documentation, it comes "complete with four catsers". She said she'd spent ages sorting through the box and all the packaging, but could find nothing with fur, sharp claws, and whiskers. So I decided to do some research for her on the Internet (which is, of course, always completely accurate) and I am pleased to report that I have the answer. According to a definitive resource, a "catser" is the opposite of a "mouser". It's a large, and usually angry, black rat that chases and eats cats. I’m pleased to say that I got an invite to the deck-warming party, but I think I'll maybe give the first one a miss, or pretend I'm vegetarian, just in case she actually found the catsers afterwards.
Anyway, talking about things that get hot, why is it that stuff seems to get hotter as it gets older? OK, so I guess this happens with lots of things (I'm no spring chicken and I do tend to get very warm after ten minutes' brisk exercise, such as typing fast). But where I'm coming from here is with an ADSL modem. The previous two I had were installed by the phone company and had integral mains transformers, so you'd expect them to get a bit hot. Especially as we get lots more volts for our money here in England than they do in many other countries.
These modems seemed to have a working life of about a year, and when the second one failed I couldn't face another fight with the phone company so I decided to purchase my own; making sure it had an external mains power pack. For the first couple of years it ran at a nice comfortable "just warm to the touch" temperature. However, suddenly it's started to fail regularly through overheating. Now I have to keep it on one of those laptop cooler things, and have a desk fan roaring away in the server cabinet as well.
If I could get the lid off, I'd vacuum it out like I do with the servers - but how much stuff can there be in an ADSL modem to get gummed up? A friend told me it was probably "thermal runaway" (I did manage to resist telling him that I keep the server cabinet doors locked, so it can't get out). Maybe he's right. At a certain temperature it just loses its self-control and transforms into a microwave oven. But the ambient temperature here has not been above about 15 degrees Centigrade. Maybe they sent me a version designed for use in Alaska by mistake. Of course, if I had one designed to work at the kind of ambient temperatures we get here in English summers (it sometimes gets so hot you have to take one of your cardigans off) it would probably freeze up in winter.
Never mind, I've got the perfect solution. I'll swap it with the barbeque. After all, the modem has really good instructions, definitely no catsers, and is just reaching the right temperature now for cooking burgers.