Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Being Objective...

    I was party to a discussion a couple of weeks ago that wandered off topic (as so many I'm involved in seem to do) into the question of whether a programmer is actually "OO" or not. I guess I have to admit to being a long-time railway (railroad) fanatic - an unfortunate tendency that has even, in the past, extended to model railways. So in real life (?), "OO" is the gauge of a model railway. But then someone suggested that many programmers, especially those coming from scripting languages such as classic ASP, are more "OB" than "OO". It turns out that what they mean, I'm given to understand, is that a large proportion of programmers write code that is object-based rather than object-oriented.
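
    In case the distinction is new to you too, here's my take on it in code terms - an illustrative C# sketch, not any official definition:

    // "Object-based": happily using objects that somebody else built.
    static string[] LoadOrders()
    {
        return System.IO.File.ReadAllLines("orders.txt");
    }

    // "Object-oriented": defining your own abstractions, with inheritance
    // and polymorphism doing the real work.
    abstract class Shape
    {
        public abstract double Area();
    }

    class Circle : Shape
    {
        private readonly double radius;
        public Circle(double radius) { this.radius = radius; }
        public override double Area() { return System.Math.PI * radius * radius; }
    }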

    I suppose that I've generally fallen into the "OB" category. Having proudly mastered using objects in my code (starting, I guess, with stuff like FileSystemObject in ASP), it was kind of disappointing to realize that I'm still a second-class citizen in the brave new world of modern programming languages. I mean, when I use Visual Basic in .NET I purposely avoid importing the VB compatibility assembly, and I use "proper" methods such as Substring and IndexOf rather than Mid and InStr. I even use .NET data types, such as Int32 instead of Integer, though I regularly get castigated for that. Especially when I write C# code and use Int32 instead of int, and Object with a capital "O". As I discovered, if you want to start a "discussion", tell a seasoned C# programmer that they are supposed to use the .NET classes instead of all those weird data type names.

    I'm told that the compiler (C# or VB) simply translates the language-specific type name into the appropriate type automatically, and it makes sense for programmers to use the types defined in the coding language rather than specifying the .NET data types directly. Does it make a difference? I'm not clever enough to know for sure, but I can't see how it would affect performance because it's all IL code in the end. My take is that more and more programmers are having to (and are expected to) cope with more than one language. If you do contract work, and prefer to work in C#, do you turn down a job working on a project originally written in Visual Basic? If we are in for a global recession, as many suggest, can you afford to turn work away?
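
    For what it's worth, that claim is easy to check with a trivial C# sketch (compile it and compare the IL with ildasm if, like me, you need convincing):

    using System;

    class TypeAliasDemo
    {
        static void Main()
        {
            int a = 42;       // "int" is just the C# alias for System.Int32...
            Int32 b = 42;     // ...so these two lines compile to identical IL

            object x = a;     // and "object" is an alias for System.Object
            Object y = b;

            Console.WriteLine(a.GetType() == b.GetType());   // True
            Console.WriteLine(x.Equals(y));                  // True
        }
    }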

    And what about those C# programmers who tell me they "can't understand Visual Basic"? I can imagine that, if you've never used it, you would find it hard to write VB.NET code from scratch, though it surely can't take long to figure out that you use "End" instead of a closing curly. OK, so the OO features have quite different keywords (like "Friend" and "MustOverride"), but it's a lot easier than learning Portuguese (unless you're Portuguese, of course) or any other foreign language. Hey, almost all of it is .NET classes. Mind you, someone to crack you across the knuckles with a stick whenever your fingers stray towards the semi-colon key would help. And I can't see how any C# programmer can say they can't read Visual Basic code. Again, the class modifiers and inheritance keywords are a bit different, but they aren't that hard to figure out.
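
    In fact, most of the mapping really is mechanical. Here's a rough cheat-sheet, written as C# with the Visual Basic equivalents alongside in comments (my own summary, so treat it as illustrative rather than definitive):

    internal abstract class Widget            // Friend MustInherit Class Widget
    {
        protected abstract void Render();     // Protected MustOverride Sub Render()

        public virtual string Name            // Public Overridable ReadOnly Property Name() As String
        {
            get { return "widget"; }          //     Get ... Return "widget" ... End Get
        }
    }                                         // End Class - not a closing curly in sight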

    OK, so it may sound like another rant against C# programmers, but I can assure you it definitely is not. After all, I'm half C# programmer, and I really do like the language. Weirdly, though, most of what I write (in terms of programming rather than documentation) is in VB.NET - I suppose it's force of habit and the comfort factor having come from classic ASP. Yet most of what I read (docs and code) is in C#. And, as they say in the movies, some of my best friends are C# programmers...

    I suppose what I really want to know is: what's the test to see if you are "OO" rather than "OB"? Is there a fixed number of interfaces and base classes you have to include in a project? Do you have to have at least 10% inherited properties and methods, and use polymorphism at least twice per 500 lines of code? If you forget to refactor one class, or use an array instead of a generic list, does that automatically disqualify you? Perhaps there is a minimum set of design patterns you have to implement per application. And what about the fact that I still use FileSystemObject in an old Web site I never got round to converting from classic ASP? Or is it that, secretly, you can only really be "OO" if you write in C#, Java, C++, and other "proper" languages...?

  • Easter Bonnets and Adverse Automation

    A couple of initially unconnected events last week conspired to nudge my brain into some kind of half-awake state where it combined them into a surreal view of "automatic" stuff. One of the events was the return from Tina, our editor and proof-reader, of my article about the Team System Management Model Designer Power Tool (a product that, thankfully, I'm legally permitted to refer to as just "TSMMD" - and will do so from now on). The second event was deciding that I ought to get a laptop sorted ready for an upcoming trip to Redmond. The combined result is some manic ravings on the meanings of stupid words, and the fact that Windows Vista obviously hates me.

    TSMMD is a new add-in for Visual Studio Team System that I have been documenting for the previous few CTP releases. It's a really neat tool that allows you to build management models that describe the health states and instrumentation of an application, and then generate the appropriate instrumentation code as part of your VS project (see http://www.codeplex.com/dfo/ for details). The article is one of those "About..." and "Getting Started" things that compares what the product does to some commonplace everyday situation - in this case the way repair shops can do computerized diagnosis of faults in a modern motor car. So the article came back with editorial comments such as "Err...what does this mean?" where I had written stuff like "...without having to look under the bonnet" (Tina asked if I was taking part in an Easter parade), "...hatchback or saloon car" (is this one that has a drinks cabinet built in?), and "...look for some tools in the boot" (surely that's where you keep your feet?). And, of course, "When you say 'motor car' do you mean 'automobile'?"

    Some time ago I rambled on about the way that the culture and language in the US and UK are so very similar, and yet so subtly different (see Two Nations Divided by Light Switches). But the one area where almost everything seems to be different is in the realm of motoring. I mean, to me, a car starts with a bonnet and ends with a boot. Just like a person (and not necessarily one in an Easter parade). Makes perfect sense. As I said to Tina, here in England we just keep our wellingtons in our boots... except when going to a car boot sale. Why on earth would a car start with a hood and end with a trunk? Sounds more like an elephant going backwards. And what do you call the bit of an open-top sports car that keeps the passengers dry when it rains? Surely that's the hood (as in "It looks like rain, better put the hood up"). Still, I suppose cars have plenty of bits with stupid names anyway. I know that "dashboard" comes from the days of carts and wagons where they nailed a plank across the front to stop galloping (dashing?) horses' hooves splashing mud onto the driver's new breeches.

    Notice how I avoided saying "trousers" there. I once heard a conference organizer ask all the speakers to wear black pants for their presentations as part of a consistent dress code. I wondered how attendees would know what color underwear I had on. But that's a whole different topic area.

    Anyway, getting back to cars, I suppose we now refer to the bit with the speedometer on it as the fascia. However, we still have a "fan belt" even though the radiator fan is electric and the belt drives the alternator and the pumps for the power steering and the air conditioning instead. And it isn't just the car itself. What about how, here in the UK, we have "slip roads" on our motorways? It's not like they are surfaced with super-smooth tarmac (asphalt) so you slide around. I guess the idea is that you use them to slip into the traffic stream (in which case, the way most people drive here, they should be called "barge-in roads"). The US "on-ramp" and "off-ramp" make more sense, even when they don't go uphill or downhill.

    And why "freeway" in the US? My experience of driving in Florida is that you have to carry $20 in loose change for the toll booths that they planted every two miles. Although, around where I live, when they build a new bypass round a town or village, everyone refers to it as "the fast road". Even when there are traffic lights every 20 yards and a half-mile tailback most of the day. Again, the words conspire to confuse. "Traffic lights"? A nice sensible term I reckon. Yet when we were working on the Unity Application Block they wrote a sample they called "Stoplight". As the UI was just three oblong colored boxes it took me a while to figure out that this was the US equivalent. Is it still a "stop light" when it's showing green?

    But enough motoring invective. The other conspiring event this week was battling, after a few months away from it, with Vista. I have to say that there are lots of things I really love about Vista, but it seems to have been designed to annoy the more technical and experienced computer user. A lot of the aggro is, I know, attributable to my long-established familiarity with XP and its internal workings. Vista is no doubt ideal for the less experienced user, as it hides lots of complexity and presents functionality that works automatically.

    Yes, I've finally given up and turned off UAC so I can poke about as required and use weird scripts and macros required for my daily tasks. But it would be really nice to have an "expert" mode that lets you see (and change) all the hidden settings without having to go through several "inexperienced user" screens. I mean, it keeps complaining that my connection to the outside world through my proxy server is "not authenticated" even though it works fine, and I can't find any way to change this. And it won't let my FTP client list files, even though it works fine on the XP machine sitting next to it.

    What finally got me going this week, however, was backing up the machine. I carry a small USB disk drive around with a full image of the C: drive that I can restore if it all falls over. For years I've been using TrueImage (see diary entries passim) and it works well. However, when I bought this laptop I paid extra for Vista Ultimate so I'd get the proper built-in backup software to do disk imaging. I imaged the machine when it was new, but the configuration is much changed since then, so I thought I'd do a complete new backup. As there isn't a lot of space on the USB drive, I deleted the existing backup first. But Vista still thinks it's there - obviously there's some secret setting somewhere in the O/S (and it's not in the Registry 'cos I searched there) that makes it think I have an existing backup. So it will only do an incremental backup - there is no option to say I want a whole new one.

    And it also insists on backing up the drive D: restore partition, even though I don't want that backed up. So I ran it anyway, but afterwards all it said was "the backup is complete". Did it do an incremental one or a full one? Did it skip stuff that it thinks is in the backup image I deleted? Will it actually restore to give me a working machine? In the end I deleted the backup and used TrueImage (I've got version 10 and it works fine with Vista). It asks you everything it needs to know to create the kind of image you want, and then just does it. And I've restored machines in the past using it, so I feel comfortable that I can get back to where I was when the sky falls in.

    You see, this is where I worry about "automatic" stuff. For some things it seems like a really good idea, and often it "just works". Drifting back to cars, my latest acquisition has automatic climate control that just works. You can, if you wish, dive into the settings and specify hundreds of individual parts of the process, but why bother? Just set the temperature you want and let it get on with it. The car also has automatic window wipers (notice I avoided saying windscreen or windshield), which is great. They wipe the window when it rains.

    But it also has automatic headlights that come on when it's dark. And this feature is turned off because I always worry that they'll come on just as I get to a junction and someone will think I'm flashing them and pull out right in front of me. Notice the important point. You can turn off the automation if you don't want it...

  • Whose Time Is It Anyway?

    Here in our quiet little corner of the People's Republic of Europe, our Government decided a while ago to flog off the radio spectrum in order to pay for their countless spin doctors, pointless focus groups, endless ministerial jaunts, never-ending quangos, and failed experiments with Socialism. In return, they gave us the opportunity to enter the brave new world of Digital Broadcasting. And, rumor has it, they will eventually build enough transmitters so that those of us who don't live in London will actually be able to receive it. Last I heard, the target date is 2013. Meanwhile, I've had to fill the entire attic of our house with bits of bent aluminium to try and drag some scraps of DAB (Digital Audio Broadcasting) out of the airwaves and down to the kitchen so my wife can have rock music on loud enough to drown out the sound of me washing the dishes.

    Anyone brave enough to have tackled the diary entries from my previous life will know that, up until now, we've been using a rather nice stand-alone Soundbridge Internet Radio to get a constant stream of rock music that generally smothers my unfortunate domestic noises. However, since the BBC released their iPlayer, the fragility of the copper-wired Internet in our part of the country has been exposed for all to see. Now all we get from Virgin Classic Rock in the evenings and weekends is "It looks like you can't get our digital stream..." followed by several seconds of rebuffering and then another five minutes of music. So the boss gets to hear me clanking her best plates together.

    Those distant diaretic ramblings also documented the problems of trying to get Microsoft Media Center working with the exciting new digital technologies in our forgotten little corner of Merry Olde England (well, actually almost the geographical center, but still a long way from London). Suffice to say that it basically involved turning the roof of our house into a miniature version of Jodrell Bank, but at least we now get (depending on weather conditions) around 80 channels of digital stuff on the TV. Which, apart from 30 TV channels that seem to still be showing programmes from 1973, includes loads of radio channels. As I couldn't figure a way to drag the 42" wall-mounted screen into the kitchen every time I did the dishes, I suggested to my wife that she just turn one of these up really loud and pretend we live next door to a rock festival, but she wouldn't go for that.

    So, for her birthday the other week, I treated her to a shiny new DAB radio. It's a really neat thing that consists of five different lumps of plastic - two speakers, a control unit, a combined bass woofer, and a separate tiny little matrix display thingy that you stick on the wall. This means that I can hide everything but the display thingy on top of the cupboards out of the way of the soapy fountain that is me doing the dishes. And combined with some low-loss cable and the aluminium-filled attic, we can actually get Planet Rock and a couple of dozen other stations. In fact, there's even one that just plays birdsong all day!

    One really neat thing with DAB is that you can view the secondary information stream. Saves loads of arguments about which band it was that recorded the track you're listening to. It even tells you the name of the program you're tuned to, and what's coming next. I guess this is pretty much underwhelming to those who have already had DAB for ages, but - as late arrivals to the digital scene - we got really excited about it. Shows how interesting my life is most of the time. However, after playing with this for a while, I suddenly realised that the people who make the hardware obviously don't do any field testing of their products. I mean, the options for the "extra info" on the neat little display thingy are the MUX channel name (such as "DigitalNetwork1"), the time and date (in case you don't own a clock), the signal strength, the bit rate, and the rather nice scrolling text messages.

    Now, as a developer, what would you do? Have it remember what you selected last time and go back to that option automatically? Have it default to the rather nice scrolling text messages? Allow the user to select which they want as the default in the setup menu? All, in my opinion, obvious options. But no, they decided that it should always default to the MUX channel name every time, and you can't change this behavior. You have to press "Info/Display" twice every time you turn it on or change channel. Imagine if Windows started with a DOS prompt every time and you had to type "WIN" and click "Yes" to get to your desktop. Err... a bit like Windows 3.0 in fact. Maybe the radio's O/S developers were still using that.
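
    Just to labour the point, remembering the user's choice is only a few lines of code. A toy C# sketch (obviously nothing like the radio's real firmware, and the names are all mine):

    // Toy model of the display-mode default that the radio gets wrong.
    enum DisplayMode { MuxName, TimeAndDate, SignalStrength, BitRate, ScrollingText }

    class DisplaySettings
    {
        public DisplayMode Current { get; private set; }

        // Start from whatever the user chose last time...
        public DisplaySettings(DisplayMode lastSaved) { Current = lastSaved; }

        // ...and persist each new choice as it is made.
        public void Select(DisplayMode mode)
        {
            Current = mode;
            Save(mode);
        }

        void Save(DisplayMode mode) { /* write to flash - device-specific */ }
    }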

    And here's another thing with digital radio. It can't tell the time. With the old-fashioned steam radios of the FM and AM variety, the time signal was pretty much accurate. OK, so if you lived a long way from the transmitter you were maybe a picosecond or two behind as the radio waves fought their way through the clouds and trees, but it was near enough. Now it's a second or two behind because it has to go through some magic process to get converted to digital and back again. How do I know? Because the kitchen clock is one of those radio-controlled things. Supposedly it uses proper radio waves so as to be accurate to a fraction of a second. Even when those waves have to come all the way from Rugby, which is nearly 60 miles away. And, of course, the same happens with digital TV. I recently read a letter in the paper from someone who has three DAB radios as well as digital TV, and they said they all vary by several seconds. So in our brave new digital world, you never actually know what time it is. Maybe that's what they mean by Internet time - everyone has their own version.

    I suppose I could just use the fancy radio-controlled watch that my wife bought me for Christmas instead. Except it has 97 functions and only four buttons. And one of those just turns the backlight on. Every time I put it on it tells me the time in Hong Kong. I have to carry the instruction book around with me so I can reconfigure it - possibly another good example of lack of field testing. And they say that software is hard for "ordinary people" to understand. Just imagine how much fun we'll have once they get Word and Outlook to run on a wristwatch. Not only will you need to carry a box of instruction manuals around (which I guess is good for us here in the documentation team), you'll probably miss your train because you won't know what time it is, or if your time actually is the real one...

  • How p&p Makes Cheese Sandwiches

    OK, so we don't actually make cheese sandwiches here at p&p. Well, as far as I know we don't (but if we did, they'd probably be the best cheese sandwiches in the world...). When I'm over in Redmond I have to stroll across the bridge to Building 4 and buy one from the canteen, though it's worth the effort because you get four different kinds of cheese in it - as well as some salad stuff. Only in the USA could someone decide that you need four different cheeses in a sandwich. Here in England a cheese sandwich is basically a chunk of Cheddar slapped between two slices of bread. Take it or leave it. Maybe it's because there is always so much choice over there, and people can't make up their mind which cheese to have.

    And why is it so hard to order stuff in the States? I usually find it takes ten minutes just to order a coffee in Starbucks 'cos I have to answer endless questions. Do you want 2% milk sir? No, fill it up to the top please. Any syrup in it? No thanks, I want coffee not a cocktail. What about topping? Some froth would be nice. Am I going to "go" with it? No, I'll just leave it behind on the counter. In fact, when we go out for a meal I like to play the "No Questions" game. Basically, this involves waiting till last to place your order, and specifying all the details of your required repast in one go so the waiter doesn't have any questions left to ask. I've only ever won once, and that was in a pizza takeaway. I think they dream up extra imaginary questions just to make sure they get the last word.

    Anyway, as usual, I'm wandering off-topic. We really need to get back to the cheese sandwiches. So, as a developer, how would you go about making a cheese sandwich? My guess would be something like this:

    1. Requirements analysis. Survey all the sandwichees to discover what kind of bread they want (rye, brown, granary, white, toasted?), what cheese they like (Wensleydale, Stilton, Gruyere, Jack?), whether they want butter or margarine spread, etc.
    2. Resource and materials review. How thin can you slice the cheese to get optimum taste while maximizing efficient usage? How thick should you slice the bread to get good sandwich stability with maximum return on loaf investment?
    3. Project planning. Decide on tests that will check for a correct result, and choose a suitable development environment such as a machine that can slice both bread and cheese. Formulate an iterative milepost-based plan for manufacture.
    4. Agile test-driven development. Pair one bread/cheese slicing operative with one bread spreader, and pass the resulting components to an assembler who merges one slice of cheese with two slices of bread.
    5. Test phase. Eat one.

    Looks like a good plan. So how would we do the same in the documentation department? How about:

    1. Buy some cheese. Slice it up into random thicknesses and sizes and mark each slice with a number using a thick black marker pen so you know what order they fit together afterwards.
    2. Order a loaf from The Variable Baker Inc. Discover it's a different size from the cheese slices, so trim each one to fit.
    3. As nobody really knows what the sandwichees will actually want at this stage, spread the bread with real butter as that seems like the obvious option.
    4. Carefully organize the numbered slices of cheese, index them, create a table of sandwich contents, wash the numbers off each slice of cheese, and assemble the sandwiches neatly on a plate. Stick a little cocktail-stick flag in each one to identify it.
    5. As each sandwichee arrives, ask them what kind of cheese and what kind of spread they want. Tip all the sandwiches off the plate, reorganize and modify the contents, then reassemble the plate of sandwiches with different little cocktail-stick flags.
    6. Repeat step 5 until everyone is fed up with cheese sandwiches.

    Yep, it seems to be a completely stupid approach. But that's pretty much what we have to do to get documentation out of the door in the multitude of required formats. HTML and HxS files for MSDN, CHM files for HTML Help, merge modules for DocExplorer, and PDF for printing and direct publication. Oh, and occasionally Word documents, videos, and PowerPoint slide decks as well. Maybe you haven't noticed how complicated the doc sets in Visual Studio's DocExplorer tool and HTML Help actually are? There's fancy formatting, collapsible sections, selectable code languages (and it remembers your preference), multiple nesting of topics, inter-topic and intra-topic links, a table of contents, an index, and search capabilities. It even opens on the appropriate topic when you hit F1 on a keyword in the VS editor, click a link in a sample application, or click a Start menu or desktop shortcut. Yet it all starts off as a set of separate Word documents based on the special p&p DocTools template.

    Yes, we have tools. We have a tool that converts multiple Word docs into a set of HTML files, one that generates a CHM file, and one that generates an HxS. But they don't do indexes, which have to be created by hand and then the CHM and HxS files recompiled to include the index. Then it needs a Visual Studio project to compile the HxS into a DocExplorer merge module, and another to create a setup routine to test the merge module. But if you suddenly decide you need to include topic keywords for help links, you have to edit the original Word documents, generate and then post-process the individual HTML files, and start over with assembly and compilation.

    We have a tool (Sandcastle) that can create an API reference doc set as HTML files from compiled assemblies. But you need to modify all of them (often several hundred) if you want them indexed - though we have another (home-made and somewhat dodgy) custom tool for that. And then you have to recompile it all again. And then rebuild the merge module and the setup routine.

    What about PDF? The starting point is the set of multiple Word docs that contain special custom content controls to define the links between topics, and there appears to be no suitable tool to assemble these. So you run the tool that converts them to a set of HTML files, then another dodgy home-built custom tool to process the HTML files and strip out all the gunk PDF can't cope with. Then you build a CHM file and compile in a table of contents and a much-tweaked style sheet. Finally, run it all through another tool that turns the CHM into a PDF document.

    Need to make a minor change to the content because you added a late feature addition to the software? Found a formatting error (that word really should be in bold font)? Got a broken link because someone moved a page on their Web site? Found a rude word in the code comments that Sandcastle helpfully copied into the API reference section? For consistency, the change has to be made in the original Word document or the source code. So you start again from scratch. And it seems that there are only two people in the world who know how to do all this stuff!

    Well, at least we've got a process that copes with changing demands and the unpredictability of software development. But it sure would be nice to have it all in a single IDE like Visual Studio. Even really good sandwiches with four different cheeses don't fully soothe the pain. Mind you, I hear from RoAnn that they have cheese sandwiches on flatbread in the fancy new building 37 cafeteria that they toast in a Panini grill. Being foreign (English), I'm not sure what "flatbread" actually is - surely if they used any other shape the cheese would fall out? Reminds me of the old story about the motorist who turns up at a repair shop and is told that the problem is a flat battery. "Well", says the customer, "What shape should it be?"...

  • Preaching What You Practice

    A couple of years ago I (somewhat inadvertently) got involved in learning more about software design patterns than I really wanted to. It sounded like fun in the beginning, in a geeky kind of way, but soon - like so many of my "I wonder" ideas - it spiralled out of control.

    I was daft enough to propose a session about design patterns to several conference organizers, and to my surprise they actually went for it. In a big way. So I ended up having to keep doing it, even though I soon realized that I was digging myself into the proverbial hole. Best way to get flamed at a conference or when writing articles? Do security or design patterns. And, since I suffered first degree burns with the first topic some years ago, I can't imagine how I drifted so aimlessly into the second one.

    Mind you, people seemed to like the story about when you are all sitting in a design meeting for a new product, figuring out the architecture and development plan, and some guy at the back with a pony tail, beard, and sandals suggests using the Atkinson-Miller Reverse Osmosis pattern for the business layer. Everyone smiles and nods knowingly; then they sneak back to their desk to search for it on the Web and see what this guy was talking about. And, of course, discover that there is no such pattern. Thing is, you never know. There are so many out there, and they even seem to keep creating new ones. Besides which, design patterns are scary things to many people (including me). Especially if UML seems to be just an Unfathomable Mixture of Lines.

    Of course, design patterns are at the core of best practice software development, and part of most things we do at p&p. So I now find myself on a project that involves documenting architectural best practice. And, since somebody accidentally tripped over one of my online articles, they decided I should get the job of documenting the most common and useful software patterns. No problem, I know most of the names and there is plenty of material out there I can use for research. And we have a team of incredibly bright dev and architect guys to advise me, so it's just a matter of applying the usual documentation engineering process to the problem. Gather the material, cross reference it, analyze it, review it, and document the outcome.

    Ha! Would that it were that easy. I mean, take a simple pattern like Singleton. GOF (the Gang of Four in their book Design Patterns: Elements of Reusable Object-Oriented Software) give the definition of the pattern and its intended outcome. To quote: "Ensure a class only has one instance, and provide a global point of access to it." So documenting the intentions of the pattern is easy. The problem comes when you try and document the implementation.

    It starts with the simple approach of making the constructor private and exposing a method that creates an instance on the first call, then returns it every time afterwards. But what about thread safety when creating the instance? No problem; put a lock around the bit that checks for an instance and creates one if there is no existing instance. But then you lock the thread every time you access the method. So start with the test for existence, and only lock if you need to create the instance. But then you need to check for an existing instance again after you lock the thread, in case you weren't quick enough and another thread snuck in while you weren't looking. OK so far, but some compilers (including C# but not Visual Basic) optimise the code and will remove the second check, as there is nothing in the routine that can change the value of the instance variable between the two tests. So you need to mark the variable as volatile.
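
    For reference, this is roughly where you end up - a minimal C# sketch of the double-checked locking implementation described above (the class and member names are mine):

    public sealed class Singleton
    {
        // volatile stops the compiler caching or reordering reads of the field
        private static volatile Singleton instance;
        private static readonly object padlock = new object();

        private Singleton() { }   // private constructor - nobody else can create one

        public static Singleton Instance
        {
            get
            {
                if (instance == null)             // first test: avoid locking on every access
                {
                    lock (padlock)
                    {
                        if (instance == null)     // second test: another thread may have snuck in
                        {
                            instance = new Singleton();
                        }
                    }
                }
                return instance;
            }
        }
    }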

    Now that you've actually got the "best" implementation of the pattern, you discover that the recommended approach in .NET is to allow the framework to create the instance as a shared variable automatically, which they say is guaranteed to be safe in a multi-threaded environment. However, this means that you don't get lazy initialization - the framework creates it when the program starts. But you can wrap the instance in a nested class and let .NET create it only on demand. So which is best? What should I recommend? I guess the trick is to document both (as several Web sites already do). Problem solved? Not quite. Now you need to explain where and when use of the pattern is most appropriate.
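
    And here's a quick sketch of both framework-initialized variants, based on the commonly published forms of these idioms (again, the names are mine):

    // Eager form: the CLR creates the shared instance itself, which is
    // guaranteed thread-safe - but it isn't lazy.
    public sealed class EagerSingleton
    {
        public static readonly EagerSingleton Instance = new EagerSingleton();
        private EagerSingleton() { }
    }

    // Nested-class form: nothing is created until the first time some
    // code touches the Instance property.
    public sealed class LazySingleton
    {
        private LazySingleton() { }

        public static LazySingleton Instance
        {
            get { return Nested.instance; }
        }

        private class Nested
        {
            // an explicit static constructor stops the compiler marking the
            // type beforefieldinit, which would allow earlier initialization
            static Nested() { }
            internal static readonly LazySingleton instance = new LazySingleton();
        }
    }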

    At this point, I discovered I'd proverbially put my head into a hornet's nest. Out there in the real world, there seems to be a 50:50 split between people saying using Singleton is a worse sin than GOTO, and those who swear by it as a useful tool in the programmer's arsenal. In fact, I spent an hour reading more than 100 posts in one thread that (between flamings) never did provide any useful resolution. Instead of Singleton, they say, use a shared or global variable. As much of the online stuff seems to describe Singleton only as an ideal way to implement a global counter, I can see that reasoning. However, I've used it for things like exposing read-only data from an XML disk file, and it worked fine. The application only instantiates it on the few occasions that the data is required, but it's available quickly and repeatedly afterwards to all the code that needs it. I suppose that's one of the joys of the lazy initialization approach.

    And then, having fought my way through all this stuff, I remembered the last project I worked on. If you use a dependency injection framework or container (such as the Unity Application Block) you have a built-in mechanism for creating and managing singletons. It even allows you to manage singletons using weak references where another process originally generated the instance. And you automatically get lazy initialization for your own classes as well as singleton behavior - even if you didn't originally define the class to include singleton capabilities. So I guess I need to document that approach as well.
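
    A rough idea of what that looks like in code, based on the Unity API as I understand it (ICatalog and XmlFileCatalog are invented for the example):

    using Microsoft.Practices.Unity;

    IUnityContainer container = new UnityContainer();

    // ContainerControlledLifetimeManager tells the container to hand back
    // the same instance on every Resolve - singleton behavior for a class
    // that was never written as a singleton.
    container.RegisterType<ICatalog, XmlFileCatalog>(
        new ContainerControlledLifetimeManager());

    // Nothing is created until the first request, so you get lazy
    // initialization for free; later calls return the same instance.
    ICatalog catalog = container.Resolve<ICatalog>();

    // For instances created elsewhere, ExternallyControlledLifetimeManager
    // holds only a weak reference to the object it manages:
    // container.RegisterInstance(existingCatalog, new ExternallyControlledLifetimeManager());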

    And then there are sixty or so other patterns to tackle. And some of them might actually be complicated...

  • Wet Stuff

    After the "hot stuff" article of a few weeks ago, I thought I might as well shift focus towards another similarly inane topic, like showers. You see, one thing they seem really good at in the U.S. is doing showers (the bathroom type, not the weather type - though Redmond does seem to have an equal share of both). Even when I stay in relatively down-market hotels, the rooms always seem to have a good shower. In fact, in one I used a while ago, I actually got a wet room; though my wife would probably suggest that any bathroom I use is a wet room after I'm finished.

    So how difficult is it to provide a good shower? Let's face it, you only need three components and one remote service: a pipe in the wall for the water to come out of, a hole in the floor for it to run away, and a knob that controls the temperature. As for a remote service, someone has to provide hot and cold water, but I guess if you intend to install a shower that's a given anyway. Yet, here in Ye Olde England, we don't seem to have grasped the technology quite so well. Our house has what the builder referred to as "a top quality shower" installed. What this means is that the tray doesn't sag and crack when you stand on it (even if you do need to climb a 10 inch high step to get in), the cabinet panels are real toughened glass (instead of plastic), and the shower itself is a top of the range electric thing from one of the major manufacturers.

    So why is it so useless? My wife always knows when I'm in the shower from the swearing and clattering noises because it's so small I keep banging my elbows against the side. And if you drop the soap, you have to turn off the shower and get out to pick it up because there isn't room to bend down. Not only that, the flow in wintertime is so poor you have to run around to get wet (or you would have to if there were room). But the worst part is that you spend the first ten minutes of your shower trying to get the temperature right, even if it was perfect yesterday, and then you get scalded when someone turns a tap on elsewhere in the house.

    Now, I'm not a professional UI designer or an expert on domestic plumbing, but I reckon the only thing you are really interested in with a shower is the temperature of the water coming out of the pipe in the wall (I'm assuming here it's not that hard to provide a hole in the floor for the water to run away). Yet the "top quality" thing installed in our bathroom has two user input devices: a three-position switch marked "high", "medium", and "low", and a knob labelled "flow" that goes from "low" to "high". Notice no mention of the important requirement "temperature". The idea is that you randomly fiddle with these two controls until the temperature is about right, and then hope nobody turns on a tap. Of course, there is a built-in delay while the two controls mutually interact with each other, so that any adjustment you make takes a minute or two to affect the water temperature. And the final temperature depends on the current water pressure and the incoming water temperature, so you can guarantee it will be different every time.

    Just think if we built our software applications like this. Instead of Windows having a volume control in the taskbar, it would have a selector for choosing the particular integrated circuit on the sound card you want to use, and a slider for changing the voltage you send to it; and any changes you make would have no effect until the next song started playing. Or your enterprise application would have two controls where you entered the price of an order: a set of option buttons where you specify the number of zeros in the amount, and a button you click repeatedly until it randomly shows the actual value you want.

    So, getting back to the wet stuff, we decided to have the shower replaced by something more usable. Of course, the main problem here is that this process involves a plumber - especially as we had to have the heating radiator moved to make room for a bigger shower cabinet. I don't know what it's like in other places, but here it tends to resemble the Flanders & Swann "The Gasman Cometh" affair. The process basically involves:

    • Phone three plumbers to ask them to come and give you a quote, and arrange appropriate times and dates (preferably, so they don't all arrive together).
    • Rearrange the dates as required when they phone to say that urgent jobs have come up, and they can't make the agreed date or time.
    • If one actually does turn up, show them what's required and listen to the sucking-teeth-and-raising-eyebrow noises as they tell you how complicated (and, of course, expensive) the job will be. "Hmmm, that's a concrete block wall so it will be difficult getting pipes in," and "we'll have to find a way to get hot water from the tank." Like they didn't expect that installing a shower would involve pipes and water...?
    • Put up with two weeks of dust, banging noises, no water, and visits from various tradesmen; including two plumbers, a tiler, an electrician, two delivery van drivers, and some guy who just seems to have wandered in off the street to see what's going on.

    Mind you, what really amazed me is that the original builder and the plumber who is installing the new one seem to have come from different centuries. To save making holes in the wall, he suggested just having a single pipe hanging down from the ceiling and a "remote management console" fixed to the wall. While I initially had visions of an MMC snap-in running on my domain controller, it seems that all it needs is some magic box hidden up in the attic and a neat little keypad thing with a built-in LCD display stuck on the wall inside the shower.

    Wow, is this the 21st century or what? Turns out that it's all wireless and automatic. You just dial the temperature you want, press the green button, wait until it beeps, and jump into the cubicle. Of course, being a confirmed technophobe, I wasn't fully convinced. Will it have Ctrl-Alt-Del buttons? Will I need to edit the registry when it goes wrong? And what happens when my wireless router finds it - will I get scalded every time my wife sends an email? Somehow, you just know that the reality will be different from the advertised nirvana.

    However, the plumber then mentioned to my wife that it comes with a second "slave" remote unit that she can put by the bed, so she can turn on the shower before she gets up in the morning. Or even keep it in the car so she can have the shower running and ready when she pulls up in the driveway after a hard day at work. At this point, any influence I may have had in the decision-making process was lost.

    Wouldn't it be great if persuading clients to buy your latest and greatest software creation were that easy...?

  • Random In-house Publishing

    Due to a combination of wild assumption and striking incompetence, I recently ended up repeating a long and pointless journey and overnight stay in the following week. I'm pleased to say that only the wild assumption was on my part - I assumed that an email containing details of a definite appointment meant that I was supposed to turn up at the specified time and place - whereas the striking incompetence became apparent when there was nobody else there. I knew that things were turning fruit dimensional (pear-shaped) when the receptionist searched in vain for my name in three folders and a ring binder, then started making random phone calls.

    I guess you've been through this experience yourself at some time and will recognize the symptoms. However, besides the usual pondering on what shape pears are that haven't gone wrong, what really got me thinking is how the costing policy of the (rather less than salubrious) hotel works. For the first trip, I booked a Sunday night stay about three weeks in advance and it cost around 40 US dollars room-only for one night. For the second trip, I booked four days ahead and it cost something nearer to 100 US dollars. OK, so maybe I'm going to get a penthouse with wall-to-wall grand pianos, hot and cold running servants, and silk sheets. Or perhaps they included a banquet meal and a West End show.

    Turns out that I got the same room, the same level of non-service, the same single and very small towel (though they had washed it), and the same view of the same brick wall out of the window. There was approximately the same number of guests and the same number of cars in the car park. It was the same day of the week, and even the weather was about the same. It just cost two and a half times more.

    Now, I can understand that prices change based on factors such as demand, the time of year, the day of the week, the general occupancy level trends, the cost of maintenance and bank loans, the number of months left before the owner needs to change their Rolls Royce for a new one, and hundreds of other variable factors. But it still seems a bit steep, just because I booked less than a week ahead.

    I suppose this is one of the problems with the Web, online booking systems, and technology in general. Instead of printing a price list that people can see (and so has to at least appear to be relatively reasonable in the way charges are calculated), you can hide it all in the business logic behind a flashy Web page and make semi random (usually upwards) movements in the price on a whim. Ashtrays full in the Roller? Just change a configuration setting so everyone pays twice as much for the same thing for a couple of days.

    So can I adopt this approach in my charging scheme? Maybe the documentation team here at p&p can figure out a way to base our chargebacks on some crafty business logic that combines essential factors such as the weather (we could be out in the garden), the day of the week (the pool hall charges half price on Wednesdays), or the time of year (we could be on a beach somewhere on vacation). Combined, of course, with the complexity of the task (need to find the appropriate reference book), the urgency (chance of keyboard friction burns), and the topic (might have to learn new stuff). In addition, to prevent unwelcome charge calculation transparency, we'll take the ANSI code of the first letter of the requester's name and add that on to the hourly rate.
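
    If we ever do implement it, the business logic might look something like this - a completely tongue-in-cheek C# sketch where every number, factor, and name is invented:

    // Hypothetical p&p documentation chargeback calculator.
    public static decimal HourlyRate(string requesterName, DayOfWeek day, bool isSunny)
    {
        decimal rate = 50m;                         // base rate, plucked from the air
        if (isSunny) rate *= 2;                     // we could be out in the garden
        if (day == DayOfWeek.Wednesday) rate /= 2;  // the pool hall is half price anyway

        // The transparency-prevention factor: add the character code of the
        // first letter of the requester's name to the hourly rate.
        rate += (int)char.ToUpper(requesterName[0]);

        return rate;
    }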

    Wow, sounds like a plan. So, do you need any documentation work doing this week...?

  • Lost in the Translation

    I suppose most people have a "natural" language. I pride myself on the fact that I speak three languages: English, American, and Shouting (used in all other situations). However, while the majority of us geeks are probably mono-dialectic or bi-dialectic in terms of spoken languages, we do tend to be multi-syntactic in terms of computer languages. In fact there can’t be many older members of the geek fraternity who don't have a passing knowledge of some dialect of BASIC. It might be GW Basic, Commodore Basic, or some variety of Visual Basic. Of course, these days, many refrain from admitting this, especially if they spent time working with what they see as "proper" languages (and I'm thinking C++ here).

    Over time, the languages we've all used have changed. Some that looked especially promising in an "I've just got to learn that one" sort of way, such as Objective Caml, have dropped by the wayside. Others flourish and improve with each release. In the Microsoft world, the language that seems to be growing faster than any other is, of course, C#. I guess the acceptance as an ISO standard and the availability of the platform-agnostic CLI have helped. And, in most circumstances, the syntax is sufficiently simple compared to C++ that it is relatively easy to learn...

    I often wonder where I stand in this changing world of languages. I learnt BASIC, FORTH, and assembly language programming on several home computers. I learnt Pascal, Ada, a little LISP, some COBOL, and a smattering of FORTRAN during university courses. I learnt the rudiments of RPG2 at one of my employers (even though I didn't work in the IT department, and had to sneak in when the boss was away). Then I took up the Microsoft cause and learnt Visual Basic, Access Basic, WordBasic, and myriad other variations.

    Next came the brave new world of the Internet, with scriptable browsers, and I learnt VBScript, JScript/JavaScript, and some Java. Plus, if you can call them languages, HTML, XML, XSL, XSL-T, and many other variations of XML-based dialects. Then, with the advent of .NET, it was back to Visual Basic - and learning C#. And having to finally get to grips with object-oriented programming. In fact, my ASP.NET colleagues actually christened me "an upgraded VBScripter" because I'd always scraped by without really learning about inheritance, interfaces, and the like.

    I remember reading somewhere a long while ago that any competent programmer can learn how to use a new language in a day, and master it in a week. There were many arguments on that bulletin board thread (see, I said it was a while ago), but I tend to believe that it's true. Of course, few languages are really "new". Knowledge of Pascal, Ada, or JScript makes learning C# and Java relatively easy. VBScript makes Visual Basic easier, and vice versa. The core programming concepts are mostly the same, and it's just a matter of learning the syntax and keywords.

    So where am I going with this rambling visit to computing language history? Well, it comes about because I increasingly fall out with the dev teams about how they write code. Yes, I know it's a wild assumption that a writer might know anything about real programming, but I don’t want to change how they write code - I just want to change how they decide on names and how they think about users of other languages.

    A perfect example of the battles I seem to fight over and over again is the new Unity Application Block. The development team write in C#, and the original name of the method to retrieve object instances was "Get", which is - of course - a Visual Basic keyword. Thankfully, after I grumbled at them, it was changed to "Resolve". However, the real issues are things like variable names used in the code. I've struggled with this problem for many years, in articles, books, and documentation where the equivalent code is listed in more than one language; yet it seems I am getting no nearer to changing attitudes.

    The technique I call "C++ naming" that many C# developers adopt is to simply lower-case or camel-case the class name, which means that Visual Basic has to prefix the variable name with something else for the code to remain remotely similar. And generally that "something" is the hated underscore (which doesn’t show up well in listings). In fact, the patterns & practices coding guidelines actually say "it is important to differentiate member variables from properties when the only difference in C# is capitalization". I hate to think how many hours I waste renaming variables in listings just to get the code to look the same in all the languages. And is it actually important? Do people ever read more than one language listing?
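
    To make the problem concrete, here's the kind of thing I mean (a made-up C# fragment; the names are arbitrary):

    public class TaxCalculator { }

    public class OrderProcessor
    {
        // "C++ naming": the member variable is just the camel-cased version
        // of the property name, so the two differ only in capitalization...
        private TaxCalculator taxCalculator;

        public OrderProcessor() { taxCalculator = new TaxCalculator(); }

        public TaxCalculator TaxCalculator
        {
            get { return taxCalculator; }
        }

        // ...which works in C#, but Visual Basic is case-insensitive, so the
        // VB version has to call the field _taxCalculator (hated underscore
        // and all), and the two listings no longer look the same.
    }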

    I don’t know about you, but I hate to see listings that differ between languages where this is not actually necessary. We tend to list only C# and Visual Basic, and in a few cases (such as anonymous delegates and methods), you do need to change the code between C# and Visual Basic. Plus, in cases where the developers use "C# best practice" or take advantage of features available only in C# syntax (which thankfully are few), I have to accept that I'll need to do some work to get to a Visual Basic translation - though automated tools do help a great deal.

    Still, one nice feature that they did include in the Unity Application Block is a set of method overloads that do not use generics. This means that you can use Unity in languages that don't support the generics available in Visual Basic and C#. Just a shame you can't use Unity in VBScript or JavaScript...
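
    For the record, the two flavors look roughly like this (a sketch based on the Unity API as I understand it; ILogger and FileLogger are invented for the example):

    using Microsoft.Practices.Unity;

    IUnityContainer container = new UnityContainer();
    container.RegisterType(typeof(ILogger), typeof(FileLogger));

    // Generic overload - compile-time typed, for languages with generics:
    ILogger logger = container.Resolve<ILogger>();

    // Non-generic overload - the same resolution, usable from languages
    // that have no generics (you just cast the result yourself):
    ILogger another = (ILogger)container.Resolve(typeof(ILogger));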
