Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    Profligate Profiles

    • 0 Comments

    At last I can document something that might, possibly, be of marginal use to those one or two readers who mistakenly stray into the quagmire of my unrelated weekly ramblings. For the last few months I've been fulfilling my new role as an out-of-sight-and-out-of-mind remote documentation engineer (that's a documentation engineer who's remote, not an engineer who writes remote documentation - though, having read some of my scribblings, you might dispute that assertion). Anyway, recently I was beamed up to the mother-ship to spend a couple of weeks onsite at Redmond. I suspect they just wanted to see if the weird English guy they accidentally offered a job to actually does exist.

    So a while ago, while preparing my shiny and tiny XPS laptop for the trip, I had a minor moan about Vista's backup facility. But little did I realize that worse was coming my way. My laptop usually lives on the corporate domain, even though I only connect to the domain over a VPN and am not connected to the network when I log into the machine. The cached domain credentials allow me to log into the machine without being connected, and then the O/S authenticates me on the domain when I connect to the network.

    All fine so far. Except that, for some reason, one day it suddenly stopped me from logging in to the machine. I assumed that this was because I hadn't physically connected to the domain for several weeks. So it should all sort itself out when I can reduce the 5,000 mile gap between my laptop and the nearest corporate Ethernet socket. The domain controller would be forced to accept that I'm not just some ethereal figment of its Active Directory, and would gleefully welcome home its long-lost prodigal son.

    Now, I'm not generally known to suffer from wild spasms of optimism. And so I wasn't really surprised when the long-awaited fusing of my tiny little Dell with the corporate big-iron simply resulted in an Event Viewer full of error messages, a pop-up telling me there was an error in my profile, and the machine refusing to persist any changes to the environment. Every logon was accompanied by a long wait while Vista "prepared my desktop" (I could have built a whole desk in the time it takes), a task bar full of useless notification icons (what the **** do they all do?), dialogs welcoming me to the exciting world of Windows Vista (yes please, I'd love to watch the introduction video again), the Home page of Dell Support's Web site (just in case something broke while booting up?), and - of course - none of the useful stuff I set up last time.

    After a quick consultation with the onsite server admin guy (hi Mike), it seemed that this was not an unknown issue - he had a laptop suffering from just this malady. While he hadn't had time to investigate, he did confirm that it was a problem with Vista not being able to create the profile folder on the local disk. But why? The relevant folder did not exist, the parent folder was not read-only, and a quick test by manually creating the required folder had no effect. So it looked like I was in for another of my regular hunt-round-the-Registry-and-delete-stuff exercises.

    Of course, as per accepted administrative guidelines, I need to warn you about editing the Registry. So, before you start, make sure you write out your will, lay in supplies of pizza and cola, and say goodbye to the kids and the dog. You might also like to wear some protective clothing (a T-shirt with a suitably inane slogan will do), and make sure there is nothing valuable in the direction you are likely to throw the unbootable machine afterwards.

    Once prepared, locate the Registry Key:
    HKEY_LOCAL_MACHINE
     \SOFTWARE
      \Microsoft
       \Windows NT
        \CurrentVersion
         \ProfileList
    This contains a subkey named with a SID for each user account on the machine. In theory, there is one for each of the named folders in the C:\Users folder on your disk. If you select a SID subkey, you'll see the profile folder path (which includes the user name) in its ProfileImagePath value, so you can compare each one with the disk folders. You should find ones for the Administrator, Local System, and Network accounts; as well as ones for all the users registered on the machine or in the domains you've joined.

    My machine has led a busy existence during its short life, moving between three domains and having a couple of local users. During a "routine clean up", I seem to remember deleting some of the subfolders in the Users folder - though I was, of course, careful not to delete any currently registered users. What I discovered was the Registry still held references to profile folders for accounts that no longer exist on the machine, but not one for the domain account I was trying to use. I have no idea why this caused a problem, but I decided to tidy up the Registry while I was in there and deleted the SID subkeys, including all their child subkeys, for all of the non-existent user accounts.
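
    Incidentally, if you want to see what the Registry thinks before you start deleting things, a few lines of code will list the profiles it knows about. This is just a minimal read-only sketch (the class and variable names are mine, and it assumes you run it with enough rights to read HKEY_LOCAL_MACHINE); it dumps each SID's ProfileImagePath value and notes whether the folder still exists on disk:

    using System;
    using System.IO;
    using Microsoft.Win32;

    class ProfileListDump
    {
        static void Main()
        {
            // The key described above: one subkey per user account, named with the account's SID.
            const string profileListPath = @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList";

            using (RegistryKey profileList = Registry.LocalMachine.OpenSubKey(profileListPath))
            {
                foreach (string sid in profileList.GetSubKeyNames())
                {
                    using (RegistryKey profile = profileList.OpenSubKey(sid))
                    {
                        // ProfileImagePath points at the folder (usually under C:\Users) for that account.
                        string folder = profile.GetValue("ProfileImagePath") as string;
                        bool folderExists = (folder != null)
                            && Directory.Exists(Environment.ExpandEnvironmentVariables(folder));

                        Console.WriteLine("{0}  ->  {1}  {2}",
                            sid,
                            folder ?? "(no ProfileImagePath)",
                            folderExists ? "" : "[folder missing]");
                    }
                }
            }
        }
    }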

    Then, having decided which wall would be the target for an airborne laptop, I rebooted. And, magically, it all worked. Instant logon to the domain, wireless certificates downloaded and installed, and no errors. Quick reconfigure of the desktop, log off, log on again, and it remembered my settings. No more "Welcome to Vista" stuff; and everything working just as it should. Even Outlook managed to find its Exchange Server without the usual application of a large stick. Amazing. Maybe I can actually get to love Vista after all...

  • Writing ... or Just Practicing?

    Sunless In Seattle

    • 1 Comments

    They probably won't invite me over to Redmond again. After telling me for weeks about the wonderful summer weather there, it rained for most of the two weeks I was on site. Not many people would suggest that I have a magnetic personality, but it sure looks like the English weather followed me across the pond. We even had hail one day (in the middle of August), followed by a small tornado. And I'd taken shorts and sun cream with me. But I suppose after it rained almost the whole time during my last two trips, I should expect it. Maybe I can earn a few dollars extra by selling people my travel plans so they can plan their holidays around my trips to Redmond.

    Mind you, Washington State is one of the most beautiful places I've ever been to, and it wouldn't be lush and green without the rain. And, by some extreme good fortune, the two days when the sun did shine I was treated to a boat trip round Lake Union and Lake Washington (we waved to Bill, but he didn't invite us in for tea); and a trip to see Snoqualmie Falls (absolutely amazing!) and the Northwest Railway Museum (they obviously read my blog where I confessed to being a railway enthusiast). Though I had to laugh when a colleague said how she'd told her friend that I was a train spotter, and her friend was horrified - "What?", she said, "You mean like in that awful film...?"

    So the nice-weatherness was a great deal less than I expected, yet the breathtakingness of the falls was substantially more than expected. But that's how life works. Sometimes you get more than you expect, and just as often you get less. In fact, only last week my wife told me about how one of the girls who works in her office was complaining after going to see "Mamma Mia" that the film was full of Abba songs. She doesn't like Abba...

    Here in the little office in England where I engineer documentation for p&p, expectation is something that constantly raises its ugly head. What do people actually want or need in terms of guidance when they install a product such as Enterprise Library or read the Web Services Security Guide? As you'd expect, we go to considerable lengths to collect feedback from focus groups consisting of specialists, customers, and industry leaders; from satisfaction surveys; as well as through in-house reviews and direct feedback from documentation links and CodePlex forums. And, in general, the results are encouraging. In the past, in some software companies, it seems to have been the case that the code was king and the documentation an afterthought. But, generally that's changed as companies realize that users expect more. And, let's face it, as the sole purpose of my job is to create guidance rather than code, what people expect has got to be a major focus.

    But what exactly do customers expect from documentation and guidance? While there is never going to be a single answer, getting it right should give the maximum help to the most people. One of the things we are working to achieve is to provide a range of documentation on architectural and pattern-based topics that suits the wide range of consumers. That means everything from introductory "Getting Started" topics to reference documentation such as API references, key scenarios, and checklists.

    And, of course, there are different formats. Videos are a great way to introduce technologies and practices, and to show development stages and techniques. But they can be limited in reach due to bandwidth limitations and the time commitments required both to create them and to watch them. Slide decks are also useful, but without a commentary like you'd get at a conference presentation, it's often hard to get real depth with just bullet points and schematics. Ideally, you would sit down with an expert in the technology who could explain it all to you and answer your questions. Microsoft do just this through summits, conferences, and their evangelism groups, but unfortunately we don't have enough staff in p&p to provide every developer with their own personal trainer.

    In the end, the major source of information has to be the written word; as a help file, Web pages, a white paper, a PDF, or a printed book. We need to explain scope, concepts, objectives, scenarios, and details. The guidance should work for relative beginners, skilled developers, technology experts, and software architects. And for each group it must be set at a suitable technical level, and written using the appropriate terminology. It also needs to be localizable into a number of different languages.

    And here we run into one of the major issues we're trying hard to tackle. Microsoft publishes a style guide for documentation (it's the bane of my life sometimes) that specifies in excruciating detail exactly which words you can use, how to phrase them, and how to construct the required grammatical content. This helps to avoid confusion, provides standardization across products, gives a common appearance that developers recognize, suits different cultures, and works with automatic translation tools. So the tone and style of the documentation is pretty strictly controlled.

    However, at the same time, there's an increasing feeling that our documentation should be more "approachable", "personable", and "entertaining" - and even, shock, horror, "exciting". I'm not sure exactly how exciting you can make a table describing the methods that resolve objects in Unity - maybe throw in a few bad puns like "...the object of this method..." or "...you should resolve to use this method when...". As to exciting, how about "When you click this button, something will happen ... perhaps ... why not try it and see...?".

    But, seriously, how do you decide where to pitch the level of the content? Would something lively with a few jokes thrown in make learning easier? How much would this style grate after a while? Should we use the "we apostrophe" approach instead of the "you formal" style (for example, "Now we'll add a handler to validate input data" rather than "Now you will add a handler that validates input data")? And would jokes made up by some semi-deranged Englishman (me) seem funny to people in other countries? I told the joke about the golfers to a colleague before I published last week's INAG article and he had no idea what I was talking about - I had to explain it to him. And he was from the US. So much for us sharing a common culture.

    Maybe, instead, we should just continue to publish a range of guidance aimed at different levels of developer/architect and at different levels of product complexity. For example, "Getting Started" guides that compare features to common household objects and situations (like the "Getting Started" articles linked from the p&p Home page) as well as high-level bullet-point technical feature reviews for architects to use when planning major projects. And, alongside, reference-style documentation like that we produce now for Enterprise Library and similar products. Let me know what you want to see - I'm open to suggestions...

    Meanwhile, by some strange coincidence, I had previously been ruminating about guidance phrasing while sitting in the departure lounge at Amsterdam airport for several hours on my way out to Redmond. You know what it's like. After a couple of hours you have read all the warning and information signs, counted the ceiling tiles (twice, just in case you got the total wrong the first time), given up trying to decipher the language the two people behind you are speaking, and started reading your food. In my case, the food was a bag of potato crisps/chips made by a company I'd never heard of before: Sirhowey Valley Foods of Gwent in Wales.

    Now these guys make serious crisps and chips, and are proud to say so. The front of their packets talks about how they make their chips using real onions and mature Cheddar cheese, and how they definitely would never cut corners. Then, on the back, the packet has the usual "If you are not completely satisfied..." bit. Except theirs says:

    "Although all our chips are made with the finest natural ingredients, some may contain traces of corners. If in the unlikely event a packet does contain any corners, please do not hesitate to send the packet and contents back to us. Don't forget to enclose the offending corners. We would also be interested to see any spare photos from your holiday, if you have some." (see http://www.realcrisps.com/REAL-(Potato-Chips)/REAL-Words.html)

    Maybe this is the way forward. We can describe how hard our people work creating the code and documentation. The endless hours and countless pizzas, and how they only pause to wipe the rivers of sweat from their keyboards. And how they ruthlessly test almost to destruction the code and documentation, until it begs for mercy and crawls exhausted into the MSI.

    Then on the last page of the documentation we would add our "If you are not completely satisfied..." message:

    "Although our documentation undergoes rigorous editing and testing, some may contain traces of bad jokes, pointless puns, and unsuccessful attempts at humor. If, in the unlikely event a guidance item does contain a combination of such words, please do not hesitate to translate it into a different language and see if it is still funny. We cannot be held liable for damage caused by spilled coffee or rapidly expelled particles of pizza. Also note that persons of nervous disposition should avoid reading any sections of the documentation denoted by a "This part may be exciting" warning label."

    And please don't send your spare holiday photos to us....

  • Writing ... or Just Practicing?

    "INAG"

    • 1 Comments

    I thought I’d better start off this week with that well-known email disclaimer "INAG" (I’m Not A Golfer). Mind you, when I was a lot younger and fitter, I did occasionally caddy for a few affluent visitors to the R.A.F. Changi course in Singapore. Though when I say "caddy", what I actually mean is "carry the bag and look for lost balls", but you get the drift.

    Anyway, the story goes that two of these affluent golfers were walking down the third fairway one day when one said to the other "I hear that John has bought another set of golf clubs." "Really", replied the second golfer, "TaylorMade? Wilson? Dunlop?" "No", said the first, "St. Andrews, Augusta National, and Royal Troon."

    Sorry about that ... but isn’t it a strange coincidence how similar golf is to developing software? They both involve a very limited start location, provide various ways to progress, and have an extremely hard-to-reach destination.

    I mean, for a golfer, you can start each hole from anywhere you like as long as it’s within a patch of grass about 4 yards square. For the software developer, you start with whatever system the customer is running. If you are lucky, it will be some reasonably modern hardware, a reasonably up to date operating system, and a database that is at least contemporary with modern thinking. If you are unlucky, you’ll start in 1980. It will be an old lump of big-iron and a database that uses text files with fixed width fields ("...and can you make it do Web services please...?").

    As to progressing towards the target, thoughtful golf course designers generally provided a range of routes that include plenty of rough, some out of bounds if you're lucky, and usually a nice selection of bunkers. In fact, one guy I used to caddy for never once ended up at the green with the same ball he started the hole with, and religiously navigated via each of the bunkers in turn. We sometimes had to come back the next day to finish the round.

    Likewise the software developer has a vast range of tools, development environments, software frameworks, and languages to choose from. Mind you, for the golfer, the rough and the bunkers are generally in the same place when they come back next month, next year, or even five years later. This is, of course, unheard of for the developer. They’ll be lucky if the software, tools, and languages are the same next week. In a year’s time they will be "legacy", and in five years’ time they will be obsolete.

    Then there’s the target. OK so it can be tough for golfers. Course managers have a habit of moving the pin around, so the player may have to exert extraordinary powers of observation by standing on the edge of the green and scanning up to 20 degrees left and right to find the large pole with a flag nailed to the top. But it only takes half a dozen putts to get the ball into that little white cup, and you know that you are there.

    However, developers probably never even get to see the hole at all. Effectively arriving at the green with a prototype, they are likely to be greeted with "Why does this button do that?" "How do I tell if a customer has paid their last bill when I'm entering an order?" "Three of our operators are allergic to Arial font." or even "Why does it keep asking me to enter a customer order? We asked for a stock control system".

    Maybe we should all investigate taking up golf as a profession instead...

  • Writing ... or Just Practicing?

    Being Objective...

    • 0 Comments

    I was party to a discussion a couple of weeks ago that wandered off topic (as so many I'm involved in seem to do) into the question of whether a programmer is actually "OO" or not. I guess I have to admit to being a long-time railway (railroad) fanatic - an unfortunate tendency that has even, in the past, extended to model railways. So in real life (?), "OO" is the gauge of a model railway. But then someone suggested that many programmers, especially those coming from scripting languages such as classic ASP, are more "OB" than "OO". It turns out that what they mean, I'm given to understand, is that a large proportion of programmers write code that is object-based rather than object-oriented.

    I suppose that I've generally fallen into the "OB" category. Having proudly mastered using objects in my code (starting, I guess, with stuff like FileSystemObject in ASP), it was kind of disappointing to realize that I'm still a second-class citizen in the brave new world of modern programming languages. I mean, when I use Visual Basic in .NET I purposely avoid importing the VB compatibility assembly, and I use "proper" methods such as Substring and IndexOf rather than Mid and InStr. I even use .NET data types, such as Int32 instead of Integer, though I regularly get castigated for that. Especially when I write C# code and use Int32 instead of int, and Object with a capital "O". As I discovered, if you want to start a "discussion", tell a seasoned C# programmer that they are supposed to use the .NET classes instead of all those weird data type names.

    I'm told that the compiler (C# or VB) simply translates the language-specific type name into the appropriate type automatically, and it makes sense for programmers to use the types defined in the coding language rather than specifying the .NET data types directly. Does it make a difference? I'm not clever enough to know for sure, but I can't see how it would affect performance because it's all IL code in the end. My take is that more and more programmers are having to (and are expected to) cope with more than one language. If you do contract work, and prefer to work in C#, do you turn down a job working on a project originally written in Visual Basic? If we are in for a global recession, as many suggest, can you afford to turn work away?
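
    For what it's worth, it's easy to demonstrate that the argument is only about style. A tiny sketch (the class name is mine) shows that the C# keyword and the .NET class name refer to exactly the same type, so the compiled IL is identical whichever you type:

    using System;

    class TypeAliasDemo
    {
        static void Main()
        {
            // 'int' is simply the C# alias for System.Int32, and 'object' for System.Object,
            // so these comparisons print True - the compiler emits the same IL for both spellings.
            Console.WriteLine(typeof(int) == typeof(Int32));      // True
            Console.WriteLine(typeof(object) == typeof(Object));  // True
            Console.WriteLine(typeof(int).FullName);               // System.Int32
        }
    }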

    And what about those C# programmers who tell me they "can't understand Visual Basic"? I can imagine that, if you've never used it, you would find it hard to write VB.NET code from scratch, though it surely can't take long to figure out that you use "End" instead of a closing curly. OK, so the OO-features have quite different keywords (like "Friend" and "MustOverride"), but it's a lot easier than learning Portuguese (unless you're Portuguese, of course) or any other foreign language. Hey, almost all of it is .NET classes. Mind you, someone to crack you across the knuckles with a stick whenever your fingers stray towards the semi-colon key would help. And I can't see how any C# programmer can say they can't read Visual Basic code. Again, the class modifiers and inheritance keywords are a bit different, but they aren't that hard to figure out.
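
    To illustrate the point with a made-up class (so don't read anything into the names), here's the kind of mapping involved - the C# version, with the Visual Basic equivalent of each keyword noted in the comments:

    using System;

    // C#: public abstract class            VB: Public MustInherit Class OrderProcessor
    public abstract class OrderProcessor
    {
        // C#: internal                      VB: Friend Region As String
        internal string Region;

        // C#: abstract method               VB: Public MustOverride Sub Process()
        public abstract void Process();

        // C#: virtual method                VB: Public Overridable Sub Log(ByVal message As String)
        public virtual void Log(string message)
        {
            Console.WriteLine(message);      // ...and no semi-colons or curly brackets in VB
        }
    }                                        // VB: End Class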

    OK, so it may sound like another rant against C# programmers, but I can assure you it definitely is not. After all, I'm half C# programmer, and I really do like the language. Weirdly, though, most of what I write (in terms of programming rather than documentation) is in VB.NET - I suppose it's force of habit and the comfort factor having come from classic ASP. Yet most of what I read (docs and code) is in C#. And, as they say in the movies, some of my best friends are C# programmers...

    I suppose what I really want to know is: what's the test to see if you are "OO" rather than "OB"? Is there a fixed number of interfaces and base classes you have to include in a project? Do you have to have at least 10% inherited properties and methods, and use polymorphism at least twice per 500 lines of code? If you forget to refactor one class, or use an array instead of a generic list, does that automatically disqualify you? Perhaps there is a minimum set of design patterns you have to implement per application. And what about the fact that I still use FileSystemObject in an old Web site I never got round to converting from classic ASP? Or is it that, secretly, you can only really be "OO" if you write in C#, Java, C++, and other "proper" languages...?

  • Writing ... or Just Practicing?

    Easter Bonnets and Adverse Automation

    • 0 Comments

    A couple of initially unconnected events last week conspired to nudge my brain into some kind of half-awake state where it combined them into a surreal view of "automatic" stuff. One of the events was the return from Tina, our editor and proof-reader, of my article about the Team System Management Model Designer Power Tool (a product that, thankfully, I'm legally permitted to refer to as just "TSMMD" - and will do so from now on). The second event was deciding that I ought to get a laptop sorted ready for an upcoming trip to Redmond. The combined result is some manic ravings on the meanings of stupid words, and the fact that Windows Vista obviously hates me.

    TSMMD is a new add-in for Visual Studio Team System that I have been documenting for the previous few CTP releases. It's a really neat tool that allows you to build management models that describe the health states and instrumentation of an application, and then generate the appropriate instrumentation code as part of your VS project (see http://www.codeplex.com/dfo/ for details). The article is one of those "About..." and "Getting Started" things that compares what the product does to some commonplace everyday situation - in this case the way repair shops can do computerized diagnosis of faults in a modern motor car. So the article came back with editorial comments such as "Err...what does this mean?" where I had written stuff like "...without having to look under the bonnet" (Tina asked if I was taking part in an Easter parade), "...hatchback or saloon car" (is this one that has a drinks cabinet built in?), and "...look for some tools in the boot" (surely that's where you keep your feet?). And, of course, "When you say 'motor car' do you mean 'automobile'?"

    Some time ago I rambled on about the way that the culture and language in the US and UK are so very similar, and yet so subtly different (see Two Nations Divided by Light Switches). But the one area where almost everything seems to be different is in the realm of motoring. I mean, to me, a car starts with a bonnet and ends with a boot. Just like a person (and not necessarily one in an Easter parade). Makes perfect sense. As I said to Tina, here in England we just keep our wellingtons in our boots... except when going to a car boot sale. Why on earth would a car start with a hood and end with a trunk? Sounds more like an elephant going backwards. And what do you call the bit of an open-top sports car that keeps the passengers dry when it rains? Surely that's the hood (as in "It looks like rain, better put the hood up"). Still, I suppose cars have plenty of bits with stupid names anyway. I know that "dashboard" comes from the days of carts and wagons where they nailed a plank across the front to stop galloping (dashing?) horses' hooves splashing mud onto the driver's new breeches.

    Notice how I avoided saying "trousers" there. I once heard a conference organizer ask all the speakers to wear black pants for their presentations as part of a consistent dress code. I wondered how attendees would know what color underwear I had on. But that's a whole different topic area.

    Anyway, getting back to cars, I suppose we now refer to the bit with the speedometer on it as the fascia. However, we still have a "fan belt" even though the radiator fan is electric and the belt drives the alternator and the pumps for the power steering and the air conditioning instead. And it isn't just the car itself. What about how, here in the UK, we have "slip roads" on our motorways? It's not like they are surfaced with super-smooth tarmac (asphalt) so you slide around. I guess the idea is that you use them to slip into the traffic stream (in which case, the way most people drive here, they should be called "barge-in roads"). The US "on-ramp" and "off-ramp" make more sense, even when they don't go uphill or downhill.

    And why "freeway" in the US? My experience of driving in Florida is that you have to carry $20 in loose change for the toll booths that they planted every two miles. Although, around where I live, when they build a new bypass round a town or village, everyone refers to it as "the fast road". Even when there's traffic lights every 20 yards and a half-mile tailback most of the day. Again, the words conspire to confuse. "Traffic lights"? A nice sensible term I reckon. Yet when we were working on the Unity Application Block they wrote a sample they called "Stoplight". As the UI was just three oblong colored boxes it took me a while to figure that this was the US equivalent. Is it still a "stop light" when it's showing green?

    But enough motoring invectives. The other conspiring event this week was battling, after a few months away from it, with Vista. I have to say that there are lots of things I really love about Vista, but it seems to have been designed to annoy the more technical and experienced computer user. A lot of the aggro is, I know, attributable to my long-established familiarity with XP and its internal workings. Vista is no doubt ideal for the less experienced user, as it hides lots of complexity and presents functionality that works automatically.

    Yes, I've finally given up and turned off UAC so I can poke about as required and use weird scripts and macros required for my daily tasks. But it would be really nice to have an "expert" mode that lets you see (and change) all the hidden settings without having to go through several "inexperienced user" screens. I mean, it keeps complaining that my connection to the outside world through my proxy server is "not authenticated" even though it works fine, and I can't find any way to change this. And it won't let my FTP client list files, even though it works fine on the XP machine sitting next to it.

    What finally got me going this week, however, was backing up the machine. I carry a small USB disk drive around with a full image of the C: drive that I can restore if it all falls over. For years I've been using TrueImage (see diary entries passim) and it works well. However, when I bought this laptop I paid extra for Vista Ultimate so I'd get the proper built-in backup software to do disk imaging. I imaged the machine when it was new, but the configuration is much changed since then so I thought I'd do a complete new backup. As there isn't a lot of space on the USB drive, I deleted the existing backup first. But Vista still thinks it's there - obviously there's some secret setting somewhere in the O/S (and it's not in the Registry 'cos I searched there) that makes it think I have an existing backup. So it will only do an incremental backup - there is no option to say I want a whole new one.

    And it also insists on backing up the drive D: restore partition, even though I don't want that backed up. So I ran it anyway, but afterwards all it said was "the backup is complete". Did it do an incremental one or a full one? Did it skip stuff that it thinks is in the backup image I deleted? Will it actually restore to give me a working machine? In the end I deleted the backup and used TrueImage (I've got version 10 and it works fine with Vista). It asks you everything it needs to know to create the kind of image you want, and then just does it. And I've restored machines in the past using it, so I feel comfortable that I can get back to where I was when the sky falls in.

    You see, this is where I worry about "automatic" stuff. For some things it seems like a really good idea, and often it "just works". Drifting back to cars, my latest acquisition has automatic climate control that just works. You can, if you wish, dive into the settings and specify hundreds of individual parts of the process, but why bother? Just set the temperature you want and let it get on with it. The car also has automatic window wipers (notice I avoided saying windscreen or windshield), which is great. They wipe the window when it rains.

    But it also has automatic headlights that come on when it's dark. And this feature is turned off because I always worry that they'll come on just as I get to a junction and someone will think I'm flashing them and pull out right in front of me. Notice the important point. You can turn off the automation if you don't want it...

  • Writing ... or Just Practicing?

    Whose Time Is It Anyway?

    • 0 Comments

    Here in our quiet little corner of the People's Republic of Europe, our Government decided a while ago to flog off the radio spectrum in order to pay for their countless spin doctors, pointless focus groups, endless ministerial jaunts, never-ending quangos, and failed experiments with Socialism. In return, they gave us the opportunity to enter the brave new world of Digital Broadcasting. And, rumor has it, they will eventually build enough transmitters so that those of us who don't live in London will actually be able to receive it. Last I heard, the target date is 2013. Meanwhile, I've had to fill the entire attic of our house with bits of bent aluminium to try and drag some scraps of DAB (Digital Audio Broadcasting) out of the airwaves and down to the kitchen so my wife can have rock music on loud enough to drown out the sound of me washing the dishes.

    Anyone brave enough to have tackled the diary entries from my previous life will know that, up until now, we've been using a rather nice stand-alone Soundbridge Internet Radio to get a constant stream of rock music that generally smothers my unfortunate domestic noises. However, since the BBC released their iPlayer, the fragility of the copper-wired Internet in our part of the country has been exposed for all to see. Now all we get from Virgin Classic Rock in the evenings and weekends is "It looks like you can't get our digital stream..." followed by several seconds of rebuffering and then another five minutes of music. So the boss gets to hear me clanking her best plates together.

    Those distant diaretic ramblings also documented the problems of trying to get Microsoft Media Center working with the exciting new digital technologies in our forgotten little corner of Merry Olde England (well, actually almost the geographical center, but still a long way from London). Suffice to say that it basically involved turning the roof of our house into a miniature version of Jodrell Bank, but at least we now get (depending on weather conditions) around 80 channels of digital stuff on the TV. Which, apart from 30 TV channels that seem to still be showing programmes from 1973, includes loads of radio channels. As I couldn't figure a way to drag the 42" wall-mounted screen into the kitchen every time I did the dishes, I suggested to my wife that she just turn one of these up really loud and pretend we live next door to a rock festival, but she wouldn't go for that.

    So, for her birthday the other week, I treated her to a shiny new DAB radio. It's a really neat thing that consists of five different lumps of plastic - two speakers, a control unit, a combined bass woofer, and a separate tiny little matrix display thingy that you stick on the wall. This means that I can hide everything but the display thingy on top of the cupboards out of the way of the soapy fountain that is me doing the dishes. And combined with some low-loss cable and the aluminium-filled attic, we can actually get Planet Rock and a couple of dozen other stations. In fact, there's even one that just plays birdsong all day!

    One really neat thing with DAB is that you can view the secondary information stream. Saves loads of arguments about which band it was that recorded the track you're listening to. It even tells you the name of the program you're tuned to, and what's coming next. I guess this is pretty much underwhelming to those who have already had DAB for ages, but - as late arrivals to the digital scene - we got really excited about it. Shows how interesting my life is most of the time. However, after playing with this for a while, I suddenly realised that the people who make the hardware obviously don't do any field testing of their products. I mean, the options for the "extra info" on the neat little display thingy are the MUX channel name (such as "DigitalNetwork1"), the time and date (in case you don't own a clock), the signal strength, the bit rate, and the rather nice scrolling text messages.

    Now, as a developer, what would you do? Have it remember what you selected last time and go back to that option automatically? Have it default to the rather nice scrolling text messages? Allow the user to select which they want as the default in the setup menu? All, in my opinion, obvious options. But no, they decided that it should always default to the MUX channel name every time, and you can't change this behavior. You have to press "Info/Display" twice every time you turn it on or change channel. Imagine if Windows started with a DOS prompt every time and you had to type "WIN" and click "Yes" to get to your desktop. Err... a bit like Windows 3.0 in fact. Maybe the radio's O/S developers were still using that.

    And here's another thing with digital radio. It can't tell the time. With the old-fashioned steam radios of the FM and AM variety, the time signal was pretty much accurate. OK, so if you lived a long way from the transmitter you were maybe a picosecond or two behind as the radio waves fought their way through the clouds and trees, but it was near enough. Now it's a second or two behind because it has to go through some magic process to get converted to digital and back again. How do I know? Because the kitchen clock is one of those radio-controlled things. Supposedly it uses proper radio waves so as to be accurate to a fraction of a second. Even when those waves have to come all the way from Rugby, which is nearly 60 miles away. And, of course, the same happens with digital TV. I recently read a letter in the paper from someone who has three DAB radios as well as digital TV, and they said they all vary by several seconds. So in our brave new digital world, you never actually know what time it is. Maybe that's what they mean by Internet time - everyone has their own version.

    I suppose I could just use the fancy radio-controlled watch that my wife bought me for Christmas instead. Except it has 97 functions and only four buttons. And one of those just turns the backlight on. Every time I put it on it tells me the time in Hong Kong. I have to carry the instruction book around with me so I can reconfigure it - possibly another good example of lack of field testing. And they say that software is hard for "ordinary people" to understand. Just imagine how much fun we'll have once they get Word and Outlook to run on a wristwatch. Not only will you need to carry a box of instruction manuals around (which I guess is good for us here in the documentation team), you'll probably miss your train because you won't know what time it is, or if your time actually is the real one...

  • Writing ... or Just Practicing?

    How p&p Makes Cheese Sandwiches

    • 0 Comments

    OK, so we don't actually make cheese sandwiches here at p&p. Well, as far as I know we don't (but if we did, they'd probably be the best cheese sandwiches in the world...). When I'm over in Redmond I have to stroll across the bridge to Building 4 and buy one from the canteen, though it's worth the effort because you get four different kinds of cheese in it - as well as some salad stuff. Only in the USA could someone decide that you need four different cheeses in a sandwich. Here in England a cheese sandwich is basically a chunk of Cheddar slapped between two slices of bread. Take it or leave it. Maybe it's because there is always so much choice over there, and people can't make up their mind which cheese to have.

    And why is it so hard to order stuff in the States? I usually find it takes ten minutes just to order a coffee in Starbucks 'cos I have to answer endless questions. Do you want 2% milk sir? No, fill it up to the top please. Any syrup in it? No thanks, I want coffee not a cocktail. What about topping? Some froth would be nice. Am I going to "go" with it? No, I'll just leave it behind on the counter. In fact, when we go out for a meal I like to play the "No Questions" game. Basically, this involves waiting till last to place your order, and specifying all the details of your required repast in one go so the waiter doesn't have any questions left to ask. I've only ever won once, and that was in a pizza takeaway. I think they dream up extra imaginary questions just to make sure they get the last word.

    Anyway, as usual, I'm wandering off-topic. We really need to get back to the cheese sandwiches. So, as a developer, how would you go about making a cheese sandwich? My guess would be something like this:

    1. Requirements analysis. Survey all the sandwichees to discover what kind of bread they want (rye, brown, granary, white, toasted?), what cheese they like (Wensleydale, Stilton, Gruyere, Jack?), whether they want butter or margarine spread, etc.
    2. Resource and materials review. How thin can you slice the cheese to get optimum taste while maximizing efficient usage. How thick should you slice the bread to get good sandwich stability with maximum return on loaf investment.
    3. Project planning. Decide on tests that will check for a correct result, and choose a suitable development environment such as a machine that can slice both bread and cheese. Formulate an iterative milepost-based plan for manufacture.
    4. Agile test-driven development. Pair one bread/cheese slicing operative with one bread spreader, and pass the resulting components to an assembler who merges one slice of cheese with two slices of bread.
    5. Test phase. Eat one.

    Looks like a good plan. So how would we do the same in the documentation department? How about:

    1. Buy some cheese. Slice it up into random thicknesses and sizes and mark each slice with a number using a thick black marker pen so you know what order they fit together afterwards.
    2. Order a loaf from The Variable Baker Inc. Discover it's a different size from the cheese slices, so trim each one to fit.
    3. As nobody really knows what the sandwichees will actually want at this stage, spread the bread with real butter as that seems like the obvious option.
    4. Carefully organize the numbered slices of cheese, index them, create a table of sandwich contents, wash the numbers off each slice of cheese, and assemble the sandwiches neatly on a plate. Stick a little cocktail-stick flag in each one to identify it.
    5. As each sandwichee arrives, ask them what kind of cheese and what kind of spread they want. Tip all the sandwiches off the plate, reorganize and modify the contents, then reassemble the plate of sandwiches with different little cocktail-stick flags.
    6. Repeat step 5 until everyone is fed up with cheese sandwiches.

    Yep, it seems to be a completely stupid approach. But that's pretty much what we have to do to get documentation out of the door in the multitude of required formats. HTML and HxS files for MSDN, CHM files for HTML Help, merge modules for DocExplorer, and PDF for printing and direct publication. Oh, and occasionally Word documents, videos, and PowerPoint slide decks as well. Maybe you haven't noticed how complicated the doc sets in Visual Studio's DocExplorer tool and HTML Help actually are? There's fancy formatting, collapsible sections, selectable code languages (and it remembers your preference), multiple nesting of topics, inter-topic and intra-topic links, a table of contents, an index, and search capabilities. It even opens on the appropriate topic when you hit F1 on a keyword in the VS editor, click a link in a sample application, or click a Start menu or desktop shortcut. Yet it all starts off as a set of separate Word documents based on the special p&p DocTools template.

    Yes, we have tools. We have a tool that converts multiple Word docs into a set of HTML files, one that generates a CHM file, and one that generates an HxS. But they don't do indexes, which have to be created by hand and then the CHM and HxS files recompiled to include the index. Then it needs a Visual Studio project to compile the HxS into a DocExplorer merge module, and another to create a setup routine to test the merge module. But if you suddenly decide you need to include topic keywords for help links, you have to edit the original Word documents, generate and then post-process the individual HTML files, and start over with assembly and compilation.

    We have a tool (Sandcastle) that can create an API reference doc set as HTML files from compiled assemblies. But you need to modify all of them (often several hundred) if you want them indexed - though we have another (home-made and somewhat dodgy) custom tool for that. And then you have to recompile it all again. And then rebuild the merge module and the setup routine.

    What about PDF? The starting point is the set of multiple Word docs that contain special custom content controls to define the links between topics, and there appears to be no suitable tool to assemble these. So you run the tool that converts them to a set of HTML files, then another dodgy home-built custom tool to process the HTML files and strip out all the gunk PDF can't cope with. Then you build a CHM file and compile in a table of contents and a much-tweaked style sheet. Finally, run it all through another tool that turns the CHM into a PDF document.

    Need to make a minor change to the content because you added a late feature addition to the software? Found a formatting error (that word really should be in bold font)? Got a broken link because someone moved a page on their Web site? Found a rude word in the code comments that Sandcastle helpfully copied into the API reference section? For consistency, the change has to be made in the original Word document or the source code. So you start again from scratch. And it seems that there are only two people in the world who know how to do all this stuff!

    Well, at least we've got a process that copes with changing demands and the unpredictability of software development. But it sure would be nice to have it all in a single IDE like Visual Studio. Even really good sandwiches with four different cheeses don't fully soothe the pain. Mind you, I hear from RoAnn that they have cheese sandwiches on flatbread in the fancy new building 37 cafeteria that they toast in a Panini grill. Being foreign (English), I'm not sure what "flatbread" actually is - surely if they used any other shape the cheese would fall out? Reminds me of the old story about the motorist who turns up at a repair shop and is told that the problem is a flat battery. "Well", says the customer, "What shape should it be?"...

  • Writing ... or Just Practicing?

    Preaching What You Practice

    • 0 Comments

    A couple of years ago I (somewhat inadvertently) got involved in learning more about software design patterns than I really wanted to. It sounded like fun in the beginning, in a geeky kind of way, but soon - like so many of my "I wonder" ideas - spiralled out of control.

    I was daft enough to propose a session about design patterns to several conference organizers, and to my surprise they actually went for it. In a big way. So I ended up having to keep doing it, even though I soon realized that I was digging myself into the proverbial hole. Best way to get flamed at a conference or when writing articles? Do security or design patterns. And, since I suffered first degree burns with the first topic some years ago, I can't imagine how I drifted so aimlessly into the second one.

    Mind you, people seemed to like the story about when you are all sitting in a design meeting for a new product, figuring out the architecture and development plan, and some guy at the back with a pony tail, beard, and sandals suggests using the Atkinson-Miller Reverse Osmosis pattern for the business layer. Everyone smiles and nods knowingly; then they sneak back to their desk to search for it on the Web and see what this guy was talking about. And, of course, discover that there is no such pattern. Thing is, you never know. There are so many out there, and they even seem to keep creating new ones. Besides which, design patterns are scary things to many people (including me). Especially if UML seems to be just an Unfathomable Mixture of Lines.

    Of course, design patterns are at the core of best practice software development, and part of most things we do at p&p. So I now find myself on a project that involves documenting architectural best practice. And, since somebody accidentally tripped over one of my online articles, they decided I should get the job of documenting the most common and useful software patterns. No problem, I know most of the names and there is plenty of material out there I can use for research. And we have a team of incredibly bright dev and architect guys to advise me, so it's just a matter of applying the usual documentation engineering process to the problem. Gather the material, cross reference it, analyze it, review it, and document the outcome.

    Ha! Would that it were that easy. I mean, take a simple pattern like Singleton. GOF (the Gang of Four in their book Design Patterns: Elements of Reusable Object-Oriented Software) provide the definition of the pattern and the intended outcome. To quote: "Ensure a class only has one instance, and provide a global point of access to it." So documenting the intentions of the pattern is easy. The problem comes when you try and document the implementation.

    It starts with the simple approach of making the constructor private and exposing a method that creates an instance on the first call, then returns it every time afterwards. But what about thread safety when creating the instance? No problem; put a lock around the bit that checks for an instance and creates one if there is no existing instance. But then you lock the thread every time you access the method. So start with the test for existence, and only lock if you need to create the instance. But then you need to check for an existing instance again after you lock the thread in case you weren't quick enough and another thread snuck in while you weren't looking. OK so far, but some compilers (including C# but not Visual Basic) optimise the code and will remove the second check, as there is nothing in the routine that appears to change the value of the instance variable between the two tests. So you need to mark the variable as volatile.
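
    Just so you can see what all that waffling amounts to, here's roughly what the final version looks like - a minimal sketch (the class name is mine): private constructor, double-checked locking, and a volatile field so that second check doesn't get optimised away:

    public sealed class Settings
    {
        // volatile so the second check below isn't optimised away or reordered
        private static volatile Settings instance;
        private static readonly object syncRoot = new object();

        // private constructor - nobody outside can call new Settings()
        private Settings() { }

        public static Settings Instance
        {
            get
            {
                if (instance == null)            // first check: avoid taking the lock on every call
                {
                    lock (syncRoot)
                    {
                        if (instance == null)    // second check: another thread may have got here first
                        {
                            instance = new Settings();
                        }
                    }
                }
                return instance;
            }
        }
    }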

    Now that you've actually got the "best" implementation of the pattern, you discover that the recommended approach in .NET is to allow the framework to create the instance as a shared variable automatically, which they say is guaranteed to be safe in a multi-threaded environment. However, this means that you don't get lazy initialization - the framework creates the instance as soon as the class is loaded. But you can wrap the instance in a nested class and let .NET create it only on demand. So which is best? What should I recommend? I guess the trick is to document both (as several Web sites already do). Problem solved? Not quite. Now you need to explain where and when use of the pattern is most appropriate.
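
    Sketched out, those two alternatives look roughly like this (again, the class names are only illustrative). The first relies on the framework's guarantee that a type's static initializer runs only once; the second gets lazy initialization back by hiding the instance in a nested class that isn't initialized until the Instance property is first used:

    // Framework-initialized: thread-safe, but created as soon as the class is loaded.
    public sealed class EagerSingleton
    {
        private static readonly EagerSingleton instance = new EagerSingleton();

        private EagerSingleton() { }

        public static EagerSingleton Instance
        {
            get { return instance; }
        }
    }

    // Nested-class version: still thread-safe, but the instance isn't created
    // until the first call to Instance touches the Holder class.
    public sealed class LazySingleton
    {
        private LazySingleton() { }

        public static LazySingleton Instance
        {
            get { return Holder.instance; }
        }

        private static class Holder
        {
            internal static readonly LazySingleton instance = new LazySingleton();
        }
    }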

    At this point, I discovered I'd proverbially put my head into a hornet's nest. Out there in the real world, there seems to be a 50:50 split between people who say that using Singleton is a worse sin than GOTO, and those who swear by it as a useful tool in the programmer's arsenal. In fact, I spent an hour reading more than 100 posts in one thread that (between flamings) never did provide any useful resolution. Instead of Singleton, they say, use a shared or global variable. As much of the online stuff seems to describe Singleton only as an ideal way to implement a global counter, I can see that reasoning. However, I've used it for things like exposing read-only data from an XML disk file, and it worked fine. The application only instantiates it on the few occasions that the data is required, but it's available quickly and repeatedly afterwards to all the code that needs it. I suppose that's one of the joys of the lazy initialization approach.

    And then, having fought my way through all this stuff, I remembered the last project I worked on. If you use a dependency injection framework or container (such as the Unity Application Block) you have a built-in mechanism for creating and managing singletons. It even allows you to manage singletons using weak references where another process originally generated the instance. And you automatically get lazy initialization for your own classes as well as singleton behavior - even if you didn't originally define the class to include singleton capabilities. So I guess I need to document that approach as well.
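
    For completeness, the container-managed approach looks something like the following sketch. The interface and class names are invented for illustration, and it assumes a reference to the Unity Application Block assembly; the ContainerControlledLifetimeManager is what tells Unity to create the object once (lazily, on the first Resolve) and hand back that same instance ever afterwards:

    using System;
    using Microsoft.Practices.Unity;

    public interface IPriceList { }
    public class XmlPriceList : IPriceList { }

    class ContainerSingletonDemo
    {
        static void Main()
        {
            IUnityContainer container = new UnityContainer();

            // Register XmlPriceList so the container treats it as a singleton.
            container.RegisterType<IPriceList, XmlPriceList>(
                new ContainerControlledLifetimeManager());

            IPriceList first = container.Resolve<IPriceList>();   // instance created here
            IPriceList second = container.Resolve<IPriceList>();  // same instance returned

            Console.WriteLine(ReferenceEquals(first, second));    // True
        }
    }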

    And then there are sixty or so other patterns to tackle. And some of them might actually be complicated...
