Random Disconnected Diatribes of a p&p Documentation Engineer
Usually the only time I feel like digging a big hole and climbing in is when I make some inappropriate remark at an important social event, or tell a rather too risqué joke during a posh dinner party. However, since I never get invited to posh dinner parties, and extremely rarely have the opportunity to attend any "cream of society" gatherings, I've so far avoided the need to invest in a new shovel. And, not being a polar bear, I don't have a tendency to view large holes in the snow as suitable resting places for the winter either. In fact, even though I'm quite adept at sleeping, it turns out I'm a rather late convert to the notion of hibernation.
As we've probably already reached the "what on earth is he rambling on about this week" moment, perhaps I need to mention that I'm rambling on about my recent epiphany in terms of turning off the computer at the end of a working day. Maybe it's because of the many years working with operating systems where you could quite safely just pull the plug or do the BRST (Big Red Switch Time) thing, yet be confident that the whole caboodle would happily start up again fully refreshed and ready to go the next day as soon as you applied some volts to it.
None of my old home computers, Amstrad PCWs, or DOS-based boxes ever minded an abrupt termination of electricity (except you lost whatever you forgot to save), and the Wii and other more consumer-oriented stuff we have also seems to cope with being abruptly halted. But not Windows Vista (and, I assume, Windows 7). You get those nagging reminders that you've been naughty, and a threat that it will spend the next four hours rummaging round your system just to see if you did any damage. I suspect this is probably just a long "wait" loop that prints random stuff on the screen, designed to teach you a lesson.
Of course, when XP was king, we were offered the chance to "Hibernate", and sometimes even "Sleep" rather than turning the thing off. I don't know if anyone actually made this work reliably - it never did on any of the laptops I've owned, and the XP-based Media Center box we had up till recently only managed it through some extra bolt-on hardware and software. Even then, I had to reboot it at least once a week to let it catch up again. So I've been extremely wary of anything other than a proper "shut down" at the end of each session.
But recently I've noticed that colleagues seem to be able to shut the lid and briefcase their laptop, yet have it spring almost instantly into life when they open it up again; and without burning a hole in the side of the bag, as my tiny Dell XPS tried to do last time I attempted this. Aha! It turns out they are running Vista. Maybe the time it takes to actually get started from cold (even if you hadn't been naughty last time you turned it off) was a contributing factor to their hibernation behavior...
For me, matters came to a head with the machine we use to view the signal from the IP camera that my wife uses to watch night-time wildlife (foxes, badgers, etc.) in our garden. Like most software written by companies that actually specialize in hardware, the viewer application is quirky - and that's being kind. It won't remember connection details, has no automation facilities, and accepts no startup parameters. The only way to get it running is to mousily fiddle with the UI controls. There aren't even any shortcut keys or a recognizable tab order, so my usual kludge of using a program that generates key presses won't work either.
This means that, even though I can enable auto login for Windows (it's not part of my domain), I can't get the **** viewer to connect automatically at startup. It was only after a lot of fiddling about that I decided to try hibernating the machine with the "login when waking from sleep" option disabled, so you only have to close the lid to turn it off, then hit the power button to get back to watching wildlife. And, amazingly (to me at least) it seems to work flawlessly. The only time it actually gets turned off is when it needs to reboot for an update patch.
Suitably impressed, I enabled Sleep mode on the new Media Center box, which runs Vista Home Premium Edition. I managed to get the screensaver I adapted some while back (see The Screensaver Ate My Memory) to run on Vista. It shows assorted photos from our collection for a specified time and then terminates, allowing the machine to go to sleep. Yet it reliably wakes up in time to record scheduled TV programs, collect guide data, and do all the other complicated stuff that Media Center seems to require (to see how many things it does, just take a look in Task Scheduler).
So, somewhat late to the party, I'm now a confirmed sleeper and hibernator. My laptop is happily slumbering away (though not in a large hole) as I write this - on another machine obviously. And the incredible thing is that it comes back to life faster than my (somewhat dated) mobile phone does. In fact, it takes the Wii box, the consumer DVD player, and the TV longer to get going than my laptop. I've even got my wife's tiny Vista-based laptop set up to hibernate so she can get to her vital email inbox more quickly. Maybe we're at last reaching "consumer-ready" status for computers? Though I'd have to say that I haven't needed to reboot my phone three times in the last month to install updates.
And while we're on the subject of screensavers (yes we are), I still can't figure why I had to completely rebuild the one that worked fine on XP to make it run on Vista. The problem was that it has a series of configuration settings, which include the path to the root folder containing the pictures to display. It saves these using the VB built-in persistence mechanism, which quite happily remembers the settings each time you open the configuration window. But when Vista fires up the screensaver after the requisite period of inactivity, it suddenly forgets them all again.
At first I thought it was to do with the weird path mappings Vista uses for the Public Pictures folder, but no amount of twiddling would make it work (have you ever tried debugging a screensaver?). And I can't find any sample code that shows how you get to it using environment variables. However, after a lot of poking about in the code, it seems that Vista may actually run the screensaver under a different account or context from the current user (though I haven't been able to confirm this), so the user-specific settings you make in the configuration window can't be located. Finally, after applying my usual large-hammer-based approach to writing code (I made it store and read the settings from a simple text file in the C:\Temp folder), it works again.
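For what it's worth, the idea really is as dumb as it sounds. Here's a minimal sketch of the same trick in Python (the original was VB, and the file name and setting keys here are made up for illustration): just write the settings out as plain key=value lines to a file at a fixed, user-independent location, so whatever account or context runs the screensaver can read them back.

```python
import os

# Hypothetical settings file name, standing in for the C:\Temp location
# in the post (a relative path so the sketch runs anywhere).
SETTINGS_FILE = "screensaver_settings.txt"

def save_settings(settings, path=SETTINGS_FILE):
    """Write simple key=value pairs, one per line."""
    with open(path, "w") as f:
        for key, value in settings.items():
            f.write(f"{key}={value}\n")

def load_settings(path=SETTINGS_FILE):
    """Read the settings back; returns an empty dict if no file exists yet."""
    settings = {}
    if not os.path.exists(path):
        return settings
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line:
                # Split on the first "=" only, so values can contain "="
                key, _, value = line.partition("=")
                settings[key] = value
    return settings

# The configuration window would call save_settings(); the screensaver
# itself, running under whatever context Vista chooses, calls load_settings().
save_settings({"PictureFolder": r"C:\Users\Public\Pictures", "DisplaySeconds": "10"})
```

Nothing a proper settings API wouldn't do better, of course, but unlike per-user persistence it doesn't care who is asking.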
At last I can sleep (and hibernate) easy...
It's probably safe to say that only a very limited number of the few people who stroll past my blog each week were fans of the Bonzo Dog Doo Dah Band. Or even, while they might recall their 1968 hit single "I'm the Urban Spaceman" (which, I read somewhere, was produced by Paul McCartney under the pseudonym of Apollo C. Vermouth), are aware of their more ground-breaking works such as "The Doughnut In Granny's Greenhouse". So this week's title, based on their memorable non-hit "Can Blue Men Sing The Whites" is pretty much guaranteed to be totally indecipherable to the majority of the population. Except for the fact that the BBC just decided to use it as the title of a new music documentary.
But, as usual, I'm wandering aimlessly away from the supposed topic of this week's verbal perambulation. Which is, once again, about agileness (agility?) in terms of documenting software under development. No, really, I have actually put some solid thought into this topic over the past several months, and even had one or two of the conclusions confirmed by more learned people than me, so they are not (wholly) the kind of wild stabs into the dark that more commonly make up the contents of these pages.
Where's The Plan, Dan?
One of the major factors in agile development is "stories". Users supposedly provide a list of features they would like to see in the product; the development team evaluates these and draws up a final list of things that they think the software should include. They then sort the list by "nice-to-haveness" (taking into account feasibility and workload), and produce a development plan. But they don't know at that stage how far down the list they will actually get. Most products are driven by a planned (or even fixed) release date, so this approach means that the most important stuff will get done first, and the "nice to haves" will be included only if there is time.
It would be interesting if they applied agile methods in other real world scenarios. Imagine taking your car to the dealer to get it serviced. Their worksheet says a service takes two hours, and there is a list of things they're supposed to look at. You'd like to think that if the mechanic doesn't get them all done in the allocated time they would actually finish them off (even if you had to pay a bit more) rather than leaving the wheel nuts loose or not getting round to putting oil in the engine. Or maybe not having time to check if you need new brake pads.
Of course, every sprint during the dev cycle should produce a releasable product, so multiple revisions of the same section of code can often occur. So how do you plan documentation for such an approach? You can assume that some of the major features will get done, but you have no idea how far down the list they will get. Which means you can't plan the final structure or content of the docs until they get to the point where they are fed up fiddling with the code and decide to freeze for final testing. You end up continually reorganizing and reworking sections and topics as new features bubble to the surface.
But whilst the code may just need some semi-mechanized refactoring and tidying up to accommodate new features, the effect on the docs may require updates to feature overviews, links, descriptions, technical details, tables of methods, schematics, code samples, and the actual text - often in multiple locations and multiple times. The burden increases when the doc set is complex, contains many links, or may need to support multiple output formats.
What's the Story, Jackanory?
Each feature in the list of requirements is a "story", so in theory you can easily document each one by simply reading what the developers and architects say it does. And you can look at the source code and unit tests to see the way it works and the outcomes of new features. Or, at least, you can if you can understand the code. Modern techniques such as dependency injection, patterns such as MVC, and language features such as extension methods and anonymous typing mean that - unless you know what you are looking for and where to find it - it can be really hard to figure what stuff actually does.
In addition, the guys who write the unit tests don't have clarity and education as objectives - they write the most compact (and unrealistic in terms of "real world" application) code possible. OK, so you can often figure out what some feature does from the results it produces, but getting an answer to simple questions that are, in theory, part of the story is not always easy. I'm talking about things like "What does it actually do (in two sentences)?", "Why would I use this feature?", "How does it help users?", and "When and how would you recommend it be used?".
Even a demo or walkthrough of the code (especially from a very clever developer who understands all of the nuances and edge cases) can sometimes be only of marginal help - theory and facts whooshing over the top of the head is a common feeling in these cases. Yes, it showcases the feature, but often only from a technical implementation point of view. I guess, in true agile style, you should actually sit down next to the developer as they build the feature and continually ask inane questions. They might even let you press a few keys or suggest names for the odd variables, but it seems a less than efficient way to create documentation.
And when did you last see a project where there were the same number of writers as developers? While each developer can concentrate on specific features, and doesn't really need to understand the nuances of other features, the writer has no option but to try and grasp them all. Skipping between features produces randomization of effort and workload, especially as feedback usually comes in after they've moved on to working on another feature.
Is It Complete, Pete?
One of the integral problems with documenting agile processes is the incompatible granularity of the three parts of the development process. When designing a feature, the architect or designer thinks high level - the "story". A picture of what's required, the constraints, the objectives, the overall "black boxes on a whiteboard" thing. Then the developer figures out how to build and integrate the feature into the product by breaking it down into components, classes, methods, and small chunks of complicated stuff. But because it's agile, everything might change along the way.
So even if the original plan was saved as a detailed story (unlikely in a true agile environment), it is probably out of date and incorrect as the planned capabilities and the original nuances are moulded to fit the concrete implementations that are realistic. And each of the individual tasks becomes a separate technical-oriented work item that bears almost no direct relationship to the actual story. Yet, each has to be documented by gathering them all up and trying to reassemble them like some giant jigsaw puzzle where you lost the box lid with the picture on.
And the development of each piece can be easily marked complete, and tested, because they are designed to fit into this process. But when, and how, do you test the documentation and mark it as complete? An issue I've come up against time and time again. If the three paragraphs that describe an individual new feature pass review and test, does that mean I'm done with it? How do I know that some nuance of the change won't affect the docs elsewhere? Or that some other feature described ten pages later no longer works like it used to? When you "break the build", you get sirens and flashing icons. But how do you know when you break the docs?
Is It A Bug, Doug?
So what about bugs? As they wax and wane during the development cycle, you can be pretty sure that the project management repository will gradually fill up with new ones, under investigation ones, fixed ones, and ones that are "by design". I love that one - the software seems to be faulty, or the behavior is unpredictable, but it was actually designed to be like that! Though I accept that, sometimes, this is the only sensible answer.
Trouble is that some of these bugs need documenting. Which ones? The ones that users need to know about that won't get fixed (either because it's too difficult, too late, or they are "by design")? The ones that did get fixed, but change the behavior from a previous release? Those that are impossible to replicate by the developers, but may arise in some esoteric scenario? What about the ones that were fixed, yet don't actually change anything? Does the user need to be told that some bug they may never have come across has been fixed?
And what if the bug was only there in pre-release or beta versions? Do you document it as fixed in the final release? Surely people will expect the beta to have bugs, and that these would be fixed for release. The ones you need to document are those that don't get fixed. But do you document them all and then remove them from the docs as they get fixed, or wait till the day before release and then document the ones they didn’t get time to fix? I know which seems to be the most efficient approach, but it's not usually very practical.
Can I Reduce the Hurt, Bert?
It's OK listing "issues", but a post like this is no help to anyone if it doesn't make some suggestions about reducing the hurt, in an attempt to get better docs for your projects. And, interestingly, experience shows that not only writers benefit from this, but also the test team and other non-core-dev members who need to understand the software. So, having worked with several agile teams, here's my take (with input from colleagues) on how you can help your writers to create documentation and guidance for your software:
While the core tenets of agile may work well in terms of getting better code out of the door, an inflexible "process over product" mentality doesn’t work well for user documentation. Relying on individuals and interactions over processes and tools, working software over comprehensive documentation, and responding to change over following a plan can combine to make the task of documentation more difficult.
Common effects are loss of fluidity and structure, and randomization. One colleague mentioned how, at a recent Scrum workshop for content producers, she came across the description: "We're in a small boat with fogging goggles riding in the wake of an ocean liner that is the engineering team".
Was It A Success, Tess?
I was discussing all these issues with a colleague only the other day, and he made an interesting comment. Some teams actually seem to measure their success, he said, by the way that they embraced agile methods during development. The question he asked was "Is the project only a success if you did agile development really well?" If analysis of all the gunk in your project management repository tells you that the agile development process went very well, but the software and the documentation sucks, is it still a triumph of software development?
Or, more important, if you maybe only achieved 10% success in terms of agile development methods but everyone loves the resulting software, was the project actually a failure...?
...that is the question. Whether 'tis nobler in the server cabinet to suffer the outrageous lack of valuable new functionality, or to take arms against the powerful improvements to the core Windows Server operating system. And by opposing, manage without them? To sleep (or hibernate): perchance to dream of an easy upgrade. I guess you can see why I don't write poetry very often - it always seems to end up sounding like somebody else's.
So the disks for Server 2008 R2 dropped through my letter box the other week, and since then I've pondered on whether to upgrade. It's less than a year since I spent a whole week crawling around inside the server cabinet installing two sparkly new servers running Windows Server 2008, upgraded the networking, set up four virtual machines on Hyper-V, and generally dragged my infrastructure screaming and cursing into the twenty-first century. And now it seems it was all to no avail. I'm out of date and running legacy systems all over again.
OK, so I assumed that there would be a Windows Server 201x at some point, and that I'd once again fall by the wayside, but I never expected it to be this soon. While the hardware might not last out the next decade, I kind of hoped that I'd just have to drop the VMs onto a couple of new boxes when the existing ones decided it was time for the bits of bent wire and plastic to give up the ghost. But now it seems the ones and zeros need to be replaced as well. Maybe they're nearly worn out too.
So I printed off all the stuff about fixing upgrade problems (with the fair assumption that - if they exist - I'm going to find them), read the release notes, and then tossed the disk into the drive of the standby machine. At least if I break that one I can reinstall from a backup without interrupting day-to-day service. Of course, it would also be an interesting test of my backup strategy, especially as I've not yet had the misfortune to need to resurrect a Windows 2008 box using the built-in backup and restore feature.
After a few minutes rummaging about inside the machine, the installer produced its verdict. OK, so I did forget about domain prep (it's also the backup domain controller), but it also said it needed 18+ GB of free space on Drive C. Not something I was expecting. I only have 17GB free, so I could probably move the swap file to another drive (there's over 100GB available there) to make up the difference, but would that break the upgrade? And the VMs have a lot less free disk space. I'll need to grow the partition for them, and then try and shrink it afterwards - otherwise it will take even longer to export backups. Hmmm, not such a simple decision now, is it?
One thing is clear, next time I order any machine I'm going to specify it with 4 x 1 terabyte drives. I seem to spend my life trying to find extra disk space, even though the current boxes have nearly 400 GB in them. And they spend 99.9% of their time with the performance counter showing 1% load. It's a good thing I'm not trying to do something enterprisy with them.
So with it looking likely that I'll be confined to my legacy version of Windows 2008 for the foreseeable future, I decided to review what I'd be missing. Maybe it's only a facelift of the O/S, and there are just a few minor changes. Well, not if you look at the "What's New in Windows Server 2008 R2" page. There's tons of it. Pages and pages of wonderful new features that I can drool over. But do I need them? I guess the one area I'm most interested in is updates to Hyper-V, and that list seems - to say the least - a little sparse. I don't need live migration, and I'm definitely convinced that, with the minimal workload on my systems, I don't need enhanced processor support or bigger network frames. And dynamic virtual machine storage won't help unless I stuff the box with bigger disks.
The one feature I would like is the ability to remove the redundant connections* that Hyper-V creates in the base O/S (see "Hyper-Ventilation, Act III"), but I guess I can live without that as well. So what happens if I don't upgrade? Will I become a pariah in the networking community? Will my servers fall apart in despair at not getting the latest and greatest version of the operating system? Will I be forever dreaming about the wonderful new applications that I can't run on my old fashioned O/S? Will I still be able to get updates to keep it struggling on until I get round to retiring?
Or maybe a couple of bruisers from the Windows Server division will pop round with baseball bats to persuade me to upgrade...
* Update: In Windows Server 2008 R2 you can untick the Allow management operating system to share this network adapter option in Virtual Network Manager to remove these duplicated connections from the base O/S so that updates and patches applied in the future do not re-enable them.
One of the features of working from home is that, if you aren't careful, you can suddenly find that you haven't been outside for several days. In fact, if you disregard a trip to the end of the drive to fetch the wheely bin, or across the garden to feed the goldfish, I probably haven't been outside for a month. I suppose this is why my wife, when she gets home from work each day, feels she has to apprise me of the current weather conditions.
Usually this is something along the lines of "Wow, it's been scorching today!" or "Gosh, it's parky enough to freeze your tabs off!" (tip for those not familiar with the Derbyshire dialect: "parky" = "cold", "tabs" = "ears"). However, last week she caught me by surprise by explaining that "It's rather Autumnal, even Wintery". Probably it's my rather weird fascination with words that set me off wondering why it wasn't "Autumny and Wintery", or even "Autumnal and Winteral". OK, so we also say "Summery", but "Springy" doesn't seem right. And, of course, I soon moved on to wondering if Americans say "Fally" or "Fallal". I'll take a guess that they end up with one of the poor relations in the world of words (ones that have to use a hyphen) with "Fall-like", just as we say "Spring-like".
Note: I just had a reply from a reader in the US North West who says that the only words they use for "Fall" are "rainy", "misty", "stormy", and "cloudy". I guess that agrees with my own experience of all the trips I've made there, including a few in "Summer".
The suffix "al" comes from Latin, and means "having the character of". So Autumnal is obvious. But what about "lateral"? Assuming the rules for the "al" suffix, it means "having the character of being late or later" (or, possibly, "dead"), whereas I've always assumed it meant something to do with sideways movement or thinking outside the box. Maybe I need to splash out $125 to buy Gary Miller's book "Latin Suffixal Derivatives in English: and Their Indo-European Ancestry". It sounds like a fascinating bedtime read. Though a more compact (and less expensive) option is the useful list of suffixes at http://www.orangeusd.k12.ca.us/yorba/suffixes.htm. This even includes a page full of rules to apply when deciding how to add different suffixes to words, depending on the number of syllables and the final letter(s).
So, coming back to my occasional trips outdoors, why do I have a "wheely" bin instead of a "wheelal" bin or even a "wheelive" bin. The suffix "y" is supposedly of Anglo-Saxon (and Greek) origin and means "having the quality of" or "somewhat like". Pretty much the equivalent of the "al" suffix, which also means "suitable for". The "ive" suffix means much the same, but - like "al" - derives from Latin. What I really have, however, is a "wheelable" or "wheelible" bin, using one of the Latin suffixes that means "capable of being (wheeled)". Is it any wonder that people find English a difficult language to learn? Though I suppose the definitive indication of this is "Ghoti" (see http://en.wikipedia.org/wiki/Ghoti).
Perhaps I can make the code samples in the technical documentation I create more interesting by applying descriptive suffixes? The "ory" suffix means "place for" (as in "repository"), so my names for variables that hold references to things could easily be customerAccountory and someTemporaryStringory. And a class that extends the Product base class could be the Productive class. That should impress the performance testing guys.
Or maybe I could even suggest to the .NET Framework team that they explore the possibility of suffixing variable types for upcoming new language extensions. An object that can easily be converted to a collection of characters should obviously be defined as of type Stringable, and something that's almost (but not quite) a whole number could be Integeral, Integeric, Integerous ("characterized by"), or just Integery.
I'm not quite sure how she did it, but this year my wife managed to convince me to follow the latest weekly pandering to public opinion that is "The X Factor" - our annual TV search here in Britain for the next major singing and recording star. I did manage to miss most of the early heats; except for those entrants so excruciatingly awful that my wife saved the recording so she could convince me that there's a faint possibility I don't actually have the worst singing voice in the world. Though I suspect it's a close-run thing.
And I have to admit that some of the finalists do have solid performing capabilities. There's a couple of guys in the "over 25s" section that really look like they could actually make it as recording artists. Though, to really tell, you probably should watch (or rather, listen) with the screen turned off so you aren't distracted by the accompanying (and very talented) dancers, the audience madly waving banners, and the pyrotechnics and light shows that accompany every performance. I mean, it's supposed to be all about the voice.
And here we come to the crux of the matter. After one particularly controversial decision where a young lady with probably the purest and most versatile singing voice was voted off, the show's owner Simon Cowell said that "he trusts public opinion" and that he "wouldn't organize a show like this if he didn't". Unlike what seems to be the majority of the baying public out there, I actually agreed with his decision. If the show is supposed to be about finding the best artist based on the opinions of the "Great British Public", then they should be allowed to make the decisions.
What's worryingly clear, of course, is that the public don't actually vote based on the principles of the competition - they vote for their favorite. I suspect that the tall and attractive blonde lady gets a lot of votes from young men, and the teenage lad (who, to be honest, doesn't have the greatest voice) gets a lot of votes from teenage girls. Meanwhile my wife wanted the guy with the mad hairdo who looks a bit like Brian May from Queen (and is a very passable rock music singer) to win. Or maybe the one who wears funny hats.
But more than anything, you have to assume that a great many people vote - mainly out of spite - for the act that Simon Cowell (currently Britain's most hated person) has been trying to get voted off for the past many weeks. Rather like the last TV-based ballroom dancing contest where the lumbering overweight TV reporter actually had to resign from the competition because it was clear that he was the worst every week, but the public kept voting him in.
Which brings us to the real crux of the matter: who is best placed to choose the optimum outcome for any activity that involves choice? The principles of democracy suggest that allowing everyone to have their say, and choosing the most popular outcome, is the way to achieve not only popularity, but success as well. It's based on the assumption that everyone will make a logical choice based on their situation, and the resulting policy will therefore satisfy the largest number of people and achieve the optimum outcome. Though that doesn't appear to be the way that the People's Republic of Europe works, where the public gets no choice, but that's a whole other ballgame.
In our world of technology development generally, and particularly here in p&p, we rely on public opinion a great deal. We use advisory boards and public votes to figure out the future direction and features for our offerings, and to provide an overall guide for the optimum direction of our efforts. In theory, this gives us a solid view of the public opinion of our products, and ensures that we follow the path of improvement in line with needs and desires.
But is this actually the case? If 6.5 million people watched "X Factor", but only a few thousand voted, is the result valid? Could it be that most people (like me) have an opinion on who should win, but either lack the professional ability to make a properly informed decision on their real talent, or simply can't be bothered picking up the phone? In a similar way, if only 35% of the population actually vote in a general election, is the result actually valid? Is it only the opinionated or those with an axe to grind who influence the final outcome?
It's worrying if this trend also applies to software development. When we run a poll, send out questionnaires, or consult an advisory group, are we actually getting the appropriate result? If the aim is to widen the user base for a product, is asking people who already use it (and, in many cases, are experts) what they want to see the best way to broaden the reach and improve general user satisfaction? No doubt there's been plenty of study in this area from polling organizations and others associated with statistical modelling, but it's hard to see how you adjust numbers to make up for the lack of input from a very large proportion of the population.
In particular, if you are trying to make a product or service more open to newcomers, and widen the user base to include a broader spectrum of capabilities, how do you get to the people who have never heard of your product? Is asking the existing expert user base, and perhaps those already interested in learning about it, the best way? And, if not, how else can you garner a wider range of non-biased opinion?
Mind you, I reckon the Geordie with the big teeth will win...