Random Disconnected Diatribes of a p&p Documentation Engineer
Last week I was creating short introduction videos for our Architecture Guide project. You'd assume that this would be easy enough - write some slides and record the commentary, and then generate a WMV file from the recording. I used Camtasia, which integrates with PowerPoint and makes it really easy to create the recording and edit it. Only then, when I generated the WMV file, did I start to appreciate just how large these kinds of files can be.
You see, the problem is that we are limited to a maximum file size of 4MB and a running time of five or six minutes. Yet the first attempt using the default settings produced a 12MB file. So that's when I started digging around the settings, and reading up on video options and formats. You'd think that with the power and usability of modern software there would be some setting, like there is in Windows Movie Maker, saying "just make it fit into 4MB and be wonderful". No such luck.
Yes, I did try importing the WMV into Movie Maker and exporting it at 4MB, but the quality was so bad you couldn't even read the slide titles. So I tried recording in AVI format and converting that, with equally awful results. In the end, I used the "Slides and Audio (Medium)" setting in Camtasia and edited the commentary (removing pauses and superfluous waffle) until the file was just below 4MB. And then repeated the process for the other ten presentations on my "to do" list. The final quality is acceptable, though the audio compression makes me sound like I've developed a lithp.
Perhaps I just don't grasp the technicalities in enough depth. You can edit a whole range of audio and video parameters for the encoder, so in theory you can delicately tweak the settings to get an optimal output quality. But none of it seems to do quite what you'd expect. You reduce the frame rate from 10fps to 5fps and little dots start crawling all over the slides like a disorganized army of ants slowly eating the words. You change the keyframe interval from 1 second to 5 seconds, and you end up looking at the introduction slide for the first two minutes of the video. You reduce the audio bandwidth from 22kHz to 16kHz (mono) and it sounds like someone playing a kazoo.
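To be fair to the encoder, part of the problem is plain arithmetic. Here's a back-of-the-envelope sketch (ordinary Python; the 4MB limit and six-minute length are from above, and nothing here is Camtasia-specific):

```python
# Rough bitrate budget for squeezing a screencast into a fixed file size.
# With a hard ceiling on bytes, the only lever left is bits per second.

def budget_kbps(size_mb: float, minutes: float) -> float:
    """Total bitrate (kbit/s) available for audio + video combined."""
    total_bits = size_mb * 1024 * 1024 * 8   # file-size limit in bits
    seconds = minutes * 60
    return total_bits / seconds / 1000       # kilobits per second

print(round(budget_kbps(4, 6), 1))   # about 93.2 kbit/s for everything
```

Just over 90 kbit/s for audio and video combined - less than a single decent MP3 stream - which goes a long way to explaining the ants and the kazoo.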
It's rather like if your TV changed channel every time you adjusted the volume, or if the phone company rearranged the digits of your telephone number whenever someone tried to call you. I even worked diligently through a set of recommendations from some clever people who do this kind of thing all the time. It took ages to locate all the settings and make sense of them, and I wish I'd read the instructions from the end rather than starting at step 1. When I finally got it all set up, I discovered the final paragraph of the document contained the rather less than useful comment "Using these settings, I managed to get the file size down to below 6MB per minute!" Wow, that's a lot of help when I need to get 5 or 6 minutes into 4MB...
Of course, after I finally did manage to squeeze my output into the required dimensions (if you'll pardon the expression) I thought it would be a good thing to document what I'd discovered. I mean, I work for the documentation team, so I probably ought to produce some dregs of documentation now and then - if only to justify my existence. Of course, in this brave new media-based world, it's actually "guidance" I create, not just boring old documentation, so I did think it would be neat to do a video on how to record videos. A sort of "guidance on guidance" thing. Only trouble is, I couldn't find a way to get Camtasia to allow me to record myself using Camtasia... There must be some way to do it, probably using a VM (or an ordinary video camera). Or by installing another video capture program to capture you using the original video capture program. I wonder if the guidance team at TechSmith (who make Camtasia) have to secretly smuggle somebody else's software onto their machines to create their guidance...?
Mind you, even better, next week I'm doing "train the trainer" videos to teach people how to teach people to use our gleaming new Architecture Guide (I bet you'd forgotten that this post was about the new guide). So if I create some documentation on how to do that, is it "guidance on guidance on guidance"? It's all starting to sound a bit like Chinese Whispers (or "Telephone" in the US). You know, the game they play at the kind of parties I'm never invited to where you have to pass on a whispered message and then see what it comes out as after ten or twenty people have communicated it.
I remember reading how, in the trenches during the First World War, supposedly they would pass commands back to the reserve lines in this way. Although it may not make sense to non-UK readers, the command "Send reinforcements, we're going to advance" is said to have been delivered to the reserve lines as "Send three and fourpence, we're going to a dance". At least it would probably be more entertaining than the videos I'm creating.
What is it with airports? I mean, if I built an airport in a town called Mansfield, I would probably seriously consider calling it "Mansfield Airport". It seems a good name since it identifies where the airport is, and what region or area it serves. The island of Madeira has only one airport (which, I guess, is not surprising as 95% of the island slopes at around 45 degrees), located next to the town of Santa Cruz. However, it's not called "Madeira airport", or even "Santa Cruz airport". It's called "Funchal airport"; I suppose because Funchal is the island's capital city. I wonder what they'll do when they finally bulldoze enough of the island to build another airport?
Imagine if we followed that approach here in England - we'd have dozens of airports called "London airport". Strangely, however, we actually do have four called that already; "London (City)", "London (Gatwick)", "London (Heathrow)", and "London (Stansted)". And only one of them is in London. Maybe they ought to rename a few US airports the same way. I can start asking for a ticket to "Washington (Seattle)", which will be really confusing because Seattle is in Washington state... I think I can feel jet lag coming on already.
Mind you, renaming airports seems to be a growing sport. Here in England they renamed Liverpool airport to "John Lennon airport", just in case anybody who knew who John Lennon was didn't know that the Beatles came from Liverpool. And Doncaster airport got renamed to "Robin Hood airport", even though it's 50 miles from Sherwood Forest. In fact, Mansfield (just across the motorway from where I live) is within the boundaries of the old Sherwood Forest. I wonder if, when I build my airport, I can ask for the name back.
Best of all, though, is the airport we flew from last week. For as long as I can remember, it's been called "East Midlands airport" (EMA). It's in the East Midlands, just inside the Derbyshire boundary and not far from Nottingham. Recently, however, Nottingham city council tried to get it renamed "Nottingham airport", though that meant they'd need to change the name of the existing airport at Tollerton that's already called "Nottingham airport". But then Derby city council got upset, so they considered calling it "Nottingham/Derby airport".
However, it's not far from Leicester either, so they obviously decided they wanted their share and that it should be called "Nottingham/Derby/Leicester airport". I discovered that they resolved the situation by calling it "East Midlands Airport Serving Nottingham, Derby and Leicester". It's a good thing they built the new arrivals hall, or they wouldn't have had enough room for the sign.
Still, maybe the airports thing is just change for change's sake. Worse are changes due to stupid bureaucracy. Today, as I was reading Motor Cycle News while waiting for a haircut, I discovered that the nameless bureaucrats who run the People's Republic of Europe have stipulated that the new driving test for motorcycles will include a "swerve" test to be executed at 50 kilometers per hour. In real money, that translates into 31.07 miles per hour. Unfortunately, almost all of the existing driving test centers are located in built-up areas (obviously) where the speed limit is 30 miles per hour. So, have a guess what the solution is:
a) Allow the "swerve" test to be taken at 30 miles per hour
b) Build 220 new driving test centers outside urban areas
If you answered a), you are obviously not familiar with European bureaucracy. Yep, they stipulated that the test can only be taken at one of the new out-of-town test centers. OK, so a few of the 220 new test centers are due to be ready (perhaps) when the change takes place. I wonder how many hospitals they could have built with the money...?
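In case you want to check my "real money" arithmetic, the conversion is simple enough - a throwaway Python sketch, nothing official about it:

```python
# Converting the new test's 50 km/h swerve speed into miles per hour.
# 1 mile is defined as exactly 1.609344 km, so 1 km/h = 1/1.609344 mph.

def kmh_to_mph(kmh: float) -> float:
    return kmh / 1.609344

print(round(kmh_to_mph(50), 2))   # 31.07, as quoted above
```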
After all that, it's good to know that we, here in the software industry, aren't tempted to change the names of things just for fun and for no reason. I'm absolutely convinced that the next version of Windows will be called "Windows 2010", or maybe "Windows XP Extra", or perhaps "Windows Vista II". And it will integrate seamlessly with Hotmail, or MSN, or Windows Live. And provide an architecture for building applications based on SOA, or SaaS, or S+S, or (like airports) it may have varying cloud cover.
As a writer, I enjoy the weirdness of words. In the English (and US English) language, and particularly in technical writing, words often mean something distinctly different from their initially apparent meaning. When I'm looking at text provided by other members of the teams I work with, such as developers and architects, I often come across a word or phrase where the usage and context is obviously familiar, yet the real meaning is totally inappropriate. And fixing the text sometimes takes a determined effort as I try to bend my brain away from the obvious to look for the appropriate.
For example, "Use a protocol like HTTP or TCP" or "May negatively impact performance". So what's wrong with these? Well, "...like HTTP or TCP" could mean "...have fond feelings for HTTP or TCP". In the same vein as the well-known expression "Time flies like an arrow, fruit flies like a banana". And "...negatively impact performance" might be taken to mean it actually improves it. You see where I'm going?
Bear in mind that everything we produce must follow strict style and word selection guidelines so that it is easy for those whose first language is not English to assimilate. And, of course, it must also be easily translatable by mechanized tools into other languages. OK, so the tools these days are very good. I read in a computer magazine last week about a guy who did the obvious test - he took some English text, fed it through an English/French translator, and then fed the result through a French/English translator. The result was understandable, if not actually fully readable, but it was "good enough".
Of course, I immediately decided it would be interesting to push this a bit further by piping some text through several languages to give mechanized translation a real test. Time for an experiment I think. We'll start with this week's opening sentences:
"As a writer, I enjoy the weirdness of words. In the English language, and particularly in technical writing, words often mean something distinctly different from their initially apparent meaning."
The automated translation into Spanish gives:
"Como escritor, gozo del weirdness de palabras. En la lengua inglesa, y particularmente en la escritura técnica, las palabras significan a menudo algo distintamente diferente de su significado inicialmente evidente."
Interesting that they don't seem to have a word for "weirdness" in Spanish. Maybe it only applies to English people. Anyway, when translated back into English, it comes out as:
"Like writer, joy of weirdness of words. In the English language, and particularly in the technical writing, the words often mean something differently different from their initially evident meaning."
Wow, pretty close. And I reckon "joy of weirdness" and "differently different" are fine descriptions of most of my blogging activities. However, now we'll take the Spanish version and convert it into Russian, which produces this:
"Как сочинитель, утеха weirdness слов. В английском языке, и определенно в технически сочинительстве, слова часто намереваются что-то по-разному отличающееся от их первоначально очевидная смысль."
Which, when translated back into English comes out as:
"As writer, the joy of weirdness of words. In the English, and definitely in technically the writing, the words frequently will intend that- t. p - different being differed from their initially obvious."
Amazing. That's almost closer to the original. OK, so we got some extraneous letters in there that may affect the next step, but we'll push on regardless and stretch the bounds of reasonableness by taking the Russian version and translating it into Swedish:
"Som författare uttrycker glädjen av weirdnessen av. I engelskaet och bestämt i tekniskt handstilen, uttrycker vanligt ska ämnar den t. p - olikt som skilja sig åt från deras initialt tydligt."
And, finally, back from Swedish into English:
"As authors, the blessing expresses of weirdnessen of. In engelskaet and certain in the technical script, expresses commonly will intends it t. p - different that divide itself at from their initially clear."
I don't know about you, but I reckon that's a pretty remarkable demonstration of automated language translation. OK, so the original was not exactly complicated, but the results are not a million miles away in meaning, even if the grammar could do with some attention. No doubt, if a native speaker of those languages had edited the intervening versions, the results would have been even better. All of these translations were performed by the online translator available at WorldLingo.com.
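If you fancy repeating the experiment, the chaining itself is trivial to automate. The sketch below assumes a `translate(text, source, target)` callable - a placeholder you'd wire up to whatever online service you prefer (I used WorldLingo); nothing here depends on any particular translation API:

```python
from typing import Callable

# Chain a piece of text through a list of languages and back to English,
# mimicking the English -> Spanish -> Russian -> Swedish -> English round
# trip above. `translate` is a hypothetical stand-in for a real service;
# only the (text, source, target) signature is assumed.

def round_trip(text: str,
               langs: list[str],
               translate: Callable[[str, str, str], str]) -> str:
    current, source = text, "en"
    for target in langs + ["en"]:        # finish back in English
        current = translate(current, source, target)
        source = target
    return current

# With a do-nothing "translator" the text survives unchanged - any drift
# you see with a real service is the Chinese Whispers effect at work.
identity = lambda text, source, target: text
assert round_trip("weirdness of words", ["es", "ru", "sv"], identity) \
       == "weirdness of words"
```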
Anyway, talking of words, one that struck me as odd the other day was "stereotype". My first thought was that "stereo", being our shortened word for "stereophonic", means "two" or "double". Especially as we use "mono" for musically-oriented stuff that isn't stereo. So how can "stereotype" have the meaning of "all the same" or "the same as all others of its type"? And deeper exploration reveals that the original meaning of the word "stereotype" is thought to be the name for a duplication made during the printing process (see Gale Cengage Learning). Again, reinforcing the "two" or "double" meaning.
It was only after some research I found that "stereo" is a prefix that comes from the Greek word meaning "solid". Aha! The people who dreamt up the concept of piping music through two different channels obviously meant it to have a more solid sound, so they called it stereophonic (where "phonic" means "acoustics" or "relating to sound"). Maybe they invented "monaural" afterwards for people who could only afford one speaker, or - like me - are deaf in one ear. And it fits with "stereotype" actually meaning "a solid type" or "of the same type".
So while we're talking about stereotypes - the topic I originally intended to discuss when I started this post (which seems like several weeks ago now) - I never considered that I was affected by stereotypes of people or places. OK, so stereotypes are useful as a staple ingredient in stand-up comedy. Let's face it, a joke that starts "There's these three ordinary guys who do ordinary jobs, and have no obvious distinguishing marks, in a boat..." would have some way to go to be funny. And, when you travel a lot, you soon discover that people don't really conform to some stereotype for their country or nationality anyway. What I found really surprising, however, was that airports don't either.
I mean, you'd assume that Schiphol airport in delightfully laid-back Amsterdam would be populated by people smoking various brands of weed, so it would all be a bit disorganized and your luggage would probably go via Outer Mongolia and the Faroe Isles. Meanwhile, Frankfurt airport in extremely efficient and organized Germany would be so well designed and run that you wouldn't even notice you'd been there.
Ha! No chance. Travelling to Redmond via Amsterdam was totally painless. Same terminal for arrival and departure, no security lines, arriving and departing on time, and luggage waiting on the belt after clearing immigration in Seattle. Meanwhile, Frankfurt was three (yes three) security barriers, recheck your passport and re-enter all the information even though you've got a boarding card because the computer is playing up, and nowhere to sit down meantime. And the departure was late.
But worst of all, they insinuated that I own an iPod and they think that my passport is a dangerous implement. I travel regularly, and am relatively organized about the security check thing. My watch, belt, phone, wallet, loose change, and other stuff are in my carry-on. I wear slip-on shoes to save time. And I have my laptop out of the bag ready. So in Frankfurt they don't let you put stuff in the plastic trays yourself - you have to wait for an assistant - and they keep asking if you've got an iPod. Maybe it's a new security scare?
Then I was surprised when the scanner bleeped like crazy as I walked through when the only metal near me was my wedding ring, the zip on my trousers, and the fillings in my teeth. Turned out, after a "pat-down", to be my passport that set it off. OK, so it's got a biochip and half a mile of aerial wire in it, but I've never known that happen before. Meanwhile, the guy kept saying "iPod" until I finally gave in and showed him my non-iPod MP3 player - at which point I was frog-marched off into a separate area while they tested it for a whole range of dangerous stuff. Maybe the X-ray machine had detected the rather potent 70's punk music it contains, or it objected to my comprehensive collection of classic Goon Shows. I suppose I can't blame it for that.
Back in June, when I signed my life away and made my pact with the blue badge, it seemed like a good idea to restrain my exuberance in one or two areas. Wandering aimlessly around the world attending conferences, and pleading with Web site editors to buy my articles, were obvious first steps. And a quick sanitization of my own Web site seemed like a good idea - trying to tempt any unwary dev shop I could find to give me a job was probably not a good idea either. And, in particular, losing the PowerPoint presentation that grumbled about the lack of inspiration and direction in the Web world seemed like a really positive move. Especially as it was robust enough to need words with asterisks in.
Then, at the p&p Summit a few weeks ago, I watched Billy Hollis do a wonderful presentation that echoed so many of my own subversive tendencies. His session, called "Drowning in Complexity", revealed through audience participation how few people actually know about, understand, and use the huge number of new technologies coming out of Microsoft. I mean, I work in a fairly small division and I have only a passing familiarity with many of our products, never mind the mass of stuff coming out of the many other divisions.
But what really made me smile were his comments on the way we seem to be piling more and more stuff on top of a document collaboration mechanism that was already revealing its failings way back in the late 1990's. Yep, what on earth are we still doing playing with Web browsers when we're trying to implement rich, interactive, and accessible user interfaces for business applications? I have absolutely no intention of getting involved here in a discussion of Flash, Chrome, and the like. What I want to know is: where is the next Tim Berners-Lee?
And where is the comprehensive managed security framework that allows you to properly interact with the hardware and the user? Or the mechanism to handle data locally in a sensible way? Or, and here comes that awful cliché, "write once, run anywhere". It seems that the only reason we battle on with Web browsers is because there is no other "write once" that does "run anywhere". Or is there? PDF seems to work fine, and it was developed by a single manufacturer. You can use a nice lightweight (and free) PDF reader such as Foxit, or you can use the full-blown (dare I say "overblown") real thing from Adobe. And there's no shortage of tools you can buy to create and edit PDFs, or do almost anything else you want with them.
So I guess this diatribe has finally reached the point where I can no longer avoid the "S" word. Here at Microsoft, we're hiding it behind the exciting concept of the Rich Internet Application (RIA). But at the heart of it are, of course, XAML and WPF. One of the announcements by Scott Guthrie at the Summit was the aim of continuing convergence of Silverlight and WPF to achieve the "write once, run anywhere" nirvana. WPF that runs on a desktop, on a mobile device, on a tablet, and everywhere else as well; and with full support for ink, stylus, touch, rich media, and interaction with all the bells and whistles of the hardware.
Where I still have an issue is with Silverlight. Yes, it gives us a bridge to the ubiquitous Web browser to maximize reach. But we can do pure WPF and XAML in a host on Windows, just like we do with PDFs. Maybe the fact that WPF and XAML come from Microsoft will mean that it can't get the support from other platform manufacturers and the open source community that HTML did so many years ago. So are we forever condemned to using another browser plug-in? Another layer of "stuff" on top of the already inappropriate "stuff"?
There's an old story about the guy driving through some small village in the middle of nowhere, and he stops to ask a local yokel the way to his planned destination. The yokel replies "If I was going there, I wouldn't start from here". I just hope that we haven't gone so far down the road to that dead-end village that we can't actually get to where we want to be.
I seem to have spent a large proportion of my time this month worrying about health. OK, so a week of that was spent in the US where, every time I turned on the TV, it scared me to death to see all the adverts for drugs to cure the incredible range of illnesses I suppose I should be suffering from. In fact, at one stage, I started making a list of all the amazing drugs I'm supposed to "ask my doctor about", but I figured if I was that ill I'd probably never have time to take them all. They even passed an "assisted suicide" law while I was there, and I can see why they might need it if everyone is so ill all of the time.
And, of course, it rained and hailed most of the week as well. No surprise there. They even said I might be lucky enough to see some snow. Maybe it's all the drugs they've been asking their doctor about that makes snow seem like a fortuitous event. Still, I did get to see the US presidential election while I was there. Or, rather, I got to see the last week of the two year process. It seems like they got 80% turnout. Obviously, unlike Britain where we're lucky to get 40% turnout, they must think that voting will make a difference. Here in the People's Republic of Europe, we're all well aware that, if voting actually achieved anything, they'd make it illegal. I wonder if the result still stands if nobody actually turns up to vote?
Mind you, it does seem surreal in so many ways. You have to watch four different news channels if you want to get a balanced opinion. And one of the morning newsreaders seemed to have a quite noticeable lisp so that I kept hearing about the progreth of the Republicanth and the Democratth. A bit like reading a Terry Pratchett novel. Or maybe it was just the rubbish TV in the hotel. And they didn't have enough voting machines so in some places people were queuing for four hours in the rain to cast their votes. Perhaps it's because there are around 30 questions on the ballot paper where you get to choose the president, some assorted senators and governors, a selection of judges, and decide on a couple of dozen laws you'd like to see passed. Obviously a wonderful example of democracy at work.
Anyway, returning to the original topic of this week's vague expedition into the blogsphere, my concerns over health weren't actually connected to my own metabolic shortcomings. It was all to do with the Designed for Operations project that I've been wandering in and out of for a number of years. The organizers of the 2008 patterns & practices Summit had decided that I was to do a session about health modeling and building manageable applications. In 45 minutes I had to explain to the attendees what health models are, why they are important, and how you use them. Oh, and tell them about Windows Eventing 6.0 and configurable instrumentation helpers while you're at it. And put some jokes in because it's the last session of the day. And make sure you finish early 'cos you'll get a better appraisal. You can see that life is a breeze here at p&p...
So what about health modeling? Do you do it? I've done this kind of session three or four times so far and never managed to get a single person to admit that they do. I'm not convinced that my wild ramblings, furious arm waving, and shower of psychedelically colored PowerPoint graphics (and yes, Dave, they do have pink arrows) ever achieve anything other than confirm to the audience that some strange English guy with a funny accent is about to start juggling, and then probably fall off the stage. Mind you, they were all still there at the end, and only one person fell asleep. I suppose as there was no other session to go to, they had no choice.
What's interesting is trying to persuade people that it's not "all about exception handling". I have one slide that says "I don't care about divide by zero errors; I just want to know about the state changes of each entity". Perhaps it's no wonder that the developers in the audience thought they had been confronted by some insane evangelist of a long-lost technical religion. The previous session, presented by some very clever people from p&p, talked about looking for errors in code as being "wastage", and there I was up next, telling people all about how they should be collecting, monitoring, and displaying errors.
But surely making applications more manageable, reporting health information, and publishing knowledge that helps administrators to verify, fix, and validate operations post deployment is the key to minimizing TCO? An application that tells you when it's likely to fail, tells you what went wrong when it does fail, and provides information to help you fix it, has got to be cheaper and easier to maintain. One thing that came out in the questions afterwards was that, in large corporations, many developers never see the architect, and have no idea what network administrators and operators actually do other than sit round playing cards all day. Unless they all talk to each other, we'll probably never see much progress.
At least they did seem to warm to the topic a little when I showed them the slide with a T-shirt that carried that well-worn slogan "I don't care if it works on your machine; we're not shipping your machine!" After I rambled on a bit about deployment issues and manageable instrumentation, and how you can allow apps to work in different trust levels and how you can expose extra debug information from the instrumentation, they seemed to perk up a bit. I suppose if I achieved nothing other than making them consider using configurable instrumentation helpers, it was all worthwhile.
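The post doesn't include any code, but the "state changes, not exceptions" idea from my slide is easy to sketch. What follows is a hypothetical illustration of the principle only - not the actual p&p instrumentation helpers, and nothing to do with the real Windows Eventing 6.0 API:

```python
from enum import Enum

class Health(Enum):
    GREEN = "healthy"
    YELLOW = "degraded"
    RED = "failed"

class HealthMonitor:
    """Reports only *transitions* between health states - the point of
    the session above: operators care that the order queue went from
    healthy to degraded, not about every individual divide-by-zero."""

    def __init__(self):
        self._states: dict[str, Health] = {}
        self.events: list[tuple[str, Health, Health]] = []

    def report(self, entity: str, state: Health) -> None:
        previous = self._states.get(entity, Health.GREEN)
        if state != previous:                  # suppress non-transitions
            self.events.append((entity, previous, state))
            self._states[entity] = state

monitor = HealthMonitor()
monitor.report("order-queue", Health.GREEN)    # no change: nothing logged
monitor.report("order-queue", Health.YELLOW)   # transition: logged
monitor.report("order-queue", Health.YELLOW)   # repeat: suppressed
print(monitor.events)   # one event only: the GREEN -> YELLOW transition
```

The interesting design choice is what's absent: no stack traces, no exception details - just "this entity changed state", which is what an operations dashboard actually needs.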
I even managed to squeeze in a plug for the Unity dependency injection stuff, thus gaining a few brownie points from the Enterprise Library team. In fact, they were so pleased they gave me a limited edition T-shirt. So my 10,000 mile round trip to Redmond wasn't entirely wasted after all. And, even better, if all goes to plan I'll be sitting on a beach in Madeira drinking cold beer while you've just wasted a whole coffee break reading this...
It may seem like this week's disjointed ramblings follow on from last week's topic, in some lexicographically eerie and unexpected way. I can assure you that this wasn't intentional - the capability to avoid straying off topic during the course of a single short article has so far always eluded me, and I see no reason for that situation to have changed. After all, there's no sign yet that I'm actually getting the hang of this blogging thing. Still, at least I'm not frightened of computers, as are some people in my age group...
No, what actually prompted this week's ramble was something I came across recently while researching obscure software design patterns. Part way through one document, I found this enlightening and interesting line: "Use a common data format to perform translations between two desperate data formats." Now, I'd have to admit that I don't do that much actual programming these days, but I do tend to deal with quite a lot of data. And, thankfully, none has yet come to my attention as being "desperate". I'd go with "wrong", "strange", and even "useless" as descriptions of the data I encounter, but none of it has shown any signs of criminality, or suicidal tendencies.
However, maybe it's just that I don't notice. Perhaps there are some Int32s in there that are really, really keen to escape into 64 bits. Or lumps of serialized binary that just can't bear not being an XML document. Could it be that they actually mean "desperate" as in "criminal"? Is there a gangster-style DataSet roaming through the layers of my applications stealing rows and leaving a trail of empty tables? Or perhaps the author of the page I was looking at actually meant "disparate", not "desperate".
But, coming back to people being frightened of computers, just imagine how much more frightened they'd be if they thought there really was desperate data flying round inside. So, to put such people's minds at rest, I've been touring the Internet and have assembled a list of the Top 10 things that you may not know about computers:
You may like to print out this list and keep it somewhere handy ready for when your son or daughter gives their old computer to your mother. Or for when you become senile. Meanwhile, if you have any useful tips for new or nervous computer users, you can send them to us.
So I was on-site at a dev shop the other day watching three guys fighting with a printer. It seems they needed some particular project report to send to a customer, and the printer was refusing to play ball. I watched them try various combinations of "press-and-hold" buttons on the printer, check the network cable, try printing from another program, try printing from another computer, ping the printer, and reinstall the printer drivers a couple of times.
Just then, the receptionist appeared and asked when the report would be ready. When told about the printer problem, she flipped open the side cover, pulled out the jammed sheet of paper, slammed the cover shut, and within a few seconds the report appeared. Meanwhile, she stood there with that "Are all men useless?" look, which - if you are a man - you will no doubt recognize.
So why is it that men seem to be able to make even the simplest technical jobs seem complicated? Or is it just an IT industry thing? A friend of mine, who definitely falls into the "user-not-programmer" group, sometimes phones and asks me to go over and "look at the computer, and bring your toolbox". Now, he knows that the contents of my toolbox include a large hammer, a selection of bent screwdrivers, and several rusty spanners that never fit anything. But I'm sure he doesn't actually expect me to put a drip pan underneath the machine, clean the spark plugs, and adjust the tappets - yet it seems like he naturally assumes it's some mechanical problem. Even though he knows that a computer is just a box full of odd-shaped bits with wires sticking out.
Likewise when the freezer in our kitchen stopped working last time, my wife made a pretty fair assumption that it wasn't a software glitch - it was down to me using a carving knife to scrape the ice off last time I defrosted it. And when our son phones to report yet another "in need of repair" scenario, he makes no distinction between a broken wardrobe door handle and the mobile phone he dropped into the bath. OK, so I have a hammer that fits the screws on his wardrobe door, but it's pretty unlikely that any of my spanners will fit his mobile phone.
Maybe this isn't a comprehensive survey of behavior, but it does suggest that perhaps it's a "computer programmer" thing. Could it be that we geeks are so involved in the nebulous vagaries of software that we automatically assume the problem is complicated? For example, we use Media Center for our TV, and occasionally it throws a hissy fit so my wife can't watch Coronation Street. I know that the answer is BRST (big red switch time), and it generally sorts itself out after a reboot. Yet I still can't convince my wife that pressing the remote control buttons harder (a mechanical solution) will have any effect other than breaking the remote control.
It's rather like that old story (stop me if you've heard it) about the physicist, chemist, and computer programmer touring Switzerland in a car. Halfway down a winding mountain road, the brakes fail and the car races down the road bouncing off rocks and crash barriers until it finally comes to rest at the bottom of the hill. Shaken but not hurt, the three guys get out and survey the situation. The physicist says "I reckon there's a problem with the braking system. We should measure the available braking force and calculate the resulting pressure on the calipers." The chemist disagrees, saying "No, I'm sure it's a problem with the brake pads. We should analyze the asbestos content, and compare it to industry recommendations." "I've got a better idea", says the computer programmer, "let's push it back to the top of the hill and see if it happens again."
Isn't it a bit worrying that we seem to automatically assume any problem with something more technical than a light bulb is most likely to be a software issue? Does it show how little faith we, who know about this stuff, really have in our art? I mean, when you think about it, mechanical stuff is surely the most likely culprit in most situations. Metal bits go rusty, wear, bend out of shape, and seize up if you forget to oil them. Plastic bits invariably snap, or melt when they get hot. Yet software is just 1s and 0s. I've yet to find any rusty 0s lying in the bottom of a broken computer, or broken 1s jamming the cooling fan.
So here's a suggestion. Next time Microsoft Word gets confused, or Windows Explorer can't find your USB drive, take the lid off your computer and apply some mechanical diagnostics instead. I've got a large hammer you can borrow.
I just discovered last week that I'm supposed to be able to "move quickly and lightly", be "as sleek and agile as a gymnast", and be "fleet of foot". Either that or I'm supposed to be an X-ray and Gamma ray astronomical satellite belonging to the Italian Space Agency. Not much hope of any of these happening, I guess. Probably I shouldn't have decided to search the Web and see what "agile" actually means (and, in case you are wondering, the Italian satellite is called Astrorivelatore Gamma ad Immagini LEggero - see Carlo Gavazzi Space).
All this comes about because one of the projects I'm on at the moment is effectively exploring the techniques for agile document creation. Agile is well-known in the software development arena, and it seems - from what I've seen here at p&p - to work well. OK, so it has its downsides from the documentation engineer's perspective (as I know only too well), but it does appear to produce great results very quickly, with fewer "issues" than the waterfall approach.
So, can documentation be agile as well? A lot of the supposedly related stuff on the Web seems to be concerned mainly with creating dynamic content for Web sites, or creating content that you can display on different devices (such as mobile clients). However, there are a couple of people talking seriously about the issues. You'll find great articles from Scott Ambler (see Agile/Lean Documentation and Can Documentation Be Agile?) and Ron Jeffries (see Where's the Spec, the Big Picture, the Design?).
Much of the focus appears to be on documentation in terms of the requirements of development teams (project documentation, such as requirements and design decisions) and operations (dependencies, system interactions, and troubleshooting). However, they do talk a bit about "customer" or "user" documentation. What's worrying is that all of the articles seem to recommend that writing user documentation should wait until the product stabilizes, be delayed as long as possible, and be written as close as possible to the product's release. Working on end-user documentation early in the project is, they suggest, just a waste of resources.
Yet, if you don't start early and "run with" an agile dev team, how can you hope to have comprehensive user documentation available for the release? In theory, the team will capture all of the information required in models, sketches, and user stories that can be polished up to provide the user documentation. But is that enough? Should the whole design and structure of the documentation be based on developers' conversations and implementation specifications? Telling users that it "works like this" and "can do this" does not always produce useful guidance.
For example, how many times have you opened up the help file for some complex server product and seen sections titled "Configuration", "Administration", and "Reference"? Maybe you want to do something simple, such as add a new account. You find the section in the administration topic called "Adding a new account" and it explains which buttons to click, which textboxes you need to fill in, and how to select an option for the "operations category". But it doesn't tell you why you need to do this stuff. Or what will happen if you do (or don't) select an operations category. What the heck is an "operations category" anyway? Maybe it's a fundamental new feature that got added to the product right at the end of the dev cycle, after the docs for creating a new account were completed, and so there wasn't time to do the fundamental reorganization of the content required to match the new conceptual approach you now need to take when creating an account.
Agile development, they say, means that "working software is more important than comprehensive documentation". But all the articles and references seem to end up talking generally about documentation when what they really mean is "project documentation" - stuff for the team and project managers to use to monitor and manage the project. You continually get the impression that customer-focused documentation is neither a requirement nor important. Yet, without it, isn't there a chance that nobody will actually use the software? Or will never get the maximum benefit from it?
Getting back to the question of whether you actually can do agile documentation for a software project, how might agile work? For a developer, a task might be something like rewriting an interception algorithm to improve performance, or changing part of the UI to make it easier to use. For a writer, a task is usually to produce a document that describes some feature. If that feature is sufficiently compartmentalized, such as describing how to use some part of the UI or explaining how to write some plug-in component, then it is an ideal candidate for agile. You can focus on just that task, and then move on to the next one.
You can pair with another writer, or - even better - pair with the developer to help you understand the feature, and then use your "professional writer's abilities" (hopefully you have some) to get it down in a form that makes sense to users. And, because you don't have the same internal insight into the feature as the developer does, you can ask "user-level" questions and document the answers. Where it gets hard is when you are trying to grasp the "whole concept" view, especially from dev teams that know the code base inside out, instinctively understand the overall concepts of the product, and can't figure why, for example, anybody would find it hard to understand what an "operations category" is.
I guess lots of people have carried out scientific investigations into all this, but my simple and humble take is:
However, while these issues tend to arise when documenting a piece of physical software under development, the whole approach is different for the kind of task I'm currently engaged in. It involves producing guidance for technologies that are already released and are (generally) mainstream; for example, architectural guides and documentation that help you to use existing products. This is where you can actually work with something that is stable and finished (although they'll probably release an updated version just after you publish the guidance). Maybe agile works better here?
What's interesting, and initially prompted this rant, is that I'm discovering how the traditional (i.e. not quite so agile) approach still seems to have some fundamental advantages. Documents I create from scratch to achieve a specific aim, working on my own in a quiet place, seem to come out better than those that get passed around from one person to another with everyone "adding their bit" but not seeing the whole picture. Being able to prioritize work myself means that I can spend time on the important tasks without being worried that there is some more urgent task waiting. And seeing the whole picture means I can write with more accuracy and less duplication of effort.
I guess, in the end, I'm still struggling to see how you can use agile methods for end-user or consumer documentation. Maybe it needs to be some custom blend of techniques, concentrating on the features of agile and traditional methods that work best. Use the agile features of pairing for collating information, flexibility for maximizing effort, and feedback to keep it on target. Combine this with traditional approaches to prioritization, concentration, and an overall plan. Maybe it's a whole new paradigm? Can I trade-mark "tragile documentation"?