Random Disconnected Diatribes of a p&p Documentation Engineer
So here's an interesting approach to merchandising and pricing your products. Imagine, if you will, that you have set up a company to build sports cars, and you reckon you can sell 50 in the first year. Or maybe, closer to home, you have invented some fabulous new piece of software that you're convinced will sweep the world off its feet and find a home on every desktop and server out there (yes, when I was a lot younger I started out like that as well...). Anyway, after a year, you discover that you are only selling half what you budgeted for, and so you're losing money. What do you do?
Well, you could decide to trim your costs to get back on budget. Or you could redouble your efforts in selling the cars/programs/whatever-it-is-you-do to reach your initial target. Or you might instead consider improving the product so it is easier to sell, making up the gap that way. It's a fair bet that one of these approaches - or, more likely, a combination - will solve the embarrassing "hole in the finances" problem...? Or, maybe, instead of any of these, how about just doubling the price?
If that sounds like a daft idea, one that may in fact result in reduced sales, you obviously aren't familiar with the way that Government departments tackle this kind of not-unheard-of problem. After all, they are a monopoly in many areas of public service and provision, such as the people who look after registering land ownership or issuing passports. Here in our little Communist corner of the People's Republic of Europe, the Land Registry and the Passport Office have both announced that, because people can't afford to buy houses or go on holiday any more, these two offices are both running over budget. It seems that the income from the fees they charge has dropped dramatically during the recession.
In fact, one of my wife's friends works in the local Land Registry office and she's regularly been regaling us with tales of how they have nothing to do - and are having to look busy by reorganizing the filing system, re-sharpening their pencils every hour, and moving everything from one room to another and back again. She was worried that she'd be made redundant, but that's not the way they do things. Instead, they just put their price up for each registration. And no messing about with 5% increases here, add another 75% on instead.
And then there's passports. Or, as they have now become, "combined identity and travel documents". Not only have they doubled the price, but it seems that the Government doesn't know enough about us already and now they need to put every detail on some database that they can then sell to make some extra money. Of course, it will be really useful in reducing crime and terrorism, and make us all feel safer because we'll have an extra plastic card that we can wave around to prove who we are. Maybe they'll be able to do all the fancy things you see on these TV forensics programs as well, like identify people from the perfume or aftershave they wear, or by the color of their socks. I watch C.S.I. so I know about these things.
In fact, my favorite one was where they were at some big trucking company office where there was a huge map showing lots of red dots where all the trucks were wandering around the roads of the state. A pal of mine has been involved in this kind of project for a local council, so I know it can be done. Though I seem to remember they only did snow ploughs and grass cutting machines, so it was a bit less exciting. My theory is that they could afford only a limited number of geo-location devices, and they figured they wouldn't be doing much grass cutting in the snow.
But I'm wandering off topic. So, as we watched these trucks on the big screen blinking their way around the map, the C.S.I. people started firing questions. "Can you remove all the ones where the driver never goes on route 17?" Three taps on the keyboard, a satisfying bloopy noise, and some of the red dots flash and then fade away. "Now remove all the ones where the truck was off the road last Tuesday". Click, click, click, bloop, gone. "How about all of the trucks with a dent in the passenger door?" Tap, tap, tap, bloop, gone. "Now remove all the ones where the driver has never had whooping cough". Well, you get the picture... after a few more iterations there's only a single red blinking light left, so that must be the murderer!
But I still can't figure why, when you want to search some incredibly huge database of fingerprints, the system decides to make the most of its processing capabilities by retrieving all the ones that don't match and drawing a picture of them on the screen. Still, at least when it does find the right one, it flashes in a very attractive manner and all of the information you could possibly want scrolls across the screen. I wonder if they use an HTML Marquee tag. And our police will soon be able to do the same after we've all been photographed, iris-scanned, and fingerprinted like criminals for our new "combined identity and travel documents".
That is, of course, if they actually get the stuff into the database with some semblance of approximate accuracy (supposedly most of the Government databases have at least 10% errors, and the driver and vehicle one has nearer to 20% errors). I even read this week that they've still not got a system that's supposed to transfer court records to the police national database working because "some of the information is too complicated". It was supposed to be up and running three years ago, and they've already spent on it, according to my rough calculations, a sum of money equal to the entire tax take from our village for the next 75 years. I can't imagine how us people that live in the real world (and obviously only have to work with simple information) would get away with that - even by doubling the price.
I'm fast coming to the conclusion that you actually need to be quite stupid to use a computer these days. Within a few years those with even a minor modicum of capability, or just a hint of innate common sense, or even mental agility that verges on a level around normal, will find themselves completely excluded from the ever-present, always-connected, online virtualness and technological future of man (and women) kind. We'll be reduced to writing on stuff called "paper" and sending these hand-written messages to others by buying "postage stamps". Or actually talking using ordinary words over a voice connection called a "telephone".
OK, so this partly comes about as a result of my daily battles with software that is either so simplified and "user-friendly" that it's almost impossible to make it do what you want, or which seems intent on trying to hide from you anything that does not involve answering inane questions. Yes, I know I've ranted on about this in probably far too many blog posts in the past. And I appreciate that software should be as intuitive and easy to use as possible to open up our wonderful world of computing technology to the widest possible audience.
But this week I've seen with some horror the effects of our attempts to achieve this in gory close-up detail as I've tried to help some friends get set up with their new computer. And these aren't stupid people trying to do difficult stuff. They have both run retail businesses before they retired from full-time work, and can quite successfully manage things like programming a video recorder, working the latest types of mobile phones, and chewing gum whilst walking. And what's worse, they actually had another friend, who is technically quite competent, help them get their modem and Internet connection set up and working - yet even he failed to complete the job.
So these people looked on in horror as I tried to get them started with a bit of basics on using a GUI, starting and stopping programs, and gentle Web surfing. Questions I never even anticipated, such as "Why are there so many different ways to do the same thing?" and "What are all these little pictures on the bottom for?" (the notification icon area which contains no fewer than 11 icons that do nothing when you click on them). And even "How do I turn it off?" They didn't seem to intuitively grasp that, when you want to stop, you need to click the button that says "Start".
Then there's the free 60-day trial of Office that continually pops up a dialog asking you to register it, sends you (after several clicks) to a page that gives you a product key and tells you to copy it into the Office "register" dialog, but then sends you an email to tell you to do it all over again. Or the Norton program that nags continually until you click the "Fix" button, then does a few tricks, and then starts to "check your system" - at which point everything stops with no indication of what it's doing or how long it will take. And then it starts to "backup your files" to some online repository (no idea where). It says you can carry on working, but reports that the process failed when you close the nag window.
On my first visit, I set up a Windows Live email account for them so they wouldn't have to keep changing their email address when they change ISPs (I've read enough horror stories about the one they are with, though I suspect that all ISPs have a reasonably equal number of these circulating the 'Net). But the next day they told me that they hadn't managed to get into it again because they couldn't figure out what to do when presented with the initial Home page. "Why do I have to wave my arrow thing all over the page to see which bits do something?" they asked. I'd explained that links were usually blue and underlined, so they were completely fooled by links that are black and only go blue and underlined when you move the mouse pointer over them; and doubly fooled by the main login one that lit up blue. Though no doubt, after a while, they'll get used to the strange and often unintuitive conventions we take for granted (like "I can understand Maximize and Minimize, but what does Restore Down mean?").
Still, all of this is just a familiarization process, and they'll soon become proficient and inclusive members of our high-tech community. Though where the fun really started was trying to get their ISP email working so they could receive messages and online bills. My ISP (British Telecom) allows you to specify any email address to receive the "important information about our services" messages. But their ISP insists that you use their own email system, so we had to persuade Windows Email (a.k.a. Outlook Express) to talk to their mail servers. No, you can't just use the Webmail feature because the email setup process (which you have to do yourself) requires that you verify mail server registration using an "important information about our services" email that they send you before you can log in (?).
You kick off this registration process through their own Web site, after logging into it with your "broadband account details" - which are different from your email account details they send you in the welcome pack with your modem (even though you don't yet have an email account). And here we come to the nub of the issue that drove me (and them) crazy. The ISP provides a password to log onto their site in the "welcome" letter. But after endless attempts we couldn't make it work. So we phoned the automated "password reminder" service. The nice electronic lady read out the user name and password - exactly the same as in the welcome letter.
Now I don't know about you, but faced with a password (and this isn't the real one) such as "H6C2W9A3", and being canny enough to guess that - like most systems - it is case-sensitive, what would you type in? My guess is the same as we did over and over again: "H6C2W9A3". What you actually have to type is "h6c2w9a3". Yes, it's case sensitive, but to save confusion they print it in the welcome pack using "letters that look the same as the ones on the keyboard". And the automated password reminder service read it out as "haych for Henry, number six, see for Charlie, number two, double-you for Whisky, number nine, 'ay for Alpha, number three". Not even a suggestion that there might be some lower-case stuff in there.
Now you see what I mean about stupid people? The only people who will be able to use the Internet in a few years' time are those who WRITE EVERYTHING IN CAPITAL LETTERS and don't even realize that there are such things as "small letters". And, after all that, when we finally did get to the "My Account" page, we found the following message (and note the interesting use of grammar): "My Account is currently unavailable. We making some improvements to our customer service and online systems over the weekend".
Probably they're making them more compatible with stupid people...
There's some ethereal guy called "system" wandering around inside my servers stealing stuff. It's a bit like when you were a kid and your parents hid things from you. When my hamster died, my Dad told me it had gone to live on a farm. Of course, when I got a bit older and my Grandmother passed away I realized he was telling fibs because she suffered from hay fever and was afraid of cows, so there's no way she would go and live on a farm. Yet, even though I've now reached the age where people generally feel they can tell me the truth (often, worryingly, to my face), I discover that Windows Server 2008 is still hiding stuff from me.
I suppose it's all related to the poor decisions I made when ordering my servers. Ever since I set them up with Hyper-V, and virtualized all the machines I find I need for my diminutive network here at chez Derbyshire, I've been struggling for disk space. It seems that 300GB is just an aperitif when you get serious about virtualization. OK, so the Server 2008 docs do say you need a minimum of 40GB for a standard installation, but I made the VMs only 30GB. My Windows 2003 Server VM that runs ISA is 30GB and has 22GB free. Though the Windows 2008 VMs that don't have very much at all installed are both showing only 8GB free of 30GB so maybe they were right...
Anyway, although my math skills may have waned since leaving school, I managed to calculate that I could run four 30GB VMs on a 150GB disk (yes, I know you're supposed to put them on separate disks, but my network loading is somewhat less than heavy - none of the machines goes above about 3% CPU utilization). Yet I could never get all of them onto the disk. OK, so Hyper-V does use some extra space for each VM when it's running (about 2GB for a 30GB VM), but I should still have space for four of them. In fact, as one of the VMs is a tightly locked down copy of Windows XP used for browsing and troubleshooting while I'm pretending to be a system administrator, and it's only 10GB, I should have space left to swing several cats round simultaneously. But I could only ever fit the three 30GB VMs onto the disk.
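For what it's worth, the back-of-envelope arithmetic goes like this (a rough sketch only - the VM sizes are the ones quoted above, and the 2GB-per-running-VM overhead is my own approximation, not an official Hyper-V figure):

```python
# Rough disk-space arithmetic for fitting fixed-size VHDs on one drive.
# All sizes in GB, taken from the figures quoted in the text.
vm_sizes = [30, 30, 30, 10]   # three full VMs plus the locked-down XP one
running_overhead = 2          # approximate extra space Hyper-V uses per running VM
disk_capacity = 150

total_needed = sum(size + running_overhead for size in vm_sizes)
print(total_needed)                    # 108
print(total_needed <= disk_capacity)   # True - so they should all fit
```

Which says four VMs should fit with over 40GB to spare. So much for theory.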
I did try reducing the size of the VM with the 20+GB of free space using the Hyper-V tools, but (as they say in several blog posts I found) it's not a trivial exercise. You can convert the VM to a dynamic disk and compact it (it went down to 5.6GB), but when you convert it back to a fixed size disk there is no option to specify the size because it automatically grows to the partition size specified in its boot sector. You need to edit the partition size to reduce the physical disk size, and I didn't fancy playing round with that on a Sunday afternoon. Please, Hyper-V guys, can we have a tool to do this (and better docs that explain why you are wasting good gardening time playing with the existing tools).
So I've put off dealing with this issue for the last few months since setting everything up, but now that we are suddenly experiencing tropical conditions here in Little Olde England I decided I needed to find a way to get this sorted so I could shut down the "spare" server and reduce the searing temperatures in my server cabinet (see last week's ramblings for details). So out comes the calculator: three times 32 (the three VMs on the disk) equals 96. Check the disk properties and it says 133GB used, 14GB free. So where did all the spare disk space go? Maybe it's got some lost clusters, so I schedule a disk check and reboot. After restarting, look in the bootlog.txt file and - lo and behold - around 40GB is described as "in use by the system". What on earth for? Is it hiding secret documents from me? Does it need some spare disk space for playing Mahjong when nobody is watching? Is it full of dead hamsters that never made it to the farm?
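The calculator step above, as a sketch (the numbers are the ones Windows reported; the shortfall is what eventually turned out to be shadow copy storage):

```python
# Where did the space go? All sizes in GB, as reported by Windows.
vhd_files = 3 * 32        # three 30GB VMs, each taking about 32GB on disk
reported_used = 133
reported_free = 14

unaccounted = reported_used - vhd_files
print(unaccounted)        # 37 - close to the ~40GB "in use by the system"
```

About 37GB used by something that doesn't show up as files. Dead hamsters, presumably.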
So I did the usual, check the properties of each folder and add the total sizes together. 96GB. Then turn on "view operating system files" and do the same. Still 96GB. See what I mean? Most things made of metal expand when they get hot, so my disk drives should be getting bigger not smaller. I even considered looking underneath to see if there was a pool of congealed clusters that had leaked out of the bottom (OK, so not really). But then - "Aha!" - I remember seeing the occasional error message in Event Log about something to do with "Not sufficient disk space to create shadow copies". One of those messages that I've conveniently been ignoring.
So after furkling through the properties of the disk, I find that Windows has allocated 41GB to shadow copies. I suppose the fact that you can see this in the Shadow Copies tab of the Properties dialog means that it's not technically "hidden", but where is the file? You can't see it in Windows Explorer, even with "show operating system files" and "show hidden files" turned on. And how do you stop it happening? After reading some online docs and blog posts, it became clear that the shadow copies are there because the disk has a share set up, and it allows connected users to get at the previous deleted or updated data that was on the disk. I have the disk with the VMs on shared at admin level to be able to do backups, so I can't really just turn off sharing. And according to the Shadow Copies dialog, they are disabled on the disk anyway.
I had a go with the vssadmin command line tool that is part of Server 2008, but that said it couldn't find any shadow copies (that system guy obviously hides stuff from Windows as well). It seems that vssadmin can only delete shadow copies you create manually. And to make it worse, the more I tried enabling and disabling shadow copies, the larger the shadow copy got. After ten minutes it had grown to 55GB! In the end, more by luck than any administrative capability on my part, I found that by clicking the Schedule button and deleting the two existing scheduled shadow copy tasks, and setting the size to 300MB (the minimum you can specify), the shadow copies magically just disappeared. Suddenly I've got tons of spare disk space on all of my drives!
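For anyone fighting the same battle: I've since read that there is a vssadmin verb that does roughly what my Schedule-button-and-300MB dance achieved, by capping the shadow storage area for a volume directly. This is a sketch from the docs rather than something I've tried myself - the drive letter is just an example, and you need an elevated command prompt:

```
vssadmin list shadowstorage
vssadmin resize shadowstorage /For=D: /On=D: /MaxSize=320MB
```

Shrinking MaxSize below what's in use is supposed to force the existing shadow copies on that volume to be discarded, which is presumably what the dialog was doing behind the scenes.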
Of course, now I can't sleep at night worrying that shadow copies aren't occurring for my shares, but seeing as how: a) I didn't know they were there before, b) I've never had reason to use them, and c) I can't see why I'd need to get a previous copy of a VM when they are all exported and backed up in multiple places regularly (and don't actually change that much anyway), maybe I'm just being as paranoid as usual.
I suppose I'll find out one day when the sky does fall in, and I can't get to the Internet to update my blog. You'll probably be able to tell when this happens because the post will suddenly end in mid
OK, OK, so one month I'm complaining that our little green paradise island seems to have drifted north into the Arctic, and now I'm grumbling about the heat. Obviously global warming is more than just a fad, as we've been subjected here in England to temperatures hovering around 90 degrees in real money for the last week or so. Other than the gruesome sight of pale-skinned Englishmen in shorts (me included), it's having some rather dramatic effects on my technology installations. I'm becoming seriously concerned that my hard disks will turn into floppy ones, and my batteries will just chicken out in the heat.
Oh dear, bad puns and I only just got going. But it does seem like the newer the technology, the less capable it is of operating in temperatures that many places in the world would call normal. There's plenty of countries that get regular spells of weather well into the 90s, as I discovered when we went to a wedding in Cyprus a few years back. How on earth do they cope? I've got extra fans running 24/7 in the computer cabinet and in the office trying to keep everything going. I'm probably using 95% of my not inconsiderable weekly electricity consumption keeping kit that only uses 1% of it to actually do stuff from evaporating (the other 4% is the TV, obviously).
Maybe the trouble is that, here in England where we have a "temperate" climate, we're not really up to speed with modern technology such as air conditioning. Yes, they seem to put it in every car these days, but I only know one person who has it in their house, and that's in the conservatory where - on a hot day - it battles vainly to get the temperature below 80 degrees. I briefly considered running an extension lead out to my car and sitting in it to work, but that doesn't help with the servers and power supplies.
I've already had to shut down the NAS because it's sending me an email every five minutes saying it's getting a bit too warm. And I've shut down the backup domain controller to try and cut down the heat generation (though it's supposed to be one of those environmentally friendly boxes that will run on just a sniff of electricity). And the battery in the UPS in the office did its usual summer trick of bulging out at the sides, and throwing in the towel as soon as I powered up a desktop and a couple of decent sized monitors. It's no wonder UPS units are so cheap to buy. They're like razors (or inkjet printers) - you end up spending ten times more than a new one would have cost on replacement batteries. Even though I cut a hole in the side and nailed a large fan onto it.
Probably I'm going to have to bite the bullet and buy a couple of those portable air conditioning units so my high-tech kit can stay cool while we all melt here in the sun. In fact, my wife reckons I've caught swine 'flu because she finds me sitting here at the keyboard sweating like a pig when she sails in from her nice cool workplace in the evening. At least the heat has killed most things in the garden (including the lawn) so that's one job I've escaped from.
By the way, in case you didn't realize, the title this week comes from a rather old BBC TV program. Any similarity between the actor who played Gunner 'Lofty' and this author is vigorously denied.
Listening to the radio one day this week, I heard somebody describe golf as being "a series of tragedies obscured by the occasional miracle". It struck me that maybe what I do every day is very similar. If, as a writer, you measured success as a ratio between the number of words you write and the number that actually get published, you'd probably decide that professional dog walker or wringer-out for a one-armed window cleaner was a far more rewarding employment prospect.
Not being a golfer myself (see "INAG"), I'd never heard that quote before. However, it is, it seems, quite well known - I found it, and several similar ones, on various golf Web sites. Including a couple that made me think about how closely the challenges of golf seem to mirror those of my working life. For example, "Achieving a certain level of success in golf is only important if you can finally enjoy the level you've reached after you've reached it." How do you know when you've reached it? Or can you actually do better next time? Or maybe you should just assume that you're doing the best you can on every project? That seems like a recipe for indolence; surely you can always get better at what you do? But if you keep practicing more and more, will you just end up creating more unused output and reduce your written/published ratio?
Or how about "Golf is the only sport where your most feared opponent is you"? I find that writing tends to be a one-person activity, where I can concentrate without the distractions of the outside world penetrating the whirling vortex of half-formed thoughts and wild abstractions that are supposed to be elements of a carefully planned and managed process for distilling knowledge and information from the ether and converting it into binary data. I always assumed that professional developers tended to have the same issues, so I have no idea how they can do paired programming. Imagine two writers sat side by side arguing about which words to put where, and if that should be a semi-colon or a comma, while trying to write an article.
I've always maintained that the stuff I create should, by the time it actually pops up in the Inbox of my editor and reviewers, be complete, readable, as free of spelling errors and bad grammar as possible (unlike the subject of one of my previous posts), and - of course - technically accurate. OK, so you can't always guarantee all of these factors, but making people read and review (and, more to the point, edit) stuff that is half-baked, full of spelling and grammar faults, and generally not in any kind of shape for its intended use just seems to be unprofessional. It also, I guess, tends to decrease the chance of publication and reduce your written/published ratio.
Ah, you say, but surely your approach isn't agile? Better to throw it together and then gradually refactor the content, modify the unsuccessful sentences, and hone the individual phrases to perfection; whilst continually testing the content through regular reviews, and comparison with reality (unless, I suppose, you are writing a fantasy or science fiction novel). Should "your most feared opponent" be the editor? I'm not sure. When it comes back from review with comments such as "This is rubbish - it doesn't work like that at all" or "Nice try, but it would be better if it described what we're actually building" you probably tend to sense a shift in most-feared-opponentness.
I suppose I should admit that I once tried writing fiction (on purpose), but every page turned out to be some vaguely familiar combination of the styles of my favorite authors. Even the plot was probably similar to something already published. Thankfully I gave up after one chapter, and abandoned any plans to write the next block-selling best-buster. And I couldn't think of a decent title for it anyway. Written/published ratio zero, and a good reason to stick with my proper job of writing technical guidance for stuff that is real. Or as real as a disk file full of ones and zeros can be.
And while we're talking about jobs, they have a great advert on one of our local radio stations at the moment. I've never figured out what they're trying to sell, but it does offer the following useful advice: "If you work on the checkout in a hand-grenade shop, it's probably best not to ask customers for their PIN". However, in the end, I suspect that none of the quotes can beat Terry Pratchett's definition of the strains of the authoring process: "Writing is easy. You only need to stare at a piece of blank paper until your forehead bleeds".
I've been trying something new and exciting this week. OK, so it's perhaps not as exciting as bungee jumping or white-water rafting, but it's certainly something I've not tried before. I'm experimenting to see if I can use Team Foundation Server (TFS) to monitor and control the documentation work for my current project. As usual, the dev guys are using agile development methods, and they seem to live and die by what TFS tells them, so it must be a good idea. Maybe. But I suppose there's no room in today's fast-moving, high-flying, dynamic, and results-oriented environment for my usual lackadaisical approach of just doing it when it seems to be the best time, and getting it finished before they toss the software out of the door and into the arms of the baying public.
So, dive into the list of work items for the current iteration and see if I can make some wild guesses at how long the documentation work will take for each one. Ah, here's a nice easy one: fix some obscure bug that hardly anybody was aware of. That's a quarter of an hour to add a note about the fix to the docs. But it seems like I can only enter whole hours, so I suppose I'll have to do it slowly. And here's another no-impact one: refactor the code for a specific area of the product. And these three are all test tasks, so I don't need to document them either. Wow, this is easy. It looks like I'll only have three hours work to do in the next fortnight. Plenty of time to catch up on the gardening and DIY jobs I've managed to postpone for the last year or three.
Next one - completely change the way that the configuration system works. Hmmm, that's more difficult. How many places in the 900 pages will that have an impact? And how long will it take to update them all? Oh well, take a wild guess at four days. And the next one is six completely new methods added to a class. That's at least another three days to discover how they work, what they do, and the best way to use them. And write some test code, and then document them. After a few more hours of stabbing in the dark and whistling in the wind, I can add up the total. Twenty-three days. That should be interesting, because the iteration is only two weeks. Looks like I need to write faster...
Now skip forward to Friday, and go back to TFS to mark up my completions. How do I know if a task is done or not? Will the code change again? Will changes elsewhere impact the new updates to the docs? When will test complete their pass on the code so I can be sure it's actually stable? And do I have to wait for test to review my docs? Or wait for the nice lady who does the English edit to make sure I spelt everything right and didn't include any offending letters (see Oending Letters). I guess I've finished my updates, so I can mark them as "Done". But does that mean I need to add a task for review, test, and edit for my updates? Surely they won't want to work through the doc until it contains all of the updates for that particular section of the product?
So this isn't as easy as it may have seemed at the beginning. In fact, I've rambled on in the past about trying to do agile with guidance development (see Tragile Documentation). I can see that I'll be annoying people by asking them to test and edit the same doc several times as I make additional changes during the many upcoming iterations. Perhaps I should just leave them all as "In Progress"? But that will surely mess up the velocity numbers for the iteration. And they'll probably think I went off on vacation for the two weeks. Not that the sound of the waterfall in my garden pond and the ice cream van that always seems to go past during the daily stand-up call won't tend to reinforce this assumption.
Still, it will be interesting to see how it all pans out. Or whether I spend more time fighting with my VPN connection and TFS than actually writing stuff...
Isn't it funny how - after a while - you tend not to notice, or just ignore, the annoying habits of your closest colleagues? As I work from home, some 5,000 miles away from my next closest colleagues, the closest colleague I have is Microsoft Vista (yes, I do lead a sad and lonely life doing my remote documentation engineering thing). I mean, I've accepted that sometimes when I open a folder in Windows Explorer it will decide to show me a completely different view of the contents from the usual "Details" view I expect. I suppose it's my own fault because I happen to have a few images in there as well as Word documents, and Vista thinks it's being really helpful by telling me how I rated each one rather than stuff I want to know - like the date it was last modified.
But worst of all is the search feature, or perhaps I should call it an unfeature. In XP, I could select a folder and enter a partial file name then watch as it wandered through the subtree (which, with my terrible memory of where I put stuff was often "C:\"). It told me where it was looking, and I knew it was just looking at filenames. If I only wanted to search the contents of files, I could tell it to do that. In Vista, I type something in the search box and get a warning that the folder isn't indexed, then a slow progress bar. I've no idea where it's looking, or what it's looking for. And neither does it by the look of the results sometimes.
It seems to decide by itself whether to look inside files (so when I search for some part of a filename I get a ton of hits for files that happen to contain that text), yet it seems incapable of finding the matching files by name. I have to either wait till it's finished or open the Search Tools dialog before I can get at the advanced options to tell it what kind of search I want and whether I want all subfolders to be included. And when I do look for something in the contents of the files, I get either 1,000 hits or none at all. In fact, I've actually resorted to using TextPad to search for strings in text files recently. And after all that, I have to go clicking around the folder tree (while trying to cope with the contents oscillating wildly from side to side as I open each one) to get back to where I was, because it helpfully moved the folder view to the very end of my long and complicated list of folders.
I can see that the Vista approach may be easier and quicker for simple searches, but I can't help feeling that it often just gets in the way by trying to be too clever and "usable" (something I've grumbled about before - see Easter Bonnets and Adverse Automation). Maybe some of the problem is that I'm continually creating and deleting folders and moving stuff around as I gracefully slither between projects and my other daily tasks. I've tried setting default folder and search options, but I guess Vista can't cope with my indecisiveness. Perhaps I should just keep everything in one folder called "Stuff". But then I'd need a really good search engine...
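For what it's worth, the plain filename-only search I miss from XP takes only a few lines to reproduce - here's a rough sketch in Python, assuming all you want is case-insensitive substring matches on names under a root folder:

```python
import os

def find_by_name(root, fragment):
    """Walk the folder tree under `root` and yield the full path of
    every file whose name contains `fragment` (case-insensitive).
    Filenames only -- it never opens a file to look at its contents."""
    fragment = fragment.lower()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if fragment in name.lower():
                yield os.path.join(dirpath, name)
```

Point it at "C:\" and it does roughly what XP's search did - minus the little torch animation, of course.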
Probably a lot of this ranting comes about because of the totally wasted day I spent trying to get some updated software to run on my machine. The software in question uses a template and some DLLs that get loaded into Word, some other DLLs that do magic conversion things with the documents, and some PowerShell scripts that drive the whole caboodle. So after the installation, PowerShell refused to have anything to do with my scripts, even though I configured the appropriate execution policy setting. Finally I managed to persuade it to load the scripts, but all it would do was echo back the command line. In the end, I copied the scripts from the new folder into the existing one where the previous version was located, and the scripts ran! How do you figure that? Is there some magic setting for folder permissions that I have yet to discover?
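For anyone facing the same fight, these are the kind of incantations involved - a sketch only, since the details vary by PowerShell version (and Unblock-File only turned up in later releases; MyScript.ps1 is just a placeholder name):

```powershell
# See what the current policy is, then relax it so local scripts run
Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned

# Files downloaded from the Web can carry a hidden Zone.Identifier
# stream that marks them as "blocked", which may explain why copies
# in one folder run while the originals don't. Later PowerShell
# versions can clear it directly:
Unblock-File .\MyScript.ps1
```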
And then I had to run installutil to get the script to find some cmdlets in another assembly, and delay sign a couple of other assemblies that barfed with Vista's security model. After about 6 hours work, it looked like it was all sorted - until I went back into Word to discover that the assemblies it requires now produced a load error. In the end, the only working setup I could achieve was by uninstalling and going back to the previous version. And people wonder why I tend to shy away from upgrading stuff...
At least there is some good news - the latest updates to Hyper-V I installed that morning included a new version of the integration components, and (at least at the moment) I've still got a proper mouse pointer in my virtual XP machine (see Cursory Distractions). So I guess the whole day wasn't wasted after all.
Footnote: Actually it was - my mouse pointer has just gone back to a little black dot...
It's a good thing that Tim Berners-Lee is still alive or he'd probably be turning in his grave. I was hoping to find that my latest exploration of Web-based Interfaces for Kommunicating Ideas would lead me to some Wonderfully Intuitive Kit Intended for sharing knowledge and collecting feedback, but sadly I'm Wistfully Imagining Knowledge Instruments that should have been around today - and aren't. And, yes, I'm talking about wikis.
As an aside, you've probably seen those word ladder puzzles in the Sunday papers where you have to turn one word into another by adding one letter at a time. Seeing as how I talked about Wii last time, and wiki this week, maybe I can continue the pattern. Any suggestions of a five-letter topic that contains the letters w, i, k, and i are welcome...
Anyway, coming back to the original topic, it could all have been so different. Instead of the awful and highly limited format capabilities, and the need to spend an inordinate amount of time creating conversion tools, we could have had a ready-built, comprehensive, easy-to-use, and amazingly less grotty technology than wikis if we hadn't let some guy get in the way some time back in the mid-nineties. Mind you, it's probably not wholly fair to blame it all on Marc Andreessen and Netscape; Microsoft followed the same path and is, I guess, equally guilty. I suppose the drive for worldwide adoption, the opening of the Web to the unwashed public, and commercial factors in general were the real reason behind it all.
You see, when our Tim and his team invented HTML and the associated server-side stuff, the intention was that it would be a collaboration and information sharing mechanism. User agents (what we now call browsers) would fetch content from a server if the user had read permission and display it as a structured document, using markup to indicate the structure and type of content it contained. Elements such as "p" (paragraph), "strong", "ul" (unordered list), "ol" (ordered list), "dl" (definition list), and the "h(x)" (heading) elements would indicate the type of content contained, not the way it should be displayed.
But, more than that, the user agent would allow the user to edit the content and then, providing they had write permission on the originating server, update the original document with their revisions and comments using elements such as "ins" and "del". However, as we've seen, the elements in HTML have come to represent the displayed format rather than the context of the content, and browsers are resolutely read-only these days. Of course, more recent mechanisms such as CSS and XML transformations allow us to move back to the concept of markup indicating context rather than display attributes. But if you want to see what it should have been like, download and install the W3C reference browser Amaya and see how it allows you to edit the pages it displays.
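A fragment of that original, context-first flavour of HTML might look something like this (a made-up example, with the "ins" and "del" elements recording a reviewer's edits rather than dictating any particular appearance):

```html
<h2>Review notes</h2>
<p>The converter <del>requires</del><ins>no longer requires</ins>
   a separate cleanup pass.</p>
<dl>
  <dt>strong</dt>
  <dd>Marks content as important, not merely bold.</dd>
  <dt>ins / del</dt>
  <dd>Record insertions and deletions made by a collaborator.</dd>
</dl>
```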
So, instead we had to invent a new way to do collaboration, and wiki caught on. OK, it's probably fine for quickly knocking up a few pages to allow users to edit, review, and provide feedback. But it's seriously broken compared to doing anything sensible like you can with a proper XML-based format such as XHTML. It's a kind of "Web for dummies" approach, where the concept of nesting and formatting content consists of a few weird marker characters that easily get confused with the content - even to the extent that you need to "escape" things like variable names that start with an underscore.
I guess this railing against technology comes about because I just spent two days building a tool to convert our formatted Word docs (which use our own DocTools kit to generate a variety of outputs) into a suitable format for Codeplex wiki pages. I even had to build another "kludge" tool to add to our growing collection - it's the only way I can find to do the final content tweaks. All I can say is, whoever dreamed up this format never tried to do stuff like this with complicated multi-topic Word source documents...
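The core of such a converter is depressingly simple, which makes the days of kludging all the more galling. Here's a stripped-down sketch in Python - not our actual DocTools converter, and the wiki syntax on the right-hand side is illustrative only, not any particular engine's dialect:

```python
import re

# Toy substitution rules: an HTML-ish source pattern on the left,
# a wiki-style replacement on the right. Real converters need far
# more rules, plus the inevitable escaping kludges for markup
# characters that appear literally in the content.
RULES = [
    (re.compile(r"<h1>(.*?)</h1>"), r"! \1"),
    (re.compile(r"<h2>(.*?)</h2>"), r"!! \1"),
    (re.compile(r"<strong>(.*?)</strong>"), r"*\1*"),
    (re.compile(r"<em>(.*?)</em>"), r"_\1_"),
    (re.compile(r"<p>(.*?)</p>"), "\\1\n"),
]

def to_wiki(html):
    """Apply each substitution rule in turn to produce wiki text."""
    for pattern, replacement in RULES:
        html = pattern.sub(replacement, html)
    return html
```

The trouble, as ever, is everything the simple version leaves out: nested lists, tables, images, and content that happens to contain the marker characters themselves.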
And, worse still, you have to add each page to the wiki project site individually and then attach the image files to it. OK, so the tool does give you a TOC and text files you can copy, but it sure would be nice to have a way to bulk upload stuff. My current test document set has 59 pages, so I can see I'll be spending a whole day clicking Save, Attach, and Edit.
But maybe that has some advantages. I'll have less time to spend inflicting Wild and Incoherent Komplaints and Insults on the general public in my blog...