Random Disconnected Diatribes of a p&p Documentation Engineer
OK, OK, so one month I'm complaining that our little green paradise island seems to have drifted north into the Arctic, and now I'm grumbling about the heat. Obviously global warming is more than just a fad, as we've been subjected here in England to temperatures hovering around 90 degrees in real money for the last week or so. Other than the gruesome sight of pale-skinned Englishmen in shorts (me included), it's having some rather dramatic effects on my technology installations. I'm becoming seriously concerned that my hard disks will turn into floppy ones, and my batteries will just chicken out in the heat.
Oh dear, bad puns and I only just got going. But it does seem like the newer the technology, the less capable it is of operating in temperatures that many places in the world would call normal. There are plenty of countries that get regular spells of weather well into the 90s, as I discovered when we went to a wedding in Cyprus a few years back. How on earth do they cope? I've got extra fans running 24/7 in the computer cabinet and in the office trying to keep everything going. I'm probably using 95% of my not inconsiderable weekly electricity consumption keeping kit (which only uses 1% of it to actually do stuff) from evaporating; the other 4% is the TV, obviously.
Maybe the trouble is that, here in England where we have a "temperate" climate, we're not really up to speed with modern technology such as air conditioning. Yes, they seem to put it in every car these days, but I only know one person who has it in their house, and that's in the conservatory where - on a hot day - it battles vainly to get the temperature below 80 degrees. I briefly considered running an extension lead out to my car and sitting in it to work, but that doesn't help with the servers and power supplies.
I've already had to shut down the NAS because it's sending me an email every five minutes saying it's getting a bit too warm. And I've shut down the backup domain controller to try and cut down the heat generation (though it's supposed to be one of those environmentally friendly boxes that will run on just a sniff of electricity). And the battery in the UPS in the office did its usual summer trick of bulging out at the sides and throwing in the towel as soon as I powered up a desktop and a couple of decent sized monitors; and that's even though I cut a hole in the side and nailed a large fan onto it. It's no wonder UPSs are so cheap to buy. They're like razors (or inkjet printers): you end up spending ten times what a new one would have cost on replacement batteries.
I'm probably going to have to bite the bullet and buy a couple of those portable air conditioning units so my high-tech kit can stay cool while we all melt here in the sun. In fact, my wife reckons I've caught swine 'flu because she finds me sitting here at the keyboard sweating like a pig when she sails in from her nice cool workplace in the evening. At least the heat has killed most things in the garden (including the lawn), so that's one job I've escaped from.
By the way, in case you didn't realize, the title this week comes from a rather old BBC TV program. Any similarity between the actor who played Gunner 'Lofty' and this author is vigorously denied.
I seem to have spent a large proportion of my time this month worrying about health. OK, so a week of that was spent in the US where, every time I turned on the TV, it scared me to death to see all the adverts for drugs to cure the incredible range of illnesses I suppose I should be suffering from. In fact, at one stage, I started making a list of all the amazing drugs I'm supposed to "ask my doctor about", but I figured if I was that ill I'd probably never have time to take them all. They even passed an "assisted suicide" law while I was there, and I can see why they might need it if everyone is so ill all of the time.
And, of course, it rained and hailed most of the week as well. No surprise there. They even said I might be lucky enough to see some snow. Maybe it's all the drugs they've been asking their doctor about that makes snow seem like a fortunate event. Still, I did get to see the US presidential election while I was there. Or, rather, I got to see the last week of the two-year process. It seems like they got 80% turnout. Obviously, unlike in Britain where we're lucky to get 40%, they must think that voting will make a difference. Here in the People's Republic of Europe, we're all well aware that, if voting actually achieved anything, they'd make it illegal. I wonder if the result still stands if nobody actually turns up to vote?
Mind you, it does seem surreal in so many ways. You have to watch four different news channels if you want to get a balanced opinion. And one of the morning newsreaders seemed to have a quite noticeable lisp so that I kept hearing about the progreth of the Republicanth and the Democratth. A bit like reading a Terry Pratchett novel. Or maybe it was just the rubbish TV in the hotel. And they didn't have enough voting machines so in some places people were queuing for four hours in the rain to cast their votes. Perhaps it's because there are around 30 questions on the ballot paper where you get to choose the president, some assorted senators and governors, a selection of judges, and decide on a couple of dozen laws you'd like to see passed. Obviously a wonderful example of democracy at work.
Anyway, returning to the original topic of this week's vague expedition into the blogosphere, my concerns over health weren't actually connected to my own metabolic shortcomings. It was all to do with the Designed for Operations project that I've been wandering in and out of for a number of years. The organizers of the 2008 patterns & practices Summit had decided that I was to do a session about health modeling and building manageable applications. In 45 minutes I had to explain to the attendees what health models are, why they are important, and how you use them. Oh, and tell them about Windows Eventing 6.0 and configurable instrumentation helpers while you're at it. And put some jokes in because it's the last session of the day. And make sure you finish early 'cos you'll get a better appraisal. You can see that life is a breeze here at p&p...
So what about health modeling? Do you do it? I've done this kind of session three or four times so far and never managed to get a single person to admit that they do. I'm not convinced that my wild ramblings, furious arm waving, and shower of psychedelically colored PowerPoint graphics (and yes, Dave, they do have pink arrows) ever achieve anything other than confirm to the audience that some strange English guy with a funny accent is about to start juggling, and then probably fall off the stage. Mind you, they were all still there at the end, and only one person fell asleep. I suppose as there was no other session to go to, they had no choice.
What's interesting is trying to persuade people that it's not "all about exception handling". I have one slide that says "I don't care about divide by zero errors; I just want to know about the state changes of each entity". Perhaps it's no wonder that the developers in the audience thought they had been confronted by some insane evangelist of a long-lost technical religion. The previous session, presented by some very clever people from p&p, talked about looking for errors in code as being "wastage", and there I was, up next, telling people all about how they should be collecting, monitoring, and displaying errors.
But surely making applications more manageable, reporting health information, and publishing knowledge that helps administrators to verify, fix, and validate operations post deployment is the key to minimizing TCO? An application that tells you when it's likely to fail, tells you what went wrong when it does fail, and provides information to help you fix it, has got to be cheaper and easier to maintain. One thing that came out in the questions afterwards was that, in large corporations, many developers never see the architect, and have no idea what network administrators and operators actually do other than sit round playing cards all day. Unless they all talk to each other, we'll probably never see much progress.
At least they did seem to warm to the topic a little when I showed them the slide with a T-shirt that carried that well-worn slogan "I don't care if it works on your machine; we're not shipping your machine!" After I rambled on a bit about deployment issues and manageable instrumentation, and how you can allow apps to work in different trust levels and how you can expose extra debug information from the instrumentation, they seemed to perk up a bit. I suppose if I achieved nothing other than making them consider using configurable instrumentation helpers, it was all worthwhile.
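In case the arm waving doesn't translate well to print, here's roughly what I mean, as a minimal C# sketch. To be clear, this is my own invention for illustration, not the actual p&p configurable instrumentation helpers: the class, the HealthVerbose config setting, and the event source name are all made up. The point is simply that the application reports entity state changes rather than just exceptions, and that operators can turn up the level of detail from configuration without redeploying anything.

```csharp
using System;
using System.Configuration;
using System.Diagnostics;

// Hypothetical health states for an application entity.
public enum HealthState { Green, Yellow, Red }

public static class HealthReporter
{
    // Read the level of detail from configuration, so operations staff
    // can turn verbose output on or off without redeploying the app.
    private static readonly bool Verbose =
        ConfigurationManager.AppSettings["HealthVerbose"] == "true";

    public static void ReportStateChange(string entity, HealthState state, string detail)
    {
        EventLogEntryType type =
            state == HealthState.Red ? EventLogEntryType.Error :
            state == HealthState.Yellow ? EventLogEntryType.Warning :
            EventLogEntryType.Information;

        string message = entity + " is now " + state;
        if (Verbose)
        {
            message += Environment.NewLine + detail;
        }

        // A real helper would register its own event source at install time;
        // writing with an unregistered source needs admin rights the first time.
        EventLog.WriteEntry("MyHealthDemo", message, type);
    }
}
```

So instead of swallowing a problem, the app calls something like HealthReporter.ReportStateChange("OrderQueue", HealthState.Yellow, "Queue length above threshold") and lets the monitoring tools do the worrying.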
I even managed to squeeze in a plug for the Unity dependency injection stuff, thus gaining a few brownie points from the Enterprise Library team. In fact, they were so pleased they gave me a limited edition T-shirt. So my 10,000 mile round trip to Redmond wasn't entirely wasted after all. And, even better, if all goes to plan I'll be sitting on a beach in Madeira drinking cold beer while you've just wasted a whole coffee break reading this...
Technical writers, like wot I am, tend to be relatively docile and unflappable creatures. It comes with the territory (and, often, with age). Especially when most of your life is spent trying to document something that the dev team are still building, and particularly so if that dev team is resolutely following the agile path. You know that the features of the software and its capabilities will morph into something completely different just after you get the docs on that bit finished; and they'll discover the bugs, gotchas, and necessary workarounds only after you've sent your finely crafted text to edit.
And you get used to the regular cycle of "What on earth have they added now?", "What is this supposed to do?", "Can anyone tell me how it works?", and "OK, I see, that must be it". Usually it involves several hours searching through source code, playing with examples, trying to understand the unit tests, and - as a last resort - phoning a friend. But, in most cases, you figure out what's going on, take a wild stab at how customers will use the feature, and then describe it in sufficient detail and in an organized way so that they can understand it.
Of course, the proper way to document it is to look for the scenario or use case, and describe this using one of the well-known formats such as "Scenario", "Problem", "Solution", and "Example". You might even go the whole hog and include "Forces", "Liabilities", and "Considerations" - much like you see software design patterns described. Though, to me at least, this often seems like overkill. I guess we've all seen documents that describe the scenario as "The user wants to retrieve data from a database that has poor response times", the problem as "The database has poor response times and the user wants to retrieve data", and the forces include "A database that has poor response times may impact operation of the application". And then the solution says "Use asynchronous access, stupid!" (though the "stupid" bit is not usually included, just implied).
Worse still, writing these types of documents takes a lot of effort, especially when you try not to make the reader sound stupid, and the result may even put people off compared to a more straightforward style that just explains the reason for a feature and how to use it. Here at p&p, we tend to document these as "Key Scenarios" - with just a short description of the problem and the solution, followed by an example.
However, in an attempt to provide more approachable documentation that is easier to read and comprehend, I've tended to wander away from the strict "Key Scenarios" approach. Instead, I've favored breaking down software into features and sub-features, and describing each one starting with an overview of what it does, followed by a summary of the capabilities and links to more detailed topics for each part that describe how to use it. This approach also has the advantage that it makes coping with the constant churn of agile-developed software much easier. The overview usually doesn't change (much); you can modify the structure, order, and contents of each subtopic; and you can slot new subtopics in without breaking the whole feature section. Only when there are extreme changes in the software (which are not unknown, of course) do you need to rebuild a whole section.
OK, so the whole idea of agile is that you discover all this through constant contact with the other members of the team, and have easy access to ask questions and get explanations. But it often isn't that simple. For example, during testing of the way that Unity resolves objects, I found I was getting an exception if there was no named mapping in the container for the abstract class or interface I was trying to resolve. The answer was "Yes, Unity will throw an exception when there is no mapping for the type". But, as you probably guessed, this isn't a limitation of Unity (as I, employing my usual ignorance and naivety, assumed it was). Unity will try its best to create an instance of any type you ask for, whether there's a mapping or not. But it throws an exception because, with no mapping, it ends up trying to instantiate an interface or abstract class. Needless to say, furious re-editing of the relevant doc sections ensued.
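For anyone who'd like to see the gotcha in a few lines of code, here's a minimal sketch (a made-up example, not the team's code; I'm assuming the Microsoft.Practices.Unity namespace, and the ILogger and ConsoleLogger types are invented purely for illustration). With no mapping in the container, Resolve ends up trying to construct the interface itself and throws a ResolutionFailedException; register a mapping and the same call works fine.

```csharp
using System;
using Microsoft.Practices.Unity;

public interface ILogger
{
    void Write(string message);
}

public class ConsoleLogger : ILogger
{
    public void Write(string message) { Console.WriteLine(message); }
}

class Program
{
    static void Main()
    {
        var container = new UnityContainer();

        try
        {
            // No mapping registered: Unity gamely tries to construct
            // ILogger itself, which of course it can't, so this throws.
            container.Resolve<ILogger>();
        }
        catch (ResolutionFailedException ex)
        {
            Console.WriteLine("No mapping: " + ex.Message);
        }

        // With a mapping in place, the same call succeeds.
        container.RegisterType<ILogger, ConsoleLogger>();
        container.Resolve<ILogger>().Write("Resolved fine now.");
    }
}
```

Which, once you see it, makes perfect sense; it just isn't what an ignorant and naive documentation engineer assumes when the exception first appears.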
Then, just to reinforce my growing concern that maybe my less rigorous approach to feature documentation is not working as well as planned, I came across a set of features that are so complex and unintuitive, and viciously intertwined with each other, that understanding the nuances of the way they work seemed almost impossible. Even the test team, with their appropriate allocation of really clever people, struggled to grasp it all and explain it to mere mortals like me. Yes, I can vaguely appreciate what it does, and even how some of it works. But what is it for?
This seems to be the root of the issue. There are no documented use cases that describe when or why customers would use this set of features, or what they are designed to achieve. Just a semi-vague one-liner that describes what the feature is. Or, to be more accurate, several one-liners that describe bits of what the feature is. No explanation of why it's there, why it's useful, and why users might need it. Or part of it. Or some unexplained combination of it. Of course, it is impeccably implemented and works fine (I'm told), and is a valuable addition to the software that a great many users of previous versions asked for.
But the documentation dilemma also reveals, like some gruesome mythical beast rising from the mire, just how frighteningly easy it is to innocently stray from the well-worn path of "proper" documentation techniques. It's only obvious when something like this (the problem, not the mythical beast) comes back to bite you. Faced with a set of features that almost seem to defy explanation in terms of "what they are" and "what they do", it does seem like the key scenario approach, with its problem, solution, and example format, really is the way to go.
If the first step of documenting a feature of a product is to discover and list the key scenarios, you have a platform for understanding the way that the software is designed to be used, and what it was designed to do. If you can't even find the key scenarios before you start writing, never mind accurately document them, the chances are you are never going to produce guidance that properly or accurately describes the usage.
And maybe, one day, I'll even apply "proper" documentation design patterns to writing blog posts...
It's customary here in England to castigate British Rail for their outlandish non-service excuses. As far back as I can remember we've had "leaves on the line". Then, after they spent several million pounds on special cleaning trains, it morphed into "the wrong kind of leaves on the line". And of course, every winter when the entire British transport system grinds to a halt they blame "the wrong kind of snow." But this week I've been introduced to a new one: "the wrong kind of electricity".
During the summer months I ply my lonely documentation engineering trade using a laptop and enjoying the almost-outdoorsness of the conservatory; soaking up the joys of summer, the birds singing in the trees, the fish splashing around in the pond, and a variety of country wildlife passing by. So when I noticed one of my regular computer suppliers was selling off Windows 7 laptops, no doubt to be ready for the imminent arrival of Windows 8, I thought it would be a good idea to pick up a decent 17" one to replace my aging 14" Dell Latitude. With age gradually degrading my eyesight, I reckon I'll soon need all the screen space I can get.
So when my nice new Inspiron arrived I powered it up, worked through the "configuring your computer" wizard, removed all the junk that they insist on installing, and started to list all the stuff I'll need to install. Until I noticed that the battery wasn't charging. So I fiddled with the power settings, dug around in the BIOS, tried a different power pack, and did all the usual pointless things like rebooting several times. No luck.
So I dive into t'Internet to see if there's a known fix. Yes, according to several posts on the manufacturer's site and elsewhere there is. You replace the power supply board inside the computer at a cost of 35 pounds, or – if it's still under guarantee – send it back and they replace the motherboard. Mind you, there were several other suggestions, such as upgrading the BIOS and banging the power supply against a wall, but as I'd only had the machine for two hours none of these seemed to be an ideal solution. So I did the obvious – pack it up and send it back to the supplier as DoA (dead on arrival).
Mind you, when I phoned the supplier and explained the problem the nice lady said that it would be OK if I kept it plugged into the mains socket because then it doesn't need the battery to be charged up. True, but as I pointed out to her, it's supposed to be a portable computer. I'll need a long piece of wire if I decide to use it the next time I'm travelling somewhere by train.
And do I want a replacement? How common is the failure? To have it happen on a brand new machine is worrying. Yet, strangely, only a few weeks ago my old Latitude powered up with a message saying it didn't recognize my power pack, before deciding that it did. And after wandering around the house I found five Dell laptop power packs, and they all seem to be much the same. All 19.6 volts, rated at 3.4 amps or higher. They all have the same two-pole plug, with the positive in the center. The only difference seems to be that the newer ones have 25 certificates of conformance on the label, while the older ones have around 15 (perhaps that's why they seem to get bigger each time - to make room for the larger label).
So how does the computer know which power pack I've plugged in? When I looked in the BIOS it said that the power pack was "65 Watts". Is there some high frequency modulation on the output that the computer can decipher? Or does it do the old electrician's trick of flicking the wires together to see if there's sparks, and measure the effect? Do all computers these days do the same thing? If I buy an unbranded replacement power pack will the computer pop up a window saying "You tight-fisted old miser - you don't really expect me to work with that do you?"
And is all this extra complexity, which can obviously go wrong, really needed? How comfortable will I be with all my computers now if I feel I need to check that the power supply/computer interface is still working every time I switch one of them on? It seems like the usual suspicion most people have that the first thing to die on your computer will be the hard disk is no longer true. Now your computer may decide to stop working just because you're using the wrong kind of electricity...
If an article I read in the paper this week is correct, you need to immediately uninstall Arial, Verdana, Calibri, and Tahoma fonts from your computer; and instead use Comic Sans, Boldini, Old English, Magneto, Rage Italic, or one of those semi-indecipherable handwriting-style script fonts for all of your documents. According to experts, it will also be advantageous to turn off the spelling checker; and endeavour to include plenty of unfamiliar words and a sprinkling of tortuous grammatical constructs.
It seems researchers funded by Princeton University have discovered that people are 14% less likely to remember things they read when they are written in a clean and easy-to-read font and use a simple grammatical style. By making material "harder to read and understand" they say you can "improve long term learning and retention." In particular, they suggest, reading anything on screen - especially on a device such as a Kindle or Sony Reader that provides a relaxing and easy to read display - makes that content instantly forgettable. In contrast, reading hand-written text, text in a non-standard font, and text that is difficult to parse and comprehend provides a more challenging experience that forces the brain to remember the content.
There's a lot more in the article about frontal lobes and dorsal fins (or something like that) to explain the mechanics of the process. As they say in the trade, "here comes the science bit". Unfortunately it was printed in a nice clear Times Roman font using unchallenging sentence structure and grammar, so I've forgotten most of what it said. Obviously the writer didn't practice what they preached.
But this is an interesting finding. I can't argue with the bit about stuff you read on screen being instantly forgettable. After all, I write a blog that definitely proves it - nobody I speak to can remember what I wrote about last week (though there's probably plenty of other explanations for that). However, there have been several major studies that show readers skip around and don't concentrate when reading text on the Web, often jumping from one page to another without taking in the majority of the content. It's something to do with the format of the page, the instant availability, and the fundamental nature of hyperlinked text that encourages exploration; whereas printed text on paper is a controlled, straight line, consecutive reading process.
From my own experience with the user manual for our new all-singing, all-dancing mobile phones, I can only concur. I was getting nowhere trying to figure out how to configure all of the huge range of options and settings for mail, messaging, synchronization, contacts, and more despite having the laptop next to me with the online user manual open. Instead, I ended up printing out all 200 pages in booklet form and binding them with old bits of string into something that is nothing like a proper manual - but is ten times more useful.
And I always find that proof-reading my own documents on screen is never as successful as when I print them out and sit down in a comfy chair, red pen in hand, to read them. Here at p&p we are actively increasing the amount of guidance content that we publish as real books so that developers and software architects can do the same (red pen optional). The additional requirements and processes involved in hard-copy printed materials (such as graphic artists, indexers, additional proof readers, layout, and the nagging realization that you only have one chance to get it right) also seem to hone the material to an even finer degree.
So what about the findings of those University boffins? Is all this effort to get the content polished to perfection and printed in beautifully laid out text actually reducing its usefulness or memorability? We go to great lengths to make our content easy to assimilate, using language and phrasing defined in our own style manuals and passing it through multiple rigorous editing processes. Would it be better if we just tossed it together, didn't bother with any editing, and then photocopied it using a machine that's nearly run out of toner? It should, in theory, produce a more useful product that you'd remember reading - though perhaps not for the desired reasons.
Taking an excerpt from some recent guidance I've created, let's see if it works. I wrote "You can apply styles directly within the HTML of each page, either in a style section in the head section of the page or by applying the class attribute to individual elements within the page content." However, before I went back and read through and edited it, it could well have said something like "The style section in the head section, or by decorating individual elements with the class attribute in the page, can be used to apply styles within the HTML or head of each page within the page content."
Is the second version likely to be more memorable? I know that my editor would suspect I'd acquired a drug habit or finally gone (even more) senile if I submitted the second version. She'd soon have it polished up and looking more like the first one. And, no doubt, apply one of the standard "specially chosen to be easily readable" fonts and styles to it - making readers less likely to recall the factual content it contains five minutes after they've read it.
But perhaps a typical example of the way that convoluted grammar and structure make content more memorable is the well-known phrase taken from a mythical motor insurance claim form: "I was driving past the hole that the men were digging at over fifty miles per hour." So that could be the answer. Sentences that look right, but then cause one of those "Hmmm ... did I actually read that right?" moments.
At the end of the article, the writer mentioned that he asked Amazon for their thoughts on the research in terms of its impact on Kindle usage, but they were "unable to comment". Perhaps he sent the request in a nice big Arial font, and the press guy at Amazon immediately forgot he'd read it...
So when you buy a load balancing router, what do you actually expect it to do? Maybe I just expected too much from it as a solution to my distinctly unbalanced connectivity requirements. But even if the outcome is typically a lack of adequate refreshment, I suppose it's nice to live in hope with a "glass half full" approach. Though, in my case, "network half full" seems more appropriate.
After my recent shuffling of accounts with Internet Service Providers, I now have two connections to the Net. One is through cable and provides 20 Mbit downstream and 1 Mbit upstream. The other is through ADSL and (on a good day) offers 2 Mbit down and 750 Kbit up. They're both linked to my internal network through a LinkSys RV042 load balancing router. So, in theory, I can get great performance combined with automatic failover should one of the connections fail.
However, this arrangement seems to cause more problems than it solves. Some sites, including my personal email provider, treat requests from different IP addresses as coming from different sources. My cable and ADSL connections have very different IP addresses, and my email provider's load balancer and security mechanism don't allow requests from both IP addresses to access the same authenticated session. I guess this is done to mitigate "man in the middle" attacks, but it means that I'm continually prompted for credentials when using Outlook or OWA.
The solution is to force all HTTPS requests to go through one of my two connections rather than being load balanced, and the obvious choice is to use the faster connection. To do that I can set up protocol bindings in the router, but this just raises a heap of new issues. For example, the Protocol Bindings configuration section consists of controls where you specify the service type (such as HTTPS), the source IP address range, and the destination IP address range. There's also an Enable checkbox which, for some reason, is unchecked by default. So if you just enter the values and click the button to add the binding, it doesn't work unless you remember to go back and check the Enable box.
And then you need to figure out whether adding one binding will still allow all other services and IP addresses to route to one or both of the external connections. There's nothing in the router manual to indicate whether you need to add more bindings for these, or how you can create a rule that excludes some services and IP addresses but includes the rest. And there's no indication either of whether the order of the bindings makes any difference. As there are no "Move up" and "Move down" buttons, I have to assume it doesn't.
So I Binged for help, but all I can find are some non-relevant half-examples. The Cisco forum people keep saying that "you can use protocol bindings to resolve this issue" but never actually show you how. From the number of posts complaining that protocol bindings don't work, I can only guess that most people, like me, can't figure out how to configure them properly.
But I carried on and created a binding to route all HTTPS requests to my email provider through one connection. Except that I have no idea what destination IP address range to specify. I can get the IP address currently allocated to my email server using nslookup, but I can't get the range because their DNS server won't accept requests to list them all. And they seem reluctant to provide the information in response to my emails. In the end I decided to route all HTTPS requests through one connection rather than just requests to their servers.
So, having resolved that issue, I can now enable load balancing, secure in the knowledge that I can still get at my email, as well as benefitting from using both connections for everything else. The router is also configured with the throughput rates for each connection and is clever enough to spread requests across them based on the capacity of each one, so more requests go to the cable connection than the ADSL connection. This is good because there is a monthly download limit on the ADSL line, whereas the cable connection is unlimited.
Unfortunately it doesn't actually speed things up in many cases. When I open a web page, the requests for the associated images and other content are load balanced and so, in theory, the entire page loads more quickly than when using just one connection. However, the router doesn't know how big each of these images or resources is, so it may send the request for very large ones over the slower connection; which means the page load is actually slower than using just the faster connection. And it's quite possible that a request for, say, a streaming video file may be routed through the slower connection (whereas I'd want it to go through the faster one) because I can't set up protocol bindings for different content types.
Load balancing is, of course, designed to work over connections that have the same capacity and bandwidth, so I guess I can't expect it to do anything different. In the end I've gone back to using the router in failover mode only on the faster cable connection. At least I can be sure I won't exceed the bandwidth limits on the ADSL connection unless the cable connection fails altogether, though what my ISP will think when my bandwidth usage is zero each month I can't guess. Or whether my ADSL modem will still work when it doesn't have regular traffic it can use to configure connection rates and error checking. And it still feels like I'm wasting a connection that I'm paying for.
But at least I've got a valid reason for being a bit unbalanced...
So when somebody comes to read your gas meter, do you give them a front door key and tell them to pop in any time they like? Or when you hand over your credit card at the supermarket checkout, do you let them know it's OK to take money out whenever they want? No? Then why do we put up with software that thinks being invited into your computer once is equivalent to an offer to run anytime it feels in the mood?
I moaned only last week about how some programs insist on scattering shortcuts all over the desktop, even when they are only doing an update to fix some security hole. OK, so maybe I was a bit unfair picking on Adobe Reader, but in fact - as far as taking liberties with my computer goes - it deserves it. Did you know that it installs two programs that run every time you start your computer? One that checks for updates and one that acts as a speed-increasing pre-loader.
I suppose both are useful if you spend most of your day using Adobe Reader. I don't, however. And surely if every program you installed insisted on running some "speed enhancing loader" like this it would defeat the whole object, because every program would end up running more slowly. OK, so security update check programs are a requirement these days, and hopefully terminate after doing their check, but they still hit the start-up time. Perhaps, where appropriate, they could execute the update check when the program runs instead (like Mozilla does)?
The first thing I do when asked to "look at" somebody's computer that's running slowly or taking ages to start is to consider stopping some of the three dozen or so programs that decide they need to auto-start. I suspect this is a factor in many cases where users complain that their computer is "getting slower all the time" - although some PC manufacturers are equally to blame for filling the machine with a bucketful of semi-useless demo-ware.
Of course, some vital applications, such as security and virus protection software, do need to run at start-up. Though I'm not sure Microsoft Communicator or Messenger quite qualify for the "vital" category, but at least they have menu options where you can turn off "Start when Windows starts". And some other less-than-vital applications are sensible enough to install their shortcut in the Startup section of the Start menu so you can easily find (and delete) them.
So, having moaned at Adobe, who else has the temerity to imagine that they own my computer? How about the utility for backing up a mobile phone? I installed it and used it once to back up the initial configuration of my wife's gleaming new HTC Desire phone, and ever since I've had an ugly picture of a phone blinking at me from the notification area. Just so that, in case I decide sometime next year to plug the phone in again, it can spring magically to life. Maybe it would be useful if I synchronized music and photos every day, but surely an option to not have it run all the time wouldn't be that hard to do?
And, even worse, how about the TomTom satnav utility? I don't have a TomTom satnav, but I did agree to try and upgrade/fix one for a friend. The only way you can do it is to install the HOME utility from TomTom's website. Yes it installs so that it auto-starts, and then sits grinning at you from the notification area forever afterwards. So I now have two programs that I don't want or use, for devices that I don't have, running on my computer. I know memory and processing power is cheap these days, but I don't see why I should be wasting it like this.
Of course, Microsoft is well aware of this issue. One of the useful tools in Vista was Windows Defender, which let you easily see (and stop) programs that auto-run. However, if you install anti-virus software, including Microsoft's own Security Essentials, Defender is disabled. So thank goodness for Sysinternals Autoruns (this latest version is better than ever). Using it I discovered that the HTC utility actually installs four executable files in various places, including Task Scheduler, and the TomTom utility installs two. But removing them with Autoruns is easy, without having to dive into the Registry.
Of course, there's still the usual warning about fiddling with system settings. Changing Registry settings can cause your children to die from the plague, your house to fall down, your own Mother to disown you, and you may even prevent your computer from running correctly. So don't remove anything if you are not sure what it does.
Maybe the reason manufacturers insist on auto-running programs is that they think users are too stupid to realize they need to run them themselves when required. I would have doubted that until recently but, according to recent research led by a professor at Leeds University on behalf of the British National Formulary (which advises on medicines), expecting people to be able to turn their computer on at all is now in question.
The research team have discovered that complicated instructions, such as those typically found on medicines, can "cause confusion" and "should be simplified". The examples they gave of instructions that are now "too difficult for patients to understand" are "May cause drowsiness" and "Avoid alcohol". Gadzooks! Perhaps we need to simplify our IT terminology to match? Instead of "computer" we should maybe use "box with screen and letters on it", and instead of "USB cable" we should use "bendy wire thing with funny square bit on the end".
It all reminded me of that wonderful air traffic control oriented after-dinner speech by Dave Gunson (still available from Amazon) where he explained that airliners have to follow narrow flight corridors to maximize safety and position, and that it requires highly skilled flying to maintain the correct course and route. But he then spoilt it by admitting that they actually show them in colors on the map, so you just need to "go down the blue one, then the red one..."
And that pilots wear white kid leather gloves with a big black letter "L" on the back of one and an "R" on the other...
At last I can document something that might, possibly, be of marginal use to those one or two readers who mistakenly stray into the quagmire of my unrelated weekly ramblings. For the last few months I've been fulfilling my new role as an out-of-sight-and-out-of-mind remote documentation engineer (that's a documentation engineer who's remote, not an engineer who writes remote documentation - though, having read some of my scribblings, you might dispute that assertion). Anyway, recently I was beamed up to the mother-ship to spend a couple of weeks onsite at Redmond. I suspect they just wanted to see if the weird English guy they accidentally offered a job to actually does exist.
So a while ago, while preparing my shiny and tiny XPS laptop for the trip, I had a minor moan about Vista's backup facility. But little did I realize that worse was coming my way. My laptop usually lives on the corporate domain, even though I only connect to the domain over a VPN, and I'm not connected when I log in to the machine. The cached domain credentials allow me to log into the machine without being connected, and then the O/S authenticates me on the domain when I connect to the network.
All fine so far. Except that, for some reason, one day it suddenly stopped me from logging in to the machine. I assumed that this was because I hadn't physically connected to the domain for several weeks. So it should all sort itself out when I can reduce the 5,000 mile gap between my laptop and the nearest corporate Ethernet socket. The domain controller would be forced to accept that I'm not just some ethereal figment of its Active Directory, and would gleefully welcome home its long-lost prodigal son.
Now, I'm not generally known to suffer from wild spasms of optimism. And so I wasn't really surprised when the long-awaited fusing of my tiny little Dell with the corporate big-iron simply resulted in an Event Viewer full of error messages, a pop-up telling me there was an error in my profile, and the machine refusing to persist any changes to the environment. Every logon was accompanied by a long wait while Vista "prepared my desktop" (I could have built a whole desk in the time it takes), a task bar full of useless notification icons (what the **** do they all do?), dialogs welcoming me to the exciting world of Windows Vista (yes please, I'd love to watch the introduction video again), the Home page of Dell Support's Web site (just in case something broke while booting up?), and - of course - none of the useful stuff I set up last time.
After a quick consultation with the onsite server admin guy (hi Mike), it seemed like this is not an unknown issue - he had a laptop suffering from just this malady. While he hadn't had time to investigate, he did confirm that it was a problem with Vista not being able to create the profile folder on the local disk. But why? The relevant folder did not exist, the parent folder was not read-only, and a quick test by manually creating the required folder had no effect. So it looked like I was in for another of my regular hunt-round-the-Registry-and-delete-stuff exercises.
Of course, as per accepted administrative guidelines, I need to warn you about editing the Registry. So, before you start, make sure you write out your will, lay in supplies of pizza and cola, and say goodbye to the kids and the dog. You might also like to wear some protective clothing (a T-shirt with a suitably inane slogan will do), and make sure there is nothing valuable in the direction you are likely to throw the unbootable machine afterwards.
Once prepared, locate the Registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

This contains a subkey named with a SID for each user account on the machine. In theory, there is one for each of the named folders in the C:\Users folder on your disk. If you select a SID subkey, you'll see the profile folder path (which ends with the user name) in its ProfileImagePath value, so you can compare each one with the disk folders. You should find ones for the Administrator, Local System, and Network accounts; as well as ones for all the users registered on the machine or in the domains you've joined.
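If you'd rather not squint at regedit while doing the comparison, here's a small read-only C# sketch (entirely my own, and hypothetical rather than official tooling of any kind) that lists each SID subkey alongside its ProfileImagePath value so you can check them against the folders in C:\Users. It only reads the Registry; any deleting is still down to you, regedit, and your nerves.

```csharp
using System;
using Microsoft.Win32;

class ProfileListDump
{
    static void Main()
    {
        const string path =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList";

        using (RegistryKey profileList = Registry.LocalMachine.OpenSubKey(path))
        {
            // Each subkey name is an account SID; its ProfileImagePath value
            // points at the profile folder (usually under C:\Users). Compare
            // the list with the folders on disk to spot orphaned entries.
            foreach (string sid in profileList.GetSubKeyNames())
            {
                using (RegistryKey key = profileList.OpenSubKey(sid))
                {
                    Console.WriteLine("{0}  ->  {1}",
                        sid, key.GetValue("ProfileImagePath", "(no path)"));
                }
            }
        }
    }
}
```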
My machine has led a busy existence during its short life, moving between three domains and having a couple of local users. During a "routine clean up", I seem to remember deleting some of the subfolders in the Users folder - though I was, of course, careful not to delete any currently registered users. What I discovered was that the Registry still held references to profile folders for accounts that no longer exist on the machine, but not one for the domain account I was trying to use. I have no idea why this caused a problem, but I decided to tidy up the Registry while I was in there, and deleted the SID subkeys, including all their child subkeys, for all of the non-existent user accounts.
Then, having decided which wall would be the target for an airborne laptop, I rebooted. And, magically, it all worked. Instant logon to the domain, wireless certificates downloaded and installed, and no errors. Quick reconfigure of the desktop, log off, log on again, and it remembered my settings. No more "Welcome to Vista" stuff; and everything working just as it should. Even Outlook managed to find its Exchange Server without the usual application of a large stick. Amazing. Maybe I can actually get to love Vista after all...