Writing ... or Just Practicing?

Random Disconnected Diatribes of a p&p Documentation Engineer

  • Writing ... or Just Practicing?

    A Sticky Situation


    I discovered this week why builders always have a tube of Superglue in their pockets, and how daft our method of heating houses here in the UK is. It's all connected with the melee of activity that's just starting to take over our lives at chez Derbyshire as we finally get round to all of the modernization tasks that we've been putting off for the last few years.

    I assume that builders don't generally glue many things together when building a house - or at least not using Superglue. More likely it's that "No More Nails" adhesive that sticks anything to anything, or just big nails and screws. However, the source of my alternative adhesive information was - rather surprisingly - a nurse at the Accident and Emergency department of our local hospital.

    While doing the annual autumn tidying of our back garden I managed to tumble over and poke a large hole in my hand on a nearby fence post. As I'm typically accident prone, this wasn't an unexpected event, but this time the usual remedy of antiseptic and a big plaster dressing didn’t stop the bleeding so I reluctantly decided on a trip to A&E.

    It being a Sunday, I expected to be waiting for hours. However, within ten minutes I was explaining to the nurse what I'd done, and trying to look brave. Last time I did something similar, a great many years ago and involving a very sharp Stanley knife, I ended up with several stitches in my hand. This time, though, she simply sprayed it with some magic aerosol and then lathered it with Superglue. Not what I expected.

    But, as she patiently explained, they use this method now for almost all lacerations and surgery. It's quicker and easier, keeps the wound clean and dry, helps it heal faster, and leaves less of a scar than stitches. She told me that builders and other tradesmen often use the same technique themselves. Obviously I'll need to buy a couple of tubes for future use.

    Meanwhile, back at the hive of building activity and just as the decorator has started painting the stairs, I discover that the central heating isn't working. For the third time in twelve years the motorized valve that controls the water flow to the radiators has broken. Another expensive tradesman visit to fix that, including all the palaver of draining the system, refilling it, and then patiently bleeding it to clear the airlocks.

    Of course, two of the radiators are in the wrong place for the new kitchen and bathroom, so they need to be moved. Two days later I've got a different plumber here draining the system again, poking pipes into hollow walls, setting off the smoke alarms while soldering them together, randomly swearing, and then refilling and bleeding the system again.

    But what on earth are we doing with all this pumping hot water around through big ugly lumps of metal screwed to the walls anyway? Isn’t it time we adopted the U.S. approach of blowing hot air to where it's needed from a furnace in the basement? When you see the mass of wires, pipes, valves, and other stuff needed for our traditional central heating systems you have to wonder.

    Mind you, on top of all the expense, the worst thing is the lump on my arm the size of a tennis ball where the nurse decided I needed a tetanus shot...

  • Writing ... or Just Practicing?

    It Ain't Half Hot Mum


    OK, OK, so one month I'm complaining that our little green paradise island seems to have drifted north into the Arctic, and now I'm grumbling about the heat. Obviously global warming is more than just a fad, as we've been subjected here in England to temperatures hovering around 90 degrees in real money for the last week or so. Other than the gruesome sight of pale-skinned Englishmen in shorts (me included), it's having some rather dramatic effects on my technology installations. I'm becoming seriously concerned that my hard disks will turn into floppy ones, and my batteries will just chicken out in the heat.

    Oh dear, bad puns and I've only just got going. But it does seem like the newer the technology, the less capable it is of operating in temperatures that many places in the world would call normal. There are plenty of countries that get regular spells of weather well into the 90s, as I discovered when we went to a wedding in Cyprus a few years back. How on earth do they cope? I've got extra fans running 24/7 in the computer cabinet and in the office trying to keep everything going. I'm probably using 95% of my not inconsiderable weekly electricity consumption to stop kit that only uses 1% of it to actually do stuff from evaporating (the other 4% is the TV, obviously).

    Maybe the trouble is that, here in England where we have a "temperate" climate, we're not really up to speed with modern technology such as air conditioning. Yes, they seem to put it in every car these days, but I only know one person who has it in their house, and that's in the conservatory where - on a hot day - it battles vainly to get the temperature below 80 degrees. I briefly considered running an extension lead out to my car and sitting in it to work, but that doesn't help with the servers and power supplies.

    I've already had to shut down the NAS because it was sending me an email every five minutes saying it's getting a bit too warm. And I've shut down the backup domain controller to try and cut down the heat generation (though it's supposed to be one of those environmentally friendly boxes that will run on just a sniff of electricity). And the battery in the UPS in the office did its usual summer trick of bulging out at the sides and throwing in the towel as soon as I powered up a desktop and a couple of decent sized monitors - even though I cut a hole in the side and nailed a large fan onto it. It's no wonder UPS units are so cheap to buy. They're like razors (or inkjet printers) - you end up spending ten times more than a new one would have cost on replacement batteries.

    Probably I'm going to have to bite the bullet and buy a couple of those portable air conditioning units so my high-tech kit can stay cool while we all melt here in the sun. In fact, my wife reckons I've caught swine 'flu because she finds me sitting here at the keyboard sweating like a pig when she sails in from her nice cool workplace in the evening. At least the heat has killed most things in the garden (including the lawn) so that's one job I've escaped from.

    By the way, in case you didn't realize, the title this week comes from a rather old BBC TV program. Any similarity between the actor who played Gunner 'Lofty' and this author is vigorously denied.

  • Writing ... or Just Practicing?

    Take Two Aspirins And Call Me In The Morning


    I seem to have spent a large proportion of my time this month worrying about health. OK, so a week of that was spent in the US where, every time I turned on the TV, it scared me to death to see all the adverts for drugs to cure the incredible range of illnesses I suppose I should be suffering from. In fact, at one stage, I started making a list of all the amazing drugs I'm supposed to "ask my doctor about", but I figured if I was that ill I'd probably never have time to take them all. They even passed an "assisted suicide" law while I was there, and I can see why they might need it if everyone is so ill all of the time.

    And, of course, it rained and hailed most of the week as well. No surprise there. They even said I might be lucky enough to see some snow. Maybe it's all the drugs they've been asking their doctor about that makes snow seem like a fortuitous event. Still, I did get to see the US presidential election while I was there. Or, rather, I got to see the last week of the two year process. It seems like they got 80% turnout. Obviously, unlike Britain where we're lucky to get 40% turnout, they must think that voting will make a difference. Here in the People's Republic of Europe, we're all well aware that, if voting actually achieved anything, they'd make it illegal. I wonder if the result still stands if nobody actually turns up to vote?

    Mind you, it does seem surreal in so many ways. You have to watch four different news channels if you want to get a balanced opinion. And one of the morning newsreaders seemed to have a quite noticeable lisp so that I kept hearing about the progreth of the Republicanth and the Democratth. A bit like reading a Terry Pratchett novel. Or maybe it was just the rubbish TV in the hotel. And they didn't have enough voting machines so in some places people were queuing for four hours in the rain to cast their votes. Perhaps it's because there are around 30 questions on the ballot paper where you get to choose the president, some assorted senators and governors, a selection of judges, and decide on a couple of dozen laws you'd like to see passed. Obviously a wonderful example of democracy at work.

    Anyway, returning to the original topic of this week's vague expedition into the blogosphere, my concerns over health weren't actually connected to my own metabolic shortcomings. It was all to do with the Designed for Operations project that I've been wandering in and out of for a number of years. The organizers of the 2008 patterns & practices Summit had decided that I was to do a session about health modeling and building manageable applications. In 45 minutes I had to explain to the attendees what health models are, why they are important, and how you use them. Oh, and tell them about Windows Eventing 6.0 and configurable instrumentation helpers while you're at it. And put some jokes in because it's the last session of the day. And make sure you finish early 'cos you'll get a better appraisal. You can see that life is a breeze here at p&p...

    So what about health modeling? Do you do it? I've done this kind of session three or four times so far and never managed to get a single person to admit that they do. I'm not convinced that my wild ramblings, furious arm waving, and shower of psychedelically colored PowerPoint graphics (and yes, Dave, they do have pink arrows) ever achieve anything other than confirm to the audience that some strange English guy with a funny accent is about to start juggling, and then probably fall off the stage. Mind you, they were all still there at the end, and only one person fell asleep. I suppose as there was no other session to go to, they had no choice.

    What's interesting is trying to persuade people that it's not "all about exception handling". I have one slide that says "I don't care about divide by zero errors; I just want to know about the state changes of each entity". Perhaps it's no wonder that the developers in the audience thought they had been confronted by some insane evangelist of a long-lost technical religion. The previous session presented by some very clever people from p&p talked about looking for errors in code as being "wastage", and there I was on next telling people all about how they should be collecting, monitoring, and displaying errors.

    But surely making applications more manageable, reporting health information, and publishing knowledge that helps administrators to verify, fix, and validate operations post deployment is the key to minimizing TCO? An application that tells you when it's likely to fail, tells you what went wrong when it does fail, and provides information to help you fix it, has got to be cheaper and easier to maintain. One thing that came out in the questions afterwards was that, in large corporations, many developers never see the architect, and have no idea what network administrators and operators actually do other than sit round playing cards all day. Unless they all talk to each other, we'll probably never see much progress.

    At least they did seem to warm to the topic a little when I showed them the slide with a T-shirt that carried that well-worn slogan "I don't care if it works on your machine; we're not shipping your machine!" After I rambled on a bit about deployment issues and manageable instrumentation, and how you can allow apps to work in different trust levels and how you can expose extra debug information from the instrumentation, they seemed to perk up a bit. I suppose if I achieved nothing other than making them consider using configurable instrumentation helpers, it was all worthwhile.

    I even managed to squeeze in a plug for the Unity dependency injection stuff, thus gaining a few brownie points from the Enterprise Library team. In fact, they were so pleased they gave me a limited edition T-shirt. So my 10,000 mile round trip to Redmond wasn't entirely wasted after all. And, even better, if all goes to plan I'll be sitting on a beach in Madeira drinking cold beer while you've just wasted a whole coffee break reading this...

  • Writing ... or Just Practicing?

    Making a Use Case for Scenarios


    Technical writers, like wot I am, tend to be relatively docile and unflappable creatures. It comes with the territory (and, often, with age). Especially when most of your life is spent trying to document something that the dev team are still building, and particularly so if that dev team is resolutely following the agile path. You know that the features of the software and its capabilities will morph into something completely different just after you get the docs on that bit finished; and they'll discover the bugs, gotchas, and necessary workarounds only after you've sent your finely crafted text to edit.

    And you get used to the regular cycle of "What on earth have they added now?", "What is this supposed to do?", "Can anyone tell me how it works?", and "OK, I see, that must be it". Usually it involves several hours searching through source code, playing with examples, trying to understand the unit tests, and - as a last resort - phoning a friend. But, in most cases, you figure out what's going on, take a wild stab at how customers will use the feature, and then describe it in sufficient detail and in an organized way so that they can understand it.

    Of course, the proper way to document it is to look for the scenario or use case, and describe this using one of the well known formats such as "Scenario", "Problem", "Solution", and "Example". You might even go the whole hog and include "Forces", "Liabilities", and "Considerations" - much like you see software design patterns described. Though, to me at least, this often seems like overkill. I guess we've all seen documents that describe the scenario as "The user wants to retrieve data from a database that has poor response times", the problem as "The database has poor response times and the user wants to retrieve data", and the forces include "A database that has poor response times may impact operation of the application". And then the solution says "Use asynchronous access, stupid!" (though the "stupid" bit is not usually included, just implied).

    Worse still, writing these types of documents takes a lot of effort, especially when you try not to make the reader sound stupid; and the result may even put people off when compared to a more straightforward style that just explains the reason for a feature, and how to use it. Here at p&p, we tend to document these as "Key Scenarios" - with just a short description of the problem and the solution, followed by an example.

    However, in an attempt to provide more approachable documentation that is easier to read and comprehend, I've tended to wander away from the strict "Key Scenarios" approach. Instead, I've favored breaking down software into features and sub-features, and describing each one starting with an overview of what it does, followed by a summary of the capabilities and links to more detailed topics for each part that describe how to use it. This approach also has the advantage that it makes coping with the constant churn of agile-developed software much easier. The overview usually doesn't change (much); you can modify the structure, order, and contents of each subtopic; and you can slot new subtopics in without breaking the whole feature section. Only when there are extreme changes in the software (which are not unknown, of course) do you need to rebuild a whole section.

    But increasingly I'm finding that the lack of a more rigid scenario documentation approach is causing problems. Mostly this has to do with the lack of a well documented design and use case created up front, before development of a feature starts. If the use case consists of just "Implement a mechanism for merging configuration settings", you can soon find yourself struggling to provide consistent and useful documentation without having to completely rewrite it several times. You end up guessing why it's there, what its capabilities are in terms of user requirements, what scenarios it is designed to operate in, and what problems it's supposed to address.

    OK, so the whole idea of agile is that you discover all this through constant contact with the other members of the team, and have easy access to ask questions and get explanations. But it often isn't that simple. For example, during testing of the way that Unity resolves objects, I found I was getting an exception if there was no named mapping in the container for the abstract class or interface I was trying to resolve. The answer was "Yes, Unity will throw an exception when there is no mapping for the type". But, as you probably guessed, this isn't a limitation of Unity (as I, employing my usual ignorance and naivety, assumed it was). Unity will try its best to create an instance of any type you ask for, whether there's a mapping or not. But it throws an exception because, with no mapping, it ends up trying to instantiate an interface or abstract class. Needless to say, furious re-editing of the relevant doc sections ensued.
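
    By way of illustration, here's a minimal C# sketch I've put together against the Microsoft.Practices.Unity API (my own example, not anything from the product docs): resolving a concrete class works even with no registration, while resolving an unmapped interface throws a ResolutionFailedException because the container ends up trying to instantiate the interface itself.

        using Microsoft.Practices.Unity;

        public interface ILogger { void Write(string message); }

        public class ConsoleLogger : ILogger
        {
            public void Write(string message) { System.Console.WriteLine(message); }
        }

        public class ResolveDemo
        {
            public static void Main()
            {
                var container = new UnityContainer();

                // A concrete type resolves even without a registration;
                // Unity just goes ahead and builds it.
                var direct = container.Resolve<ConsoleLogger>();
                direct.Write("Resolved the concrete class with no mapping.");

                // Resolving the interface with no mapping throws, because
                // the container tries to instantiate ILogger itself.
                try
                {
                    container.Resolve<ILogger>();
                }
                catch (ResolutionFailedException ex)
                {
                    System.Console.WriteLine("No mapping: " + ex.Message);
                }

                // With a mapping registered, resolution succeeds.
                container.RegisterType<ILogger, ConsoleLogger>();
                container.Resolve<ILogger>().Write("Resolved via the mapping.");
            }
        }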

    Then, just to reinforce my growing concern that maybe my less rigorous approach to feature documentation is not working as well as planned, I came across a set of features that are so complex and unintuitive, and viciously intertwined with each other, that understanding the nuances of the way they work seemed almost impossible. Even the test team, with their appropriate allocation of really clever people, struggled to grasp it all and explain it to mere mortals like me. Yes, I can vaguely appreciate what it does, and even how some of it works. But what is it for?

    This seems to be the root of the issue. There are no documented use cases that describe when or why customers would use this set of features, or what they are designed to achieve. Just a semi-vague one liner that describes what the feature is. Or, to be more accurate, several one liners that describe bits of what the feature is. No explanation of why it's there, why it's useful, and why users might need it. Or part of it. Or some unexplained combination of it. Of course, it is impeccably implemented and works fine (I'm told), and is a valuable addition to the software that a great many users of previous versions asked for.

    But the documentation dilemma also reveals, like some gruesome mythical beast rising from the mire, just how frighteningly easy it is to innocently stray from the well worn path of "proper" documentation techniques. It's only obvious when something like this (the problem not the mythical beast) comes back to bite you. Faced with a set of features that almost seem to defy explanation in terms of "what they are" and "what they do", it does seem like the key scenario approach, with its problem, solution, and example format, really is the way to go.

    If the first step of documenting a feature of a product is to discover and list the key scenarios, you have a platform for understanding the way that the software is designed to be used, and what it was designed to do. If you can't even find the key scenarios before you start writing, never mind accurately document them, the chances are you are never going to produce guidance that properly or accurately describes the usage.

    And maybe, one day, I'll even apply "proper" documentation design patterns to writing blog posts...

  • Writing ... or Just Practicing?

    Blocking Malware Domains in ISA 2006


    As in many households, several regular and occasional computer users take advantage of my connection to the outside world. I use ISA Server 2006 running as a virtual Hyper-V instance for firewalling and connection management (I'm not brave enough to upgrade to Forefront yet), and all incoming ports are firmly closed. But these days the risk of picking up some nasty infection is just as great from mistaken actions by users inside the network as from the proliferation of malware distributors outside.

    Even when running under limited-permission Windows accounts and with reasonably strict security settings on the browsers, there is a chance that less informed users may allow a gremlin in. So I decided some while ago to implement additional security by blocking all access to known malware sites. I know that the web browser does this to some extent, but I figured that some additional layer of protection - plus logging and alerts in ISA - would be a useful precaution. So far it seems to have worked well, with thankfully only occasional warnings that a request was blocked (and mostly to an ad server site).

    The problem is: where do you get a list of malware sites? After some searching, I found the Malware Domain Blocklist site. They publish malware site lists in a range of formats aimed at DNS server management and use in your hosts file. However, they also provide a simple text list of domains called JustDomains.txt that is easy to use in a proxy server or ISA. Blocking all web requests for the listed domains provides some additional protection against ingress, and against the effects of any malware that does find its way onto a machine; and you will see the blocked requests in your log files.

    They don't charge for the malware domain lists, but if you decide to use them please do as I did and make a reasonable donation. Also be aware that malware that connects using an IP address instead of a domain name will not be blocked when you use just domain name lists.

    To set it up in ISA 2006, you need the domain list file to be in the appropriate ISA-specific format. It's not available in this format, but a simple utility will convert it. You can download a free command line utility I threw together (the Visual Studio 2010 source project is included so you can check the code and recompile it yourself if you wish). It takes the text file containing the list of malware domain names and generates the required XML import file for ISA 2006 using a template. There's a sample supplied but you'll need to export your own configuration from the target node and edit that to create a suitable template for your system. You can also use a template to generate a file in any other format you might need.
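
    Just to give a feel for what the conversion involves, here's a rough C# sketch - not the actual utility, and the file names, the {{DOMAINS}} placeholder, and the DomainNameEntry element are all made up for illustration. The real element names and attributes must come from a Domain Name Set that you export from your own ISA configuration, as described below.

        using System;
        using System.IO;
        using System.Linq;
        using System.Text;

        class DomainListToIsaImport
        {
            static void Main()
            {
                // One domain per line; skip blank lines and comments.
                var domains = File.ReadAllLines("JustDomains.txt")
                    .Select(line => line.Trim())
                    .Where(line => line.Length > 0 && !line.StartsWith("#"))
                    .Distinct(StringComparer.OrdinalIgnoreCase)
                    .ToList();

                // Hypothetical per-domain entry; copy the real XML element
                // from your own exported Domain Name Set instead.
                var entries = new StringBuilder();
                foreach (var domain in domains)
                {
                    entries.AppendLine("    <DomainNameEntry Name=\"" + domain + "\" />");
                }

                // The template is your exported configuration with a
                // {{DOMAINS}} placeholder where the entries belong.
                string template = File.ReadAllText("MalwareDomains.template.xml");
                File.WriteAllText("MalwareDomains.import.xml",
                    template.Replace("{{DOMAINS}}", entries.ToString()));

                Console.WriteLine("Wrote import file containing " + domains.Count + " domains.");
            }
        }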

    ISA 2006 Toolbox

    To configure ISA open the Toolbox list, find the Domain Name Sets node, right-click, and select New Domain Name Set. Call it something like "Malware Domains". Click Add and add a dummy temporary domain name to it, then click OK. The dummy domain will be removed when you import the list of actual domain names. Then right-click on your new Malware Domains node, click Export Selected, and save the file as your template for this node. Edit it to insert the placeholders the utility requires to inject the domain names into it as described in the readme file and sample template provided.

    Malware Domains List

    After you generate your import file, right-click on your Malware Domains node, click Import to Selected, and locate the import file you just created from the list of domain names. Click Next, specify not to import server-specific information, and then click Finish. Open your Malware Domains set from the Toolbox and you should see the list of several thousand domain names.

     Now you can configure a firewall rule for the new domain name set. Right-click the Firewall Policy node in the main ISA tree view and click New Rule. Call it something recognizable such as "Malware Domains". In the Action tab select Deny and turn on the Log requests matching this rule option. In the Protocols tab, select All outbound traffic. In the From tab, click Add and add all of your local and internal networks. In the To tab click Add and add your Malware Domains domain name set. In the Content Types tab, select All content types. In the Users tab select All users, and in the Schedule tab select Always. Then click OK, click Apply in the main ISA window, and move the rule to the top of the list of rules.

    ISA Block Rule

    You can test your new rule by temporarily adding a dummy domain to the Domain Name Set list and trying to navigate to it. You should see the ISA server page indicating that the domain is blocked.

    If you wish, you can create a list of IP addresses of malware domains and add this set to your blocking rule as well so that malware requests that use an IP address instead of a domain name are also blocked. The utility can resolve each of the domain names in the input list and create a file suitable for importing into a Computer Set in ISA 2006. The process for creating the Computer Set and the template is the same as for the Domain Name Set, except you need to inject the domain name and IP address of each item into your import file. Again, a sample template that demonstrates how is included, but you must create your own version as described above.

    Be aware that some domains may resolve to internal or loopback addresses, which may affect operation of your network if blocked. The utility attempts to recognize these and remove them from the resolved IP address list, but use this feature with care and check the resolved IP addresses before applying a blocking rule.
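
    As a rough sketch of that resolution and filtering step (again, this is illustrative rather than the actual utility, and the file names and output format are my own assumptions), the core of it is Dns.GetHostAddresses plus a check that drops loopback and RFC 1918 private ranges:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class ResolveMalwareDomains
        {
            // Treat loopback, private (RFC 1918), and non-IPv4 addresses
            // as unsafe to block, and leave them out of the results.
            static bool IsInternalOrUnwanted(IPAddress ip)
            {
                if (ip.AddressFamily != AddressFamily.InterNetwork) return true;
                if (IPAddress.IsLoopback(ip)) return true;
                byte[] b = ip.GetAddressBytes();
                return b[0] == 0
                    || b[0] == 10                                  // 10.0.0.0/8
                    || (b[0] == 172 && b[1] >= 16 && b[1] <= 31)   // 172.16.0.0/12
                    || (b[0] == 192 && b[1] == 168);               // 192.168.0.0/16
            }

            static void Main()
            {
                var results = new List<string>();
                foreach (var line in File.ReadAllLines("JustDomains.txt"))
                {
                    var name = line.Trim();
                    if (name.Length == 0 || name.StartsWith("#")) continue;
                    try
                    {
                        foreach (var ip in Dns.GetHostAddresses(name))
                        {
                            if (!IsInternalOrUnwanted(ip))
                                results.Add(name + "\t" + ip);
                        }
                    }
                    catch (SocketException)
                    {
                        // Many of the listed domains have no mapped address;
                        // just skip the ones that fail to resolve.
                    }
                }
                File.WriteAllLines("ResolvedMalwareAddresses.txt", results.ToArray());
                Console.WriteLine("Resolved " + results.Count + " usable addresses.");
            }
        }

    Resolving several thousand names one by one like this takes a while, which is exactly the time-and-bandwidth issue that the caching option mentioned below is there to ease.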

    Another issue is the time it takes to perform resolution of every domain name, and investigations undertaken here suggest that only about one third of them actually have a mapped IP address. You'll need to decide if it's worth the effort, but you can choose to have the utility cache resolved IP addresses to save time and bandwidth resolving them all again (though this can result in stale entries). If you do create a Computer Set, you simply add it to the list in the To tab of your blocking rule along with your Domain Name Set. Of course, you need to regularly update the lists in ISA, but this just involves downloading the new list, creating the import file(s), and importing them into your existing Domain Name Set and Computer Set nodes in ISA.

  • Writing ... or Just Practicing?

    The Wrong Type of Electricity


    It's customary here in England to castigate British Rail for their outlandish non-service excuses. As far back as I can remember we've had "leaves on the line". Then, after they spent several million pounds on special cleaning trains, it morphed into "the wrong kind of leaves on the line". And of course, every winter when the entire British transport system grinds to a halt they blame "the wrong kind of snow." But this week I've been introduced to a new one: "the wrong kind of electricity".

    During the summer months I ply my lonely documentation engineering trade using a laptop and enjoying the almost-outdoorsness of the conservatory; soaking up the joys of summer, the birds singing in the trees, the fish splashing around in the pond, and a variety of country wildlife passing by. So when I noticed one of my regular computer suppliers was selling off Windows 7 laptops, no doubt to be ready for the imminent arrival of Windows 8, I thought it would be a good idea to pick up a decent 17" one to replace my aging 14" Dell Latitude. With age gradually degrading my eyesight I reckon I'll soon need all the screen space I can get.

    So when my nice new Inspiron arrived I powered it up, worked through the "configuring your computer" wizard, removed all the junk that they insist on installing, and started to list all the stuff I'll need to install. Until I noticed that the battery wasn't charging. So I fiddled with the power settings, dug around in the BIOS, tried a different power pack, and did all the usual pointless things like rebooting several times. No luck.

    So I dive into t'Internet to see if there's a known fix. Yes, according to several posts on the manufacturer's site and elsewhere there is. You replace the power supply board inside the computer at a cost of 35 pounds, or – if it's still under guarantee – send it back and they replace the motherboard. Mind you, there were several other suggestions, such as upgrading the BIOS and banging the power supply against a wall, but as I'd only had the machine for two hours none of these seemed to be an ideal solution. So I did the obvious – pack it up and send it back to the supplier as DoA (dead on arrival).

    Mind you, when I phoned the supplier and explained the problem the nice lady said that it would be OK if I kept it plugged into the mains socket because then it doesn't need the battery to be charged up. True, but as I pointed out to her, it's supposed to be a portable computer. I'll need a long piece of wire if I decide to use it the next time I'm travelling somewhere by train.

    And do I want a replacement? How common is the failure? To have it happen on a brand new machine is worrying. Yet, strangely, only a few weeks ago when I powered up my old Latitude it displayed a message saying it didn't recognize my power pack, but then decided it did. And after wandering around the house I found five Dell laptop power packs, and they all seem to be much the same: all 19.6 volts, with a 3.4 amp or higher current rating. They all have the same two-pole plug, with the positive in the center. The only difference seems to be that the newer ones have 25 certificates of conformance on the label, while the older ones have around 15 (perhaps that's why they seem to get bigger each time - to make room for the larger label).

    So how does the computer know which power pack I've plugged in? When I looked in the BIOS it said that the power pack was "65 Watts". Is there some high frequency modulation on the output that the computer can decipher? Or does it do the old electrician's trick of flicking the wires together to see if there are sparks, and measuring the effect? Do all computers these days do the same thing? If I buy an unbranded replacement power pack will the computer pop up a window saying "You tight-fisted old miser - you don't really expect me to work with that do you?"

    And is all this extra complexity, which can obviously go wrong, really needed? How comfortable will I be with all my computers now if I feel I need to check that the power supply/computer interface is still working every time I switch one of them on? It seems like the usual suspicion most people have that the first thing to die on your computer will be the hard disk is no longer true. Now your computer may decide to stop working just because you're using the wrong kind of electricity...


  • Writing ... or Just Practicing?

    Can An Arial Attack Produce High Calibri Guidance?


    If an article I read in the paper this week is correct, you need to immediately uninstall Arial, Verdana, Calibri, and Tahoma fonts from your computer; and instead use Comic Sans, Boldini, Old English, Magneto, Rage Italic, or one of those semi-indecipherable handwriting-style script fonts for all of your documents. According to experts, it will also be advantageous to turn off the spelling checker; and endeavour to include plenty of unfamiliar words and a sprinkling of tortuous grammatical constructs.

    It seems researchers funded by Princeton University have discovered that people are 14% less likely to remember things they read when they are written in a clean and easy-to-read font and use a simple grammatical style. By making material "harder to read and understand" they say you can "improve long term learning and retention." In particular, they suggest, reading anything on screen - especially on a device such as a Kindle or Sony Reader that provides a relaxing and easy to read display - makes that content instantly forgettable. In contrast, reading hand-written text, text in a non-standard font, and text that is difficult to parse and comprehend provides a more challenging experience that forces the brain to remember the content.

    There's a lot more in the article about frontal lobes and dorsal fins (or something like that) to explain the mechanics of the process. As they say in the trade, "here comes the science bit". Unfortunately it was printed in a nice clear Times Roman font using unchallenging sentence structure and grammar, so I've forgotten most of what it said. Obviously the writer didn't practice what they preached.

    But this is an interesting finding. I can't argue with the bit about stuff you read on screen being instantly forgettable. After all, I write a blog that definitely proves it - nobody I speak to can remember what I wrote about last week (though there's probably plenty of other explanations for that). However, there have been several major studies that show readers skip around and don't concentrate when reading text on the Web, often jumping from one page to another without taking in the majority of the content. It's something to do with the format of the page, the instant availability, and the fundamental nature of hyperlinked text that encourages exploration; whereas printed text on paper is a controlled, straight line, consecutive reading process.

    From my own experience with the user manual for our new all-singing, all-dancing mobile phones, I can only concur. I was getting nowhere trying to figure out how to configure all of the huge range of options and settings for mail, messaging, synchronization, contacts, and more despite having the laptop next to me with the online user manual open. Instead, I ended up printing out all 200 pages in booklet form and binding them with old bits of string into something that is nothing like a proper manual - but is ten times more useful.

    And I always find that proof-reading my own documents on screen is never as successful as when I print them out and sit down in a comfy chair, red pen in hand, to read them. Here at p&p we are actively increasing the amount of guidance content that we publish as real books so that developers and software architects can do the same (red pen optional). The additional requirements and processes required for hard-copy printed materials (such as graphic artists, indexers, additional proof readers, layout, and the nagging realization that you only have one chance to get it right) also seem to hone the material to an even finer degree.

    So what about the findings of those University boffins? Is all this effort to get the content polished to perfection and printed in beautifully laid out text actually reducing its usefulness or memorability? We go to great lengths to make our content easy to assimilate, using language and phrasing defined in our own style manuals and passing it through multiple rigorous editing processes. Would it be better if we just tossed it together, didn't bother with any editing, and then photo-copied it using a machine that's nearly run out of toner? It should, in theory, produce a more useful product that you'd remember reading - though perhaps not for the desired reasons.

    Taking an excerpt from some recent guidance I've created, let's see if it works. I wrote "You can apply styles directly within the HTML of each page, either in a style section in the head section of the page or by applying the class attribute to individual elements within the page content." However, before I went back and read through and edited it, it could well have said something like "The style section in the head section, or by decorating individual elements with the class attribute in the page, can be used to apply styles within the HTML or head of each page within the page content."

    Is the second version likely to be more memorable? I know that my editor would suspect I'd acquired a drug habit or finally gone (even more) senile if I submitted the second version. She'd soon have it polished up and looking more like the first one. And, no doubt, apply one of the standard "specially chosen to be easily readable" fonts and styles to it - making readers less likely to recall the factual content it contains five minutes after they've read it.

    But perhaps a typical example of the way that a convoluted grammar and structure makes content more memorable is with the well-known phrase taken from a mythical motor insurance claim form: "I was driving past the hole that the men were digging at over fifty miles per hour." So that could be the answer. Sentences that look right, but then cause one of those "Hmmm .... did I actually read that right?" moments.

    At the end of the article, the writer mentioned that he asked Amazon for their thoughts on the research in terms of its impact on Kindle usage, but they were "unable to comment". Perhaps he sent the request in a nice big Arial font, and the press guy at Amazon immediately forgot he'd read it...

  • Writing ... or Just Practicing?

    A Drip Under Pressure


    I read somewhere a while ago that the word "expert" comes from a combination of the two Latin words "ex" meaning "a has-been", and "spurt" meaning "a drip under pressure". I'm not sure I actually believe it, but it does seem a remarkably fortuitous match to my capabilities when it comes to the grudge matches I regularly indulge in just trying to keep my own network running. I suppose I've rambled on about my semi-competence as a network administrator enough times in the past. It's one of those areas where you think you're starting to get the knack of it, and then you realize that it's only because you haven't yet discovered all the things you don't know.

    I guess it's a bit like the famous quote from Donald Rumsfeld (the US Secretary of Defence a while back), who told a Pentagon briefing that "There are known knowns; these are things we know that we know. There are known unknowns. That is to say, there are things we know we don't know. But there are also unknown unknowns. These are things we don't know we don't know." Pretty much sums up the task of network administration I reckon. I just wish I could get a list of the things I don't know that I don't know - probably there's a Web site out there with it on, but one of the things I don't know is where to find it.

    Anyway, enough prevarication - time I dragged this post back to somewhere nearer to reality (or at least as near as I usually get). The background to all this is another round of network updates aimed at trying to achieve a more reliable - and faster - connection between the confines of my daily routine in a remote office perched precariously on the edge of the Internet, and the big wide world of high speed connectivity out there. In a shock announcement a few weeks ago, our Government took the bold step of announcing that they expect everyone in the country to have access to "high speed broadband" at a speed of at least 2Mbps by no later than 2015. I suspect they're thinking we'll catch up with countries like Japan who can get 100Mbps without really trying.

    Well, I get about 1.6Mbps on a good day over my 2Mbps ADSL connection, so waiting till 2015 for an extra 0.4Mbps seems a less than attractive proposition. And I'll still only get 512Kbps up, which is the real killer when you use VPN or need to send large documents and files. And, unless I move house to next door to the local telephone exchange, I'm not going to get anything faster over ADSL. However, our national cable company suddenly seems to have got its act together, and can now provide data connections in our area. They even offer a symmetric service, with packages from 4Mbps to 100Mbps both ways! The only hang-up is that the cheapest is around 5,000 pounds ($9,000) per year.

    But they do a "business service" that's 20Mbps down and 1Mbps up for a much more realistic rate, and so I've taken the plunge and ordered it. However, I hear regular disaster tales about cable services, so I decided to keep the ADSL line as well to provide redundancy in addition to higher speed. Linksys do a reasonably priced load-balancing and failover router (the RV042) that should do the job nicely. And it will no doubt prove to be a useful source for a forthcoming diatribe on networking for dummies.

    In the meantime, it means reshuffling the existing stuff around to make room; and to reconfigure the servers to provide separate connections for the Web server and DNS, which require static IP addresses, and the main network gateway that doesn't. That way I can take advantage of load-balancing for the internal network, while leaving the stuff that needs static addresses - and isn't, in reality, very busy - on the existing ADSL line. It all seemed very simple until I started to Visio the network diagram (is Visio a verb these days?), and discovered I don't have enough network cards in the servers...

    So here I can sing the praises of Hyper-V. Previously I used one NIC for the host server O/S, one for the virtual internal switch, and one for the virtual external switch (as described in Hyper-Ventilation, Act III). Now I need two separate external connections, so I reconfigured the Hyper-V network to have one virtual switch connected to the internal network NIC, and two external switches connected to the other two NICs. I took the precaution of removing the VMs from Hyper-V manager first, and - amazingly - when I re-imported them, they retained all of their IPv4 settings! OK, so they did forget all their IPv6 settings, but that's not a huge job to reconfigure. And I even moved one of the VMs from one machine to another using the Export and Import tools, and it worked a treat. I have to say that going Hyper-V certainly seems like it was a good choice.

    However, I came across one issue that first raised its ugly head a while ago. Hyper-V tries to be clever and ensure that you always have a working connection in the base (host) O/S. This means that, if you have more than one NIC in the machine, you end up with multiple active connections in the base O/S - and you have to disable the ones you don't need. That's OK, but I noticed that Service Pack 2 helpfully re-enabled all of the connections without telling me, and I only found out when I got error messages saying there were duplicate machine names on the network. Now I only have one connection to the internal network, so that error will never arise. If the connections in the base O/S get enabled again by some update or other factor, I'll have active connections between the external network and the base O/S - and no duplicate name error to warn me that it's happened.

    One saving grace is that the duplicated connections have no fixed IP address or DNS entry, and rely on DHCP, so they won't actually work because the external router doesn't support DHCP. But if they were connected directly to an ISP, that ISP's system could provide one quite happily I guess. Would you know about it? I asked some Hyper-V experts whether I should uninstall the duplicated connections in the base O/S, but they advised against it. It seems this is resolved in Server 2008 R2, but you probably want to keep your eye on your base O/S connections until you upgrade.

    And Server 2008 isn't the only thing that's waiting for the software to catch up. As part of the upgrades, I replaced the somewhat limited (and nearly full) Buffalo NAS drive with a gleaming new NetGear 4TB X-RAID package to try and keep my backups (including the server VMs) safe. Guess what? Just like the Buffalo drive, it can't talk to Server 2008 Active Directory. The updated system software is in beta and I'm loath to try that with my precious data, so I'm back to using the drive admin account in my backup batch files. As they say in the trade (though I'm not sure which trade): "Rearrange these words into a well-known phrase or saying: Your Guys Act Together Get".

    Finally (yes, there is an end to this post in sight), I discovered an interesting feature of the built-in Windows standard firewall. After moving the external DNS server from the gateway server to the Web server that lives in the perimeter network, I enabled the "DNS Service" filter in the standard Windows firewall to allow access for lookups from the Internet. As usual, after any network updates, I ran a port scan to check I hadn't done anything stupid. So it was a bit of a shock to discover that port 135 (DCOM / RPC) was open. It seems it's to allow external management of the DNS server using the MMC snap-in, and only allows connections to the DNS server executable, but I'm pretty sure that's not really something I want to allow anyone and his dog to do from their spare bedroom. Yes, I know that my external router will block port 135, and they'd need to be able to log on anyway, but it seems like I should have been told this would happen.

    The answer is to configure Windows Firewall With Advanced Security (on the Administrative Tools menu). It looks a bit scary, especially as - at first glance - it seems like there are tons of enabled "Allow" rules that you definitely don't want on a public machine. But if you click the Monitoring node in the tree view you should see that the Public profile is active, and so most of the enabled rules (which specify the Domain or Private profile) are not active. Click the Firewall node to see which rules actually are active. To see (and change) which profile a connection uses, open Control Panel | Network and Sharing Center and click Customize for the connection. In my case, I disabled the DNS Service filter in the standard firewall dialog and enabled the two Public rules in WFWAS for DNS Incoming - one each for TCP and UDP on port 53. A subsequent port scan found port 135 firmly closed. Back in the standard Windows firewall dialog, the DNS Service entry now displays a "shaded tick" status.

    All this shows how it's a really good idea to always check every update to your network - you can use a site such as Shields Up at Gibson Research for public facing machines and routers. I actually got a "perfect TruStealth rating" from them and from a couple of other sites on one connection, so I feel like a real administrator now. Though I'm sure there'll be some new unknown unknowns to discover tomorrow...
