Random Disconnected Diatribes of a p&p Documentation Engineer
I discovered this week why builders always have a tube of Superglue in their pockets, and how daft our method of heating houses here in the UK is. It's all connected with the melee of activity that's just starting to take over our lives at chez Derbyshire as we finally get round to all of the modernization tasks that we've been putting off for the last few years.
I assume that builders don't generally glue many things together when building a house - or at least not using Superglue. More likely it's that "No More Nails" adhesive that sticks anything to anything, or just big nails and screws. However, the source of my alternative adhesive information was - rather surprisingly - a nurse at the Accident and Emergency department of our local hospital.
While doing the annual autumn tidying of our back garden I managed to tumble over and poke a large hole in my hand on a nearby fence post. As I'm typically accident prone, this wasn't an unexpected event, but this time the usual remedy of antiseptic and a big plaster dressing didn’t stop the bleeding so I reluctantly decided on a trip to A&E.
Being a Sunday I expected to be waiting for hours. However, within ten minutes I was explaining to the nurse what I'd done, and trying to look brave. Last time I did something similar, a great many years ago and involving a very sharp Stanley knife, I ended up with several stitches in my hand. However, this time she simply sprayed it with some magic aerosol and then lathered it with Superglue. Not what I expected.
But, as she patiently explained, they use this method now for almost all lacerations and surgery. It's quicker, easier, keeps the wound clean and dry, heals more quickly, and leaves less of a scar than stitches. She told me that builders and other tradesmen often use the same technique themselves. Obviously I'll need to buy a couple of tubes for future use.
Meanwhile, back at the hive of building activity and just as the decorator has started painting the stairs, I discover that the central heating isn't working. For the third time in twelve years the motorized valve that controls the water flow to the radiators has broken. Another expensive tradesman visit to fix that, including all the palaver of draining the system, refilling it, and then patiently bleeding it to clear the airlocks.
Of course, two of the radiators are in the wrong place for the new kitchen and bathroom, so they need to be moved. Two days later I've got a different plumber here draining the system again, poking pipes into hollow walls, setting off the smoke alarms while soldering them together, randomly swearing, and then refilling and bleeding the system again.
But what on earth are we doing with all this pumping hot water around through big ugly lumps of metal screwed to the walls anyway? Isn’t it time we adopted the U.S. approach of blowing hot air to where it's needed from a furnace in the basement? When you see the mass of wires, pipes, valves, and other stuff needed for our traditional central heating systems you have to wonder.
Mind you, on top of all the expense, the worst thing is the lump on my arm the size of a tennis ball where the nurse decided I needed a Tetanus shot...
OK, OK, so one month I'm complaining that our little green paradise island seems to have drifted north into the Arctic, and now I'm grumbling about the heat. Obviously global warming is more than just a fad, as we've been subjected here in England to temperatures hovering around 90 degrees in real money for the last week or so. Other than the gruesome sight of pale-skinned Englishmen in shorts (me included), it's having some rather dramatic effects on my technology installations. I'm becoming seriously concerned that my hard disks will turn into floppy ones, and my batteries will just chicken out in the heat.
Oh dear, bad puns and I only just got going. But it does seem like the newer the technology, the less capable it is of operating in temperatures that many places in the world would call normal. There are plenty of countries that get regular spells of weather well into the 90s, as I discovered when we went to a wedding in Cyprus a few years back. How on earth do they cope? I've got extra fans running 24/7 in the computer cabinet and in the office trying to keep everything going. I'm probably using 95% of my not inconsiderable weekly electricity consumption to keep kit that only uses 1% of it to actually do stuff from evaporating (the other 4% is the TV, obviously).
Maybe the trouble is that, here in England where we have a "temperate" climate, we're not really up to speed with modern technology such as air conditioning. Yes, they seem to put it in every car these days, but I only know one person who has it in their house, and that's in the conservatory where - on a hot day - it battles vainly to get the temperature below 80 degrees. I briefly considered running an extension lead out to my car and sitting in it to work, but that doesn't help with the servers and power supplies.
I've already had to shut down the NAS because it's sending me an email every five minutes saying it's getting a bit too warm. And I've shut down the backup domain controller to try and cut down the heat generation (though it's supposed to be one of those environmentally friendly boxes that will run on just a sniff of electricity). And the battery in the UPS in the office did its usual summer trick of bulging out at the sides, and throwing in the towel as soon as I powered up a desktop and a couple of decent sized monitors. It's no wonder UPSs are so cheap to buy. They're like razors (or inkjet printers) - you end up spending ten times more than a new one would have cost on replacement batteries. Even though I cut a hole in the side and nailed a large fan onto mine.
Probably I'm going to have to bite the bullet and buy a couple of those portable air conditioning units so my high-tech kit can stay cool while we all melt here in the sun. In fact, my wife reckons I've caught swine 'flu because she finds me sitting here at the keyboard sweating like a pig when she sails in from her nice cool workplace in the evening. At least the heat has killed most things in the garden (including the lawn) so that's one job I've escaped from.
By the way, in case you didn't realize, the title this week comes from a rather old BBC TV program. Any similarity between the actor who played Gunner 'Lofty' and this author is vigorously denied.
I seem to have spent a large proportion of my time this month worrying about health. OK, so a week of that was spent in the US where, every time I turned on the TV, it scared me to death to see all the adverts for drugs to cure the incredible range of illnesses I suppose I should be suffering from. In fact, at one stage, I started making a list of all the amazing drugs I'm supposed to "ask my doctor about", but I figured if I was that ill I'd probably never have time to take them all. They even passed an "assisted suicide" law while I was there, and I can see why they might need it if everyone is so ill all of the time.
And, of course, it rained and hailed most of the week as well. No surprise there. They even said I might be lucky enough to see some snow. Maybe it's all the drugs they've been asking their doctor about that makes snow seem like a fortuitous event. Still, I did get to see the US presidential election while I was there. Or, rather, I got to see the last week of the two year process. It seems like they got 80% turnout. Obviously, unlike Britain where we're lucky to get 40% turnout, they must think that voting will make a difference. Here in the People's Republic of Europe, we're all well aware that, if voting actually achieved anything, they'd make it illegal. I wonder if the result still stands if nobody actually turns up to vote?
Mind you, it does seem surreal in so many ways. You have to watch four different news channels if you want to get a balanced opinion. And one of the morning newsreaders seemed to have a quite noticeable lisp so that I kept hearing about the progreth of the Republicanth and the Democratth. A bit like reading a Terry Pratchett novel. Or maybe it was just the rubbish TV in the hotel. And they didn't have enough voting machines so in some places people were queuing for four hours in the rain to cast their votes. Perhaps it's because there are around 30 questions on the ballot paper where you get to choose the president, some assorted senators and governors, a selection of judges, and decide on a couple of dozen laws you'd like to see passed. Obviously a wonderful example of democracy at work.
Anyway, returning to the original topic of this week's vague expedition into the blogosphere, my concerns over health weren't actually connected to my own metabolic shortcomings. It was all to do with the Designed for Operations project that I've been wandering in and out of for a number of years. The organizers of the 2008 patterns & practices Summit had decided that I was to do a session about health modeling and building manageable applications. In 45 minutes I had to explain to the attendees what health models are, why they are important, and how you use them. Oh, and tell them about Windows Eventing 6.0 and configurable instrumentation helpers while you're at it. And put some jokes in because it's the last session of the day. And make sure you finish early 'cos you'll get a better appraisal. You can see that life is a breeze here at p&p...
So what about health modeling? Do you do it? I've done this kind of session three or four times so far and never managed to get a single person to admit that they do. I'm not convinced that my wild ramblings, furious arm waving, and shower of psychedelically colored PowerPoint graphics (and yes, Dave, they do have pink arrows) ever achieve anything other than confirm to the audience that some strange English guy with a funny accent is about to start juggling, and then probably fall off the stage. Mind you, they were all still there at the end, and only one person fell asleep. I suppose as there was no other session to go to, they had no choice.
What's interesting is trying to persuade people that it's not "all about exception handling". I have one slide that says "I don't care about divide by zero errors; I just want to know about the state changes of each entity". Perhaps it's no wonder that the developers in the audience thought they had been confronted by some insane evangelist of a long-lost technical religion. The previous session presented by some very clever people from p&p talked about looking for errors in code as being "wastage", and there I was on next telling people all about how they should be collecting, monitoring, and displaying errors.
But surely making applications more manageable, reporting health information, and publishing knowledge that helps administrators to verify, fix, and validate operations post deployment is the key to minimizing TCO? An application that tells you when it's likely to fail, tells you what went wrong when it does fail, and provides information to help you fix it, has got to be cheaper and easier to maintain. One thing that came out in the questions afterwards was that, in large corporations, many developers never see the architect, and have no idea what network administrators and operators actually do other than sit round playing cards all day. Unless they all talk to each other, we'll probably never see much progress.
At least they did seem to warm to the topic a little when I showed them the slide with a T-shirt that carried that well-worn slogan "I don't care if it works on your machine; we're not shipping your machine!" After I rambled on a bit about deployment issues and manageable instrumentation, and how you can allow apps to work in different trust levels and how you can expose extra debug information from the instrumentation, they seemed to perk up a bit. I suppose if I achieved nothing other than making them consider using configurable instrumentation helpers, it was all worthwhile.
I even managed to squeeze in a plug for the Unity dependency injection stuff, thus gaining a few brownie points from the Enterprise Library team. In fact, they were so pleased they gave me a limited edition T-shirt. So my 10,000 mile round trip to Redmond wasn't entirely wasted after all. And, even better, if all goes to plan I'll be sitting on a beach in Madeira drinking cold beer while you've just wasted a whole coffee break reading this...
Technical writers, like wot I am, tend to be relatively docile and unflappable creatures. It comes with the territory (and, often, with age). Especially when most of your life is spent trying to document something that the dev team are still building, and particularly so if that dev team is resolutely following the agile path. You know that the features of the software and its capabilities will morph into something completely different just after you get the docs on that bit finished; and they'll discover the bugs, gotchas, and necessary workarounds only after you've sent your finely crafted text to edit.
And you get used to the regular cycle of "What on earth have they added now?", "What is this supposed to do?", "Can anyone tell me how it works?", and "OK, I see, that must be it". Usually it involves several hours searching through source code, playing with examples, trying to understand the unit tests, and - as a last resort - phoning a friend. But, in most cases, you figure out what's going on, take a wild stab at how customers will use the feature, and then describe it in sufficient detail and in an organized way so that they can understand it.
Of course, the proper way to document it is to look for the scenario or use case, and describe this using one of the well known formats such as "Scenario", "Problem", "Solution", and "Example". You might even go the whole hog and include "Forces", "Liabilities", and "Considerations" - much like you see software design patterns described. Though, to me at least, this often seems like overkill. I guess we've all seen documents that describe the scenario as "The user wants to retrieve data from a database that has poor response times", the problem as "The database has poor response times and the user wants to retrieve data", and the forces include "A database that has poor response times may impact operation of the application". And then the solution says "Use asynchronous access, stupid!" (though the "stupid" bit is not usually included, just implied).
Worse still, writing these types of documents takes a lot of effort, especially when you try not to make the reader sound stupid, and may even tend to put people off when compared to a more straightforward style that just explains the reason for a feature, and how to use it. Here at p&p, we tend to document these as "Key Scenarios" - with just a short description of the problem and the solution, followed by an example.
However, in an attempt to provide more approachable documentation that is easier to read and comprehend, I've tended to wander away from the strict "Key Scenarios" approach. Instead, I've favored breaking down software into features and sub-features, and describing each one starting with an overview of what it does, followed by a summary of the capabilities and links to more detailed topics for each part that describe how to use it. This approach also has the advantage that it makes coping with the constant churn of agile-developed software much easier. The overview usually doesn't change (much); you can modify the structure, order, and contents of each subtopic; and you can slot new subtopics in without breaking the whole feature section. Only when there are extreme changes in the software (which are not unknown, of course) do you need to rebuild a whole section.
OK, so the whole idea of agile is that you discover all this through constant contact with the other members of the team, and have easy access to ask questions and get explanations. But it often isn't that simple. For example, during testing of the way that Unity resolves objects, I found I was getting an exception if there was no named mapping in the container for the abstract class or interface I was trying to resolve. The answer was "Yes, Unity will throw an exception when there is no mapping for the type". But, as you probably guessed, this isn't a limitation of Unity (as I, employing my usual ignorance and naivety, assumed it was). Unity will try its best to create an instance of any type you ask for, whether there's a mapping or not. But it throws an exception because, with no mapping, it ends up trying to instantiate an interface or abstract class. Needless to say, furious re-editing of the relevant doc sections ensued.
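Unity is C# of course, but the behavior I tripped over is easy to reproduce in any language. Here's a rough Python analogue (the class names and the toy container are entirely my own invention, purely for illustration, not Unity's actual code): it's the attempt to construct the abstract type itself that fails, not the lookup of the mapping.

```python
from abc import ABC, abstractmethod

class ILogger(ABC):
    @abstractmethod
    def log(self, message): ...

class ConsoleLogger(ILogger):
    def log(self, message):
        print(message)

# A toy "container": resolve returns the registered mapping if there
# is one, otherwise it optimistically tries to construct whatever
# type you asked for - just as Unity does.
class ToyContainer:
    def __init__(self):
        self._mappings = {}

    def register(self, abstraction, implementation):
        self._mappings[abstraction] = implementation

    def resolve(self, requested_type):
        concrete = self._mappings.get(requested_type, requested_type)
        return concrete()  # fails here if concrete is still abstract

container = ToyContainer()
try:
    container.resolve(ILogger)  # no mapping: tries ILogger() itself
except TypeError as e:
    print("resolution failed:", e)

container.register(ILogger, ConsoleLogger)
logger = container.resolve(ILogger)  # now returns a ConsoleLogger
```

The point being that the container isn't refusing the unmapped type; it's cheerfully trying to build it and discovering, too late, that you can't instantiate an abstraction.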
Then, just to reinforce my growing concern that maybe my less rigorous approach to feature documentation is not working as well as planned, I came across a set of features that are so complex and unintuitive, and viciously intertwined with each other, that understanding the nuances of the way they work seemed almost impossible. Even the test team, with their appropriate allocation of really clever people, struggled to grasp it all and explain it to mere mortals like me. Yes, I can vaguely appreciate what it does, and even how some of it works. But what is it for?
This seems to be the root of the issue. There are no documented use cases that describe when or why customers would use this set of features, or what they are designed to achieve. Just a semi-vague one-liner that describes what the feature is. Or, to be more accurate, several one-liners that describe bits of what the feature is. No explanation of why it's there, why it's useful, and why users might need it. Or part of it. Or some unexplained combination of it. Of course, it is impeccably implemented and works fine (I'm told), and is a valuable addition to the software that a great many users of previous versions asked for.
But the documentation dilemma also reveals, like some gruesome mythical beast rising from the mire, just how frighteningly easy it is to innocently stray from the well worn path of "proper" documentation techniques. It's only obvious when something like this (the problem not the mythical beast) comes back to bite you. Faced with a set of features that almost seem to defy explanation in terms of "what they are" and "what they do", it does seem like the key scenario approach, with its problem, solution, and example format, really is the way to go.
If the first step of documenting a feature of a product is to discover and list the key scenarios, you have a platform for understanding the way that the software is designed to be used, and what it was designed to do. If you can't even find the key scenarios before you start writing, never mind accurately document them, the chances are you are never going to produce guidance that properly or accurately describes the usage.
And maybe, one day, I'll even apply "proper" documentation design patterns to writing blog posts...
It's customary here in England to castigate British Rail for their outlandish non-service excuses. As far back as I can remember we've had "leaves on the line". Then, after they spent several million pounds on special cleaning trains, it morphed into "the wrong kind of leaves on the line". And of course, every winter when the entire British transport system grinds to a halt they blame "the wrong kind of snow." But this week I've been introduced to a new one: "the wrong kind of electricity".
During the summer months I ply my lonely documentation engineering trade using a laptop and enjoying the almost-outdoorsness of the conservatory; soaking up the joys of summer, the birds singing in the trees, the fish splashing around in the pond, and a variety of country wildlife passing by. So when I noticed one of my regular computer suppliers was selling off Windows 7 laptops, no doubt to be ready for the imminent arrival of Windows 8, I thought it would be a good idea to pick up a decent 17" one to replace my aging 14" Dell Latitude. With age gradually degrading my eyesight I reckon I'll soon need all the screen space I can get.
So when my nice new Inspiron arrived I powered it up, worked through the "configuring your computer" wizard, removed all the junk that they insist on installing, and started to list all the stuff I'll need to install. Until I noticed that the battery wasn't charging. So I fiddled with the power settings, dug around in the BIOS, tried a different power pack, and did all the usual pointless things like rebooting several times. No luck.
So I dive into t'Internet to see if there's a known fix. Yes, according to several posts on the manufacturer's site and elsewhere there is. You replace the power supply board inside the computer at a cost of 35 pounds, or – if it's still under guarantee – send it back and they replace the motherboard. Mind you, there were several other suggestions, such as upgrading the BIOS and banging the power supply against a wall, but as I'd only had the machine for two hours none of these seemed to be an ideal solution. So I did the obvious – pack it up and send it back to the supplier as DoA (dead on arrival).
Mind you, when I phoned the supplier and explained the problem the nice lady said that it would be OK if I kept it plugged into the mains socket because then it doesn't need the battery to be charged up. True, but as I pointed out to her, it's supposed to be a portable computer. I'll need a long piece of wire if I decide to use it the next time I'm travelling somewhere by train.
And do I want a replacement? How common is the failure? To have it happen on a brand new machine is worrying. Yet, strangely, only a few weeks ago, when I powered up my old Latitude, it displayed a message saying it didn't recognize my power pack, but then decided it did. Yet after wandering around the house I found five Dell laptop power packs and they all seem to be much the same. All 19.6 Volts, with either 3.4 Amps or a higher current rating. They all have the same two-pole plug, with the positive in the center. The only difference seems to be that the newer ones have 25 certificates of conformance on the label, while the older ones have around 15 (perhaps that's why they seem to get bigger each time - to make room for the larger label).
So how does the computer know which power pack I've plugged in? When I looked in the BIOS it said that the power pack was "65 Watts". Is there some high frequency modulation on the output that the computer can decipher? Or does it do the old electrician's trick of flicking the wires together to see if there's sparks, and measure the effect? Do all computers these days do the same thing? If I buy an unbranded replacement power pack will the computer pop up a window saying "You tight-fisted old miser - you don't really expect me to work with that do you?"
And is all this extra complexity, which can obviously go wrong, really needed? How comfortable will I be with all my computers now if I feel I need to check that the power supply/computer interface is still working every time I switch one of them on? It seems like the usual suspicion most people have that the first thing to die on your computer will be the hard disk is no longer true. Now your computer may decide to stop working just because you're using the wrong kind of electricity...
As in many households, several regular and occasional computer users take advantage of my connection to the outside world. I use ISA Server 2006 running as a virtual Hyper-V instance for firewalling and connection management (I'm not brave enough to upgrade to Forefront yet), and all incoming ports are firmly closed. But these days the risk of picking up some nasty infection is just as great from mistaken actions by users inside the network as from the proliferation of malware distributors outside.
Even when running under limited-permission Windows accounts and with reasonably strict security settings on the browsers, there is a chance that less informed users may allow a gremlin in. So I decided some while ago to implement additional security by blocking all access to known malware sites. I know that the web browser does this to some extent, but I figured that some additional layer of protection - plus logging and alerts in ISA - would be a useful precaution. So far it seems to have worked well, with thankfully only occasional warnings that a request was blocked (and mostly to an ad server site).
The problem is: where do you get a list of malware sites? After some searching, I found the Malware Domain Blocklist site. They publish malware site lists in a range of formats aimed at DNS server management and use in your hosts file. However, they also provide a simple text list of domains called JustDomains.txt that is easy to use in a proxy server or ISA. Blocking all web requests for the listed domains provides some additional protection against malware ingress, and against the effects of any malware that might otherwise inadvertently find its way onto a machine; and you will see the blocked requests in your log files.
They don't charge for the malware domain lists, but if you decide to use them please do as I did and make a reasonable donation. Also be aware that malware that connects using an IP address instead of a domain name will not be blocked when you use just domain name lists.
To set it up in ISA 2006, you need the domain list file to be in the appropriate ISA-specific format. It's not available in this format, but a simple utility will convert it. You can download a free command line utility I threw together (the Visual Studio 2010 source project is included so you can check the code and recompile it yourself if you wish). It takes the text file containing the list of malware domain names and generates the required XML import file for ISA 2006 using a template. There's a sample supplied but you'll need to export your own configuration from the target node and edit that to create a suitable template for your system. You can also use a template to generate a file in any other format you might need.
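I won't reproduce the utility here, but the core of any such converter is only a few lines. Here's a minimal Python sketch of the idea; note that the file names, the `{{DOMAINS}}` placeholder token, and the entry format are all my own illustrative inventions - the real element syntax must come from the configuration you export from your own ISA node.

```python
# Convert a plain-text list of domains into an ISA import file by
# injecting one entry per domain into a template exported from ISA.
# ENTRY_FORMAT is purely illustrative; take the real element syntax
# from your own exported Domain Name Set configuration.
ENTRY_FORMAT = "<dns-name>{domain}</dns-name>"
PLACEHOLDER = "{{DOMAINS}}"   # marker you add to your template by hand

def build_import_file(domains_path, template_path, output_path):
    # Read the domain list, skipping blank lines.
    with open(domains_path) as f:
        domains = [line.strip() for line in f if line.strip()]

    # Generate one entry per domain.
    entries = "\n".join(ENTRY_FORMAT.format(domain=d) for d in domains)

    # Inject the entries into the exported template.
    with open(template_path) as f:
        template = f.read()

    with open(output_path, "w") as f:
        f.write(template.replace(PLACEHOLDER, entries))
```

The same template-and-placeholder approach is what lets the utility generate a file in any other format you might need.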
To configure ISA open the Toolbox list, find the Domain Name Sets node, right-click, and select New Domain Name Set. Call it something like "Malware Domains". Click Add and add a dummy temporary domain name to it, then click OK. The dummy domain will be removed when you import the list of actual domain names. Then right-click on your new Malware Domains node, click Export Selected, and save the file as your template for this node. Edit it to insert the placeholders the utility requires to inject the domain names into it as described in the readme file and sample template provided.
After you generate your import file, right-click on your Malware Domains node, click Import to Selected, and locate the import file you just created from the list of domain names. Click Next, specify not to import server-specific information, and then click Finish. Open your Malware Domain set from the Toolbox and you should see the list of several thousand domain names.
Now you can configure a firewall rule for the new domain name set. Right-click the Firewall Policy node in the main ISA tree view and click New Rule. Call it something recognizable such as "Malware Domains". In the Action tab select Deny and turn on the Log requests matching this rule option. In the Protocols tab, select All outbound traffic. In the From tab, click Add and add all of your local and internal networks. In the To tab click Add and add your Malware Domains domain name set. In the Content Types tab, select All content types. In the Users tab select All users, and in the Schedule tab select Always. Then click OK, click Apply in the main ISA window, and move the rule to the top of the list of rules.
You can test your new rule by temporarily adding a dummy domain to the Domain Name Set list and trying to navigate to it. You should see the ISA server page indicating that the domain is blocked.
If you wish, you can create a list of IP addresses of malware domains and add this set to your blocking rule as well so that malware requests that use an IP address instead of a domain name are also blocked. The utility can resolve each of the domain names in the input list and create a file suitable for importing into a Computer Set in ISA 2006. The process for creating the Computer Set and the template is the same as for the Domain Name Set, except you need to inject the domain name and IP address of each item into your import file. Again, a sample template that demonstrates how is included, but you must create your own version as described above.
Be aware that some domains may resolve to internal or loopback addresses, which may affect operation of your network if blocked. The utility attempts to recognize these and remove them from the resolved IP address list, but use this feature with care and check the resolved IP addresses before applying a blocking rule.
Another issue is the time it takes to perform resolution of every domain name, and investigations undertaken here suggest that only about one third of them actually have a mapped IP address. You'll need to decide if it's worth the effort, but you can choose to have the utility cache resolved IP addresses to save time and bandwidth resolving them all again (though this can result in stale entries). If you do create a Computer Set, you simply add it to the list in the To tab of your blocking rule along with your Domain Name Set. Of course, you need to regularly update the lists in ISA, but this just involves downloading the new list, creating the import file(s), and importing them into your existing Domain Name Set and Computer Set nodes in ISA.
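The resolve-and-filter step the utility performs can also be sketched in a few lines of Python, using the standard socket and ipaddress modules. This is my own illustration of the approach, not the utility's actual code:

```python
import socket
import ipaddress

def resolve_domains(domains):
    """Resolve each domain to an IPv4 address, skipping any that have
    no mapped address and any that resolve to loopback or private
    (internal) ranges, since blocking those could break your network."""
    resolved = {}
    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            continue  # no mapped address - a common case for these lists
        addr = ipaddress.ip_address(ip)
        if addr.is_loopback or addr.is_private:
            continue  # internal/loopback result: don't block it
        resolved[domain] = ip
    return resolved
```

A caching version would simply persist the returned dictionary between runs and only re-resolve domains that aren't already in it - with the stale-entry trade-off mentioned above.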
If an article I read in the paper this week is correct, you need to immediately uninstall Arial, Verdana, Calibri, and Tahoma fonts from your computer; and instead use Comic Sans, Boldini, Old English, Magneto, Rage Italic, or one of those semi-indecipherable handwriting-style script fonts for all of your documents. According to experts, it will also be advantageous to turn off the spelling checker; and endeavour to include plenty of unfamiliar words and a sprinkling of tortuous grammatical constructs.
It seems researchers funded by Princeton University have discovered that people are 14% less likely to remember things they read when they are written in a clean and easy-to-read font and use a simple grammatical style. By making material "harder to read and understand" they say you can "improve long term learning and retention." In particular, they suggest, reading anything on screen - especially on a device such as a Kindle or Sony Reader that provides a relaxing and easy to read display - makes that content instantly forgettable. In contrast, reading hand-written text, text in a non-standard font, and text that is difficult to parse and comprehend provides a more challenging experience that forces the brain to remember the content.
There's a lot more in the article about frontal lobes and dorsal fins (or something like that) to explain the mechanics of the process. As they say in the trade, "here comes the science bit". Unfortunately it was printed in a nice clear Times Roman font using unchallenging sentence structure and grammar, so I've forgotten most of what it said. Obviously the writer didn't practice what they preached.
But this is an interesting finding. I can't argue with the bit about stuff you read on screen being instantly forgettable. After all, I write a blog that definitely proves it - nobody I speak to can remember what I wrote about last week (though there's probably plenty of other explanations for that). However, there have been several major studies that show readers skip around and don't concentrate when reading text on the Web, often jumping from one page to another without taking in the majority of the content. It's something to do with the format of the page, the instant availability, and the fundamental nature of hyperlinked text that encourages exploration; whereas printed text on paper is a controlled, straight line, consecutive reading process.
From my own experience with the user manual for our new all-singing, all-dancing mobile phones, I can only concur. I was getting nowhere trying to figure out how to configure all of the huge range of options and settings for mail, messaging, synchronization, contacts, and more despite having the laptop next to me with the online user manual open. Instead, I ended up printing out all 200 pages in booklet form and binding them with old bits of string into something that is nothing like a proper manual - but is ten times more useful.
And I always find that proof-reading my own documents on screen is never as successful as when I print them out and sit down in a comfy chair, red pen in hand, to read them. Here at p&p we are actively increasing the amount of guidance content that we publish as real books so that developers and software architects can do the same (red pen optional). The additional requirements and processes involved in producing hard-copy printed materials (such as graphic artists, indexers, additional proof readers, layout, and the nagging realization that you only have one chance to get it right) also seem to hone the material to an even finer degree.
So what about the findings of those University boffins? Is all this effort to get the content polished to perfection and printed in beautifully laid out text actually reducing its usefulness or memorability? We go to great lengths to make our content easy to assimilate, using language and phrasing defined in our own style manuals and passing it through multiple rigorous editing processes. Would it be better if we just tossed it together, didn't bother with any editing, and then photo-copied it using a machine that's nearly run out of toner? It should, in theory, produce a more useful product that you'd remember reading - though perhaps not for the desired reasons.
Taking an excerpt from some recent guidance I've created, let's see if it works. I wrote "You can apply styles directly within the HTML of each page, either in a style section in the head section of the page or by applying the class attribute to individual elements within the page content." However, before I went back and read through and edited it, it could well have said something like "The style section in the head section, or by decorating individual elements with the class attribute in the page, can be used to apply styles within the HTML or head of each page within the page content."
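For the record, here's a minimal sketch of the two approaches described in the first (readable) version - the class name and the styling itself are just made-up examples:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Option 1: a style section in the head section of the page -->
  <style>
    .warning { color: red; font-weight: bold; }
  </style>
</head>
<body>
  <!-- Option 2: apply the class attribute to individual elements -->
  <p class="warning">This paragraph picks up the warning style.</p>
</body>
</html>
```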
Is the second version likely to be more memorable? I know that my editor would suspect I'd acquired a drug habit or finally gone (even more) senile if I submitted the second version. She'd soon have it polished up and looking more like the first one. And, no doubt, apply one of the standard "specially chosen to be easily readable" fonts and styles to it - making readers less likely to recall the factual content it contains five minutes after they've read it.
But perhaps a typical example of the way convoluted grammar and structure make content more memorable is the well-known phrase taken from a mythical motor insurance claim form: "I was driving past the hole that the men were digging at over fifty miles per hour." So that could be the answer: sentences that look right, but then cause one of those "Hmmm... did I actually read that right?" moments.
At the end of the article, the writer mentioned that he asked Amazon for their thoughts on the research in terms of its impact on Kindle usage, but they were "unable to comment". Perhaps he sent the request in a nice big Arial font, and the press guy at Amazon immediately forgot he'd read it...
After approximately two weeks of intermittent network upgrades, I seem to still have a working network. I guess at least that's something to be thankful for. But it still hasn't fulfilled the original plan. And much hyper-ventilation has occurred during the process, particularly when watching those little green caterpillars crawl across the endless "Please wait..." dialogs, and wondering what the next error dialog will say...
Scene I: "Virtual Notworks"
One of the hardest parts of the configuration process for Hyper-V (at least for me) seems to be understanding virtual networking, and applying appropriate network settings. Despite reading up on it beforehand and thinking I grasped how it worked, I encountered endless error messages about multiple gateways and duplicated connections while trying to configure the network connections for the VMs and the host machine. It turns out that I was probably being as dense as usual in that I missed the obvious point about what the virtual switches that Hyper-V creates actually do.
Stepping back, the scenario is a machine with three physical network cards. I want to use one to connect the host (physical) server O/S to the internal network, one to connect specific virtual machines to the internal network, and one to connect specific virtual machines to the outside world. One of the virtual machines, hosting the firewall and mail server, will connect to both.
So step one is to use the Virtual Network Manager to create two virtual switches and connect these to the two physical NICs. When you look in the Manage Network Connections dialog on the host, you see - as expected - the three physical connections and two additional connections. What's confusing is that they are all "Connections". It's only when you examine the properties of each one that you realize two of them are bound to the new Microsoft Virtual Switch protocol and nothing else. At this point, it's a good idea to rename these connections so the name contains the word "Switch" to help you easily identify them.
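On my setup, that renaming is just a couple of netsh one-liners rather than a trip through the right-click Rename option (the "Local Area Connection" names here are examples - yours will differ):

```
netsh interface set interface name="Local Area Connection 4" newname="Internal Virtual Switch"
netsh interface set interface name="Local Area Connection 5" newname="External Virtual Switch"
```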
So, now you can use the Hyper-V Settings dialog for each virtual machine to add the appropriate network connection(s) to that VM. What this actually does is create a connection within the virtual machine and "plug it into" the virtual switch you specify. You can, of course, plug the connections within more than one VM into each virtual switch. It really does help to think of the "switch" connections as "real" network switches like the 4- and 8-port ones you can buy from your local computer store. Ben Armstrong has some nice pictures in his blog post that illustrate this.
What's confusing in almost every post and document I've read is the use of the word "host" or "parent" to refer to the physical machine and its O/S. It implies that the VMs somehow run "inside" the O/S that is running directly on the hardware. I've started to refer to it in my head as the "base machine" and "base O/S" instead. While the base O/S and the Hyper-V runtime implement the virtual switches, these switches are not "inside" the base O/S. The Virtual Network Manager effectively moves them out of the base O/S. So the confusing part (at least for me) was what to do with the two new "Connections" that are visible in the Manage Network Connections dialog of the base O/S. I know that I must configure the non-virtual connection that the base O/S will use to talk to the network. And I know that I have to configure, within each VM, the connections that I add to these VMs using the Hyper-V Settings dialog.
Unable to find any guidance on the matter, I assumed that the two "Connections" visible in the base O/S were being used to link the physical NICs to the virtual switches, hence the quandary over how to configure them. As it was, I followed the "know nowt, do nothing" approach and left them set to the default of "Obtain an IP address automatically". It was only after a day or so I noticed that file copy speed was erratic, and that the physical servers each had two different IP addresses in the domain DNS list.
Probably you are already hopping up and down, and waving your arms to try and attract my attention, with the answer. My error is obvious now, but wasn't at the time. What the Virtual Network Manager does is steal the physical NIC and plug it into a virtual switch. However, this would cause a problem if the machine only had one physical NIC, so it tries to be helpful by automatically creating a new connection in the base O/S for each virtual switch it creates, and then plugs these new connections into the appropriate virtual switch. This means that the base O/S still has access to the physical NIC.
However, this also means that, on a multi-NIC machine, you can easily get duplicate connections in the base O/S. For example, in my case I already have a connection in the base O/S that's nailed to one of the physical NICs, and that's all I need. But when I dig a bit of CAT6 out of the junk box and plug one of the other physical NICs in the machine into the network, the virtual switch links it to one of the un-configured "Connections" in the base O/S. This means I've got two connections from the base O/S to the network for the same machine, but with different IP addresses.
If you managed to follow that rambling description, you'll be pleased to know that it finally dawned on me what was going on, and I confirmed it when I finally came across this advice in the last comment to a long blog post on the subject: "...if you have multiple physical NICs, disable the duplicated connections in the base O/S that the Virtual Network Manager creates". In other words, in the Manage Network Connections list in the base O/S, unplug all the "Connections" (not the "Switches") that Hyper-V so helpfully created (and, coincidentally, you don't know what to do with). Unless, of course, you need the base O/S to talk to more than one network, but that probably negates the whole point of having a vanilla and minimum base O/S install that runs multiple VMs containing all the complicated stuff.
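If you prefer the command line, a sketch of the same fix (again, the connection name is an example - check which ones are the Hyper-V-created duplicates on your own box first):

```
rem List the connections to identify the duplicates the Virtual Network Manager created
netsh interface show interface

rem Unplug the duplicated connection in the base O/S
netsh interface set interface name="Local Area Connection 5" admin=disabled
```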
Note: In Windows Server 2008 R2 you can untick the Allow management operating system to share this network adapter option in Virtual Network Manager to remove these duplicated connections from the base O/S so that updates and patches applied in the future do not re-enable them.
By the way, if you get odd messages about duplicate connection names, gateways, or other stuff while configuring network connections within a VM, it's worth checking for any "orphan" unconnected connections that the Virtual Network Manager may have created. In fact, it's worth doing this anyway to avoid "connection problems" when you try to import an exported VM if the roof falls in. Use the process described in http://support.microsoft.com/kb/269155 to find these and uninstall them.
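For reference, the trick in that KB article boils down to opening Device Manager with hidden (non-present) devices made visible:

```
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
```

Then turn on View | Show hidden devices and uninstall any greyed-out "orphan" network adapters that appear.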
Scene II: "An Exchange of Plan"
All that remains now is to get one more VM up and running to host my firewall, public DNS, and Exchange Server. One more day's work and it will all be done. All the hardware is in place, all the infrastructure and networks installed, and most of it is performing without filling the Event Logs with those nasty "red cross" messages. Maybe I can phone the lad down the road who is finding a home for my old boxes and get rid of the last one...
Or maybe not. I just read the "ReadMe" file for ISA 2006 and discovered that I can't run it on a 64-bit machine. Yet Exchange Server really wants 64-bit to work properly (according to the docs). And why should I run 32-bit software on my gleaming new 64-bit boxes anyway? So I check out the replacement, Forefront, but it's still in Beta. Do I want to chance that on my only connection to the outside world? Probably not.
And after reading How to Transition (or Migrate) to Microsoft Exchange Server 2007 I begin to wonder how migration will go when I'm coming from a box that was originally upgraded from Exchange 5.5 to Exchange Server 2000. Do I really need an Exchange Server? Yes, it's useful for experimenting and researching stuff I work on, but the administrative overhead - never mind the upgrade hassle I can see lurking in the wings - probably far outweighs the gains.
In fact, if it's comparable to the struggle with Windows 2000 Server, I'll probably have to book a week's vacation. Or hire someone who knows what they're doing. Maybe I should just have done that in the first place, but then I wouldn't have learned all this valuable stuff about how it all works.
For example, after a couple of days, the old server, which is still the main domain controller for the external network, started filling the Event Log with a message every five minutes telling me that there was a domain error. According to Microsoft, the message you usually get is
Well, that would be useful. What I got was:
However, after implementing the process described in Event ID 1000, 1001 is logged every five minutes in the Application event log and rebooting, it seems to be fixed. The problem was incorrect permissions on the Winnt\SysVol folder and rights assignment for "Bypass traverse checking". Probably another left-over from the original NT4 installation. Thank heavens for Technet...
And, increasingly, I find I'm struggling for disk space. I need 120GB just to back up the three VMs I'm running, and the servers only have a pair of 160GB disks. If you are ordering hardware to do Hyper-V, buy boxes with four times the space you think you'll need. And make sure you get Gigabit NICs in them and use quality CAT6 cable and a Gigabit network switch 'cos you're going to be spending a lot of time copying very large files...
Ultimately, I took the decision to outsource my Exchange Server to a well-known and reputable company here in the UK. The cost is less than I pay now just for outsourced email filtering services, so it looks like a bargain. And that meant that I could create a virtual 32-bit Windows 2003 instance (ISA will not install on Win 2008) on Hyper-V to run just ISA 2006 and the external DNS server for my public domains. Less stuff to worry about in the long term I hope, though I'll probably have to upgrade that to Forefront on Win 2008 some time in the future. But at least there's no need now for an external domain!
Scene III: "DNS = Decidedly Negative Scenario"
Of course, everyone knows that DNS is a black art, and that you should never expect a DNS server to do what you expect. Well, unless you know about this stuff anyway. Up to now, my old DNS setup seemed to be working fine, though probably more through luck and old shoelaces than any real expertise on my part. So I decided this time to read up on how I should do DNS for ISA and an external DNS server to see if I could get it right. And, having got it all set up and running fine on a spare IP address, all seemed hunky-dory.
Until the "big switch-over day" arrived and I pulled the old ISA box out of the network. Everything stopped working. Every machine began to spew its excess event log messages all over the garage floor. My wife was shouting that she couldn't get her email. And it was only 9 o'clock on a Sunday morning. Maybe I should just put the old ISA box back and go back to bed...? However, after calming down and topping up with coffee, I started to investigate. A couple of wrong gateway entries in the domain controller network connections obviously weren't helping, but fixing these didn't cure it. So I went back to the docs to see what I missed the first time round.
The guidance I'd used was Configuring DNS Servers for ISA Server 2004 (there is no ISA 2006 version), which shows the setup for "Domain Member ISA Server computer with full internal resolution". However, the doc is a bit confusing in that it covers several different scenarios. In the end, the key was grasping that the ISA box needs to use the internal DNS server and that the internal DNS server will do all forwarding to other DNS servers. These forward lookups go out to the Internet through the ISA server, but do not go to the DNS server on the ISA box. Read "Why can’t I point to the Windows 2000 DNS first, and then to the ISP DNS?" in the "Common Questions" section of that document to understand why. Plus, the internal domain machines must not include the external DNS server in their list of DNS servers, but should instead reference only the internal DNS and allow that to forward lookups (I use DHCP to set these options). Maybe the following more detailed version of the schematic in the Technet doc will help...
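In command-line terms, the forwarding half of that setup looks something like this on the internal DNS server (the upstream ISP addresses here are just placeholder examples):

```
rem Internal DNS server forwards unresolved lookups upstream (out through ISA)
dnscmd . /ResetForwarders 203.0.113.1 203.0.113.2
```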
Note: If your public DNS server is only answering queries for zones for which it is authoritative (which is most likely the case) make sure you set the Disable Recursion option in the Advanced tab of the Properties dialog for the DNS server. See Can I Plug My Guitar Into My DNS Server? for more details.
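If I've read the docs right, the same option is settable from the command line - a one-liner on the public DNS server:

```
dnscmd . /Config /NoRecursion 1
```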
I set the zone TTL for the external DNS server zones to one day, but you may want to increase that if you don't plan on moving IP addresses around or updating records very often. Keep the internal TTL at about an hour to cope with DHCP and dynamic address updates. One thing I noticed is that, if you don't specify a DNS server for an interface (i.e. the external network connection), Windows uses the local 127.0.0.1 address "because DNS is installed on this machine". But it doesn't seem to break anything that I've noticed yet...
Scene IV: "Time Passes..."
They say that the show ain't over till the fat lady sings. I sincerely hope she's in the wings tuning up and ready to let rip, because the tidying up after my virtual Yuletide seems to go on and on. Obviously I broke most of the connections and batch files on the network by changing the machine names and IP addresses. But other things about Hyper-V are still catching me out.
For example, I've always used the primary domain controller as a reliable time source for each domain by configuring it to talk to an external time server pool. I even know the NET TIME command line options off by heart. But it all gets silly on Hyper-V because you have multiple servers trying to set the time. The solution, I read, is to get the base O/S to act as a reliable time source, and point the VMs (and other machines if required) at it. You have to use the more complex syntax of the W32TM command, but it all seemed to work fine until I installed the ISA box. ISA 2006 is clever in that it automatically allows a domain-joined machine to talk to "trusted servers" (which, you'd assume, includes its domain controller). But I had tons of messages saying it couldn't contact or get a valid time through the internal or external connection.
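For reference, the W32TM incantations I mean look something like this (the pool addresses are examples - use whatever external time servers you trust):

```
rem On the base O/S: sync from an external pool and advertise as a reliable source
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

rem On domain members: follow the domain hierarchy instead
w32tm /config /syncfromflags:domhier /update
w32tm /resync
```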
Well, I have to say that I wouldn't expect it to work with the external connection as that is blocked for the ISA box. But why not over the internal connection? Should I just disable the w32time service on the grounds that Hyper-V automatically syncs time for the VMs it hosts (unless you disable this in the Hyper-V Settings dialog for the VM)? Or should I allow external NTP (UDP) access from the ISA box to an external time server? In the end, after some help from other bloggers, I just used NET TIME to remove any registered time servers from the ISA box, restarted the w32time service, and it automatically picked up time from both the "VM IC Time Synchronization Provider" and the domain controller. Perhaps, like me, it just needed a rest before starting again.
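The clean-up on the ISA box itself was just a sketch like this:

```
rem Clear any manually registered SNTP servers, then restart the time service
net time /setsntp:
net stop w32time
net start w32time
```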
Another interesting (?) issue that crawled out of the woodwork after a few days was the error "The Key Distribution Center (KDC) cannot find a suitable certificate to use for smart card logons, or the KDC certificate could not be verified." As I don't use smart cards, I ignored the error until I found the article Event ID 29 — KDC Certificate Availability on Technet. Another example of problems brought on by domain migration from Windows 2003 perhaps. As with several other issues, the solution is less than useful because I get to the bit where it says "...click Request New Certificate and complete the appropriate information in the Certificate Enrollment Wizard for a domain controller certificate", but the Wizard tells me that "certificate types are not available" and I should "contact the Administrator".
Not a lot of use when I am pretending to be the Administrator. Unable to find any other useful guidance, I took a chance and installed the Active Directory Certificate Services role, which created a root certificate in the Personal store and allowed me to create the domain controller certificate I needed. I have no idea if this is the correct approach, but time will no doubt tell...
One thing I would recommend is putting the machine name in big letters on the screen background. I used to get lost just working four machines through a KVM. Now there are multiple machines for some of the KVM buttons. And if you are executing command line options, use the version that contains the machine name as a parameter in case you aren't actually on the machine you think you are...
Finale: "Was It Worth It?"
So, after three weeks, was it actually worth it? I'm not referring to the time you've wasted reading all this administrative junk and doubtful meanderings. I mean, what do I think about the process and the result? Here are my opinions:
And the good news for any remaining readers of my blog is that I can maybe find something more interesting to ramble on about next week...