Random Disconnected Diatribes of a p&p Documentation Engineer
Every now and then I get to write actual code rather than just documentation. Usually there's either a crowd watching in amazement that I can actually find Visual Studio, never mind knowing some of the magic keywords that make it all work when you press the green arrow button, or else everyone is cowering behind their desk in case my computer can't cope with the culture shock and explodes. Isn't it wonderful when everyone has so much faith in your capabilities - after all, I've read the .NET Architecture Guide (endlessly, as I've been working on it for the last year) so I ought to know a bit about this stuff.
Unfortunately, as I've rambled about in previous posts (such as "How p&p Makes Cheese Sandwiches"), my programming tasks tend to involve building kludges and "temporary fix" tools to solve problems that are either too esoteric to be of any use to people outside our small documentation team, or are a stop-gap until the proper tool gets upgraded next time. OK, so I used to be a consultant and I wrote a few Web apps for customers, but I'd hesitate to publish "best practice compliance" figures for those. Especially as they were also usually kludges required to get software the company had paid big money for to work how they wanted it to.
So, anyway, last week I decided to put together a rough tool to help us find broken and incorrect links in a documentation set that builds to create a CHM or HxS file. The issue is that, although the authoring tools we use can create links between topics and within topics, you can't easily (or at all in some cases) check if these links are valid. They may point to a topic that you removed, or the target topic may have changed (so has a different auto-generated topic filename), or they may just point to the wrong topic. We do use a link checker utility to find broken links, but it can't find links that go to the wrong topic or links that point to an anchor (bookmark) in the same page that does not exist. The only way we can verify these are through manual testing ("click and read").
In theory, the process for automatically checking the links that the link checker cannot verify sounds relatively simple. Every topic page is an HTML file generated by the documentation tools from the source Word documents. The text of a link that points to a separate topic should match the title of that topic. And a topic containing an in-page link should contain an anchor that matches the bit after the "#" in the link href. So it's just a matter of applying some processing to each HTML file to verify these rules. I could do it by reading the pages one by one using MSXML, or just open them as text files and read the source that way.
I chose the second approach for no better reason than it seemed easier and quicker to build, and because I already had most of the code from another tool that did similar stuff to update various sections of the source files (such as inserting feedback links and index entries). All it involves is some judicious string handling, searching, and text comparisons. Of course, it got more complex as time went on because I found I needed to allow for optional settings such as allowing the case of titles to differ, and ignoring leading and trailing spaces in topic titles. But it was fairly easy to stir in a mixed selection of semi-appropriate keywords and variables, and bring the whole lot slowly to the boil.
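In case the "judicious string handling" approach sounds vague, here is a rough sketch of the two rules in Python. The real tool was a .NET utility and I don't have its source, so the function names and the deliberately naive regexes below are purely illustrative; they assume simple, tool-generated HTML rather than arbitrary markup.

```python
import os
import re

# Naive patterns that assume well-behaved, generator-produced HTML.
LINK_RE = re.compile(r'<a\s+[^>]*href="([^"#]*)(?:#([^"]*))?"[^>]*>(.*?)</a>',
                     re.IGNORECASE | re.DOTALL)
TITLE_RE = re.compile(r'<title>(.*?)</title>', re.IGNORECASE | re.DOTALL)

def get_title(path):
    """Return the <title> text of an HTML file, or None if absent."""
    with open(path, encoding="utf-8") as f:
        match = TITLE_RE.search(f.read())
    return match.group(1) if match else None

def check_page(path, ignore_case=True, trim=True):
    """Yield a description of each suspect link in one topic page."""
    folder = os.path.dirname(path)
    with open(path, encoding="utf-8") as f:
        html = f.read()
    for href, anchor, text in LINK_RE.findall(html):
        if href:
            # Inter-topic link: target must exist and its title
            # should match the link text (subject to the options).
            target = os.path.join(folder, href)
            if not os.path.exists(target):
                yield f"{path}: broken link to {href}"
                continue
            a, b = text, get_title(target) or ""
            if trim:
                a, b = a.strip(), b.strip()
            if ignore_case:
                a, b = a.lower(), b.lower()
            if a != b:
                yield f"{path}: link text {text!r} does not match title"
        elif anchor:
            # In-page link: a matching anchor must exist in this page.
            if not re.search(r'(?:name|id)="%s"' % re.escape(anchor),
                             html, re.IGNORECASE):
                yield f"{path}: missing anchor #{anchor}"
```

Running something like this over every HTML file in the doc set and dumping the yielded strings into a log is essentially all the tool does.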
Did I use agile development methods, as I know I should (especially after working with the p&p dev teams for so long)? Well, if you can call throwing it together and fixing broken bits afterwards "agile", then yes. But, in reality, no. I should have started by writing tests, but the tool simply reads the files and dumps the results out as a log file so how do I write tests for that? Probably I should have divided the task into a series of separate routines for each minor action, and written tests for each one, but that seemed overkill for such a simple project. I did all the testing as I went along by adding errors into a set of existing source docs and then checking that each one was detected as I added features to the code.
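To be fair to the "write tests first" crowd, the objection about log files mostly dissolves if the file reading and log writing stay at the edges and the comparison rules become pure functions. A hypothetical example (these names are mine, not from the real tool) of how one of those optional-settings rules could be carved out and tested in a couple of lines:

```python
def titles_match(link_text, topic_title, ignore_case=True, trim=True):
    """Compare a link's text with a topic title, applying the
    optional case-folding and whitespace-trimming settings."""
    a, b = link_text, topic_title
    if trim:
        a, b = a.strip(), b.strip()
    if ignore_case:
        a, b = a.lower(), b.lower()
    return a == b

# Tests become one-liners instead of file-juggling exercises:
assert titles_match("My Topic ", "my topic")
assert not titles_match("My Topic", "my topic", ignore_case=False)
assert not titles_match(" My Topic ", "My Topic", trim=False)
```

Whether that discipline is worth it for a throwaway internal tool is, of course, exactly the question this post is wrestling with.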
After it all seemed to be working, confirmed by fairly comprehensive testing by my advisory panel and beta test team (Hi, Nelly), I decided I should do the agile "refactor" thing. Except there was just one chunk of code that got used more than once (twice in fact), and it was only three lines. Perhaps I could write some generalized code routine with half a dozen parameters for some of the functionality, and call it from the three or four places in the code that seemed appropriate. Would that be more efficient than repeating half a dozen lines of code in four places? It would be easier to maintain, but probably take time to code, fix, and retest everything again.
Next, I did the "make it more efficient" thing. One thing the code needs to do for each inter-topic link it finds in every page is extract the title from the linked page (if it exists) to see if it matches the text in the link. First time round, I had a routine that opened the file, located the <title> element, and returned the contents. But this meant I was opening every topic multiple times; for example, every page contains a link to the contents topic so it gets opened and read for every page in the doc set.
Obviously this is hugely inefficient, even if .NET caches the file contents. So, I simply added code to save each title it found in a Dictionary using the file name as the key, and then look in the Dictionary before reading the disk file. That way the code only opens and reads each topic page once. Makes sense, but when I ran the code again it made almost no difference. I did some comparative tests, and the reduction in time taken to process the doc set was averaging somewhere around 5%. And as the tool, running on a fairly average desktop machine, can process a doc set containing 465 HTML topic files in less than 5 seconds, how much time and effort should I put into fine tuning the code? Especially as you only run the tool a few times at the end of a project after the docs are complete and you are ready to build into a CHM or HxS...
Don't get me wrong, I'm not in any way suggesting you should ignore the principles of coding best practice, agile development and pre-written tests, and proper pre-release validation. But, sometimes, just getting stuff done by relying on the power of modern machines and software environments does seem appropriate. Though I guess I'm not likely to get any jobs building "proper" enterprise applications now I've 'fessed up...
I'm a great believer in the future of "cloud" computing. It seems to be the way forward for both large and small organizations to maximize return on investment and reduce the complexity of managing their own hardware. Not that I'm one to talk about simplifying technology requirements after the past three weeks of virtual notworkingness with new servers, Windows 2008, and Hyper-V (though, to be fair, it eventually evolved into mainly-workingness-with-odd-broken-bits). One thing it has exposed me to, however, is some of the problems that seem to be gathering in the cloud.
Like an increasing number of people, my shopping regularly involves a Web browser and debit card, rather than traffic queues and the search for a car park space. I'd like to think I get a better deal on prices as well, though that's not my primary motivation. And I tend to use companies that I know and respect, rather than chancing my luck with some fly-by-night I've never heard of. Though, thankfully, when I do have to go outside my usual comfort zone, obviously taking appropriate precautions, it's mostly been a problem-free experience (try buying inline Ethernet surge protectors from your "regular supplier" to see what I mean).
I'm one of the many millions of Web-based shoppers who recognize Amazon as a reliable and trustworthy supplier, and I regularly use our local (.co.uk) site to order a variety of stuff I need (or just want). Yes, music, films, books, the usual things; plus, increasingly, electronics and computer-related stuff such as cables and switch boxes. OK, so when I ordered two APC UPSs a while ago they "lost them in transit" and ended up refunding the money, but it was all pretty efficient and painless.
So what suddenly changed? Or has it been a gradual process that only recently rubbed sufficiently to be annoying? The problem seems to be that they are now a "store front" as well as a supplier. When you find something you need, especially stuff other than books, films, and music, it invariably comes from an "associate" that you've never heard of. In some ways, I applaud that. They're providing a great opportunity for small companies who would never be able to build an effective Web presence otherwise. But, in other ways, I wonder if it is damaging their core business. It has certainly changed my behavior.
Let me elaborate. The parcel delivered against one recent order contained, not the electric radiant heater ordered and confirmed, but 1000 empty DVD library cases. Instead of talking to Amazon (if you can actually find an email address or posting facility) I have to directly contact a supplier I've never heard of. In another case, a faulty mobile phone that was my wife's Christmas present had to go away somewhere (I've no idea where) to "be examined". OK, so both purchases were sorted out after a couple of weeks, but I'd have preferred to deal with somebody I know and trust (such as Amazon) rather than having to look at "ratings" and decide if I want to trust some other firm. The redeeming feature is that, I guess, you can go back to Amazon if it all goes pear-shaped.
But the final straw was trying to buy a couple of USB cables, a USB extension cable, and two UPS power extension cords to finish off the network upgrades that have generally blighted my festive season. I found what I wanted easily enough, and was impressed by the prices. But, reaching the checkout, I discovered that the five cables were coming from three different suppliers - each one charging postage and packing. And one of them was trying to charge 18 pounds for post and packing on standard 3-5 days delivery - on an order consisting of two USB cables costing around 3 pounds each! In total, for goods to the value of 17 pounds, I was expected to pay more than 26 pounds post and packing...
Instead, I went back to the main site and searched for products by specifying the name of one of the suppliers (not the 18 pounds delivery one) figuring I'd order all the stuff from one supplier. But examining the items in the list revealed that they were still all from different suppliers. Maybe each supplier puts their competitors' names in the search field to improve the number of hits? After about 40 minutes, I gave up and ordered the whole lot from one of my other regular suppliers (Dabs.com) who have never failed me yet - though I'm touching a large chunk of wood as I write this.
So what's gone wrong with the "cloud" approach in this particular scenario? Thing is, if I want to deal with people I've never heard of who work out of a back bedroom, I can use eBay. Have Amazon damaged their brand by allowing suppliers to hide within their product lists, and by not providing enough interaction in terms of getting support or actually submitting a comment? Or are they bravely promoting the concepts of the cloud and providing opportunities to small suppliers who would otherwise struggle to reach market?
I eventually ended up on some "Your Account" feedback page where I complained about the post and packing cost thing, but I have no idea where my feedback went, or if I was wasting my time. And, strangely enough, the next day I was talking to a friend who I know is an active Web shopper and told them about my experiences. And their response? "Oh yes, I know what you mean, that's why I only ever use Amazon for books and CDs these days..." Maybe this is an issue that new partakers of cloud-based services need to actively address. I can appreciate that part of the ideal of Web trading is to get rid of the need to handle emails and phone calls, but I reckon most people still value the capability to buy from someone they know, and actually talk to somebody when the need arises, or at least get some prompt response and a solution - without having to jump through hoops just to submit it.
I reckon it's a Government conspiracy. Obviously continental drift has speeded up while we weren't looking, and England has drifted north into the Arctic during the last couple of weeks. I did check on Virtual Earth, but the maps are three months old (it takes a while to erase all the UFOs at Area 52). I suppose the experts will blame global warming, and point to "cataclysmic climate changes becoming the norm". So it's fairly predictable that the most commonly heard comment around here this last couple of weeks has been "I'll be glad when we get some of that global warming they keep promising us..."
So, anyway, there I was bravely battling my way through the two inches of snow that has brought the whole country to a standstill, traveling down to Leicester to do a user group presentation on Policy and Dependency Injection with Enterprise Library. I'd have to say that the response to the session was good, even if the turnout was distinctly limited by the weather. But at least it was in the lounge of a rather nice little city centre pub, so suitable refreshment was on hand.
Now, I don't know about you, but when I'm standing at the bar waiting to partake of the liquid gold, I'd expect the response to my request for "a pint of the landlord's finest please" would most likely be something like "straight glass or handle?", "with ice and lemon?", or (if you are a fan of Boddingtons ale) "do you want a flake in that?" Instead, the young slip of a girl behind the bar greeted me with "what's the formula for the Fibonacci sequence?" I suppose I'm rather too old (and married) to have much idea about the ways of the youth of today, and it didn't sound like it was meant to be a chat-up line, so I was somewhat stumped for a suitable response.
Then I noticed the sign behind the bar saying "Tuesday Quiz Nights", so I guessed she was in a team and just boning up on some possible answers. Obviously they have themed quiz nights, and this week it was mathematics. Probably next week it's nuclear physics, followed by investigative medicine and rocket science. Reminded me of the old joke from one of our TV comedians who asked if NASA engineers, when explaining something simple about their job, say "it's not rocket science" when it patently is. Although I also hear it said that rockets are an engineering technology, not a science, so it should be "it's not rocket engineering". Doesn't have the same ring to it...
And, coming back to themed nights, round us the pubs struggle to manage themes like a "Mexican Night" or even a "Steak Night" (the last steak night I went to had fish 'n' chips and mushroom lasagna on the menu). But then I realized that the pub we were in is just across the road from De Montfort University, so I assume that all the quiz contestants have brains the size of a planet. However, that turned out not to be the case when, partway through my session, the other (empty) half of the room filled up with groups of people armed with pens and paper. There was me trying to explain property setter injection, while somebody else was asking who Abraham's two brothers were in the Old Testament. I wonder if anybody answered "use a Dependency attribute"...
Anyway, it turned out that Sara (the barmaid - try and keep up at the back there) had heard about a new type of charity raffle they are running at another pub locally, where you pay so much and get to choose some numbers between something and something else, according to some rule (OK, so I didn't quite catch all of the details). But she wanted to know how much money they were going to make, and somehow it involves the Fibonacci numbers. Thankfully she got the formula from another (obviously academic) member of the clientele, and calculated that they'll make 450 pounds if they sell all of the numbers.
At that point, I just nodded sagely and offered to pay my one pound now if she promised not to try and make me understand how it worked. Dependency injection, AOP, service location, and inversion of control I can cope with. Biblical ancestry questions and higher mathematics I'm happy to postpone until another evening.
After approximately two weeks of intermittent network upgrades, I seem to still have a working network. I guess at least that's something to be thankful for. But it's still not fulfilled the original plan. And much hyper-ventilation has occurred during the process, particularly when watching those little green caterpillars crawl across the endless "Please wait..." dialogs, and wondering what the next error dialog will say...
Scene I: "Virtual Notworks"
One of the hardest parts of the configuration process for Hyper-V (at least for me) seems to be understanding virtual networking, and applying appropriate network settings. Despite reading up on it beforehand and thinking I grasped how it worked, I encountered endless error messages about multiple gateways and duplicated connections while trying to configure the network connections for the VMs and the host machine. It turns out that I was probably being as dense as usual in that I missed the obvious point about what the virtual switches that Hyper-V creates actually do.
Stepping back, the scenario is a machine with three physical network cards. I want to use one to connect the host (physical) server O/S to the internal network, one to connect specific virtual machines to the internal network, and one to connect specific virtual machines to the outside world. One of the virtual machines, hosting the firewall and mail server, will connect to both.
So step one is to use the Virtual Network Manager to create two virtual switches and connect these to the two physical NICs. When you look in the Manage Network Connections dialog on the host, you see - as expected - the three physical connections and two additional connections. What's confusing is that they are all "Connections". It's only when you examine the properties of each one that you realize two of them are bound to the new Microsoft Virtual Switch protocol and nothing else. At this point, it's a good idea to rename these connections so the name contains the word "Switch" to help you easily identify them.
So, now you can use the Hyper-V Settings dialog for each virtual machine to add the appropriate network connection(s) to that VM. What this actually does is create a connection within the virtual machine and "plug it into" the virtual switch you specify. You can, of course, plug the connections within more than one VM into each virtual switch. It really does help to think of the "switch" connections as "real" network switches like the 4 and 8 port ones you can buy from your local computer store. Ben Armstrong has some nice pictures in his blog post that illustrate this.
What's confusing in almost every post and document I've read is the use of the word "host" or "parent" to refer to the physical machine and its O/S. It implies that the VMs somehow run "inside" the O/S that is running directly on the hardware. I've started to refer to it in my head as the "base machine" and "base O/S" instead. While the base O/S and the Hyper-V runtime implement the virtual switches, these switches are not "inside" the base O/S. The Virtual Network Manager effectively moves them out of the base O/S. So the confusing part (at least for me) was what do I do with the two new "Connections" that are visible in the Manage Network Connections dialog of the base O/S. I know that I must configure the non-virtual connection that the base O/S will use to talk to the network. And I know that I have to configure, within each VM, the connections that I add to these VMs using the Hyper-V Settings dialog.
Unable to find any guidance on the matter, I assumed that the two "Connections" visible in the base O/S were being used to link the physical NICs to the virtual switches, hence the quandary over how to configure them. As it was, I followed the "know nowt, do nothing" approach and left them set to the default of "Obtain an IP address automatically". It was only after a day or so I noticed that file copy speed was erratic, and that the physical servers each had two different IP addresses in the domain DNS list.
Probably you are already hopping up and down, and waving your arms to try and attract my attention, with the answer. My error is obvious now, but wasn't at the time. What the Virtual Network manager does is steal the physical NIC and plug it into a virtual switch. However, this would cause a problem if the machine only had one physical NIC, so it tries to be helpful by automatically creating a new connection in the base O/S for each virtual switch it creates, and then plugs these new connections into the appropriate virtual switch. This means that the base O/S still has access to the physical NIC.
However, this also means that, on a multi-NIC machine, you can easily get duplicate connections in the base O/S. For example, in my case I already have a connection in the base O/S that's nailed to one of the physical NICs, and that's all I need. But when I dig a bit of CAT6 out of the junk box and plug one of the other physical NICs in the machine into the network, the virtual switch links it to one of the un-configured "Connections" in the base O/S. This means I've got two connections from the base O/S to the network for the same machine, but with different IP addresses.
If you managed to follow that rambling description, you'll be pleased to know that it finally dawned on me what was going on, and I confirmed it when I finally came across this advice in the last comment to a long blog post on the subject: "...if you have multiple physical NICs, disable the duplicated connections in the base O/S that the Virtual Network Manager creates". In other words, in the Manage Network Connections list in the base O/S, unplug all the "Connections" (not the "Switches") that Hyper-V so helpfully created (and, coincidentally, you don't know what to do with). Unless, of course, you need the base O/S to talk to more than one network, but that probably negates the whole point of having a vanilla and minimum base O/S install that runs multiple VMs containing all the complicated stuff.
Note: In Windows Server 2008 R2 you can untick the Allow management operating system to share this network adapter option in Virtual Network Manager to remove these duplicated connections from the base O/S so that updates and patches applied in the future do not re-enable them.
By the way, if you get odd messages about duplicate connection names, gateways, or other stuff while configuring network connections within a VM, it's worth checking for any "orphan" unconnected connections that the Virtual Network Manager may have created. In fact, it's worth doing this anyway to avoid "connection problems" when you try to import an exported VM if the roof falls in. Use the process described in http://support.microsoft.com/kb/269155 to find these and uninstall them.
Scene II: "An Exchange of Plan"
All that remains now is to get one more VM up and running to host my firewall, public DNS, and Exchange Server. One more day's work and it will all be done. All the hardware is in place, all the infrastructure and networks installed, and most of it is performing without filling the Event Logs with those nasty "red cross" messages. Maybe I can phone the lad down the road who is finding a home for my old boxes and get rid of the last one...
Or maybe not. I just read the "ReadMe" file for ISA 2006 and discovered that I can't run it on a 64-bit machine. Yet Exchange Server really wants 64-bit to work properly (according to the docs). And why should I run 32-bit software on my gleaming new 64-bit boxes anyway? So I check out the replacement, Forefront, but it's still in Beta. Do I want to chance that on my only connection to the outside world? Probably not.
And after reading How to Transition (or Migrate) to Microsoft Exchange Server 2007 I begin to wonder how migration will go when I'm coming from a box that was originally upgraded from Exchange 5.5 to Exchange Server 2000. Do I really need an Exchange Server? Yes, it's useful for experimenting and researching stuff I work on, but the administrative overhead - never mind the upgrade hassle I can see lurking in the wings - probably far outweighs the gains.
In fact, if it's comparable to the struggle with Windows 2000 Server, I'll probably have to book a week's vacation. Or hire someone who knows what they're doing. Maybe I should just have done that in the first place, but then I wouldn't have learned all this valuable stuff about how it all works.
For example, after a couple of days, the old server, which is still the main domain controller for the external network, started filling the Event Log with a message every five minutes telling me that there was a domain error. According to Microsoft, the message you usually get is
Well, that would be useful. What I got was:
However, after implementing the process described in Event ID 1000, 1001 is logged every five minutes in the Application event log and rebooting, it seems to be fixed. The problem was incorrect permissions on the Winnt\SysVol folder and rights assignment for "Bypass traverse checking". Probably another left-over from the original NT4 installation. Thank heavens for Technet...
And, increasingly, I find I'm struggling for disk space. I need 120GB just to back up the three VMs I'm running, and the servers only have a pair of 160GB disks. If you are ordering hardware to do Hyper-V, buy boxes with four times the space you think you'll need. And make sure you get Gigabit NICs in them and use quality CAT6 cable and a Gigabit network switch 'cos you're going to be spending a lot of time copying very large files...
Ultimately, I took the decision to outsource my Exchange Server to a well-known and reputable company here in the UK. The cost is less than I pay now just for outsourced email filtering services, so it looks like a bargain. And that meant that I could create a virtual 32-bit Windows 2003 instance (ISA will not install on Win 2008) on Hyper-V to run just ISA 2006 and the external DNS server for my public domains. Less stuff to worry about in the long term I hope, though I'll probably have to upgrade that to Forefront on Win 2008 some time in the future. But at least there's no need now for an external domain!
Scene III: "DNS = Decidedly Negative Scenario"
Of course, everyone knows that DNS is a black art, and that you should never expect a DNS server to do what you expect. Well, unless you know about this stuff anyway. Up to now, my old DNS setup seemed to be working fine, though probably more through luck and old shoelaces than any real expertise on my part. So I decided this time to read up on how I should do DNS for ISA and an external DNS server to see if I could get it right. And, having got it all set up and running fine on a spare IP address, all seemed hunky-dory.
Until the "big switch-over day" arrived and I pulled the old ISA box out of the network. Everything stopped working. Every machine began to spew its excess event log messages all over the garage floor. My wife was shouting that she couldn't get her email. And it was only 9 o'clock on a Sunday morning. Maybe I should just put the old ISA box back and go back to bed...? However, after calming down and topping up with coffee, I started to investigate. A couple of wrong gateway entries in the domain controller network connections obviously weren't helping, but fixing these didn't cure it. So I went back to the docs to see what I missed the first time round.
The guidance I'd used was Configuring DNS Servers for ISA Server 2004 (there is no ISA 2006 version), which shows the setup for "Domain Member ISA Server computer with full internal resolution". However, the doc is a bit confusing in that it covers several different scenarios. In the end, it was grasping that the ISA box needs to use the internal DNS server and that the internal DNS server will do all forwarding to other DNS servers. These forward lookups go out to the Internet through the ISA server, but do not go to the DNS server on the ISA box. Read "Why can’t I point to the Windows 2000 DNS first, and then to the ISP DNS?" in the "Common Questions" section of that document to understand why. Plus, the internal domain machines must not include the external DNS server in their list of DNS servers, but should instead reference only the internal DNS and allow that to forward lookups (I use DHCP to set these options). Maybe the following more detailed version of the schematic in the Technet doc will help...
Note: If your public DNS server is only answering queries for zones for which it is authoritative (which is most likely the case), make sure you set the Disable Recursion option in the Advanced tab of the Properties dialog for the DNS server. See Can I Plug My Guitar Into My DNS Server? for more details.
I set the zone TTL for the external DNS server zones to one day, but you may want to increase that if you don't plan moving IP addresses around or updating records very often. Keep the internal TTL at about an hour to cope with DHCP and dynamic address updates. One thing I noticed is that, if you don't specify a DNS server for an interface (i.e. the external network connection), Windows uses the local 127.0.0.1 address "because DNS is installed on this machine". But it doesn't seem to break anything that I've noticed yet...
Scene IV: "Time Passes..."
They say that the show ain't over till the fat lady sings. I sincerely hope she's in the wings tuning up and ready to let rip, because the tidying up after my virtual Yuletide seems to go on and on. Obviously I broke most of the connections and batch files on the network by changing the machine names and IP addresses. But other things about Hyper-V are still catching me out.
For example, I've always used the primary domain controller as a reliable time source for each domain by configuring it to talk to an external time server pool. I even know the NET TIME command line options off by heart. But it all gets silly on Hyper-V because you have multiple servers trying to set the time. The solution, I read, is to get the base O/S to act as a reliable time source, and point the VMs (and other machines if required) at it. You have to use the more complex syntax of the W32TM command, but it all seemed to work fine until I installed the ISA box. ISA 2006 is clever in that it automatically allows a domain-joined machine to talk to "trusted servers" (which, you'd assume, includes its domain controller). But I had tons of messages saying it couldn't contact or get a valid time through the internal or external connection.
Well, I have to say that I wouldn't expect it to work with the external connection as that is blocked for the ISA box. But why not over the internal connection? Should I just disable the w32time service on the grounds that Hyper-V automatically syncs time for the VMs it hosts (unless you disable this in the Hyper-V Settings dialog for the VM)? Or should I allow external NTP (UDP) access from the ISA box to an external time server? In the end, after some help from other bloggers, I just used NET TIME to remove any registered time servers from the ISA box, restarted the w32time service, and it automatically picked up time from both the "VM IC Time Synchronization Provider" and the domain controller. Perhaps, like me, it just needed a rest before starting again.
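For anyone else wrestling with the same setup, the W32TM incantations I mean are along these lines. This is a sketch based on the documented Windows Server 2008 syntax, not a transcript of my exact commands, and the pool server names are placeholders:

```shell
REM On the base O/S: make it a reliable time source that syncs
REM from an external NTP pool (placeholder server names).
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" ^
      /syncfromflags:manual /reliable:yes /update

REM On each VM (with Hyper-V time synchronization unticked in the
REM VM's Integration Services settings): sync from the domain
REM hierarchy instead of a hard-coded server list.
w32tm /config /syncfromflags:domhier /update
net stop w32time
net start w32time
```

The `w32tm /query /status` and `w32tm /query /source` commands are handy afterwards for checking where each box actually thinks its time is coming from.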
Another interesting (?) issue that crawled out of the woodwork after a few days was the error "The Key Distribution Center (KDC) cannot find a suitable certificate to use for smart card logons, or the KDC certificate could not be verified." As I don't use smart cards, I ignored the error until I found the article Event ID 29 — KDC Certificate Availability on Technet. Another example of problems brought on by domain migration from Windows 2003 perhaps. As with several other issues, the solution is less than useful because I get to the bit where it says "...click Request New Certificate and complete the appropriate information in the Certificate Enrollment Wizard for a domain controller certificate", but the Wizard tells me that "certificate types are not available" and I should "contact the Administrator".
Not a lot of use when I am pretending to be the Administrator. Unable to find any other useful guidance, I took a chance and installed the Active Directory Certificate Services role, which created a root certificate in the Personal store and allowed me to create the domain controller certificate I needed. I have no idea if this is the correct approach, but time will no doubt tell...
One thing I would recommend is putting the machine name in big letters on the screen background. I used to get lost just working four machines through a KVM. Now there are multiple machines for some of the KVM buttons. And if you are executing command line options, use the version that contains the machine name as a parameter in case you aren't actually on the machine you think you are...
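Most of the relevant commands accept a target machine as a parameter, so you can be explicit about which box you're prodding (the machine name here is hypothetical):

```bat
rem Check and resync the time service on a named machine,
rem rather than whichever one the KVM happens to be showing:
w32tm /query /computer:DC01 /status
w32tm /resync /computer:DC01

rem Likewise, shutdown can target a specific box by name:
shutdown /r /t 0 /m \\DC01
```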
Finale: "Was It Worth It?"
So, after three weeks, was it actually worth it? I'm not referring to the time you've wasted reading all this administrative junk and doubtful meanderings. I mean, what do I think about the process and the result? Here are my opinions:
And the good news for any remaining readers of my blog is that I can maybe find something more interesting to ramble on about next week...
Outside, the snow lies deep, crisp, and even. Inside, the continuing quest to achieve calm and serenity through the application of virtuality. Noticeably, without much sign of virtuosity. I know that a "Minister of the Church" is somebody who ministers to the poor and sick as well as the good and the godly. I wonder if, being surrounded by all my very old and somewhat sick machines, I am really a "Network Ministrator". So far, "Administration" doesn't seem to be one of my latent talents. Still, onwards ever onwards...
Scene I: "Reality Begins to Bite"
Today, set up the external Web sites on a separate Hyper-V instance that will sit in the DMZ. I run the old DaveAndAl site that still contains the support stuff for books Dave and I wrote before I drifted into the arms of MS as an employee, and I wanted to keep that available. I also run the local village information site that contains news, events, activities, and other resources for the village action group (based on the ASP.NET "Club" Starter Kit).
This all went reasonably well, except for some hassle figuring out that you have to install the SMTP service separately and use IIS 6.0 Manager to manipulate it. However, the Web app just drops messages into the Pickup folder, and the SMTP service instantly sweeps them up and tosses them out onto the Internet. Installing the SMTP role automatically adds a suitable rule to Windows Firewall; however - if you are only sending mail (and have a suitable return address configured for the messages that is on a different and valid mail server) - you can disable this rule to prevent inbound access to the SMTP server. And I did remember to remove all the default exception rules for other services (except for HTTP and HTTPS, of course).
Final check: get a few port scans done from various sites (such as Shields UP and Audit My PC), and go live. Wow, maybe I really can be a network administrator after all! While I've got a couple of hours left I'll prepare the old Windows 2000 box for migration of the domain to Server 2008....
So I read the docs about this process, and start to feel the Hyper Ventilation coming on again. If you have installed Exchange Server 2000 (I have), it will have corrupted three attributes in Active Directory that you "should" (it says) resolve before running ADPREP. But elsewhere it says that these are not really important attributes, and it even says you can resolve them after you run ADPREP. Maybe you can on Windows 2003 Server. On Windows 2000, ADPREP simply won't run at all until you fix them. Time, I think, for bed.
Scene II: "Reshaping History"
I've been following the "migration" approach to domain upgrades that Microsoft seem generally to recommend. You prepare the Active Directory forest and domain using ADPREP to get it up to the target version (2008 in this case), then join the new machine to the domain and use DCPROMO to promote it to a second domain controller. Finally, you move the operations master (FSMO) roles to the new DC and then demote the old one.
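The whole dance, roughly, looks like this (a sketch rather than gospel — run each part from the appropriate machine):

```bat
rem On the old domain controller, using the ADPREP from the
rem Server 2008 media (the \sources\adprep folder on the DVD):
adprep /forestprep
adprep /domainprep

rem On the new Server 2008 box, once it has joined the domain:
dcpromo

rem Afterwards, check which box holds the FSMO roles before
rem transferring them and demoting the old DC:
netdom query fsmo
```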
This process worked reasonably well with the internal Windows 2003 domain, but it was looking like being a bit of an awkward task for the old Windows 2000 domain. Especially as I'd suddenly remembered while watching TV the night before that this machine and its domain started life under Windows NT4 running Exchange Server 5.5. Ah, what wondrous history doth permeate and perfuse my historic domain... and might be an additional cause of my problems. Strangely, I can't find any Microsoft KB articles that describe moving from NT4 to Server 2008...
But I did read up on the schema issues with Exchange Server 2000, and created and ran the scripts to fix the incompatibilities. Or rather, I tried to. None worked. In the end, I resorted to using AdsiEdit to manually change the LDAPDisplayName attributes in the AD schema (see http://support.microsoft.com/kb/314649). After that the ADPREP process succeeded and updated the forest and the domain to the Windows 2008 schemas. Four hours gone so far.
Next, join the new Windows 2008 box to the existing domain. No problem - worked a treat. Then install the Active Directory role and promote the new box to a domain controller. Oh dear - "Failed to modify the necessary properties for the machine account. Access is denied." So, off to search for more help and apply the user rights delegation permission configuration specified in http://support.microsoft.com/kb/232070. After these fixes, DCPROMO ran fine and I had a new domain controller. And it only took seven hours in total.
Scene III: "The Plan? What Plan?"
You see, I thought I had this all planned out. Virtualize my three important and complicated servers so I can keep backup copies of the VHDs and just fire the appropriate one up if the active one throws a software wobbly. Or even run all three on the second identical (cold-swap) backup server if bits start falling off the active box.
Except, now, things are starting to look a bit less well planned. For example, if I need to shut down the host box to fix something, or just to copy the VHD off as a backup, I have no domain controller or internal DNS. So all the other machines on the network start wandering aimlessly around like lost souls. And the same will happen with the virtualized external domain controller once I get that part of the network upgraded...
Worse still, the base machine that hosts the VM that is the domain controller complains when it boots that it "...cannot find a controller for the domain..." (not surprising I guess), and so it gets all huffy about doing even ordinary stuff. Probably the worst huff is that it can't access the Hyper-V Manager service. So you are in the interesting position of not actually being able to talk to your VMs, one of which is the domain controller. OK, so if the VM is set to auto-start, things will sort themselves out after about half an hour. But what happens if it isn't set to auto-start? Or if it falls over when starting? In the end, it looks like I need a physical (rather than virtual) domain controller that starts up before the Hyper-V host machine...
And, again, it seems I should have ordered bigger (or more) disks. Turns out you can't just copy a VM file and the corresponding config file to another Hyper-V enabled server and run it, because that screws up the network configuration (see Hyper-V Export and Import Part 1 and Part 2). You need to use the Export command in Hyper-V Manager, but that only works when exporting to the same physical host machine.
I suggest you become familiar with Ben Armstrong's blog at http://blogs.msdn.com/virtual_pc_guy/, 'cos he does document many of these kinds of issues. For example, whether to virtualize domain controllers, how to manage VMs with script, how to export VMs, and much more. He also references other blogs that describe import/export workarounds, though none are officially supported. I used an old spare machine to expose backup storage space for exported VMs and used the process described in the comments to Brian Ehlert's blog post to export them to that machine. But make sure you read the security caveats.
And if you intend to make backup copies of VMs to run on another machine should a hardware disaster occur, check out John Howard's blog as well - especially Why is networking reset in my VM when I copy a VHD? and MAC Address allocation and apparent network issues MAC collisions can cause.
More next week...
I guess there are at least two people out there who may be interested in hearing about my latest upgrade experiences. One of them I know is just about to experience hyper-ventilation. Perhaps if I sprinkle it with some useful tips and pointers I can make it at least partly worth reading. And maybe mix in some wry comments and general grumbles about the life and times of a reluctant network-basher (that's a bit like a metal-basher, but with a smaller hammer).
Overture: "Where the Story Starts"
So, the story so far is that my network consists of a mixture of old machines acting as servers (only one of which actually is a "proper" server - the rest is a sad selection of aging desktops) that - together - support an internal and an external domain, ISA and mail servers, DNS, DHCP, file storage, backup, time synchronization, and pretty much everything else. A while ago I upgraded the internal domain from Windows 2000 Server to Windows 2003 Server (mainly because I wanted to use media streaming) - see Say Hello to DELILAH for more excruciating detail about that episode.
But the really important stuff (external domain, ISA, Exchange, etc.) is still on an old Dan Technology desktop running Windows 2000 Server, with an identical box available as a "cold swap" emergency backup. As support for 2000 will no doubt disappear in the near future, and the boxes are looking exceedingly delicate with the never-ending stream of patches and updates, I decided to do a full replacement with virtual server technology on Windows Server 2008.
Yes, I did check the Windows Hardware Compatibility List (HCL) to ensure that the new servers I selected will run Windows Server 2008 and are Hyper-V compatible. And I ordered them with three network cards so I can have separate host, internal, and external connections as Microsoft recommends. Mind you, looking back now, it's a shame I was a bit mean about ordering bigger disks. I have no idea how two 160 GB disks can fill up so quickly, and without any real effort on my behalf...
Scene I: "The First Day"
This started with installing Windows Server 2008 on the two new boxes. I actually did read the instructions about how to install an O/S on the servers, and followed the steps in the Dell OpenManage utility right up to the point where it reported that my Windows installation disk was not "valid media". So, instead, just do the obvious and stuff the Windows 2008 DVD into the drive and reboot. Try and stay calm (the first Hyper-Ventilation moment) as the loader furkles about inside the server to see if it can match a SATA driver with the various bits of wire and plastic that make up the hardware, and - "YES!" - it installs without a problem.
Repeat on the other server, and then install the Hyper-V role on each one. Ah - forgot to edit the BIOS settings to enable the processor virtualization technology (it's disabled by default). Then use Hyper-V Manager to allocate the two extra network connections on the extension card (an Intel PRO/1000 PT Dual Adapter) to provide virtual "internal" and "external" connections to my network. The built-in network adapter provides a separate connection to the parent or base Windows 2008 O/S instance that runs the Hyper-V role, as suggested by various Hyper-V experts.
Scene II: "Looking Hopeful"
The next day's main task was to set up one of the Hyper-VMs as the main domain controller for my internal domain. Amazingly, ADPREP ran perfectly on the existing Windows 2003 domain controller so it was time to create my first Virtual Machine (VM). I wanted fixed size disks, as Microsoft recommends for production servers, and so had to use the New | Disk option to create the disk first, then New | Virtual Machine to create the VM with the existing disk. The Hyper-V Manager automatically enables the CD-ROM for it, so you just shove the 2008 setup disk in and start the new VM to get installation under way.
However, when I get to the bit where you join the domain, the new VM can't see the network, or even its host. Look in the "Manage Network Connections" list and there aren't any there to manage. It seems that the Hyper-V stuff was a Beta in the release version of Windows 2008, but got upgraded on the host when I applied all of the service packs and patches. Problem is that the new installation in the VM doesn't have the corresponding patches, and can't see the network to fetch them from WSUS or Windows Update.
What you have to do, it turns out, is go into the Settings for the VM in Hyper-V Manager with the VM turned off and remove the Network Connection and replace it with a "Legacy Network Connection" that simply nails the physical NIC to the side of the VM. You can almost imagine Hyper-V shoving its nose up to the O/S in the VM and saying "Now can you see it?" in a threatening kind of tone... At least this gets you a network connection you can use to fetch and install the release version of the Hyper-V integration components. Then replace the Legacy Connection with a Virtual one. At this point, if you are installing something other than Server 2008 in the VM, you'll also need to use the Action menu to install the Integration Components (just like with Virtual PC).
Scene III: "Distant Sirens Calling"
Now I can join the new VM to the domain controlled by the old Server 2003 box. Except I can't because the Active Directory installation complains that it cannot find the domain controller - even though NSLOOKUP finds it and PING works on the IP address. But PING no longer works using the FQDN of the machine (as in "name.domain.com"). Why not? It did before.
Turns out that Windows Server 2008 is showing off already by using the funky new IPv6 protocol over the network; which, to the old DNS server in the Windows 2003 box, just sounds like "nah-nah-nah" noises. After another perusal of the fancy new-look KB pages on Technet, I found this note: "The DNS Server service in Windows Server 2008 and Windows Server 2003 supports the storage, querying, and dynamic registration of IPv6 host resource records. DNS messages can be exchanged over either IPv4 or IPv6. To enable the DNS Server service in Windows Server 2003 to use DNS over IPv6, use the dnscmd /config /EnableIPv6 1 command, and then restart the DNS Server service."
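In other words, on the Windows 2003 DNS box:

```bat
rem Allow the Windows 2003 DNS Server service to exchange DNS
rem messages over IPv6, then restart it to pick up the change:
dnscmd /config /EnableIPv6 1
net stop dns & net start dns
```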
That cured it, but it sure would have been nice to be forewarned. Probably it's in little writing at the bottom of one of pages in the increasing stack of printouts I'm now carrying around. I'll soon need to look out my old briefcase from the days when I was "a sales executive". Probably have a shave and wear a suit and tie as well to see if that helps.
Anyway, after that fix, I quickly joined the new virtual box to the domain and took over the FSMO and other roles. I also set up the various folders, batch files, and other links so that the parent O/S would act as a file store to back up working documents en route to the NAS (unless Buffalo come and take it away in a huff after they read last week's post) and then on to my secure backup facility (which is actually a separate USB drive).
Flushed with success, I even managed to move my Windows Server Update Services (WSUS) facility over to the new box and get that sorted as well, along with retargeting all the clients to it. Only about nine hours to get all this sorted!
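If anyone needs to do the same retargeting, the client side boils down to a couple of registry values and a detection nudge (the WSUS URL here is hypothetical, and if you manage the settings through Group Policy you should change them there instead):

```bat
rem Point Automatic Updates at the new WSUS server, and make
rem sure the client is actually told to use it:
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer /t REG_SZ /d "http://wsus01" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUStatusServer /t REG_SZ /d "http://wsus01" /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v UseWUServer /t REG_DWORD /d 1 /f

rem Ask the Automatic Updates client to re-detect immediately:
wuauclt /detectnow
```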
Microsoft takes security seriously. I take security seriously. I don't have simple one-lever locks on my doors that you can open with a hairgrip, and I wouldn't use the name of my cat as my system administrator password. Well, maybe I would if my cat was called "g&e7532%dH$7", but imagine the fun I'd have calling it in for its supper at night if it was. Besides, I've got two cats, so it would only get confusing. That's why I wanted to call them "Bev" and "Kev" (but I was over-ruled by my wife).
Anyway, this article is not about cats, it's about Buffalo. Or, to be more precise, the Buffalo LinkStation network drive. I've used one of these for some time, and I love it. It is joined to my internal domain as a computer, and it's easy to access from any machine while maintaining security and protecting the content. Though now it's not. Why? Because I switched my domain controllers from Windows 2003 Server to Windows Server 2008 (and why are the names the other way round now?).
After the move, I just couldn't manage to connect with my Buffalo. While it might sound like something that demands counseling, I thought I could fix it by reconnecting to the domain - even though none of my other machines complained about the upgrade to the servers. Aha! It seems that the drive remembers the NetBIOS name of the domain controller (why?). So I set it to "WORKGROUP" mode and then filled in the details to connect to Active Directory. Just enter the DNS domain name, the name of the domain controller machine, and the credentials to connect to AD. Press Submit, and you get the message "ERROR: The Administrator password can contain 256 alphanumeric characters, a hyphen, and an underscore." My secure domain admin password is rejected because the guy who built the Web-based admin interface, while he might have been an expert in Linux operating systems, had no idea how to build Web stuff.
So, I create a new domain account named "TEMP" with the password "temp" and give it every permission I can find. But it still fails. So I try the "Join NT Domain" option, but that fails too. As does delegating to an external SMB server. Even though my domain is still in "2003" mode and not "native 2008" mode. In the end, I'm stuck with having to use the built-in admin account to access the content. Oh, and by the way, you can stop trying to hack my server 'cos I did remember to remove the "TEMP" account again.
Later I did the "visit our support forum" thing on the Buffalo Web site and discovered dozens of posts from people with the same problem. Some are of the opinion that Buffalo are "working on a fix", others report that they were told it wasn't due to be resolved and it was their own fault for upgrading to Server 2008. I suppose as the O/S in the drive is Linux, you're supposed to fix it yourself.
As a temporary solution I just created a local user account on the drive with read/write access to the contents and got each machine to remember the login details. It's not ideal but it works. Except that all of my backup batch files now failed because they run under a domain account that accesses the source on the disks to be backed up. Thankfully, I remembered the NET USE command, which allows you to specify the credentials for accessing a remote machine. My batch files look something like this now:
NET USE "\\LinkStoreName\ShareName" password /USER:username
XCOPY "C:\MyFiles\Data\*.*" "\\LinkStoreName\ShareName\DataBackup\" /s/y/c/a
You can even get the command to create and use a drive mapping, for example as drive Z:, like this:
NET USE Z: "\\LinkStoreName\ShareName" password /USER:username
XCOPY "C:\MyFiles\Data\*.*" "Z:\DataBackup\" /s/y/c/a
However, now all I got was "Access denied" messages when XCOPY tried to access existing folders - though the actual error message is "Cannot create folder", which is a little confusing because it did create new folders where there wasn't one already. After fiddling about for a while, it was obvious. The existing folders were created under the domain account I used to use. I renamed the existing folders in Windows Explorer and XCOPY quite happily created the new ones. Where I knew the server still had all of the data in the backup folders, I just deleted the whole folder tree and allowed XCOPY to recreate it using the credentials of the new local Buffalo drive account specified in the NET USE command.
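So the batch files ended up along these lines (names and password are placeholders, obviously), with the mapping removed afterwards so the credentials don't linger in the session:

```bat
rem Map the Buffalo share using the local drive account, copy
rem the files, then drop the mapping again:
NET USE Z: "\\LinkStoreName\ShareName" password /USER:username
XCOPY "C:\MyFiles\Data\*.*" "Z:\DataBackup\" /s /y /c /a
NET USE Z: /DELETE
```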
OK, so it's not a great idea having credentials in the batch file, but at least these files are secured on the server away from public view. And it works. At least, it does until Windows Server 2010 comes out. Mind you, I calculated that I am due to retire the same week that support for Windows Server 2008 ends, so maybe I'll never have to upgrade again! I even asked Dell for a 10 year guarantee on the new servers so I was covered all round, but they seemed a bit reticent about that...
So, the only bad news is that over the next few weeks my blog is likely to be full of boring stuff about the issues involved in moving to Windows Server 2008, configuring Hyper-V, getting your head round Virtual Networks and the Windows Time Service, migrating a domain from Windows 2000 Server (that might be two weeks' worth), and other related stuff.
So they had an election ages ago in the US, but I still keep seeing that nice Mr. Bush on TV and in the newspapers. It seems like the even nicer Mr. Obama doesn't actually get the keys to the Oval Office until this year. I suppose that kind of makes sense. I mean, if you were employing a new airline pilot, you probably wouldn't want to give him or her the keys to a 747 until they'd had a few goes at landing one on a simulator, and proved that they know which door to go in through when it comes time to do it for real. Especially if they haven't actually flown a plane before.
Maybe Mr. Obama has spent the last few months getting up to speed on his new job. He's probably been taking part in reruns of "The West Wing", rushing about being talked at by six people at once. Or perhaps he's been locked away with a headset on working through a "presidency" simulator where he has to balance the economy and try not to start too many wars. I guess with a job as important as he's got, you need to be reasonably good at it from the start rather than spending the first few months messing about installing the software you need on your laptop, trying to remember important people's names, and discovering where the restroom is.
All this contrasts with our somewhat lackadaisical approach on this side of the pond. When we have an election, we don't get to know the result until the next day. That's mainly because we don't actually trust anything that isn't written down on paper with a stumpy and blunt black pencil (usually tied to the voting booth desk with a piece of string for security reasons), so they have to get a heap of people to count them all by hand afterwards. Still, at least it gives the TV presenters plenty of time to play with their "swingometers" and other fancy CGI stuff. But I suppose they've had two years of that already in the US, so people are losing interest by the end of the process.
Meanwhile back in the UK, once they do decide who won, the boss of that party has to drop in for breakfast with the Queen and see if it's OK for him or her to form a Government. Providing she says yes (I'm not sure what happens if she says no), the new Prime Minister can wander down the road to Number 10 and start running the country. Presumably, if they don't need any training, it must be relatively easy. Mind you, as they were most likely to have been up all the night before partying, they'll probably have a bit of a hangover. Probably best not to make too many huge impact major decisions on the first day. And ask someone where the restroom is.
One thing I never discovered about the US election process is what the difference is between a "Soccer Mom" and a "Hockey Mom" (other than the lipstick). I'd assumed that Sarah Palin doesn't actually play hockey, and that the name comes from her transporting the kids to and from their hockey games. Then I found this post in which Lynn Wilhelm explains that there are other less well-known categories of parenting that I wasn't aware of. Such as the "NASCAR Dad". And as Obama is a basketball devotee, we'll presumably soon be seeing the newspapers full of "Basketball Mom" stories. Lynn even goes to the lengths of explaining that ice hockey is more popular in Alaska than it is in Florida (which doesn't seem surprising), and that ten times more "casual participants" (I assume she means kids) play soccer and basketball than hockey.
Here in the UK, we've already had a political focus on "Mondeo Man" (Mondeo is the name of a mid-range Ford motorcar, and supposedly refers to the middle-class amongst the population). Though, if everyone is downsizing to save money and be green, maybe next time it will be "Focus Man" or even "Fiesta Man". Or maybe, instead, we'll continue our USification by seeing an increasing election-time focus on sectors of the population based on their kid's pastimes? Perhaps politicians will start to aim their policies at "Cricket Mom", "Rounders Mom", "Xbox Mom", "Reading Harry Potter For The Fifth Time Mom", or even "Hanging Around On Street Corners Getting Drunk Mom".
And, after all this, I hear from a colleague in the US that their election might not actually be over yet. It seems that some people are waiting for the Supreme Court to rule that Mr. Obama isn't actually eligible to be President because his Dad was Kenyan, had a British passport, and dual nationality at birth. But don't panic - we can send over our nice Mr. Blair (he's not busy at the moment) to handle things until you make up your mind.
So by now you're probably wondering what, other than a brief mention of Xbox, all this rambling has to do with computers, documentation, and software. To be honest, I don't have any idea either...