Random Disconnected Diatribes of a p&p Documentation Engineer
So after I was castigated by Cisco's customer support people for buying from Amazon, who they class as a "grey importer", I decided to do the right thing this time. And look where it got me.
I decided to upgrade my load-balancing router, and chose one that is relatively inexpensive yet has more power and features than the existing one. Yes, it's a Cisco product - the RV320. It looks like it's just the thing I need to provide high performance firewalling, port forwarding, and load balancing between two ISPs.
On Amazon there are several comments from customers who bought one, indicating that the one supplied had a European (rather than UK) power supply. Probably because those suppliers are not UK-based. That does sound like it's a grey import, and - like last time - means I wouldn't get any technical support. However, there are UK-based suppliers on Amazon as well, so I could have ordered it from one of these, and easily returned it if it wasn't the UK version.
But, instead, I used the Cisco UK site to locate an approved Cisco retailer located in the UK, and placed an order with them. Their website said they had 53 in stock and the price was much the same as those on Amazon, though I had to pay extra for delivery of the real thing. Still, the small extra charge would be worth it to get technical support, and just to know that I had an approved product.
And two weeks later, with two promised delivery dates passed, I'm still waiting. The first excuse was that their suppliers had not updated the stock figures over the holiday period. So in actual fact they didn't have any in stock, despite what the website said. Probably the 53 referred to what the UK Cisco main distributor had in its warehouse. And, of course, they sold all 53 over the holiday period. Maybe, like me, all network administrators choose the Christmas holiday to upgrade their network.
A query after the second non-delivery simply prompted a "we'll investigate" response. So much for trying to spread my online acquisition pattern wider than just Amazon. I could have ordered one from Amazon.co.uk at the same price and had it installed and working a week ago. Or even paid for next-day delivery from Amazon, sent it back for replacement twice, and still be using it now. In a world that is increasingly driven by online purchasing and fast fulfilment, an arrangement of the words "act", "together", "your", and "get" seems particularly applicable if they want to remain competitive.
But I suppose I should have remembered that you can't believe everything you read on the Internet...
I've been watching the BBC program Stargazing on TV this week, and I have to say that they did a much better job this year than last. As well as stunning live views of the Northern Lights over three nights, and interviews with some ex-astronauts as well as a lady from the Cassini imaging team, viewers discovered a previously unknown galaxy. I guess that's what you call an interactive experience.
I tend to be something of an armchair stargazer. I watch all the astronomy documentaries, and never miss "The Sky at Night" - thankfully the BBC changed their mind about dropping it after Patrick Moore passed away. We do have a telescope, but it seems to rarely see the light of day (or, more accurately, the dark of night). It's rather like the guitar sitting forlornly in the corner of the study. Both are waiting for me to retire so I have endless hours of free time.
Of course, astronomy is like most physical sciences. Some things you can easily accept, such as the description and facts about our own solar system. Though looking at photos of the surface of another planet is a little unnerving, and it requires a stretch of the imagination to accept that you're not looking at a film set in Hollywood or a piece of the Mojave desert. And the fact that they say they can tell what the weather is like on some distant Earth-like planet in a far-off galaxy seems to be stretching the bounds of possibility.
They also had to mention the old "where's all the missing stuff" question again. Not only do we not know where 95% of the mass of every galaxy (including our own) is, but we have no idea what the dark matter that they use to describe this missing stuff actually is. Though there was an interesting discussion with the Gaia team, who reckon they can map it. We still won't know what it is, but at least we'll know where the largest lumps are.
One exciting segment of the final program was where viewers who were taking part in an exercise to find lensing galaxies, which can help to locate far more distant galaxies, came up with a really interesting hit. So much so that they immediately retargeted the Jodrell Bank and several other telescopes around the world at it. We await the results with bated breath, including the name - which is currently open to suggestions.
But it's when they start talking about how you are seeing distant galaxies as they were several million, or even several billion, years ago that it gets a bit uncomfortable. Especially how the limit to our discoveries is stuff that is 14 billion light years away or closer, because the light coming from anything further away hasn't had time to reach us yet. Even though it all started in the same place at the Big Bang. And galaxies that are near the limit are actually 40 billion light years away now because they kept going since they emitted the light that we are looking at now. So will we still be able to see them next year?
Also interesting was the discussion of what happens when two galaxies collide. It seems that the Andromeda galaxy, our nearest neighbour, is heading towards us at a fairly brisk pace right now. Due to the vast distances between the stars in each galaxy, there's only a small chance of two stars (or the planets that orbit them) colliding, but they say it will produce some exciting opportunities for astronomical observation as it passes through. And there's a theory that the shape and content of our own Milky Way galaxy is actually the result of a previous encounter with Andromeda anyway.
For me, however, one of the presenters managed to top all of these facts and theories almost by accident. When asked what the oldest visible thing in the Universe is, he simply pointed to himself and said that all the hydrogen atoms that make up parts of all of us (and everything around us) were made within two minutes of the Big Bang. So pretty much all of them are 14 billion years old.
Of course, the other things that make up us, the larger and more complicated atoms, are a bit younger. Many of these types of atoms are still being manufactured in distant super-novae, but this stuff inside us has no doubt been around for a very long time. As any good astronomer will tell you, Joni Mitchell was right when she sang "We are stardust"...
After spending part of the seasonal holiday break reorganizing my network and removing ISA Server, this week's task was reviewing the result to see if it fixed the problems, or if it just introduced more. And assessing what impact it has on the security and resilience of the network as a whole.
I always liked the fact that ISA Server sat between my internal domain network and the different subnet that hosted the router and modems. It felt like a warm blanket that would protect the internal servers and clients from anything nasty that crept in through the modems, and prevent anything untoward from escaping out onto the ‘Net.
The new configuration should, however, do much the same. OK, so the load-balancing router is now on the internal subnet, but its firewall contains all the outbound rules that were in ISA Server so nothing untoward should be leaking out through some nefarious open port. And all incoming requests are blocked. Beyond the router are two different subnets connecting it to the ADSL and cable modems, and both of those have their firewalls set to block all incoming packets. So I effectively have a perimeter network (we're not allowed to call it a DMZ any more) as well.
But there's no doubt that ISA Server does a lot more clever stuff than my router firewall. For example, it would occasionally tell me that a specific client had more than the safe number of concurrent connections open when I went on a mad spree of opening lots of new tabs in IE.
ISA Server also contained a custom deny rule for a set of domains that were identified as being doubtful or dangerous, using lists I downloaded from a malware domains service that I subscribe to. I can't easily replicate this in the router's firewall, so another solution was required. Which meant investigating some blocking solution that could be applied to the entire network.
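For what it's worth, the deny-list part of the problem is mostly just text processing. One DIY alternative I considered (hedged: this is a sketch, not what my network actually runs) is turning the downloaded malware-domains list into "sinkhole" entries that point each domain at an unroutable address. A minimal Python sketch, assuming the blocklist is a plain text file with one domain per line and `#` comments (the sample domains are made up):

```python
def parse_blocklist(text):
    """Extract domain names from a one-domain-per-line blocklist,
    ignoring blank lines and '#' comments."""
    domains = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if line:
            domains.append(line.lower())
    return domains

def to_hosts_entries(domains, sink="0.0.0.0"):
    """Render hosts-file style sinkhole entries, one per domain,
    de-duplicated and sorted for stable output."""
    return "\n".join(f"{sink} {d}" for d in sorted(set(domains)))

sample = """# malware domains feed (example)
badsite.example
evil.example  # known phishing
"""
print(to_hosts_entries(parse_blocklist(sample)))
# prints:
# 0.0.0.0 badsite.example
# 0.0.0.0 evil.example
```

Hosts-file entries only protect the machine they're installed on, of course; the network-wide equivalent would be feeding the same list into sinkhole zones on the Windows DNS server, which is rather more work - hence the appeal of letting someone else's DNS service do it.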
Here in Britain, our deeply untechnical Government has responded to media-generated panic around the evils of the Internet by mandating that all ISPs introduce filtering for all subscribers. What would be really useful would be a system that blocked both illegal and malicious sites and content. Something like this could go a long way towards reducing the impact of viruses and Trojan propagation, and make the Web safer for everyone. But, of course, that doesn't get votes.
Instead, we have a half-baked scheme that is supposed to block "inappropriate content" to "protect children and vulnerable adults". That's a great idea, though some experts consider it to be totally unworkable. But it's better than nothing, I guess, even if nobody seems to know exactly what will be blocked. I asked my ISPs for more details of (a) how it worked – is it a safe DNS mechanism or URL filtering, or both; and (b) if it will block known phishing sites and sites containing malware.
The answer to both questions was, as you'd probably expect, "no comment". They either don't know, can't tell me (or they'd have to kill me), or won't reveal details in order to maintain the integrity of the mechanism. I suspect that they know it won't really be effective, especially against malware, and they're just doing it because not doing so would look bad.
So the next stage was to investigate the "safe DNS services" that are available on the ‘Net. Some companies that focus on identifying malicious sites offer DNS lookup services that automatically redirect requests for dangerous sites to a default "blocked" URL by returning a replacement IP address. The idea is that you simply point your own DNS to their DNS servers and you get a layer of protection against client computers accessing dangerous sites.
Previously I've used the DNS servers exposed by my ISPs, or public ones such as those exposed by Google and OpenNIC, which don't seem to do any of this clever stuff. But of the several safe DNS services I explored, some were less than ideal. At one of them, the secondary DNS server was offline or failing. At another, every DNS lookup took five seconds. In the end the two candidates I identified were Norton ConnectSafe and OpenDNS. Both require sign-up, but as far as I can tell are free. In fact, you can see the DNS server addresses even without signing up.
Playing with nslookup against these DNS servers revealed that they seem fast and efficient. OpenDNS says it blocks malware and phishing sites, whereas Norton ConnectSafe has separate DNS server pairs for different levels of filtering. However, ConnectSafe seems to be in some transitional state between v1 and v2 at the moment, with conflicting messages when you try to test your setup. And neither it nor the OpenDNS test page showed that filtering was enabled, though the OpenDNS site contains some example URLs you can use to test that their DNS filtering is working.
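The nslookup experiments can also be reproduced programmatically, which makes it easier to time and compare candidate resolvers. This is a minimal sketch, standard library only, that hand-builds a DNS A-record query and sends it over UDP to a chosen server; the resolver addresses in the comments are the published OpenDNS and Google ones, but treat them as assumptions and verify before relying on this:

```python
import socket
import struct
import time

def build_query(domain, txid=0x1234):
    """Build a minimal DNS query packet for an A record (QTYPE=1, QCLASS=IN)."""
    # Header: transaction id, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME is a sequence of length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode("ascii")
                     for p in domain.rstrip(".").split("."))
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def timed_lookup(domain, server, timeout=5.0):
    """Send the query over UDP to server:53; return (seconds taken, raw reply)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(build_query(domain), (server, 53))
        reply, _ = s.recvfrom(512)
        return time.monotonic() - start, reply

# Compare candidate resolvers (addresses are assumptions -- check them first):
# timed_lookup("example.com", "208.67.222.222")  # OpenDNS
# timed_lookup("example.com", "8.8.8.8")         # Google Public DNS
```

A filtering resolver reveals itself here too: a lookup of a known-bad test domain should come back with the service's block-page address rather than the real one, which is an easy way to confirm the filtering is actually switched on.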
The other issue I found with ConnectSafe is that the DNS Forwarders tab in Windows Server DNS Manager can't resolve their name servers (though they seem to work OK afterwards), whereas the OpenDNS servers can be resolved. Not that this should make any difference to the way DNS lookups work, but it was annoying enough to make me choose OpenDNS. Though I guess I could include both sets as Forwarders. It's likely that both of them keep their malware lists more up to date than I ever did.
So now I've removed all but the OpenDNS ones from my DNS Forwarders list for the time being while I see how well it works. Of course, what's actually going on is something equivalent to DNS poisoning, where the browser shows the URL you expect but you end up on a different site. But (hopefully) their redirection is done in a good way. I did read reports on the Web of these services hijacking Google searches and displaying annoying popups, but I'm not convinced that a reputable service would do that. Though I will be doubly vigilant for strange behaviour now.
Though I guess, at some point, you just have to trust somebody...
Having run out of ideas, and given up Binging a solution for my intermittent connectivity problems around Windows 8, IE 10, and Outlook, it was time to stop playing nicely. Time instead for a day with my head in the server cabinet, a handful of network cables, some sticky labels, and decisive action.
The problems documented over the past few weeks (see All I Want For Christmas) were still intermittently annoying, as well as being annoyingly intermittent. I was totally unable to track down any DNS problems, despite many hours experimenting with different forwarders, root entries, test scripts, clearing caches, and more.
I'd logged all dropped packets in ISA Server for a day, but there were none related to the problem from the machines under test. Though ISA's performance monitor was consistently reporting an average dropped packet rate of 0.3 per second and I wasted half an hour tracing these back to my wireless access point. Even though all of its fancy USB, printer connection, and media sharing features are turned off it still insists on sending out a network discovery packet every three seconds. Highly annoying.
Then I read more on the ISA Server blog sites about how using the proxy client changes the behaviour of machines connected to ISA. Of course, I haven't actually installed the proxy client directly since XP days. I just assumed that some clever mechanism in Windows Vista, 7, and 8 used the Gateway/Router setting specified in DHCP to find the proxy server and set themselves up for it automatically.
What I read suggested that ISA itself is doing DNS lookups in response to requests from clients, whereas a ping or nslookup on the client uses the network DNS server or does its own DNS lookup. So trying to track faults with nslookup after a connection failure was a waste of time. By now I was rapidly tiring of trying to be a network administrator, and I didn't bother following this up any further to see if it really is the case.
All of which prompted the decision to perform some radical surgery in the server cabinet, and get rid of ISA Server altogether. It's a Hyper-V VM, so it won't reduce the physical server count - but it will simplify administration and backup tasks and, hopefully, resolve my connectivity problems. I replicated all the ISA outbound rules in the firewall of my load-balancing router, which sits between the ISA server and the two ISP modems. A day monitoring the router logs and fine-tuning rules suggested this would work fine.
Reconfiguring the network was deceptively easy. Simply power off the ISA VM, change the IP address of the router to the same as ISA (the address already specified in the DHCP scope options), and run some network penetration tests. Instantly everything seems to be faster, web page loads are snappier, and no sign of smoke or loud bangs. And if it all goes wrong, or turns out to be a mistake, I still have the ISA Server VM so I can easily revert to the previous configuration.
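Those "network penetration tests" were nothing grand, by the way. A quick script that attempts TCP connections to a handful of hosts and ports does a surprisingly good first pass at confirming the outbound rules survived the migration. A minimal sketch (the hosts and ports in the comments are placeholders, not my actual rules):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Substitute the hosts/ports your own rules should allow or block, e.g.:
# for host, port in [("www.example.com", 80), ("www.example.com", 443)]:
#     print(host, port, "open" if port_open(host, port) else "blocked/closed")
```

Running it from a client machine checks the whole path through the router; running it against hosts a rule is supposed to block is just as informative as checking the ones it should allow.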
But will my simple load-balancing router be able to cope with all that extra firewalling and packet shifting load once I start hammering the network with my usual working-week vigour? It's only an old Linksys RV042 with a 100 Mbps Ethernet port. Do I need to upgrade to something like the new RV320 instead? I guess I'll soon find out.
And, of course, the question now is what will I do next if my intermittently annoying connectivity problem is still annoyingly intermittent...?