Random Disconnected Diatribes of a p&p Documentation Engineer
A colleague pointed me at an interesting discussion the other day about whether geeks are actually "creative". It comes partly from a recent post by Ian Betteridge that rails against the claims that App Inventor, which is designed to encourage development of simple programs for Android, "enables people to be creative and not just passive consumers". However, what he doesn't explore is the real meaning of the word "creative" in today's terminology.
OK, so the dictionary defines "creative" as meaning "producing or using original and unusual ideas", with synonyms such as "original", "imaginative", "innovative", "artistic", and "inspired". These seem fine if you are talking about someone producing a stunning work of art in Corel Painter, or some spectacular new soundtrack using Roland Cakewalk. But nobody knocked up that software using a utility such as App Inventor. Instead it was created by a team of engineers working to fine tolerances and requiring a deep knowledge of the subject and technologies. Dare I say, "geeks"?
Looking at most Web sites, social network pages, online photo albums, TV programs, and magazine adverts, or listening to a large proportion of recently recorded music, it's hard to find much in the way of material you could even charitably describe as "original" or "innovative". Even computer applications seem mostly to be evolution rather than "inspired". How many word processors or virus scanners have you seen that you can actually say are "innovative", and when did you last install a program that was truly "original"? Maybe, as today's focus seems to be all about style (generally over content) you could perhaps ascribe the term "artistic" to many of these things - but even that rather stretches the imagination in the majority of cases.
However, according to my thesaurus, "creative" also spawns synonyms such as "inventive", "resourceful", "ingenious", and even "productive". So your modern word processor, spreadsheet, photo editing tools, and more - with their clever "automatic everything" and powerful Wizards for anything more complicated than typing a sentence - would fit well with this definition of creativity. Though, personally, I feel that equating creativity with productivity rather stretches the point. If I can type fast, I'm more productive. But the result often isn't creative in terms of the content.
I suppose, as a geek, I don't really want to be "artistic", or even "original", anyway. I'd like to be "productive", "inventive", and possibly "ingenious". I want what I build to be architecturally robust using tried and trusted techniques and proven technologies, and I'm happy to leave it to the UI designers to inject the artistic stuff. And, as I've never been on a creative writing course, I assume that what I do most days here at p&p is more about being technically accurate and informative rather than relying on artistic license (unlike my blog posts).
Ian also suggests that we geeks no longer rule the universe, and that our era is over. Yet, without us, none of this creative stuff would exist - and the world would be very different. I can't see anyone using App Inventor or similar utilities to implement the software that controls a nuclear power station, or powers communications satellites. And I doubt that most corporate and financial data centers rely on programs written by a media analyst or a fashion designer. Was the Android O/S (or even App Inventor itself) written by a social communities coordinator or a society wedding planner? I don't think so. Geeks still do, and always will, shape our world. OK, so it doesn't look very pretty when we've finished with it, but we just hire in a creative artist to make the UI look nice afterwards.
But I guess where all this is going is that - today - the word "creative" actually has negative overtones in many scenarios. In a previous life as a salesman, I could almost guarantee that my manager's response to reading my monthly sales report would be to praise my creativity...
If you're a Douglas Adams fan, you'll know all about the fabulously beautiful planet named Bethselamin. The ten billion tourists who visited it each year were causing so much erosion that the authorities introduced a rule whereby any net imbalance between the amount you eat and the amount you excrete whilst on the planet is surgically removed from your bodyweight when you leave (and therefore, as Douglas mentioned in the book, it is vitally important to get a receipt every time you go to the lavatory).
Yes, it's a well-worn quote, but I'm beginning to get nervous that it's starting to come true here on our own little blue-green planet (located, of course, in the uncharted backwaters of the unfashionable end of the western spiral arm of the Milky Way galaxy). And all because I needed a couple of new batteries for my uninterruptible power supplies.
We have a wonderful battery supplier just a mile or three down the road from here who seems to stock every kind of battery that human endeavor has managed to create, including sealed batteries for every APC UPS made in the last 20 years. That just about covers all the various models I have scattered around my house and office. What's more, they are considerably cheaper than buying online, there's no shipping cost, and they take the old ones back for recycling.
OK, so the APC ones do come with a label so you can ship them back to APC, but the address on the label is New Jersey USA. The nice man at our local post office worked out that it would be cheaper to fly over myself and take them back rather than sending them by post - though, of course, I'm not allowed to take batteries on a plane any more...
So, anyway, there I am with a couple of dead RBC2 batteries and a bag of assorted other used AA and the like, when the young assistant asks me for my name, address, postal code, phone number, and where I obtained the original batteries. As he'd been very pleasant and helpful so far, telling him to mind his own business seemed a bit strong, so instead I politely inquired as to why he wanted to know. I had deposited on the counter a selection of coins and notes of the realm by way of payment, so surely all he needed was to grab the cash, stuff it in the till, and I'd be on my way?
No, it seems that he needs to give me a receipt for the old batteries that are going for recycling. Maybe if there's not quite enough lead in them when they get round to breaking them up, they'll send me a bill for the balance? Or perhaps I'm only allowed to ecologically dispose of a specific quota of murky sulphuric acid each year and he's worried I'm approaching the limit? Aha! - more likely it's in case I stole them so I could sell them for the scrap value (which seems a little perverse when I'm giving them away free for recycling).
It's plainly all part of a secretive scheme connected with the move to weigh and analyse the contents of our dustbins, film and record the registration number of our cars when we go to the local rubbish tip, and remotely monitor our energy consumption every hour with the new smart electricity meters. It will all be fed into some huge computer that will send out letters once a month with demands for the requisite body parts to make up the imbalance between our resources input and recycled output.
So it's probably a good idea to make sure you do get a receipt every time...
As far as I know, nobody has yet been able to answer the long-standing question of what will happen if you spread butter on the back of a cat. Will it exhibit the buttered-toast effect, or will it still land on its feet? We know from the established principles described by Murphy's Law that toast always lands butter side down; but at least there is a thread of scientific explanation for this, which says it's because the buttered side is smoother and more "slippery" - thus offering less air resistance and causing that side to fall faster and hit the ground first.
Associated with Murphy's Law is the far more interesting Law of Unintended Consequences. I was reminded of this just last week while reading an article by a school teacher who was discussing the worrying trend in the increasing number of school children categorized as having Special Educational Needs (SEN). Of course, many people are applying the political football approach to this; citing the usual factors of poor housing, lack of parenting skills, broken families, and Government interference in the school syllabus.
However, the author of the article took a different approach. See if you can spot any factors that may contribute to the increasing trend in the following:
Of course, unintended consequences due to Government intervention are obvious all over the place. These days the police have to meet targets for the number of crimes they solve, so a constable who sees somebody running away from an illegally parked car they've broken into has to instantly decide whether to give chase on the off chance that he can catch the runner and find some case for arresting them, or instead fill out a ticket and slap it on the illegally parked vehicle.
Likewise, family doctors have to provide an appointment on request within two working days. So, if you decide to book ahead for a routine visit next week, you can't because that would reduce their "patient charter" average score. Meanwhile, as our local council strives to reduce incidences of fly-tipping, the local council-run waste reclamation facility only allows you to use it if you arrive in a car. If you decide to hire a van or truck when you finally get round to clearing out all the rubbish in your grandparent's garage, you need to apply a week ahead, filling in a four-page form for a "waste reclamation site access certificate".
But unintended consequences have a far wider area of incidence, especially in our own industry. For example, I decided to green up my server cabinet by virtualizing all of my servers (see blog entries passim). However, now the domain controller is a virtual machine that doesn't start up until the base O/S is running. As the base O/S is itself a member of the domain, it does lots of grumbling and sometimes has a complete hissy fit when it reboots because it can't find its domain controller. And my NAS can't authenticate with my servers because I upgraded them to Windows Server 2008 and the NAS has never heard of that.
However, when it comes to software, I guess the law really comes into its own. I've recently been working on a project that combines Windows Phone 7 with Azure cloud services. But the phone is not exactly a hugely powerful platform for applications. It would be nice to think you could buy one with a quad core Xeon processor and four gig of memory on board, but until they invent mobile power stations that seems unlikely. So I have to minimize my application's memory footprint, reduce power consumption when possible, be aware that garbage collection will interrupt execution, and limit the number of requests I send to remote services to minimize communication costs.
Yet my desires to follow best practice and implement well known design patterns suggest I should use data transfer objects and view models to store intermediate data, use dependency injection to decouple my components, and apply configuration-based composition of interfaces to maximize upgradability and minimize maintenance costs. But I'm not supposed to create lots of objects in memory in order to minimize memory footprint and garbage collection intervention; yet I'm also required to hang onto objects rather than recreating them repeatedly, and use fewer of them. Maybe I should take up tightrope walking as a hobby to get some practice in phone application development?
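The tension is easier to see in code. Here's a minimal object-pool sketch of the "hang onto objects rather than recreating them" side of the trade-off (in Python for brevity, rather than the C# the phone actually runs; the class, sizes, and names are invented for illustration):

```python
class BufferPool:
    """Reuses fixed-size buffers instead of allocating one per request,
    reducing garbage collection pressure at the cost of holding memory."""
    def __init__(self, size: int, capacity: int):
        self._size = size
        self._free = [bytearray(size) for _ in range(capacity)]

    def acquire(self) -> bytearray:
        # Hand out a pooled buffer when one is free; allocate only as
        # a fallback, so steady-state traffic creates no garbage.
        return self._free.pop() if self._free else bytearray(self._size)

    def release(self, buf: bytearray) -> None:
        buf[:] = bytes(self._size)  # scrub the buffer before reuse
        self._free.append(buf)

pool = BufferPool(size=1024, capacity=4)
first = pool.acquire()
pool.release(first)
second = pool.acquire()
print(first is second)  # True: the buffer was reused, not reallocated
```

Of course, the pool itself is exactly the kind of long-lived allocation the memory-footprint guidance frowns on - which is rather the point.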
And don't get me started on writing documentation. If I slavishly follow guidelines for minimizing content to only that which is directly applicable to a scenario, users will be left wondering how they accomplish something more real-life than my simple examples. But if I dive deep and cover every aspect, there's too much to read and it becomes overwhelming. And if I simply present the basic facts, it's hard to see how you could describe this as guidance. Yet, if I litter it with links to other resources, users just get lost in the plethora of semi-redundant and partially related information.
It's a good thing that my blog posts are concise and focused, and don't just wander aimlessly from one vague topic to another...
Well, perhaps last week I could have. It turns out that it was quite happily performing as an amplifier. And there was me thinking I understood this stuff. Another Decidedly Negative Scenario in terms of my network administrative abilities. But at least I've learnt some more things I didn't know that I didn't know. What follows is a gentle stroll through the intricacies of DNS and firewall management I encountered.
It seems that DNS recursive amplification attacks are back again (if they ever went away). I first noticed the constant blinking of the router lights last week and decided, even though everything seemed to be working fine, to investigate. I checked that I had all the latest updates and patches installed, so that wasn't the problem. And the Web server (a Windows Server 2008 Hyper-V virtual machine) has Windows Advanced Firewall and IP filtering enabled to allow inbound packets only to the Web server and the DNS server. After a little digging, by selectively disabling the firewall "allow" rules and watching the network connection status display, I discovered it was regular long bursts of requests over UDP on port 53 to the DNS server on my public Web/DNS server machine. But why? Who could be that interested in my DNS entries?
It didn't take long to realize that they were requests for a list of name servers configured in the DNS. This is, of course, the whole point of the attack. The attacker sends a small payload and gets back a ton of data. And the clever bit in terms of the attack is that the return address for the DNS lookup is spoofed, so the response from my DNS server is not sent back to the attacker, but is instead sent to someone else's machine as my free contribution to a DDoS attack on that server. So what to do about it? My colleague and I run public DNS servers to host the entries for about twenty local Web site and blog domains that we manage (acting as primary and secondary for each other's domains). So I can't just disable DNS.
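The asymmetry is easy to demonstrate. This sketch (Python, purely illustrative) builds a minimal DNS query for NS records of the kind the attackers send: the request comes out at under 30 bytes, while an NS response listing name servers and glue records can easily run to hundreds of bytes. It only builds the packet - a real attack also spoofs the UDP source address, which this does not touch:

```python
import struct

def build_ns_query(domain: str) -> bytes:
    """Build a minimal DNS query for NS records (QTYPE 2, QCLASS IN)."""
    # Header: ID, flags (recursion desired), 1 question, no other records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, zero terminator, then QTYPE, QCLASS
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in domain.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 2, 1)

query = build_ns_query("example.com")
print(len(query))  # 29 bytes - the response can be 10-50x larger
```

That ratio between request and response size is the "amplification": the attacker spends 29 bytes of their own bandwidth per kilobyte or so delivered to the victim.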
I'd assumed that just blocking packets inbound from the supposed sender would make no difference - the packets are not actually coming from that address. In fact, they're probably coming from lots of different addresses in some botnet. And, in addition, the attacked addresses are likely to change regularly. But the firewall also sees the spoofed address, and so blocking these addresses does work. While this is not the ideal way to solve the problem, at least it stops some of the flow over the Internet and removes me from the list of innocent attackers. You can add multiple remote IP addresses to a blocking rule, which makes managing the changing list of attack addresses easier. Perhaps, in real life, I should spend a few thousand dollars on a hardware firewall that detects and automatically blocks these kinds of attacks. It could soak up some more expensive electricity and further raise the ambient temperature inside my server cabinet.
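Since the list of addresses changes regularly, scripting the rule helps. This sketch generates the command for a single inbound blocking rule carrying multiple remote addresses; the netsh advfirewall syntax is real, but the rule name and the attacker addresses are made up for illustration:

```python
# Hypothetical list of spoofed source addresses seen in the DNS log
attack_addresses = ["203.0.113.7", "198.51.100.22", "192.0.2.99"]

def block_rule_command(name: str, addresses: list[str]) -> str:
    """Build one netsh command that blocks every current attacker
    in a single rule, so updating the list means updating one rule."""
    return (
        "netsh advfirewall firewall add rule "
        f'name="{name}" dir=in action=block protocol=UDP localport=53 '
        f"remoteip={','.join(addresses)}"
    )

print(block_rule_command("Block DNS attackers", attack_addresses))
```

Regenerating and re-applying the rule from a text file of addresses beats clicking through the WFWAS console every time a new batch turns up.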
I threw together a simple Windows Forms application that monitors the DNS log for recursive name server queries and notifies me so I can quickly detect new ones. You can download it here.
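If you don't fancy a full Windows Forms application, the core of the idea fits in a few lines. This snippet counts the source addresses issuing NS queries - the signature of the amplification traffic. The sample lines and the regex are invented in the general style of the Windows DNS debug log; the real format varies by version, so treat them as assumptions to adapt:

```python
import re
from collections import Counter

# Invented sample lines approximating the Windows DNS debug log format
log_lines = [
    "20100812 10:01:02 PACKET UDP Rcv 203.0.113.7 Q [0001 D NOERROR] NS example.com.",
    "20100812 10:01:02 PACKET UDP Rcv 203.0.113.7 Q [0001 D NOERROR] NS example.com.",
    "20100812 10:01:05 PACKET UDP Rcv 198.51.100.22 Q [0001 D NOERROR] A www.example.com.",
]

# Match received packets whose question type is NS, capturing the source IP
ns_query = re.compile(r"Rcv (\S+) .*\] NS ")

def suspicious_sources(lines):
    """Count source addresses issuing NS queries; ordinary A lookups
    (like the third sample line) are ignored."""
    return Counter(m.group(1) for line in lines if (m := ns_query.search(line)))

print(suspicious_sources(log_lines))
```

Feed the counts into the blocking rule and you have a poor man's version of that expensive hardware firewall.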
Another core issue is that a public DNS server should not allow public recursive address lookups. It should only resolve public IP address lookups for domains for which it is authoritative (in other words, public domain names that you manage), and should not be referenced from any machines on your internal network. They should have their own DNS server that does recursive lookups to your ISP, or they should send requests directly to your ISP if you don't have an internal DNS server. Even the DNS server machine itself should use the ISP's DNS server to do its own lookups. If you do have recursion enabled (it seems that 70% of DNS servers out there do), your DNS server will even go off and look up the IP addresses of all the name servers and return them - further adding to the load on your connection and the Internet as a whole. So, open the Advanced tab of the Properties dialog for the DNS server itself (in DNS Manager) and make sure that the Disable recursion setting is ticked. This also disables any forwarders defined in the Forwarders tab.
If yours wasn't ticked before, you'll probably now find that a browser and other applications on the server can't get to any external sites. This is because you accepted the defaults when you originally set up the server (i.e. no DNS server address assigned in the network properties dialog), which causes Windows to use the local DNS server (127.0.0.1). Without recursion and forwarders, your DNS server can only do lookups for the domains it hosts. You'll need to replace the 127.0.0.1 address in the Advanced | DNS tab of the Network Properties dialog for the IPv4 and IPv6 protocols with your ISP's DNS server addresses. Applications and services running on the Web/DNS server will now query the DNS servers that are configured in the network properties dialog, and will not use the local DNS server.
After you get everything working again, you'll probably see that an NSLOOKUP from another network into the DNS server returns a list of root server addresses. These are hints to the requesting server on where to go to find the address it is looking for (because your DNS server can no longer do a recursive lookup). And if you look in the DNS log at the contents of the packets returned by the DNS server, you'll see that they're still quite large. So I started wondering - do I actually need root hints in the DNS server if all it's ever going to do is respond to requests for authoritative domains that it hosts? It's never going to need to know where www.someothersite.com actually is, or tell someone else how to find it.
So I decided to experiment with root hints. If I delete or lose them all, I know I can get them back by copying the cache.dns file installed in the %SystemRoot%\System32\dns\backup folder. So I deleted them and then used the Copy from Server button in the Root Hints tab to load them from one of my ISP's servers. And they were completely different from my original set. So I wandered across to IANA to get the latest list - which is remarkably different from that in my ISP's DNS! And, digging deeper, they seem to have different lists in each of their DNS servers. No wonder DNS lookups are slow sometimes...
Anyway, getting back to the issue in hand, do I really need root hints? As a test, I removed all of them from my public DNS server, and removed the forwarders as well. Now a request for one of the authoritative domains still works fine, as do transfers to the secondary DNS server, but recursive requests prompt a very small return payload containing a "Server Fail" code. It's hard to tell from all the blogs and guidance I've read on DNS good practice if this is an approved approach but, to be honest, nobody should be querying my DNS server for non-authoritative domains anyway. So it's their own fault. Meanwhile my contribution to the DDoS attacks is very significantly reduced. Though now I get a Warning entry in Windows Event Log telling me that there are no forwarders or root hints each time the DNS service starts ... but I'm ignoring these.
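You can confirm that behavior by decoding the RCODE from the DNS response header yourself. This sketch hand-crafts a 12-byte SERVFAIL response (not captured from a real server - the ID and flags are invented) and extracts the code from the flags field:

```python
import struct

RCODES = {0: "NOERROR", 2: "SERVFAIL", 3: "NXDOMAIN", 5: "REFUSED"}

def rcode_of(packet: bytes) -> str:
    """The RCODE is the low four bits of the flags word (bytes 2-3)
    of the DNS header."""
    flags = struct.unpack(">H", packet[2:4])[0]
    return RCODES.get(flags & 0x000F, "OTHER")

# Hand-crafted response: ID 0x1234, flags with QR=1 (response) and
# RCODE=2, and all four record counts zero - just the 12-byte header.
servfail = struct.pack(">HHHHHH", 0x1234, 0x8002, 0, 0, 0, 0)
print(rcode_of(servfail), len(servfail))  # SERVFAIL 12
```

A 12-byte header-only reply is a rather better deal for everyone than the multi-hundred-byte NS responses the attackers were after.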
Having now got my hands dirty, I thought it would be a good idea to see if I could harden the server a little more. Such as applying outbound filtering in the firewall. I do it in ISA Server on my private internal network, but it's easy there because there are lots of appropriately pre-configured rules you can apply. Windows Firewall with Advanced Security (WFWAS) seems to have lots of predefined rules, but not for useful things like allowing Web browsing and sending email. Yet I must have read fifty articles on the Web looking for details of appropriate rule properties, and all I could find were articles and blog posts that said things like "Select the predefined SMTP Server rule". Err, where is that? Only in ISA Server I suspect.
Mind you, it's not hard to use the New Outbound Rule Wizard to create a Custom rule that allows Web browsing and services on the machine (such as Windows Update) to work. Basically it's any service/user using TCP through any outbound port to any remote server on ports 80 and 443 (and maybe 8080 or similar as well if you need to access sites you know are running on this or a non-default port. Visual Studio Team Foundation Server is a typical example). Make sure you create it in the Outbound Rules page, and be sure to specify the Public Profile, not Private or Domain. If you look at the Monitoring page in the WFWAS console, you should see that the Public Profile is active. If not, immediately dive into Network and Sharing Center, click "Customize", and set the connection type to Public!
To be more strict with Web access, you could configure the rule to allow only specific services or applications - but that's much more complicated. For example, what about your "alternative" Web browser, or Java Update Service, or the updates service for Adobe Reader? And remember Windows Update... As a mitigating factor, you should be running your browser in Enhanced Security mode on the server anyway, and only using it when absolutely necessary.
Allowing DNS queries from your server, and responses to other servers from your DNS server, out through the firewall is easier because there are preconfigured rules for these. I found that I needed to enable "All Outgoing (TCP)" and "All Outgoing (UDP)" in the "DNS Service" group, and "Core Networking - DNS (UDP Out)". Again, make sure you select the ones for the Public Profile, not Private or Domain. Finally, if you run an SMTP email server, you need to allow packets from this to escape out onto the Internet. I use the IIS 6.0 SMTP Service that is part of Windows Server 2008, carefully configured to prevent relaying. And as it does not receive email (the Reply To for all messages is my usual Web site administrator email address), it cannot be accessed from the Internet anyway because port 25 inbound is closed.
So, a simple Custom rule that allows only the SMTP server application (inetinfo.exe) to go out using TCP through any local port to only port 25 on remote servers should do it. Then flip over to the Properties dialog for the server itself (the root entry) in WFWAS Manager and set the Outbound connections drop-down to Block (the default is Allow) so that the only permitted outbound traffic is that defined in your enabled outbound rules.
Or so I thought. Being not a little naïve, I expected the SMTP service to use the local DNS Client (not the local DNS Server) to do the lookups required for delivering mail. I mean, everything else seems to use this - but not the SMTP Service. It obviously does its own lookups. Maybe this is something to do with last year's patch that changed the behavior of the SMTP Service and stopped the instance on my internal network from relaying essential email status messages - when it was quite happily doing so before. So, anyway, what you need is another rule that allows inetinfo.exe to go out using UDP through any local port to only port 53 on remote servers. And now (at least temporarily) everything started working properly again.
Of course, a couple of hours later I discovered the error messages in Windows Event Log telling me that the Time Service was broken. Ahh.. forgot that one, so add another Custom outbound rule to allow just the Windows Time Service (click the Customize button next to Services in the wizard) to use UDP on any local port to connect to any remote server on port 123. Want to check that you can ping the time servers you use? That's when you'll discover that PING doesn't work any more. And neither does NSLOOKUP or TRACERT. To allow pings out of your server, you can simply enable the preconfigured rule "File and Print Sharing (Echo Request ICMPv4 Out)" - a nice snappy name, though enabling file and print sharing sounds scary. But if you examine the rule (click Customize in the Protocols and Ports tab), you'll see it only allows ICMP Echo requests to escape.
Meanwhile, for TRACERT and NSLOOKUP, you need another Custom outbound rule that allows any service/user to use UDP through any local port to port 53 on any remote server. I imagined that the predefined rule "Core Networking - DNS (UDP Out)" would allow this, but it is limited to the svchost.exe program and so is no help for the other stuff like DOS utilities. If you want to use TRACERT and NSLOOKUP, and you create the rule for them, you can disable the "Core Networking - DNS (UDP Out)" rule. You can also remove the custom rule you created to allow DNS lookups for the SMTP service, as it also uses UDP to remote port 53. However, it's a good idea to leave it in place so the service will still work if you later decide to block TRACERT and NSLOOKUP.
Except, in my case, NSLOOKUP still wouldn't work - all I got was "UnKnown" (note the interesting letter case) for the DNS server name, and an IPv6 address instead of the usual xxx.xxx.xxx.xxx format one I was expecting. Typing "nslookup", then "set d2", and then a domain name produced the interesting response that the request was too long. In the end, my "large hammer" solution was to simply disable the IPv6 protocol in the properties for the network adapter and normal service was resumed. I guess this is something I'll need to come back and look at again.
What I can't help wondering, though, is when we'll finally solve the problems with spoofing that are already so widespread with email, and are obviously becoming just as common with DNS. I can (and do) use Sender Policy Framework (SPF) to advertise a list of valid IP addresses for email I send, but how do I do the same for DNS? And would it make any difference...?
According to the latest update bulletin from MessageLabs that lands in my inbox each month, around 90% of all emails passing over the Net are spam. And their global report says that around 120 billion spam emails are sent out from botnets every day. That means nearly one and a half million unwanted messages are being launched onto and scooting around some part of the network every second. And that's without all the gunk required to accomplish the other types of malicious activity, such as the DNS Amplification attacks I'm being regularly subjected to at the moment.
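A quick sanity check of that arithmetic:

```python
spam_per_day = 120e9            # MessageLabs' daily botnet spam estimate
per_second = spam_per_day / (24 * 60 * 60)
print(round(per_second))        # 1388889 - roughly 1.4 million a second
```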
Of course, the bulk of traffic on the Net is probably HTTP Web browsing, not email. But you have to wonder what percentage of the resources that make our interconnected technological experience work are required just to cope with stuff that nobody actually wants. How many power stations could we close down, how many degrees global warming could we avoid, and what savings could we make in cost, resources, and raw materials if we could just find a way to kill it off?
And how come the people who run the world are more concerned about making me use these awful new low energy light bulbs than applying their influence and capabilities (?) to an issue that should be easy to fix, and would have only positive outcomes for everybody? Not that our temporarily coalesced Government here in the UK have much idea about information technology anyway. They just announced that the previous plan to allow everybody to get "high speed" (2 Mbps) ADSL connections by 2015 was over-ambitious and will not now happen - even though they are wholeheartedly backing the introduction of multi-channel TV over the Internet in the next year or so. Yeah, that'll work...
And even the ISPs seem to be unable to do anything about it. My ISP, when asked about the recent DNS attacks, agreed that they were killing connection speed for many customers and affecting bandwidth availability across the network; and - yes - they know which ISPs and which IP addresses the attacks are coming from; but they are not allowed to block these addresses as part of some "international agreement". In the same way that they're not allowed to block the torrent of unsolicited fax and phone calls we get here that originate in various countries around the world.
I suppose the only saving grace is that I'm old enough to remember when networks were even less reliable and performant than the Internet is now. My first contact with digital electronic communication was through an acoustically-coupled (clamped onto a telephone handset with a rubber band) dial-up modem that ran at 1200/75. That's a sniffle less than 1.2 kilobits per second (not megabits) down and under 0.1 kilobits up. Somewhat slower than even the slowest dial-up today, and about one thousandth of the speed of my current (slow) ADSL connection.
Yet there was a real sense of adventure watching each 80 character wide by 25 line screen slowly fill up with text at a rate of about two screens per minute (even Twitter would have seemed slow then), and submitting a document back was often a "leave it running and come back the next day" scenario. Though usually the modem decided to drop the line when your neighbour's phone rang, or when the Strowger mechanical switch in one of the intermediate exchanges had some dust on its contacts. The typical procedure was to hit "send", go off and have dinner, then phone the recipient to see if it had arrived - and then read out the original so they could correct all the transmission errors.
That was the other problem, of course - no reliable error correction in the protocol or hardware. If you tried to communicate in real (slow) time, you had to contend with long waits before each word or part of a w or d a p p ear ed and be able to dec1p&er the stran%e char@£acte~s that ad0ed a cer/ain addi|ional piqu@ncy to the content. Or be prep@r3d to rec0n;ect when it sudden(y st*pp=d half~ay through a vit*lly |mp0~tant
So I guess I shouldn't really complain. I mean, I opened a Web browser and uploaded this blog post in only a few seconds, and was reading the result on screen in less than a minute. What would have been even more useful, of course, is if the content was actually worth all the resources it took to put it there...