Random Disconnected Diatribes of a p&p Documentation Engineer
Let's face it, the only proper way to connect computers together is with real wire in the form of good quality Ethernet cables and switches. The current all-encompassing drive towards wireless was, I reckoned, just a fad that would soon pass. At least, that's what I thought a few years ago. The reality is, of course, very different now.
I hard-wired the whole of our house when we first moved in; it has cavity internal walls and plasterboard covered external walls so it was relatively easy to poke the wires into them and cut holes for Ethernet sockets. I even put speaker wires into each corner of the lounge so that I didn't have cables tucked under carpets or nailed to the skirting boards.
I suppose in the US what we call plasterboard would be called drywall. Though when a colleague based in Redmond happened to mention that he had water leaks in his bathroom and was having to remove all the boards, I couldn't help asking if it was now actually wetwall.
Anyway, a compact 16-port switch in the study connects everything together, and links into the proxy server in the server cabinet in the garage. Reliable high speed networking, and plug in anywhere - what more could you want? Though this was more than ten years ago, and the discovery that where we live there is almost no FM radio or DAB (digital audio broadcast) signal meant that Internet streaming radio was the only way to satisfy our insatiable demands for loud rock music.
And Internet radios rarely have an Ethernet socket, and end up being located in the kitchen - the one room of the house that doesn't have an Ethernet socket. So, some seven years ago I was forced into nailing a wireless hub to one of the protruding ends of my network. And there it's been ever since, blinking soulfully at me from a high shelf and generally minding its own business.
Of course, over the years, the number of devices it feeds has grown. As well as the high-fidelity Internet radio streams that pass through it for most of every day there are now two smartphones, a couple of tablet computers, a laptop or two, and a bird-box camera. The wireless hub uses the old steam-powered radio standards with a maximum of 54 Mbps and so it's no wonder that, some days, everything slows down.
So I decided that the time was right for a network upgrade. The 16-port switch in the study is only 10/100 so it was replaced with a new TP-Link Gigabit model, and I ordered a new wireless hub that does dual-band 2.4 GHz and 5 GHz concurrently, with up to 450 Mbps on each. That should make everything fly!
However, nothing ever seems to be as easy as you expect. I blame the manufacturer's naming policy, though doubtless my own non-capabilities as a network administrator are partially culpable. You see, I reckoned that a wireless router was something with an ADSL or cable modem built in, so what I needed was a wireless access point. But all the ones I found seemed to be for use as repeaters with an existing wireless router. Then I found the NetGear kit that is a "wireless router" but without a modem in it. It seemed to be exactly what I needed.
And I expected that installing it would be easy, just a matter of setting the same fixed IP address, the same hidden SSID, the same security mode and passcode, and the same list of MAC addresses as the old one. Until I looked at the installation instructions. For some strange reason the first six pages are full of dire warnings to power off your ADSL or cable modem, take the batteries out, turn round three times and count to ten, and plug the wireless router into it using "the yellow cable supplied in the box." I've had "yellow wire problems" before, and for the life of me I couldn't see what all this palaver had to do with tagging a wireless hub onto the end of my network.
Instead, I plugged it into my laptop. But which of the five ports on the router should I use? The nice bright yellow one seemed too tempting to resist, but that didn't work. Turns out that it's supposed to be LAN port 1. And, amazingly, up popped the configuration screen. Which, of course, refused to do anything at all because it couldn't detect an Internet connection. Only when I found the Advanced Setup pages could I actually do anything with it (and by that time I'd thrown the instructions away).
Not that the instructions or the built-in configuration help notes are actually much help when you need to figure out some of the settings. For example, do I use the same SSIDs for the 2.4 GHz and 5 GHz channels or different ones? I chose to use the same on the grounds that, most days, it would be nice to just connect to anything. I don't really care which. Or even if it's my neighbour's. But a search of t'Internet reveals mixed opinions on this; it mainly seems to depend on whether you want to be able to tell them apart when you connect.
After three configuration attempts that ended with the "can't connect to router" message followed by the obligatory "go find a paper clip" (to press the reset button) activity, I finally figured out to completely ignore the tempting yellow socket and any configuration connected with "Internet". After that it all went swimmingly. It's a shame that it was only after all the fiddling about that I found this page on the NetGear site that explains how to do it all when you just want a wireless access point. OK, so it's for older models than the kit I have, but it still seemed to be relevant.
The important bit is where you suddenly figure out that you don't use (or need) the tempting yellow socket, and that you also spent twice as much money as you needed to because what you've bought is a "Wireless Router Without A Modem Even Though The Natural Meaning Of The Term Is One With A Modem In It" instead of a "Wireless Access Point That Is Not A Wireless Repeater And Can Be Used Standalone".
So by now I've got a wireless hub that should be able to connect to the Internet and do magic things, but can't because the tempting yellow socket is empty; will quite happily connect directly to a USB drive to stream music, and even make files accessible over the Internet, but can't because the tempting yellow socket is empty; can do firewalling and provide a guest network, but can't because the tempting yellow socket is empty; has three spare Ethernet ports to connect other stuff to, but they're empty because I already have a proper wired network; can do 450 Mbps on 2.4 GHz but that will kill all my neighbours' wireless connections; and can act as a wireless repeater, but I don't need that capability.
After a restless night dreaming about tempting yellow sockets, the next day I dug out the full PDF manual on the CD provided with the router, convinced that it must say something useful. A search for "access point" found the following help item:
Yep, that's all it says about it. And the option is not even on that page of the configuration interface. But there is another page called "Wireless AP" (I suppose if I'd been thinking logically I'd have realized that AP stands for Access Point). And here's what that page looks like when you select the uninformative "AP Mode" checkbox. Notice the contents of the help page - the big black rectangle at the bottom of the screen.
Aha! When you also check the next (uncaptioned) checkbox, which magically appears after you select AP Mode, you get text boxes to enter the long-anticipated IP address, subnet mask, gateway, and DNS servers. So I fill all that in, select Apply to reboot the router, and swap my network cable to the tempting yellow socket. And it works! Even the "Internet" page in the configuration now shows "Connected" and it sets the router's clock to the correct date and time. Maybe I've solved it?
Except now all those fancy features I paid so much extra for are disabled in the menus - but at least now I know I can't use them. However, where are the other options I expected to find in a top of the range wireless router? Such as the ability to tune the signal strength (I ran my old one at half power both to avoid annoying the neighbours and for security purposes). Or the ability to disable remote access to the configuration pages over the wireless link. Surely this is an obvious attack vector?
And, worst of all, I discovered that none of the wireless devices in our house can actually use the 5 GHz band...
For most of the morning Outlook has been glowering at me and reminding me that it can't connect to the server. Despite me patiently explaining that everything else that connects to the 'Net is working fine, it continued to sulk. Until suddenly an email arrived explaining that there was a major outage of the mail server network. Which, of course, arrived after they fixed it.
And to make matters worse, the message dropped into my Inbox several minutes after one that said the issue had been resolved. That's the problem with being universally electronically equipped and online communication enabled. It's like sending a snail-mail letter to people to tell them that the post office is on strike. Or a hardware manufacturer posting an automatically installed firmware refresh to fix a problem with the previous one that completely bricked everyone's router.
But maybe there's a neat reduction in consumer dissatisfaction if the bad news arrives only after the issue is resolved, or - like my experience this week - after the good news email to say it's fixed. A bit like that hackneyed phrase "Do you want the good news or the bad news first...?" Not that it works too well with jokes. Though if the doctor tells you that the guy in the next bed wants to buy your shoe, at least you can prepare yourself for the next statement that they need to amputate your leg.
Of course, assuming electronic contactability and constant online presence can be risky. I'm in the process of switching my cable Internet connection to a new package, which doesn't contain the phone line that came free with the old package but is a chargeable extra on the new package. So I emailed them to cancel the phone line. They decided to phone me to confirm it, using the phone number of the line I'm cancelling. Which seems sensible except that the reason I'm cancelling it is that I never used it (I have two other phone lines with much cheaper call rates) and consequently there is no phone plugged in. That's probably why the guy who phoned me didn't get an answer.
So they sent me an email instead, but sent it to my unused mailbox on their own system instead of the address I use for all my email (which is registered with them). Mind you, they're not the only people who do this; my other ISP does the same, but at least their email system allowed me to set up a redirection rule to my usual email account. The cable people don't seem to allow that. Luckily I found out about the message after phoning them back, and got a copy sent to the real me.
Maybe the answer is for email to be extended to take account of these kinds of connectivity difficulties. Email servers could automatically copy each important message to an SMS text, and then print it out and send it by snail-mail as well. And maybe also phone you up and read it out. When my wife was away last week our home phone rang and, when I answered it, a nice automated lady read out the contents of my wife's text message. Including automatically converting the "XXX" at the end to "Kiss, kiss, kiss". Isn't technology amazing?
Even more so because my wife depends on the predictive input capabilities of her phone without actually reading what it predicts. One day when I was out I got a text asking me to stop on the way home and get a beard. Luckily I was able to guess she meant to call in at our local baker's shop. And she hasn't yet discovered where the comma and full stop keys are, so reading the text is a bit like doing one of those word search puzzles.
I wonder if the automated lady had to spend ten minutes deciphering the message before she actually phoned me...
I discovered this week that online shopping is not something new and exciting, but has been around here in England since 1984; five years before the World Wide Web saw the light of day at CERN, nine years before the first commercially available web browser hit the streets, and eleven years before Amazon sold its first book (which was, rather eerily, all about computers and is still available).
Our pioneering English retailer was Tesco who, following a request from the local council in Gateshead to help elderly people with their weekly shopping, set up a small experimental scheme by attaching a simple modem-containing box to a telephone and a TV set. The display was text-based with about the same information display capability as a DOS command window, and it took 30 seconds or more to display each page. But it worked, and archive film shows people placing orders and taking delivery (and even paying with real money).
I suppose I'm a computing old-timer. I've been playing with and writing about computers for more than 30 years, and more than 20 of those have been directly or indirectly related to the Internet. Though the more I dig into the history of online retailing, the more amazing it is. Here in England we're known as a "nation of shopkeepers", but it seems we are also a nation of online shoppers. On average we spend more online per person than anywhere else in the world, which is amazing when you consider that in our tiny group of islands you're never very far away from a real shop. Though, considering my own shopping behavior (look on Amazon.co.uk first, and get the car out only if I can't find it somewhere on the web) I probably shouldn't be surprised.
The recent TV program about the 1984 experiment also described how rapidly some of the major players in the market have grown. A small London-based company that started by selling a wide range of items on TV, and whose name ASOS came from "As Seen On Screen", are now the largest online clothes retailer in Australia. Without any physical presence there and about as far away from its home base as you can get. And the Government here is in the process of privatizing the Post Office because it needs massive investment; not for delivering letters, but to compete in the fast-growing market of delivering parcels from online retailers.
You have to wonder how far all this would have got if we'd still been using the original 80 characters by 26 lines display and waiting ages for each page to load. Mind you, some old technology still seems to be working fine according to the news this week about the Voyager space probes. Voyager 1 is some twelve billion miles away now, and travelling at eleven miles a second. And still working fine after more than 35 years!
Anyway, now that we all do our shopping on the Internet, and buy from retailers located across the globe almost without noticing it, the notion that the world is getting smaller becomes truer by the day. Meanwhile, a note in the article about Voyager 1 says it will not reach the halfway point to our nearest star for another 40,000 years. I guess that shows just how small our world really is, or how big the Universe is.
And, supposedly, the on-board computer, built in the late 70's, has just a quarter of a millionth the processing power of a modern mobile phone. Imagine trying to do your mobile online shopping with that...
At one time you had to work in a museum to be a curator, but the wonders of information technology mean that now we can all exhibit our technical grasp of complicated topics and enlighten the general population by identifying the optimum resources that help to answer even the most complex of questions.
I'm talking about the new Curah! website here. The idea is simple: a resource that gathers together the questions most commonly asked about computing topics; each with a carefully and lovingly crafted set of links to the most useful blogs, reference documents, tools, and other information that offers a solution to the question.
Anyone can register and create a curation, and the site is optimized for search engines to make it easy to find answers. It's still in beta as I write this, but already has hundreds of answers to common questions. The great thing is that the curations are not just a set of links like you'd get from the usual search engines, which tend to optimize the list based on keywords in the resources, the number of links to them from other pages, and the newness of the content. None of these factors can provide the same level of usefulness as a list compiled by an expert in the relevant topic area who regularly creates and uses information that provides the maximum benefits.
My interest in the Curah! site also comes about partly because I am part of the group that defined the original vision and got it started. I've also added a few curations of my own, which are centered on the topic area that I now seem to have been permanently assigned to - Windows Azure application design and deployment. My regular reader will probably have noticed this from the rambling posts on this blog in the past.
However, one point that concerned me was that, having created my own curations, I am now responsible for maintaining them. As I plan to create more in the future, I was beginning to wonder if I would end up spending all of every Monday just checking and updating them as the target resources move, disappear, or I discover new ones. What I needed was some type of automated tool that would make this job easier. So I built one.
CurahCheck is a simple console-based utility that checks one or more views on the Curah! site by testing all of the links in each curation ID you specify. It can display the curation title and the linked page titles to confirm that the curation is valid and that all of the linked resources are still available. It can be run interactively, or automatically from a scheduled task.
The utility generates a log file containing details of the checks and any errors found. It can also generate an HTML page for your website that shows the results of the most recent check and the contents of the log file. If you have access to an email server, the utility can send warning email messages when an error is detected in any of the views it scans.
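The core of a link checker like this is straightforward. Here's a minimal Python sketch of the idea (my actual utility is a C#/Visual Studio 2012 project; the function names here are invented purely for illustration): fetch each link, record the HTTP status and page title, and format a timestamped log entry flagging anything that fails.

```python
import re
import urllib.request
from datetime import datetime

def extract_title(html):
    """Pull the contents of the <title> tag from a page, if there is one."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None

def check_link(url, timeout=10):
    """Fetch a URL and report (url, status, title-or-error)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            # Read only the first 64 KB - enough to find the title.
            html = response.read(65536).decode("utf-8", errors="replace")
            return (url, response.status, extract_title(html))
    except Exception as err:   # broken link, DNS failure, timeout...
        return (url, None, str(err))

def log_line(result):
    """Format one check result as a timestamped log entry."""
    url, status, detail = result
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    flag = "OK " if status == 200 else "ERR"
    return f"{stamp} {flag} {url} - {detail}"
```

A scheduled task would simply loop over the links in each curation, collect the `log_line` output into the log file, and send a warning email whenever any result carries the error flag.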
If you are a Curah curator you can download the utility from here, and use and modify it as you wish. The source project and code for Visual Studio 2012 is included. Before you use it, you'll need to edit the settings in the configuration file to suit your requirements - the file contains full details of the settings required and their effect on program behavior.
Of course, the usual terms and conditions about me not being responsible for any side-effects of using the program, such as your house falling down, your children being eaten by a dinosaur, or your computer bursting into flames, still apply...
No, this post isn't about parental difficulties and I didn't spell "paternal" wrong in the title, although I admit it is about problems with relationships. More specifically, the relationship between design patterns and pretty much everything else. And, based on previous experience of dabbling in this area, how I hate design patterns.
For more years than I care to remember I've been driven by situation to describe, document, present, and generally discuss software design patterns. Initially it was just patterns related directly to ASP.NET where common ones such as Factory, Singleton, MVP/MVC, and Publish/Subscribe were obvious - and generally built into the framework, or easy to implement. We could never agree on a structure for documenting patterns, never mind the actual definition of the pattern. Or which implementations to show, and in what programming languages.
Then I got involved in Enterprise Library, and more design patterns surfaced in my world: Builder, Adapter, Decorator, and Lazy Initialization. All good solid patterns that are well documented and easy to use in Enterprise Library. I even wrote code samples to demonstrate how, together with some tenuously humorous descriptions that attempted to relate the guy who comes to paint your house with the way the Decorator pattern works. Needless to say, those documents never saw the light of day.
But now I'm back in the mire of design patterns again, paddling furiously to try and stay afloat at the same time as writing semi-comprehensible verbiage around patterns in Windows Azure. Some of which seem so vague and newly invented that you might think they were giving out prizes for finding new ones. I've reached the state of wondering what design patterns really are, and if many of the new examples I'm trying to document are just techniques, guidance, general advice for implementation, or made-up stuff that sounds like it might be useful.
According to most reputable resources, a software design pattern is "... a general reusable solution to a commonly occurring problem within a given context in software design" and "... a description or template for how to solve a problem, which can be used in many different situations." But then the definition typically continues with "... they are formalized best practices that guide a programmer on the implementation, not complete designs or solutions that can be transformed directly into code."
So is something that's Windows Azure specific, such as how you perform health verification checking for an application, a design pattern? Or is it just a technique? Or guidance? It certainly doesn't fit the idea of a generally reusable solution or template that can be used in many different situations - it's pretty specifically a technique for checking if an application is alive. But it is, I guess, formalized best practice and definitely not a complete design.
In fact there's nothing I can find on the web that seems to relate to "Health Verification Pattern". Or anything based around "probe" or "ping" that fits with the scenario. Yet it doesn't seem like something that somebody just made up for fun either. There are features in Windows Azure Traffic Manager and Windows Azure Management Services to do health verification, even if it is just a simple probe on a specified URL and port.
Of course, what's clever is that you can have the target of the probe do some internal checking of the operation and availability of the resources the application uses, and maybe some validation of the internal state, then decide whether to send back a "200 OK" or a "500 Internal Error" status code (or some other error code). Though you do need to do it in a timely way so that the probing service doesn't think you've gone away altogether, and flag your application as "failed."
For example, with Traffic Manager you get just ten seconds, including network latency while the request and response traverse the Internet, before it gets fed up waiting. So there's no point in doing things like checking a dozen database connections, or validating checksums of every order in your system, because you probably won't have time. And is there any point in sending back a detailed error message if something has gone wrong in the application? You'll need a custom client in this case to handle it. But surely the application will already contain instrumentation such as error handlers and performance counters that will flag up any failed connections or errant behavior within the code at runtime?
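As a rough illustration of the technique (this is not the actual Windows Azure probe machinery - the check names and time budget are invented), a health-verification endpoint can run a few quick internal checks under a deadline and map the result onto a 200 or 500 status code:

```python
import time
from http.server import BaseHTTPRequestHandler

def run_health_checks(checks, budget_seconds=5.0):
    """Run each (name, check) pair until the time budget is spent.
    Returns (healthy, detail). Checks are abandoned once over budget,
    so the probing service never times out waiting for us."""
    deadline = time.monotonic() + budget_seconds
    for name, check in checks:
        if time.monotonic() > deadline:
            return (False, f"timed out before '{name}'")
        try:
            if not check():
                return (False, f"check '{name}' failed")
        except Exception as err:
            return (False, f"check '{name}' raised {err!r}")
    return (True, "all checks passed")

class HealthHandler(BaseHTTPRequestHandler):
    # Populated by the application at startup with fast checks, e.g.
    # [("database", ping_db), ("cache", ping_cache)] - both hypothetical.
    checks = []

    def do_GET(self):
        healthy, detail = run_health_checks(self.checks)
        # The probe usually only inspects the status code, so keep the
        # body short; detailed diagnostics belong in your own logs.
        self.send_response(200 if healthy else 500)
        self.end_headers()
        self.wfile.write(detail.encode("utf-8"))
```

The design point is the deadline: with only ten seconds end-to-end, each check must be a cheap "is it reachable?" probe rather than a full validation pass.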
Maybe it really is similar to a paternal relationship after all. Whenever I probe our son to ask if he's OK, all I ever get back is "yeah, cool". The real-world equivalent of "200 OK" I suppose...
My rather staid daily newspaper occasionally makes an attempt to be cool and trendy by squeezing an article about technology and lifestyle between the reports of war, famine, crime, and pictures of the Royal Family. But it was still a bit of a shock yesterday to see the headline "42% of People Admit to Nomophobia."
At first I assumed it was another kind of attention deficit disorder they'd identified in kids, or something you caught from watching too many reality TV shows. But after perusing the article I quickly grasped the real meaning: fear of being without your mobile phone. And, from reading more, it seems there is an acute version of the phobia where you're not only without your phone, but you can't remember where you left it.
I suppose I've never come across this condition before because I always know where my mobile phone is. It's at the back of the third drawer down in the kitchen cabinet next to the sink. And if I did forget, there'd be no point in dialing the number from another phone and trying to trace the sound because it's turned off. So it looks as though I should be suffering from chronic nomophobia. Something else I can ask my doctor about during my next visit.
Yet, strangely, I don't feel any symptoms or stress. I guess some people that don't know me will say it's because I don't have much interest in technical gadgets. But that's obviously not true – we have a ton of them in our house, everything from a computer-powered TV to a fully automated weather station (with added solar intensity recording) to a robot vacuum cleaner. And plenty more gizmos and electronic wizardry in between.
But, somehow, I can't get excited about all this new portable and wearable stuff – though that's probably because I hardly ever go anywhere. I have a wristwatch that is guaranteed to be 100% accurate because it gets its time from a radio transmitter in Rugby, but I can't remember when I last wore it. And my phone is a proper smartphone, even if it is three or so years old, though it only ever gets turned on about once a month. I have a Windows Surface tablet, but I've never found any reason to take it past the front door - for some reason it seems to stop working once I get a hundred yards away from the wireless router.
So will I be a customer for some new wearable technology? I already wear spectacles, and none of the photos of people using Google Glass show it perched on top of an existing pair of prescription spectacles - maybe you can get a prescription Google Glass, or one that fixes to existing spectacles? And I doubt that, even with spectacles, my aging eyes are good enough to read anything useful on the one-inch-square screen of a smart watch. Perhaps they'll bring out a smart watch that projects the display, two feet square, onto a nearby wall so that everyone else can read my email at the same time.
Or maybe my prescription Google Glass will have a zoom feature so I can see the screen of my smart watch...
Much as I complain about some TV documentaries being dumbed down (for example, showing a clip of an explosion every time the presenter mentions The Big Bang in case you can't remember what an explosion looks and sounds like), I have to admit that a recent episode of the BBC Horizon series was an excellent in-depth examination of the latest nightmare scenario.
"Defeating the Hackers" explored two recent high-profile cases in detail; the hacking of Wired journalist Mat Honan, who had all of his online presence infiltrated, and the Stuxnet attack on Iran's nuclear plant. It also explained in layman's terms how SSL encryption works, and how the ongoing development of quantum computers will render our current secure communication techniques obsolete.
Of course, anyone following the current events in regard to online privacy and government access to our personal data will already be wondering if there is any security left. Or risk travelling through a UK airport where it seems that all of your digital belongings are open to detailed examination and confiscation. But that's another story.
Anyway, getting back to the Horizon documentary, most of the topics are probably well known to most IT people. But there was one that I hadn't come across before: Ultra Paranoid Computing. It's obviously not a mainstream topic. Wikipedia doesn't know about it and there's little on the web. However, I did find one article on the National Science Foundation site that covers the same ground as the TV program.
Ultra Paranoid Computing attempts to deal with the scenario where every other computer on the planet has been taken over by malware (I guess that's where the "ultra-paranoid" bit comes in – I thought I was paranoid but I never considered this one). As well as the nightmare scenario of all of our utilities (water, electricity, gas, telephone) being hacked and disrupted, and global finance being completely broken, we need to protect ourselves by finding a way to securely identify users and other computers.
However, all of the techniques we currently use for this can, they say, be defeated. The new quantum computers will crack passwords and certificate keys instantly, and be able to read encrypted data. Even fingerprints and retina scans can be imitated, the program suggested, and so a new way of identifying ourselves - which cannot be replicated - is required.
The NSF article mentions an approach called Rubber Hose Resistant Passwords. I couldn't help getting visions of trying to log on with an elastic stocking by waving a leg in front of some specialist detector, but I'm going to assume that's not the case (I couldn't get the video that explains it to play). But typically our identity will need to be confirmed by some technique that makes use of physical attributes.
In the TV program, they showed an interesting approach using the guitar from the Microsoft Xbox 360 Guitar Hero game. You play a tune several times until the computer has built up a pattern of your timing, mistakes, and responses; and this becomes your physical passkey. You just need to play the same song again (in exactly the same way, of course) to log in. Maybe companies will have a central guitar station where you go to sign into the network every morning. Or, more likely, everyone will turn up for work disguised as an itinerant rock star with a guitar slung across their back, like they showed in the program.
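The idea is easy to sketch, even if the real system is far more sophisticated. A toy version (entirely my own invention, not what was shown in the program) might average the gaps between notes over several practice runs to build the enrolled pattern, then accept a login attempt only if every gap lands within some tolerance:

```python
def enroll(performances):
    """Average the inter-note intervals (in seconds) from several
    practice runs to build the reference timing pattern."""
    length = len(performances[0])
    return [sum(run[i] for run in performances) / len(performances)
            for i in range(length)]

def verify(pattern, attempt, tolerance=0.15):
    """Accept the attempt only if every interval is within
    `tolerance` seconds of the enrolled pattern."""
    if len(attempt) != len(pattern):
        return False
    return all(abs(a - p) <= tolerance for a, p in zip(pattern, attempt))

# Enroll from three practice runs, then verify two login attempts.
reference = enroll([[0.50, 0.24, 0.76],
                    [0.52, 0.26, 0.74],
                    [0.48, 0.25, 0.75]])
print(verify(reference, [0.51, 0.25, 0.74]))   # close enough: True
print(verify(reference, [0.90, 0.10, 0.40]))   # wrong rhythm: False
```

A real scheme would, of course, also model mistakes and dynamics, and need enough tolerance to let you in on a bad morning without letting in an impostor who has watched you play.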
Talking of disguises, I suppose I should keep up my usual tradition of helping to publicize the results of the best joke competition at this year's Edinburgh Festival. Jack: "I'm thinking of going to a fancy dress party disguised as a Mediterranean island." John: "Don't be Sicily!"
Meanwhile, I wonder if I can put in for promotion from just being paranoid to being "ultra-paranoid." Though I doubt it comes with a pay raise...
As summer continues to exhibit its typical level of weather unpredictability here in Ye Olde England, the effects of our harsh spring have faded and everything is in bloom as though nothing untoward had happened. Everywhere you look, the countryside in this part of our green and pleasant land seems to be at its best.
Even the usual populations of bees and butterflies are evident, despite warnings from experts that their existence was under threat. Though there is a marked shortage of people's favourite insect - the red and black spotted ladybird - due, they say, to the low number of aphids compared to previous years. And, although we recently had a spell of very hot and dry weather, even the lawns are looking pristine; while the assorted shrubs in my garden are defeating any attempt to keep them under control. From my "alternative office", a desk in the conservatory, even the dull days are filled with the wonderful sights and sounds of an English summer.
Best of all, however, is the confirmation that our local wildlife has survived the winter and still considers our garden to be a welcome stop on their night-time (and sometimes daytime) travels. The local fox family seems to have produced two cubs this year, rather than the more usual three, and there's evidence of young badgers in the woods next door. Though, so far, none of the light-coloured variety like the one we sadly lost some weeks ago.
As usual, my wildlife camera has been keeping watch. This time I set it to movie mode rather than still picture mode. The results aren't perfect at night because it takes a couple of seconds to start up the infra-red LEDs and stabilize the picture after detecting movement. And, typically, only one in fifty of the movies captures anything of interest - especially when we seem to be on the main route that all of our neighbours' cats use during their night-time constitutionals.
But, in case you are interested, I've posted a short video of extracts. The quality is not great, as the original was over 60 MB, so I reduced the frame size to make it more manageable. See if you can figure out what is shown in the first two clips; something scooting down and back up a small shrub, and then what might be a bat flying slowly past the camera.