Random Disconnected Diatribes of a p&p Documentation Engineer
So I finally managed to grab a week away, and spent a restful few days disconnected from both Windows Azure and the Internet. Other than a frustrating sat-un-navigation experience, it was a refreshing change and a chance to see some more of our wonderful English countryside - with the direction and destination chosen by my wife who's been fascinated by the recent TV series "Monkey Life".
The series shows the work of the people at the Monkey World sanctuary in Dorset. Having followed the fortunes of several of the monkeys, apes, and gibbons shown in the series, we were keen to meet them in real life. It's an amazing place, and well worth a visit to see the incredible work they do and the fabulous environments they've built for the various primates rescued from all over the world.
However, our first stop was at the seaside - I'm not allowed to take my wife away on holiday unless she gets to see the sea - and the nearest to our destination was Lulworth Cove. It's an eerie area of the heritage coast of Dorset that demonstrates how the power of the sea and movements of the Earth's crust have shaped the landscape. And it's a pretty place as well, as you can see in the photos below.
Bright and early next morning we were in the queue at Monkey World, listening to the incredible sounds of hundreds of apes calling out for their breakfast. Inside, the first encounter was several troops of chimpanzees.
One of the featured primates in the TV series was an Orangutan named Oshine, who was rescued from a ranch in Johannesburg and was hugely overweight. With a proper diet and plenty of exercise she's doing well now, as you can see here.
The Woolly Monkeys are also a treat to watch as they chase around and perform the most amazing aerobic and aerobatic feats. Though stealing the show was the newest addition to the collection.
Meanwhile, a couple of Lemurs seemed really surprised to see people wandering around inside their enclosure.
However, the one that we really came to see was Mikado, a young Golden Cheeked Gibbon who was hand-reared by the staff in the park after being orphaned in France. Now growing up, he shares an enclosure with another older female who's taken over as surrogate mother.
Of course, there were hundreds of others to see, including the Capuchins that look like little old men. They're popular pets until they grow and get rowdy, and so the park has lots of them in several different enclosures.
Finally, after an exhausting day, we set off on the 250-mile drive home. Of course, with modern sat-navs that do dynamic updating based on the current traffic situation and route timings, it should be easy. Just follow the nice lady's instructions. She sent us off on a detour through some pretty villages and country lanes to miss congestion around Southampton, which was nice, even if it did take over an hour to cover the first thirty miles.
But then there was another dynamic route update to avoid an accident on the M4 motorway. And then another to avoid queues in Cirencester. Finally, when she told us that we should divert again through Gloucester I decided enough was enough, turned it off, and just followed the road signs. It took five and a half hours to get home, and I suspect that we'd still be driving round pretty villages and country lanes now if my electronic lady had her way. But I suppose you have to be amazed at the technology, even if it works best when switched off.
However, for motoring fans, here's a photo of two wonderful old E-type Jaguars seen in the car park at Monkey World...
Following a discussion last week about how the most successful national clubs and societies have evolved, I was amused by a response from someone who professed to have set up the National Procrastination Society. He reckons that this is the most successful society ever because none of the members has so far decided to return their application form, and they haven't yet got around to organizing any meetings.
All of which coincided very nicely with a current period of procrastination here at this tiny and remote p&p location where I now seem to spend all my days writing guidance for Windows Azure. The procrastinated topic was startup tasks and configuration files, and a prolonged burst of frenetic procrastination still hasn't produced a convincing result.
The issue is that you really need to put all your configuration settings for a Cloud Service role in the Windows Azure service configuration file, rather than in web.config or app.config, so that the configuration can be changed without having to redeploy and restart the role. But it's not easy to do. As many blog posts report, several components such as the ASP.NET membership and profile providers, Windows Identity Foundation (WIF) modules, and assorted other bits and pieces aren't "cloud-aware". They only look in the web.config or app.config file.
There is, of course, the new CloudConfigurationManager class that reads from the ServiceConfiguration.cscfg file by default, and falls back to web.config or app.config if the setting is not found in ServiceConfiguration.cscfg. Though it only looks in the appSettings section of web.config or app.config, so it won't find things such as connection strings or WIF settings. We got round this in our guide by wrapping the CloudConfigurationManager class in a custom class that does look in the appropriate locations.
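The wrapper idea isn't .NET-specific; underneath it's just a layered lookup with a fallback chain. Here's a minimal sketch of that pattern in Python (the class name, setting names, and dictionary stand-ins for ServiceConfiguration.cscfg and web.config are all invented for illustration; our actual guide wraps the CloudConfigurationManager class in C#):

```python
# Sketch of a layered configuration lookup: prefer the cloud service
# configuration, fall back to the local application configuration.
# The dictionaries stand in for ServiceConfiguration.cscfg and the
# appSettings/connectionStrings sections of web.config or app.config.

class ConfigWrapper:
    def __init__(self, service_config, app_settings, connection_strings):
        self.service_config = service_config          # ServiceConfiguration.cscfg values
        self.app_settings = app_settings              # <appSettings> section
        self.connection_strings = connection_strings  # <connectionStrings> section

    def get_setting(self, name):
        """Prefer the service configuration; fall back to appSettings."""
        if name in self.service_config:
            return self.service_config[name]
        return self.app_settings.get(name)

    def get_connection_string(self, name):
        """Also check the section that a plain appSettings lookup ignores."""
        if name in self.service_config:
            return self.service_config[name]
        return self.connection_strings.get(name)
```

The point of the wrapper is simply that every lookup goes through one place that knows all the locations a value might live in, so a setting moved into the service configuration silently takes precedence over the same setting left behind in web.config.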
But this is the wrong way round anyway, because what you actually need is for the non-cloud-aware components to look in ServiceConfiguration.cscfg. There are several workarounds described in blog posts and documents that get round the issue by copying values from the service configuration into the current configuration loaded from the web.config or app.config file so that the components can find the appropriate values. The various suggestions include using a separate executable startup task with the AppCmd utility, and running code in the elevated startup event of a role.
However, all of these are what they say they are: workarounds. None can really be described as "the one true way" that we should recommend in our guidance. So, after the requisite period of procrastination, we've come to the conclusion that we won't include any of them in our reference implementation and just describe the options in the written guidance. Instead we use deployment scripts run from MSBUILD tasks that update the web.config files with the appropriate test or production settings before deploying the package. It seems to be the best plan at the moment - especially as, in an environment that is evolving as quickly as Windows Azure, there may well be a "proper" way just around the corner.
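As an illustration of that build-time approach, the kind of update the deployment scripts make can be expressed as an XML Document Transform (XDT) of the sort Visual Studio applies for Web.Release.config; the setting and connection string names below are invented for the example:

```xml
<!-- Web.Release.config: a minimal XDT transform sketch (names invented).
     Applied by an MSBuild task before packaging, so the deployed
     web.config already contains the production values. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="DiagnosticsLevel" value="Error"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
  <connectionStrings>
    <add name="MainDatabase"
         connectionString="Server=prod-sql;Database=AppDb;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

The trade-off, of course, is the one described above: values baked in this way can only be changed by redeploying the package, which is exactly what putting them in the service configuration was meant to avoid.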
In the meantime I'm off to see if I can remember where I put my application form for the National Procrastination Society. Though I still haven't made up my mind whether I want to join...
So when you buy a load balancing router, what do you actually expect it to do? Maybe I just expected too much from it as a solution to my distinctly unbalanced connectivity requirements. But even if the outcome is typically a lack of adequate refreshment, I suppose it's nice to live in hope with a "glass half full" approach. Though, in my case, "network half full" seems more appropriate.
After my recent shuffling of accounts with Internet Service Providers, I now have two connections to the Net. One is through cable and provides 20 Mbit downstream and 1 Mbit upstream. The other is through ADSL and (on a good day) offers 2 Mbit down and 750 Kbit up. They're both linked to my internal network through a LinkSys RV042 load balancing router. So, in theory, I can get great performance combined with automatic failover should one of the connections fail.
However, this arrangement seems to cause more problems than it solves. Some sites, including my personal email provider, treat requests from different IP addresses as coming from different sources. My cable and ADSL connections are very different IP addresses, and my email provider's load balancer and security mechanism don't allow requests from both IP addresses to access the same authenticated session. I guess this is done to mitigate "man in the middle" attacks, but it means that I'm continually prompted for credentials when using Outlook or OWA.
The solution is to force all HTTPS requests to go through one of my two connections rather than being load balanced, and the obvious choice is to use the faster connection. To do that I can set up protocol bindings in the router, but it just raises a heap of new issues. For example, the Protocol Bindings configuration section consists of controls where you specify the service type (such as HTTPS), the source IP address range, and the destination IP address range. There's also an Enable checkbox which, for some reason, is unchecked by default. So if you just enter the values and click the button to add the binding, it doesn't work unless you remember to select it again and check the Enable box.
And then you need to figure out whether adding one binding will still allow all other services and IP addresses to route to one or both of the external connections. There's nothing in the router manual to indicate whether you need to add more bindings for these, or how you can create a rule that excludes some services and IP addresses but includes the rest. And there's no indication either of whether the order of the bindings makes any difference. As there are no "Move up" and "Move down" buttons, I have to assume it doesn't.
So I Binged for help, but all I can find are some non-relevant half-examples. The Cisco forum people keep saying that "you can use protocol bindings to resolve this issue" but never actually show you how. From the number of posts complaining that protocol bindings don't work, I can only guess that most people, like me, can't figure out how to configure them properly.
But I carried on and created a binding to route all HTTPS requests to my email provider through one connection. Except that I have no idea what destination IP address range to specify. I can get the IP address currently allocated to my email server using nslookup, but I can't get the range because their DNS server won't accept requests to list them all. And they seem reluctant to provide the information in response to my emails. In the end I decided to route all HTTPS requests through one connection rather than just requests to their servers.
So, having resolved that issue I can now enable load balancing secure in the knowledge that I can still get at my email, as well as benefitting from using both connections for everything else. The router is also configured with the throughput rates for each connection and is clever enough to spread requests across them based on the capacity of each one, so more requests go to the cable connection than the ADSL connection. This is good because there is a monthly download limit on the ADSL line, whereas the cable connection is unlimited.
Unfortunately it doesn't actually speed things up in many cases. When I open a web page, the requests for the associated images and other content are load balanced and so, in theory, the entire page loads more quickly than when using just one connection. However, the router doesn't know how big each of these images or resources is, so it may send the request for very large ones over the slower connection, which means the page load is actually slower than using just the faster connection. And it's quite possible that a request for, say, a streaming video file may be routed through the slower connection (whereas I'd want it to go through the faster one) because I can't set up protocol bindings for different content types.
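The capacity-based spreading the router does can be pictured as a weighted random choice between the two links. A tiny sketch (the link names and rates are just my own two connections; the router's actual algorithm isn't documented, so this is an illustration of the idea, not its firmware):

```python
import random

# Sketch of capacity-weighted link selection, as a load-balancing
# router might do it. Rates are downstream capacity in Mbit/s.
LINKS = {"cable": 20.0, "adsl": 2.0}

def pick_link(rng=random):
    """Choose a link with probability proportional to its capacity.

    The sketch also shows the problem described above: the choice is
    made per request, with no knowledge of response size, so a large
    download still lands on the slow link roughly 1 time in 11 here.
    """
    total = sum(LINKS.values())
    r = rng.uniform(0, total)
    for name, capacity in LINKS.items():
        if r < capacity:
            return name
        r -= capacity
    return name  # guard against floating-point edge cases
```

With these weights about 91% of requests go to the cable link, which matches the behaviour of sending more traffic to the higher-capacity connection while never guaranteeing that any particular large transfer avoids the slow one.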
Load balancing is, of course, designed to work over connections that have the same capacity and bandwidth, so I guess I can't expect it to do anything different. In the end I've gone back to using the router in failover mode, with the faster cable connection as the primary. At least I can be sure I won't exceed the bandwidth limits on the ADSL connection unless the cable connection fails altogether, though what my ISP will think when my bandwidth usage is zero each month I can't guess. Or whether my ADSL modem will still work when it doesn't have regular traffic it can use to configure connection rates and error checking. And it still feels like I'm wasting a connection that I'm paying for.
But at least I've got a valid reason for being a bit unbalanced...
A long-standing target for comedians is the comb-over hairstyle that footballer Bobby Charlton made famous. This week, however, I had a Chrome-over and it definitely wasn't funny at all.
Though while we're talking about comedians (and before I fall back into my default diatribe mode) I ought to mention that, according to David Quantick's recent article in the Daily Telegraph, they just released the list of the best gags from this year's Edinburgh Festival. Amongst the winners were "You know who really gives kids a bad name? Posh and Becks", and "I took part in the sun-tanning Olympics - I just got Bronze".
However, what really took the shine off my otherwise entertaining week was another Adobe-initiated episode of nonsensical installation. I've moaned about how Adobe Reader takes over your system when you install the latest patched version by adding desktop shortcuts and auto-run programs. But after half an hour of aggravation with the latest Flash update I'm wondering if, together with its unavailability on new mobile devices, Adobe is trying to kill Flash off by annoying people so much that they give up on it altogether.
Why? Well if you didn't notice, the latest update for Flash fires up the Adobe installation page that contains a big yellow "Update" button. Being well trained in the need to keep software up to date for security reasons, my wife did as you would expect and clicked the big yellow button. Then, when the prompt appeared for the installer, I wandered over, checked it was digitally signed and looked valid, and entered the admin credentials.
Within minutes I was summoned back to explain why her web browser had "gone funny", only to discover that she now had Google Chrome installed as the default browser. Grumbling heartily I uninstalled it and the Google toolbar that had also snuck itself onto her computer. Then, when IE prompted, I allowed it to set itself as the default browser, and all seemed well.
But it was only a few minutes before the next summoning from "her who must be obeyed". Now, when clicking links in her emails from Facebook and from other friends, the computer politely informed her that she wasn't allowed to open them, and to contact her system administrator (which, unfortunately, is me).
It was only after much furkling on the web that I found the answer: a Microsoft Fixit that solved the issue. It seems that uninstalling Chrome breaks Outlook's default associations in the Registry. Thank you for that, Google. And thanks, Adobe, for installing all this stuff in the first place. Yes, I admit that Chrome and the Google toolbar are "optional", as I discovered when my own computer decided to tell me there was a Flash update available, but why on earth is "yes" the default setting? Has Adobe found a new meaning for the word "optional"?
My understanding of things that are "optional" is that you can ask for them if you want them. Imagine if you bought a new car from a dealer that operated Adobe's "optional" policy, and you forgot to specifically tell the salesman you didn't want electrically operated sun visors on all the windows, 24 inch bright green alloy wheels, a tow bar and bicycle rack, and three inch thick fleece upholstery. Much like when you update your Flash player, you might be less than happy with what they delivered.
One of the evergreen one-line gags that regularly surfaces is "What's another word for thesaurus?" According to my thesaurus, "optional" means "non-compulsory", "voluntary", and "uncompelled". Adobe seem to use a different kind of thesaurus where "optional" means "recommended", "advocated", and "proposed". It's enough to make your blood boil. Though, according to Phyllis Diller (who died recently) you know you are getting old when you go to give blood and they tell you that your type is discontinued...
So I just found out this week why Automobile Association patrolmen didn't need to carry four one-penny coins around at all times. According to an item on the program "QI", in the early days of Britain's nuclear attack capability the government worried about how they would contact the Prime Minister when he was out of the office and they needed someone to push the big red button.
The plan, it seems, was for the staff at the Nuclear Attack Headquarters to phone the AA, who would send a radio message to the patrolman that shadowed ministerial convoys at all times - on his motorcycle and sidecar, if the photos they showed were accurate. He would stop the convoy and tell the Prime Minister. They would then drive to the nearest phone box and call HQ to say "go for it" (or whatever the password of the day was).
The AA bosses decided that patrolmen should always carry coins for the phone in case nobody in the Prime Minister's convoy had the correct change. However, the Home Office told the AA that they needn't bother because the Prime Minister could just phone the operator and ask to reverse the charges. Mind you, as they pointed out on QI, it's possible that the operator may be less than convinced when someone phoned and said "Hello, I'm the Prime Minister, please put me through to the Nuclear Attack Bunker and ask them to accept the charges."
Of course, this prompts some obvious questions, such as why didn't they just put a radio in the Prime Minister's car? And were cars so unreliable at that period in history that they needed to have a repair man following them all the time? But as it was Stephen Fry telling the story I suppose it must be true.
What it does reveal is how quickly technology has changed our whole perception of communication. It's only when you stop and think what it was like, even just twenty years ago, that you realize how hard it was to stay in contact with people. For example, I was a travelling salesman in those days and my biggest weekly expense was buying pre-paid phone cards for the new-fangled "cards-only" phone boxes that were appearing all over the country. And maps for each town and city so that I could actually find where I was supposed to be going, plus postage stamps for sending in orders because phone calls were far too expensive to spend time reading them out over the phone.
Some years later, when I talked to the young lady who took over my job, she was completely amazed that anyone could survive traveling without a mobile phone and sat-nav; never mind the other fripperies that are standard in cars now such as cruise control and air-conditioning. Of course, there are downsides such as being continually tracked through GPS, and always being contactable on the phone - I guess the spirit we had of a sales rep's life being "a great adventure" has been fatally compromised by technology.
However, what prompted all this unrelated reminiscence was working on our latest guide Moving Applications to the Cloud where we discuss testing Windows Azure applications that will be deployed to Virtual Machines. It was only after many attempts to rationalize deployment and test cycles based around scripts and multiple Windows Azure subscriptions that our own test team pointed out how, in actual fact, the process is no different from when you are doing everything on-premises.
The point is that today's communication and connectivity technologies can make the Internet disappear. All of a sudden my schematics containing clouds and dotted lines that represent the on-premises/cloud boundary are deemed to be unrealistic. We need to pretend that the Internet (and, for that matter, Windows Azure) doesn't actually exist at all. Instead, we just plug all the machines into a Virtual Network and they look like they are in the datacenter next door.
So when the test team starts poking around trying to break the app, or when the admin people deign to promote it to the live server, the servers all look exactly the same as before. The scripts, utilities, tools, and commands are no different, and the view through Remote Desktop is indistinguishable from that with the servers on-premises. To repeat the oft-used mantra: "It just works".
And the network admin guys could even get their own back after all the unfavorable comments they suffer at the hands of developers and testers. Toss all the test and production servers into Windows Azure, connect them up with a Virtual Network, and empty your datacenter. Then ask devs and testers to "pop down to the server room and reboot the test server", and see what excuse they come up with for not being able to find it.
A bit like when I was a young and naïve lad working in a factory, and the guy who came to mend one of the machines sent me to the stores to get a long external stand. I stood outside in the rain for an hour waiting for the man on the stores counter to find one...