Random Disconnected Diatribes of a p&p Documentation Engineer
So I just found out this week why Automobile Association patrolmen didn't need to carry four one-penny coins around at all times. According to an item on the program "QI", in the early days of Britain's acquisition of a nuclear attack capability they worried about how they would contact the Prime Minister if he was out of the office when they needed someone to push the big red button.
The plan, it seems, was for the staff at the Nuclear Attack Headquarters to phone the AA, who would send a radio message to the patrolman that shadowed ministerial convoys at all times - on his motorcycle and sidecar, if the photos they showed were accurate. He would stop the convoy and tell the Prime Minister. They would then drive to the nearest phone box and call HQ to say "go for it" (or whatever the password of the day was).
The AA bosses decided that patrolmen should always carry coins for the phone in case nobody in the Prime Minister's convoy had the correct change. However, the Home Office told the AA that they needn't bother because the Prime Minister could just phone the operator and ask to reverse the charges. Mind you, as they pointed out on QI, it's possible that the operator might have been less than convinced when someone phoned and said "Hello, I'm the Prime Minister, please put me through to the Nuclear Attack Bunker and ask them to accept the charges."
Of course, this prompts some obvious questions, such as why didn't they just put a radio in the Prime Minister's car? And were cars so unreliable in that period of history that they needed to have a repair man following them all the time? But as it was Stephen Fry telling the story I suppose it must be true.
What it does reveal is how quickly technology has changed our whole perception of communication. It's only when you stop and think what it was like, even just twenty years ago, that you realize how hard it was to stay in contact with people. For example, I was a travelling salesman in those days and my biggest weekly expense was buying pre-paid phone cards for the new-fangled "cards-only" phone boxes that were appearing all over the country. And maps for each town and city so that I could actually find where I was supposed to be going, plus postage stamps for sending in orders because calls were far too expensive to spend time reading them out over the phone.
Some years later, when I talked to the young lady who took over my job, she was completely amazed that anyone could survive travelling without a mobile phone and sat-nav; never mind the other fripperies that are standard in cars now such as cruise control and air-conditioning. Of course, there are downsides such as being continually tracked through GPS, and always being contactable on the phone - I guess the spirit we had of a sales rep's life being "a great adventure" has been fatally compromised by technology.
However, what prompted all this unrelated reminiscence was working on our latest guide Moving Applications to the Cloud where we discuss testing Windows Azure applications that will be deployed to Virtual Machines. It was only after many attempts to rationalize deployment and test cycles based around scripts and multiple Windows Azure subscriptions that our own test team pointed out how, in actual fact, the process is no different from when you are doing everything on-premises.
The point is that today's communication and connectivity technologies can make the Internet disappear. All of a sudden my schematics containing clouds and dotted lines that represent the on-premises/cloud boundary are deemed to be unrealistic. We need to pretend that the Internet (and, for that matter, Windows Azure) doesn't actually exist at all. Instead, we just plug all the machines into a Virtual Network and they look like they are in the datacenter next door.
So when the test team starts poking around trying to break the app, or when the admin people deign to promote it to the live server, the servers all look exactly the same as before. The scripts, utilities, tools, and commands are no different, and the view through Remote Desktop is indistinguishable from that with the servers on-premises. To repeat the oft-used mantra: "It just works".
And the network admin guys could even get their own back after all the unfavorable comments they suffer at the hands of developers and testers. Toss all the test and production servers into Windows Azure, connect them up with a Virtual network, and empty your datacenter. Then ask devs and testers to "pop down to the server room and reboot the test server", and see what excuse they come up with for not being able to find it.
A bit like when, as a young and naïve lad working in a factory, the guy who came to mend one of the machines sent me to the stores to get a long external stand. I stood outside in the rain for an hour waiting for the man on the stores counter to find one...
Here in Britain we always used to refer to a job that was never-ending as "like painting the Forth Bridge." It came about because the people who paint the huge and magnificent railway bridge over the Firth of Forth in Scotland reportedly start at one end and it takes so long that, when they reach the other end, it's time to go back and start all over again.
However, with the new type of paint they used this time it seems they won't need to do it again for 25 years (see this BBC story). So now we need to find another suitable phrase to describe endless tasks that are overtaken by circumstance. After a couple of years working on documentation and guidance for Windows Azure, I propose we replace it with "like documenting Windows Azure".
See, here's the problem. When applications came in a pretty box with some floppy disks or a CD they tended to have a strict release schedule. You could write a book about Microsoft Access 1.0 (in fact, I did) without worrying that the product would be completely different before you could get your draft through edit and sent to the printer. Even when documenting features of the operating system, such as Active Server Pages, you could reckon to hit the streets well before they released a new version of Windows Server.
But now that we are all web-enabled and Internet-driven, it seems like we poor documentation engineers have almost no chance of keeping up. Product teams can drop in new features any time they like, and entirely change the UI so that our step-by-step guides make no sense at all and all our screenshots are out of date by the time we get the pages onto MSDN. I mean, we're just putting the finishing touches to the third update of the p&p guide "Moving Applications to the Cloud" and so much has changed since we released the last update (July 2011) that we've ended up almost totally rewriting it.
As well as adding in new content around Cloud Services, such as updated configuration techniques and our own Transient Fault Handling and Autoscaling Application Blocks, we widened the migration scenarios to include Windows Azure Virtual Machines, Web Sites, hosted SQL Server, and new connectivity options such as Virtual Networks. We also added in more guidance about choosing the appropriate migration path through the ever-increasing list of hosting options. And, of course, we had to rename everything that's now got a shiny new moniker (think "SQL Azure" and "Windows Live ID").
We're really pleased with how the guide has evolved, and confidently expect the new content to be even more useful to architects and developers considering migration of on-premises applications to Windows Azure. But, at the point where we expect to release, there'll obviously be yet another update to the Windows Azure SDK that will impact the sample code. And then in the next portal refresh they'll probably add another bunch of new stuff so we're out of date again, before we can even get it to the printers.
Meanwhile, the "Hybrid Applications" guide we released only six months ago, and the associated Hands-on Labs we released in May this year, are already starting to look a bit out of date because there's a brand new web portal; new services such as Virtual Machines, Web Sites, and Virtual Networks; and a whole new caching mechanism. This guide's on the list for an update, but we have the second guide in the series, "Developing Applications for the Cloud", to update before then.
In fact I only just discovered that I'm now a member of the "sustained engineering team" here at p&p. At first I was a bit concerned it suggested that, up till now, we only engineered occasionally - when we weren't busy doing something else. Yet I've always considered myself to be a "documentation engineer" because creating guidance is a task I do all day every day; and it has just as strong a relationship with "real" engineering as writing code does.
Documentation engineering is like building a luxury motor car. You sculpt an attractive and intuitive exterior and interior, design and build the underlying structure that connects all the parts together, manufacture (or source from third parties) the components that actually make it work in the most cost effective way, and assemble the whole thing into a package that people will want to use every day.
The problem is that, unlike a car, you can't just paint your guidance a different color, put some fancy alloy wheels on it, and stick a new badge on the back every six months. You actually do have to re-engineer it each time. I did suggest to our project manager that he wander over to the Windows Azure development team's offices and hit them with a big stick until they promised to stop changing stuff. But he said that probably contravenes some company policy.
Maybe, instead, we can get hold of some of that 25-year paint in time for the next update...
It's customary here in England to castigate British Rail for their outlandish non-service excuses. As far back as I can remember we've had "leaves on the line". Then, after they spent several million pounds on special cleaning trains, it morphed into "the wrong kind of leaves on the line". And of course, every winter when the entire British transport system grinds to a halt they blame "the wrong kind of snow." But this week I've been introduced to a new one: "the wrong kind of electricity".
During the summer months I ply my lonely documentation engineering trade using a laptop and enjoying the almost-outdoorsness of the conservatory; soaking up the joys of summer, the birds singing in the trees, the fish splashing around in the pond, and a variety of country wildlife passing by. So when I noticed one of my regular computer suppliers was selling off Windows 7 laptops, no doubt to be ready for the imminent arrival of Windows 8, I thought it would be a good idea to pick up a decent 17" one to replace my aging 14" Dell Latitude. With age gradually degrading my eyesight I reckon I'll soon need all the screen space I can get.
So when my nice new Inspiron arrived I powered it up, worked through the "configuring your computer" wizard, removed all the junk that they insist on installing, and started to list all the stuff I'll need to install. Until I noticed that the battery wasn't charging. So I fiddled with the power settings, dug around in the BIOS, tried a different power pack, and did all the usual pointless things like rebooting several times. No luck.
So I dive into t'Internet to see if there's a known fix. Yes, according to several posts on the manufacturer's site and elsewhere there is. You replace the power supply board inside the computer at a cost of 35 pounds, or – if it's still under guarantee – send it back and they replace the motherboard. Mind you, there were several other suggestions, such as upgrading the BIOS and banging the power supply against a wall, but as I'd only had the machine for two hours none of these seemed to be an ideal solution. So I did the obvious – pack it up and send it back to the supplier as DoA (dead on arrival).
Mind you, when I phoned the supplier and explained the problem the nice lady said that it would be OK if I kept it plugged into the mains socket because then it doesn't need the battery to be charged up. True, but as I pointed out to her, it's supposed to be a portable computer. I'll need a long piece of wire if I decide to use it the next time I'm travelling somewhere by train.
And do I want a replacement? How common is the failure? To have it happen on a brand new machine is worrying. Yet, strangely, only a few weeks ago I noticed one time when I powered up my old Latitude that it displayed a message saying it didn't recognize my power pack, but then decided it did. Yet after wandering around the house I found five Dell laptop power packs and they all seem to be much the same: all rated at 19.6 volts, with a current rating of 3.4 amps or higher. They all have the same two-pole plug, with the positive in the center. The only difference seems to be that the newer ones have 25 certificates of conformance on the label, while the older ones have around 15 (perhaps that's why they seem to get bigger each time - to make room for the larger label).
So how does the computer know which power pack I've plugged in? When I looked in the BIOS it said that the power pack was "65 Watts". Is there some high frequency modulation on the output that the computer can decipher? Or does it do the old electrician's trick of flicking the wires together to see if there's sparks, and measure the effect? Do all computers these days do the same thing? If I buy an unbranded replacement power pack will the computer pop up a window saying "You tight-fisted old miser - you don't really expect me to work with that do you?"
And is all this extra complexity, which can obviously go wrong, really needed? How comfortable will I be with all my computers now if I feel I need to check that the power supply/computer interface is still working every time I switch one of them on? It seems like the usual suspicion most people have that the first thing to die on your computer will be the hard disk is no longer true. Now your computer may decide to stop working just because you're using the wrong kind of electricity...
According to Amazon, I'm interested in buying 5kg of peanuts, a multi-purpose screwdriver, some lithium batteries, Katie Price's latest autobiography, an album by Rihanna, and Melissa and Doug's Chunky Animal Puzzle. Though, in addition to wondering why people now seem to need several autobiographies to describe one life, I'd have to say that I'm definitely not interested in any of these seemingly random recommendations.
In the world of commerce, and in one of my previous lives in the retailing industry, they call it "related selling". If somebody buys a tin of paint, you make every effort to sell them a brush, wood filler, undercoat, sandpaper, masking tape, and brush cleaner. I've seen this technique double the value of a sale when done properly. It even works to the customers' advantage because they feel like they've been "looked after" and don't have to drive back to the store to get the things they forgot.
However, in our superstore-based and technology-driven world it seems to be somewhat less precise and successful. Without the personal service of an assistant behind the counter, all superstores can do is organize displays so that related items are next to each other. But online there are much richer opportunities. Today Amazon's home page lists 35 items that I "might be interested in", and if I bother to click the links in each section it will show me another hundred or so.
Of course, they create these lists by data-mining my previous purchase history. They know I bought a 5kg bag of peanuts only a week ago, so they must think I have some really voracious visitors to my bird feeder. And even though they only delivered the new batteries for my camera yesterday, they obviously think I'm a fanatical photographer and I'll need more already. And, yes, I did buy a new door lock about three months ago. Though, unless they have been secretly communicating with my wife, how do they know I haven't fitted it yet? Perhaps they think I don't have a screwdriver, and that's why it's in the "might be interested in" list.
I suppose the book and album are there because they know my wife likes Katie Price and Rihanna, based on my history of buying birthday presents. Though neither she nor I have much interest in games designed for children aged 2 to 5. But best of all, after a minor confrontation over household dustbins on our last collection day, I bought a large self-adhesive number 2 to prevent future ownership confusion. Amazon is pleased to suggest that now I "might be interested in" a number 1 and a number 3 to go with it. Related sales algorithm failure, I suspect.
And this lack of sensible related product selection isn't limited to online retailers. While I'm not a regular visitor to fast food outlets, we do occasionally partake of a drive-through. Now, I know that we're all supposed to be reducing our salt intake, but fries without salt seem very bland. Yet none of them include one of those tiny packets of salt by default - you have to ask for it. And then every time, without fail and despite asking for "one packet of salt" to go with the tiny bag of fries, they shove half a dozen packets into the bag. Perhaps somebody should tell their accountants.
Mind you, they also include a dozen paper napkins - though I suspect that's because they know I'll get ketchup all down the front of my shirt...
There's an advertisement on the radio at the moment that explains how a guy named Brian saved four hundred pounds (in money, not weight) on his car insurance by using some price comparison website. Most people I know only pay around half that amount in premiums, so is this a realistic claim? Perhaps you have to pay the first ten thousand pounds of any claim, or are covered only when the car is parked in the garage.
The rules for advertising here in the UK insist that adverts reflect the experience of the majority of people, and are realistic and true, so we can assume that Brian must be just some ordinary guy with an ordinary car. Or rather, that Brian is a very stupid ordinary guy who never bothered to get a quote from other companies in the past and was happy being fleeced. The alternative is that the company chose specific criteria and found somebody who, with some unspecified insurance company, would get a very cheap quote.
The point is that the vast majority of people will not save anywhere near this amount of money when using the site to find an insurer with approximately the same terms and levels of cover. In my experience, saving even fifty pounds is very unusual, and I change insurers almost every year to get the best price. In other words, the headline comparison is, to be blunt, complete balderdash.
It's amazing that, in many other areas of advertising, they wouldn't stand a chance of getting away with this. ISPs that offer broadband services have been hammered here in the UK for advertising unrealistic "up to" speeds, such as "up to 24 Mbps", that most people will not achieve. Mind you, furniture stores can still get away with advertising a sofa as being 50% off when a cursory examination of the quality will reveal that there's no way it was ever worth the original price, and probably isn't even worth the discounted price.
So maybe I can adopt the new relaxed truth approach to my computing guidance in future. Tell software designers that they can get their code to run ten times faster by using the MVC pattern in their web applications, or that dependency injection will increase the speed of their UI by 200%. If anyone complains I can tell them that I did the comparisons on "standard hardware": first I ran the code on a laptop with one MB of memory, and then compared the result to the same code running on a web farm of two hundred servers.
But I suppose the core issue is: does anyone actually believe anything they read, hear, or see in adverts these days? Perhaps that's why adverts are becoming more surreal, and even meaningless. For example, I couldn't help noticing an advert from a company that makes plastic water filter jugs. They're trying to persuade people to throw away their boring clear plastic one and replace it with one in an exciting new color (red, green, or blue). The tag line in the advert explains that, because our bodies are made more of water than anything else, the more enjoyable the water we drink, the better. I'm struggling to understand how the color of a plastic jug has an impact on the mental state of the drinker, but no doubt they've done a study based on the same kind of strict criteria as I did with MVC and dependency injection.
Though I do remember seeing a cartoon some while back that showed a hardware store with a big sign saying "50% Off Ladders", with the small print "12ft now only 6ft, 10ft now only 5ft, 8ft now only 4ft"...
A couple of weeks ago I was ruminating on how somebody in our style guidance team here at Microsoft got a new Swiss army knife as a holiday-time gift, and instead of a tool for removing stones from horses' hooves it has one for removing capital letters and hyphens from documentation. Meanwhile the people in the development teams obviously got handkerchiefs or a pair of slippers instead because they are still furiously delivering capital letters whenever they get the chance.
As you will probably have noticed, the modern UI style for new products uses all small capital letters in top-level navigation bars and menus. I guess your view of this is based on personal preference combined with familiarity with the old-fashioned initial-capital style; I've seen a plethora of comments and they seem to be fairly balanced between like and dislike. Personally I quite like the new style, especially in interfaces such as the new Windows Azure Preview Management Portal. It looks clean and smart, and fits in really well.
Meanwhile my editor and I have been pondering on how we cope with this in our documentation. No doubt some official style guidance will soon surface to resolve our predicament, but in the meantime I've been experimenting with possibilities for our Hands-on Labs. I started out with the obvious approach that matches the way we currently document steps that refer to UI elements (bearing in mind the accessibility guidelines described in It Feels Like I've Been Snookered).
Choose +NEW, select CLOUD SERVICE, and then choose QUICK CREATE.
But written down on virtual paper that does look a bit awkward and "shouty". Perhaps I should just continue to use the initial capitalized form:
Choose New, select Cloud Service, and then choose Quick create.
However, that doesn't match the UI and one of the rules is that the text should reflect what the UI looks like to make it intuitive and easy for users. Maybe I can just use ordinary words instead, in a kind of chatty informal way, so that they don't actually need to match the UI:
Choose new, select cloud service, and then choose quick create.
But that looks wrong and may even be confusing. Perhaps I should just abandon any attempt to specify the actual options:
Create a new cloud service without using a custom template.
Though that just seems vague and unhelpful. Of course, you might assume that a user would already know how to create a new cloud service, so it's redundant anyway. But something more complicated may not be so obvious without more specific guidance about where to start from:
Open the management window for your Windows Azure SQL Database.
I did suggest to my editor that we simply run with something like:
Choose the part of the window that contains what appears to be some text that would say "cloud services" if it was all lowercase, and then...
Ahh, but wait! In a non-web-based application UI I can use shortcut keys, like this:
Press Alt-F, then N, then press P.
Oh dear, that violates the accessibility rules, and doesn't work in a web page anyway. Maybe I'll just go with:
Get the person sitting next to you to show you how to create a new cloud service.
And, as a bonus, this approach may even foster team cohesiveness and encourage agile paired programming. Though you probably can't call it guidance...
Yes, another episode in my continuing onslaught on the cloud. But this week it's a heartwarming story of intrepid adventure and final success. At last I'm fully resident in the cloud - or, to be more precise, several clouds. And I might even have saved some money as well...
Over the past couple of weeks I've been blethering about getting rid of my very expensive and not always totally reliable ADSL connection by moving all the various stuff attached to it into the cloud. This includes several websites and the DNS services for mine and a colleague's root TLDs. Previous episodes described the cost verification exercise and the experimental migration, but ended with the thorny issue of traffic redirection to the new sites. And I still had the DNS issue to (again, please pardon the pun) resolve.
After looking at several commercial DNS hosting specialists, the situation seemed bleak. While they all appear fully equipped to satisfy my requirements, the cost of hosting DNS services for around 25 domains was prohibitive in my situation. An average quote of somewhere between five and eight US dollars per domain per month meant that the cost of DNS alone would be more than I pay now for all my on-premises infrastructure and connectivity. And I didn't much fancy using a free DNS service with no SLA.
But then I discovered a web hosting provider that does offer full domain management services as an add-on to their very reasonably priced packages. A quick calculation showed that paying GoDaddy.com for a fixed IP address website and all the associated frippery (such as email and other stuff that I don't need), plus the cost of their premium DNS hosting package, was around a tenth of the cost of my ADSL connection. It seemed like the perfect solution, and after signing up and spending a day setting up the DNS records in their superb web interface it all worked fine. They support secondary DNS as master and slave, so I was able to create secondary domains for my colleague's TLDs as well as configuring my own domains to use his DNS server as a secondary. Then it was just a matter of changing the IP addresses of my domains and my own root DNS server entry at Network Solutions to point to their DNS servers.
However, this still didn't solve the problem of redirecting traffic to my Windows Azure websites. I can set up CNAME records in the new DNS server for subdomains, but (as I discovered last week) that doesn't help because Windows Azure Web Sites depends on host headers to locate the sites. But now I have a hosted website with a fixed IP address at GoDaddy, so I can move all the redirection pages from my own web server to this hosted site. As all the domains now point to a single website I'll need to do some fancy stuff to detect the requested domain name and redirect to the correct site on Windows Azure. But I had the forethought to specify a Windows host for the site when I set it up, so I can use ASP.NET for that. Easy!
So I pointed all the root and "www" records in DNS to the fixed IP address of my GoDaddy site and set up a simple default ASP.NET page there that extracts the requested URL as a Uri instance from the Request.Url property, parses out the domain, and does a Response.Redirect to the appropriate page on the matching Windows Azure website. There's no need for visitors to see a "we have moved" redirection page, and Windows Azure gets the correct domain name in the request so that it can find my site.
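The core of that redirection page is just a lookup from the requested domain to the matching Windows Azure site. Here's a minimal sketch of the logic in Python for brevity (the real page was ASP.NET, and the domain names here are made up):

```python
# Sketch of the redirection logic; the real page was ASP.NET.
# All domain and site names below are hypothetical examples.
from urllib.parse import urlparse

# Hypothetical mapping from custom domain to Windows Azure website
SITE_MAP = {
    "www.example-one.com": "http://example-one.azurewebsites.net/",
    "www.example-two.com": "http://example-two.azurewebsites.net/",
}

def redirect_target(requested_url):
    """Parse the domain out of the requested URL and return the
    matching Windows Azure site, or None if it isn't one of ours."""
    host = urlparse(requested_url).netloc.split(":")[0].lower()
    return SITE_MAP.get(host)
```

Because every domain resolves to the same fixed IP address, this one lookup table is all the "fancy stuff" that's needed to fan requests back out to the right site.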
But there's another problem. Requests to anything other than the root of a domain (such as requests for specific pages from search engine results) don't work because the site can't find the specified page or file. The default page doesn't get called in this case. Instead, the server just sends back a 404 "Not Found" page. However, GoDaddy allows you to specify your own 404 Error page, so I pointed this at an ASP.NET page that parses the original request (the bit after the "404;" in the request string) and builds a new URL pointing to the appropriate Windows Azure site, together with the full path and query string of the requested page or file. It displays a "page moved" message for five seconds (yes, I know this is annoying) and then does a client-side HTTP redirect with a META instruction. So that's the problem solved!
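The parsing in that 404 handler is straightforward: the server appends the original request after the "404;" marker, so the handler just re-roots the path and query string onto the matching Windows Azure site. Sketched in Python (again, the real page was ASP.NET, and the domain names are invented):

```python
# Sketch of the custom 404 handler logic; the real page was ASP.NET.
from urllib.parse import urlparse

# Hypothetical mapping from custom domain to Windows Azure website
SITE_MAP = {"www.example-one.com": "http://example-one.azurewebsites.net"}

def rebuild_url(error_string):
    """Given the server's error string ("404;<original-url>"), build
    the equivalent URL on the Windows Azure site, or return None if
    the requested domain isn't one we know about."""
    original = error_string.split("404;", 1)[1]
    parsed = urlparse(original)
    target = SITE_MAP.get(parsed.netloc.split(":")[0].lower())
    if target is None:
        return None
    path_and_query = parsed.path
    if parsed.query:
        path_and_query += "?" + parsed.query
    return target + path_and_query
```

The page then emits a META refresh tag pointing at the rebuilt URL, which is what produces the five-second "page moved" pause before the client-side redirect.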
Err, not quite. Some of my sites are ASP.NET, and a request for a non-existent page doesn't result in a 404 "not found" error. Instead, the ASP.NET handler creates a 500 "code execution" error. And GoDaddy doesn't allow you to specify the default error page for this. But you can specify the error page in Web.config, or (as I did) just use a global error handler in Global.asax to redirect to a custom ASP.NET error page. My custom error page pulls out the original URL, does the same parsing to build the correct URL on the Windows Azure site, and returns a "page moved" message with a client-side META redirect.
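For reference, the Web.config route mentioned above looks something like this sketch; MovedHandler.aspx is a made-up name for the custom error page:

```xml
<!-- Sketch only: route ASP.NET errors to a custom redirection page.
     The page name is hypothetical. -->
<configuration>
  <system.web>
    <customErrors mode="On" defaultRedirect="~/MovedHandler.aspx">
      <error statusCode="404" redirect="~/MovedHandler.aspx" />
    </customErrors>
  </system.web>
</configuration>
```

The Global.asax approach does the same job in code, but keeping it in configuration means the redirection behavior can be changed without recompiling the site.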
So that's it! My own web and DNS server is switched off, and everything is going somewhere else. A quick call to my ISP means that my expensive business ADSL connection has been replaced by a much simpler business package at a considerably lower price. I even managed to persuade the nice sales guy to cancel the downgrade fee and give me a single fixed IP address on the new service so I can still run a web server for testing, or whatever else I need, should the situation arise.
Was it worth the effort? The original on-premises connection cost (converted to US dollars and including local taxes) was $2,148 per year, and that doesn't cover the cost of running the server itself, maintenance, upgrades, and other related stuff. At the moment I'm using Shared mode for the Windows Azure sites, which (together with a Windows Azure SQL Database) is free for a year. My new ADSL connection package is $696 per year, and the GoDaddy hosting package (Windows website and premium DNS service) is only $186 per year, so the annual cost at the moment is less than $900 - a saving of almost 60%!
Of course, when Windows Azure Web Sites becomes a chargeable service (see Pricing Details) I'll need to review this, but the stated aim is to provide a competitive platform so even when using SQL Database I should still see a saving. And I can still investigate moving to a shared MySQL hosted database to reduce the cost. Meanwhile I'm finally free of DNS Amplification attacks, web server vulnerability attacks, and all my inbound ports are closed. I also have one less server to run, manage, monitor, maintain, upgrade, and try to keep cool in summer.
All I need to do now is out-source my day job and I can spend the next few years lazing on some remote foreign sun-kissed beach - preferably one that's got Wi-Fi...
So it's been a week of semi-fruitful searching for lots of people. In China there's a team setting out on a million-pound expedition in the mountains and forests of Hubei province to find the Yeren or Yeti that's supposedly been sighted hundreds of times. In Geneva, scientists have revealed that they've probably found the Higgs boson particle they've been searching for over the last fifty years. Meanwhile, as I mentioned last week, I've been seeking a way to rid myself of the cost and hassle of maintaining my own web servers.
The Geneva team reckons there's only a one in 1.7 million chance that what they've found is not the so called "God particle" but they need to examine it in more detail to be absolutely sure. I just hope that the level of probability for the expedition team in China will be more binary in nature. I guess that being faced with a huge black hairy creature that's half man and half gorilla (and which hopefully, unlike the Higgs boson, exists for more than a fraction of a second) will prompt either a definite "yes it exists" or an "it was just a big bear" response.
Meanwhile, my own search for Windows Azure-based heaven has been only partially successful so far. A couple of days playing with Windows Azure technology has demonstrated that everything they say about it is pretty much true. It's easy, quick, and mostly works really well. But unfortunately, having overcome almost all of the issues I originally envisaged, I fell at the last fence.
The plan was to move five community-style websites to Windows Azure Web Sites, with all the data in a Windows Azure SQL Database server. Two of the sites consist of mainly static HTML pages, and these were easy to open in WebMatrix 2 and upload to a new Windows Azure Web Site using the Web Deploy feature in WebMatrix. They just worked. A third site is HTML, but the static pages and graph images are re-generated once an hour by my Cumulus weather station software. However, Cumulus can automatically deploy these generated resources using FTP, and it worked fine with the FTP publishing settings you can obtain from the Windows Azure Management Portal for your site.
The likely problem sites were the other two that use ASP.NET and data that is currently stored in SQL Server. Both use ASP.NET authentication, so I needed to create an ASPNETDB database in my Windows Azure SQL Database server and two other databases as well. However, my web server runs SQL Server 2005 and I couldn't get Management Studio to connect to my cloud database server. In the end I resorted to opening the databases in the Server Explorer window in Visual Web Developer and creating scripts to build the databases and populate them with the data from the existing tables. Then I could create the new databases in the Windows Azure SQL Database management portal and execute the script in the Query page. I had to make some modifications to the script (such as removing the FILLFACTOR attributes for tables) but it was generally easy for the ASPNETDB and another small database.
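If your generated script has a lot of those FILLFACTOR options, a throwaway script along these lines could automate the edit before you paste the SQL into the Query page. This is just a sketch in Python (I did my edits by hand), and it assumes the options appear inside WITH (...) clauses the way the SQL Server 2005 script generator emits them:

```python
import re

def strip_fillfactor(sql: str) -> str:
    """Remove FILLFACTOR = n options that Windows Azure SQL Database rejects."""
    # Drop "FILLFACTOR = 90, " or ", FILLFACTOR = 90" inside a WITH (...) clause.
    sql = re.sub(r"FILLFACTOR\s*=\s*\d+\s*,\s*", "", sql, flags=re.IGNORECASE)
    sql = re.sub(r",\s*FILLFACTOR\s*=\s*\d+", "", sql, flags=re.IGNORECASE)
    # Remove a WITH clause whose only option was FILLFACTOR.
    sql = re.sub(r"WITH\s*\(\s*FILLFACTOR\s*=\s*\d+\s*\)", "", sql, flags=re.IGNORECASE)
    return sql
```

It's only a text transformation, so check the result still parses before running it against the cloud database.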
However, problems arose when I looked at the generated script for the local village residents' group website. This is based on the old Club Starter Site, much modified to meet our requirements, and is fiendishly complicated. It also stores all the images in the database instead of as disk files. The result is that the SQL script was nearly 72 MB, which you won't be surprised to hear cannot be copied into the Query page of the management portal. However, I was able to break it up into smaller pieces and load it into a Visual Studio 2008 database query window, connect to the Windows Azure database, and execute each part separately. It was probably the most time-consuming part of the whole process.
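Breaking the script up is mechanical too, as long as you only split at the GO batch separators so each piece remains valid on its own. A minimal Python sketch (it assumes GO sits on its own line, as the script generator writes it, and max_chars is just a placeholder for whatever your query window will swallow):

```python
def split_sql_script(script: str, max_chars: int = 500_000):
    """Break a large T-SQL script into pieces of at most max_chars characters,
    splitting only at GO batch separators so every piece runs on its own."""
    # First, cut the script into individual batches at each GO line.
    batches, current = [], []
    for line in script.splitlines():
        if line.strip().upper() == "GO":
            batches.append("\n".join(current + ["GO"]))
            current = []
        else:
            current.append(line)
    if current:
        batches.append("\n".join(current))

    # Then greedily pack whole batches into pieces under the size limit.
    pieces, piece = [], ""
    for batch in batches:
        if piece and len(piece) + len(batch) + 1 > max_chars:
            pieces.append(piece)
            piece = batch
        else:
            piece = piece + "\n" + batch if piece else batch
    if piece:
        pieces.append(piece)
    return pieces
```

A single batch bigger than the limit still comes out whole, so the image INSERT statements may need splitting by hand anyway.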
Then, of course, comes testing time. Will the ASP.NET authentication work with the hashed passwords in the new ASPNETDB database, or is there some salt value that is machine-specific? Thankfully it did work, so I don't have to regenerate accounts for all the site members and administrators. In fact, it turns out that almost everything worked just fine. A really good indication that the Web Sites feature does what it says on the tin.
However, there were three things left I needed to resolve. Firstly, I found that one site which generates RSS files to disk could no longer do so because you obviously can't set write permission on the folders in a Windows Azure Web Site. The solution was to change the code that generated the RSS file so it stored the result in a Windows Azure SQL Database table, and add an ASP.NET page that reads it and sends it back with ContentType = "text/xml". That works, but it means I need to change all the links to the original RSS file and the few people who may be subscribing to it won't find it - though I can leave an XML file with the same name in the site that redirects to the new ASP.NET page.
Secondly, I need to be able to send email from the two ASP.NET sites so users can get a password reset email, and administrators are advised of new members and changes made to the site content. There's no SMTP server in Windows Azure so I was faced with either paying for a Virtual Machine just to send email (in which case I could have set up all the websites and SQL Server on it), or finding a way to relay email through another provider. It turns out that you can use Hotmail for this, though you do need to log into the account you use before attempting to relay, and regularly afterwards. So that was another issue resolved.
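For the record, ASP.NET picks up the relay settings from web.config. This is a hypothetical fragment rather than my actual configuration: the from address, account name, and password are placeholders, and while smtp.live.com on port 587 with SSL is the Hotmail endpoint as far as I know, you should check your own account's settings (note too that the enableSsl attribute on the network element needs .NET 4):

```xml
<!-- hypothetical web.config fragment; addresses and credentials are placeholders -->
<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="admin@example.org">
      <network host="smtp.live.com"
               port="587"
               userName="relay-account@hotmail.com"
               password="********"
               enableSsl="true" />
    </smtp>
  </mailSettings>
</system.net>
```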
The final issue to resolve was directing traffic from the domains we use now to the new Windows Azure Web Sites. Adding CNAME records for "www" to my own DNS server was the first step before I investigate moving DNS to an external provider. It's only a partial fix because I really want to redirect all requests except for email (MX records) to the new site, but Windows DNS doesn't seem to allow that. However, there are DNS providers who will map a CNAME to a root domain, so that will be the answer.
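In zone-file terms, what I've ended up with is something like this. All the names are placeholders, with mysite.azurewebsites.net standing in for the real Windows Azure site name, and mail left pointing at the existing server via the MX record:

```
; hypothetical zone entries - all names are placeholders
www            IN  CNAME  mysite.azurewebsites.net.
example.org.   IN  MX     10  mail.example.org.
```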
Unfortunately, this was where it all fell apart. Windows Azure Web Sites obviously uses some host-header-style routing mechanism because requests using the redirected URL just produce a "404 Not Found" response. Checking the DNS settings showed that DNS resolution was working and that it was returning the correct IP address. But accessing the Azure-hosted sites using the resolved IP address also produced the "404 Not Found" response. Of course, thinking about this, I suppose I can't expect to get a unique IP address for my site when it's hosted on a shared server. The whole point of shared servers in Windows Azure is to provide a low-cost environment where one IP address serves all of the hosted sites. Without the correct URL, the server cannot locate the site.
According to several blog posts from people intimately connected with the technology there will be a capability to use custom domain names with shared sites soon, though probably at extra cost. The only solution I can see at the moment is to set up a redirect page in each website on my own server that specifies the actual Windows Azure URL, so that routing within Windows Azure works properly. But that means I still need to maintain my own web server!
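The redirect page itself can be as simple as a meta-refresh. Here's a hypothetical stub, with mysite.azurewebsites.net again standing in for the real Windows Azure URL:

```html
<!-- hypothetical redirect stub left on the old server;
     mysite.azurewebsites.net is a placeholder for the real site name -->
<html>
  <head>
    <meta http-equiv="refresh" content="0; url=http://mysite.azurewebsites.net/" />
  </head>
  <body>
    <p>This site has moved to <a href="http://mysite.azurewebsites.net/">its new home</a>.</p>
  </body>
</html>
```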
Meanwhile, here are a few gotchas I came across that might save you some hassle if you go down the same route as I did:
So was the whole "migrate to Azure" exercise a waste of time and effort? No, because I know that I have a solution that will, in time, let me get rid of my web server and its expensive fixed-IP, business-level ADSL connection. And in less than two days I learned a lot about Windows Azure as well. However, what's becoming obvious is that I probably need to go down the road of using reserved instead of shared instances, or even Cloud Services instead of Web Sites. But that just raises the question of cost all over again.
Though, just to cheer me up, a colleague I brainstormed with during the process did point out that what I was really doing was Yak shaving, so I don't feel so bad now...