Random Disconnected Diatribes of a p&p Documentation Engineer
Like many people I'm trying to evaluate whether I can save money by moving my lightly-loaded, community-oriented websites to Windows Azure instead of running them all on my own hardware (a web server in my garage). With the advent of the low-priced Web Sites model in Windows Azure (which I rambled on about last week), it seems like it should be a lot more attractive in financial terms than using the setup I have now.
At the moment I use a business-level ADSL connection that provides 16 fixed IP addresses and an SLA that makes the connection suitable for exposing websites and services over the Internet. With one exception, the websites and services run on a single Hyper-V hosted instance of Windows Server 2008 R2, on a server that also hosts four other Hyper-V VMs. The web server VM exposes five websites, while also providing DNS services for my sites and acting as a secondary for a colleague (who provides the secondary DNS for my sites). The exception is the website running on a very old Dell workstation that displays local weather information captured from a weather monitoring station in my back garden.
In theory Windows Azure should provide a simple way to get rid of the Hyper-V hosted web server, and allow me to avoid paying the high costs of the business ADSL connection. I have a cable Internet connection that I use for my daily access to the wider world, and I could replace the ADSL connection with a simpler package at a considerably reduced cost to maintain backup and failover connectivity. I'd need to find a solution for the weather web server because it requires USB connectivity to the weather station hardware (which is why it runs on a separate server), but that's not a major issue and could be solved by posting data to a remote server over an ordinary Internet connection.
So I started evaluating possible savings. Using a separate Cloud Services Web role for each site is a non-starter because the cost, together with one Windows Azure SQL Database server, is four times what I pay for the ADSL connection. Even taking into account the saving from running one less on-premises Hyper-V instance, it doesn't make sense for my situation. And I'll still need a DNS server, though I can switch to using a hosted service from another company for a few dollars per month to resolve (if you'll pardon the pun) that issue.
But I can run multiple sites in one Cloud Services Web role by using host headers, which gives me a marginal saving against the cost of the ADSL connection. Of course, according to the Windows Azure SLA I should deploy two instances of the role, which would double the cost. However, the expected downtime of a single role instance is probably less than what I get using my own ADSL connection when you consider maintenance and backup for the Hyper-V role I use now.
Using a Virtual Machine seems like a sensible alternative because I can set it up as a copy of the existing server; in fact I could probably export the existing VHD as it stands and run it in Windows Azure with only minor alterations. Of course, I'd need SQL Server in the Virtual Machine as well as a DNS server, but that's all fully supported. If I could get away with running a small instance Virtual Machine, the cost is about the same as I pay for the ADSL connection. However, with only 1.75 GB of memory a small instance might struggle (the existing Hyper-V instance has 2.5 GB of memory and still struggles occasionally). A medium-sized instance with 3.5 GB of memory would be better, but the cost would be around double that of my ADSL line.
So what about the new Windows Azure Web Sites option? Disregarding the currently free shared model, I can run all five sites in one small reserved instance and use a commercial hosted DNS service. Without SQL Server installed, the 1.75 GB of memory should be fine for my needs. I also get a free shared MySQL database included in that cost, but it would mean migrating the data and possibly editing the code to work with MySQL instead of SQL Server. A Windows Azure SQL Database for up to five GB of data costs around $26 per month, so the difference over a year is significant, but familiarity with SQL Server and ease of maintenance and access using existing SQL Server tools would probably be an advantage.
Interestingly, Cloud Services and reserved Web Sites costs are the same when using Windows Azure SQL Database. However, the advantage of easier deployment from a range of development environments and tools would make Web Sites a more attractive option. It would also be useful for the weather website because the software I use to interface with it (Cumulus) can automatically push content to the website using FTP over any Internet connection.
So, summarizing all this I came up with the following comparisons (in US dollars excluding local taxes):
I only included one GB of outbound bandwidth because that's all I need based on average traffic volumes. However, bandwidth costs are very low so that even if I use ten times the estimated amount it adds only one dollar to the monthly costs. Also note that these costs are based on the July 2012 price list for Windows Azure services, and do not take into account current discounts on some services. For example, there is a 33% discount on the reserved Web Sites instance at the time of writing.
It looks like the last of these, one small reserved Web Sites instance with a MySQL database and externally hosted DNS, is the most attractive option if I can manage with MySQL instead of Windows Azure SQL Database. However, what's interesting is that I can achieve a saving for my specific limited requirements, and that's without taking into account the hidden ancillary costs of my on-premises setup such as maintaining and patching the O/S, licensing costs, electricity and use of space, etc.
And if the hardware in my garage fails, which I know it will some day, the cost of fixing or renewing it...
So it's been a week of semi-fruitful searching for lots of people. In China there's a team setting out on a million-pound expedition in the mountains and forests of Hubei province to find the Yeren or Yeti that's supposedly been sighted hundreds of times. In Geneva, scientists have revealed that they've probably found the Higgs boson particle they've been searching for over the last fifty years. Meanwhile, as I mentioned last week, I've been seeking a way to rid myself of the cost and hassle of maintaining my own web servers.
The Geneva team reckons there's only a one in 1.7 million chance that what they've found is not the so-called "God particle", but they need to examine it in more detail to be absolutely sure. I just hope that the level of probability for the expedition team in China will be more binary in nature. I guess that being faced with a huge black hairy creature that's half man and half gorilla (and which hopefully, unlike the Higgs boson, exists for more than a fraction of a second) will prompt either a definite "yes it exists" or an "it was just a big bear" response.
Meanwhile, my own search for Windows Azure-based heaven has been only partially successful so far. A couple of days playing with Windows Azure technology have demonstrated that everything they say about it is pretty much true. It's easy, quick, and mostly works really well. But unfortunately, having overcome almost all of the issues I originally envisaged, I fell at the last fence.
The plan was to move five community-style websites to Windows Azure Web Sites, with all the data in a Windows Azure SQL Database server. Two of the sites consist of mainly static HTML pages, and these were easy to open in WebMatrix 2 and upload to a new Windows Azure Web Site using the Web Deploy feature in WebMatrix. They just worked. A third site is HTML, but the static pages and graph images are re-generated once an hour by my Cumulus weather station software. However, Cumulus can automatically deploy these generated resources using FTP, and it worked fine with the FTP publishing settings you can obtain from the Windows Azure Management Portal for your site.
The likely problem sites were the other two that use ASP.NET and data that is currently stored in SQL Server. Both use ASP.NET authentication, so I needed to create an ASPNETDB database in my Windows Azure SQL Database server and two other databases as well. However, my web server runs SQL Server 2005 and I couldn't get Management Studio to connect to my cloud database server. In the end I resorted to opening the databases in the Server Explorer window in Visual Web Developer and creating scripts to build the databases and populate them with the data from the existing tables. Then I could create the new databases in the Windows Azure SQL Database management portal and execute the script in the Query page. I had to make some modifications to the script (such as removing the FILLFACTOR settings), but it was generally easy for the ASPNETDB and another small database.
However, problems arose when I looked at the generated script for the local village residents' group website. This is based on the old Club Starter Site, much modified to meet our requirements, and is fiendishly complicated. It also stores all the images in the database instead of as disk files. The result is that the SQL script was nearly 72 MB, which you won't be surprised to hear cannot be copied into the Query page of the management portal. However, I was able to break it up into smaller pieces and load it into a Visual Studio 2008 database query window, connect to the Windows Azure database, and execute each part separately. It was probably the most time-consuming part of the whole process.
Then, of course, comes testing time. Will the ASP.NET authentication work with the hashed passwords in the new ASPNETDB database, or is the password salt machine-specific? Thankfully it did work, so I don't have to regenerate accounts for all the site members and administrators. In fact, it turns out that almost everything worked just fine. A really good indication that the Web Sites feature does what it says on the tin.
However, there were three things left I needed to resolve. Firstly, I found that one site which generates RSS files to disk could no longer do so because you obviously can't set write permission on the folders in a Windows Azure Web Site. The solution was to change the code that generated the RSS file so it stored the result in a Windows Azure SQL Database table, and add an ASP.NET page that reads it and sends it back with ContentType = "text/xml". That works, but it means I need to change all the links to the original RSS file and the few people who may be subscribing to it won't find it - though I can leave an XML file with the same name in the site that redirects to the new ASP.NET page.
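A minimal sketch of that serve-the-RSS-from-the-database idea might look like the following. The table name (SiteRss), column names (FeedXml, UpdatedOn), and connection string name (SiteDb) are placeholders of my own invention, not the actual schema:

```csharp
// Rss.aspx.cs - a sketch: return the feed XML stored in the database.
// Table, column, and connection string names are placeholders.
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web.UI;

public partial class Rss : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string connect = ConfigurationManager
            .ConnectionStrings["SiteDb"].ConnectionString;

        using (var conn = new SqlConnection(connect))
        using (var cmd = new SqlCommand(
            "SELECT TOP 1 FeedXml FROM SiteRss ORDER BY UpdatedOn DESC",
            conn))
        {
            conn.Open();
            // Send it back as XML so feed readers treat it as RSS.
            Response.ContentType = "text/xml";
            Response.Write((string)cmd.ExecuteScalar());
        }
    }
}
```

The stub XML file left at the old RSS path can then simply point subscribers at this page.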
Secondly, I need to be able to send email from the two ASP.NET sites so users can get a password reset email, and administrators are advised of new members and changes made to the site content. There's no SMTP server in Windows Azure so I was faced with either paying for a Virtual Machine just to send email (in which case I could have set up all the websites and SQL Server on it), or finding a way to relay email through another provider. It turns out that you can use Hotmail for this, though you do need to log into the account you use before attempting to relay, and regularly afterwards. So that was another issue resolved.
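For what it's worth, the relay settings boil down to a Web.config fragment along these lines (smtp.live.com was the Hotmail/Live SMTP host at the time; the from address and credentials here are placeholders, and enableSsl on the network element needs .NET 4):

```xml
<!-- Web.config fragment: relay site email through a Hotmail/Live account.
     Address and password are placeholders. -->
<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network" from="admin@example.org">
      <network host="smtp.live.com" port="587"
               userName="admin@example.org" password="your-password"
               enableSsl="true" />
    </smtp>
  </mailSettings>
</system.net>
```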
The final issue to resolve was directing traffic from the domains we use now to the new Windows Azure Web Sites. Adding CNAME records for "www" to my own DNS server was the first step before I investigate moving DNS to an external provider. It's a part-fix because I really want to redirect all requests except for email (MX records) to the new site, but Windows DNS doesn't seem to allow that. However, there are DNS providers who will map a CNAME to a root domain, so that will be the answer.
Unfortunately, this was where it all fell apart. Windows Azure Web Sites obviously uses some host-header-style routing mechanism because requests using the redirected URL just produce a "404 Not Found" response. Checking the DNS settings showed that DNS resolution was working and that it was returning the correct IP address. But accessing the Azure-hosted sites using the resolved IP address also produced the "404 Not Found" response. Of course, thinking about this, I suppose I can't expect to get a unique IP address for my site when it's hosted on a shared server. The whole point of shared servers in Windows Azure is to provide a low cost environment where one IP address serves all of the hosted sites. Without the correct host name in the request, the server cannot locate the site.
According to several blog posts from people intimately connected with the technology there will be a capability to use custom domain names with shared sites soon, though probably at extra cost. The only solution I can see at the moment is to set up a redirect page in each website on my own server that specifies the actual Windows Azure URL, so that routing within Windows Azure works properly. But that means I still need to maintain my own web server!
Meanwhile, here are a few gotchas I came across that might save you some hassle if you go down the same route as I did:
So was the whole "migrate to Azure" exercise a waste of time and effort? No, because I know that I have a solution that will let me get rid of my web server and the expensive fixed-IP, business-level ADSL connection in time. And in less than two days I learned a lot about Windows Azure as well. However, what's becoming obvious is that I probably need to go down the road of using reserved instead of shared instances, or even Cloud Services instead of Web Sites. But that just raises the question of cost all over again.
Though, just to cheer me up, a colleague I brainstormed with during the process did point out that what I was really doing was Yak shaving, so I don't feel so bad now...
Yes, another episode in my continuing onslaught on the cloud. But this week it's a heartwarming story of intrepid adventure and final success. At last I'm fully resident in the cloud - or, to be more precise, several clouds. And I might even have saved some money as well...
Over the past couple of weeks I've been blethering about getting rid of my very expensive and not always totally reliable ADSL connection by moving all the various stuff attached to it into the cloud. This includes several websites and the DNS services for my own and a colleague's domains. Previous episodes described the cost verification exercise and the experimental migration, but ended with the thorny issue of traffic redirection to the new sites. And I still had the DNS issue to (again, please pardon the pun) resolve.
After looking at several commercial DNS hosting specialists, the situation seemed bleak. While they all appear fully equipped to satisfy my requirements, the cost of hosting DNS services for around 25 domains was prohibitive in my situation. An average quote of somewhere between five and eight US dollars per domain per month meant that the cost of DNS alone would be more than I pay now for all my on-premises infrastructure and connectivity. And I didn't much fancy using a free DNS service with no SLA.
But then I discovered a web hosting provider that does offer full domain management services as an add-on to their very reasonably priced packages. A quick calculation showed that paying GoDaddy.com for a fixed IP address website and all the associated frippery (such as email and other stuff that I don't need), plus the cost of their premium DNS hosting package, was around a tenth of the cost of my ADSL connection. It seemed like the perfect solution, and after signing up and spending a day setting up the DNS records in their superb web interface it all worked fine. They support secondary DNS as master and slave, so I was able to create secondary domains for my colleague's domains as well as configuring my own domains to use his DNS server as a secondary. Then it was just a matter of changing the IP addresses of my domains and my own root DNS server entry at Network Solutions to point to their DNS servers.
However, this still didn't solve the problem of redirecting traffic to my Windows Azure websites. I can set up CNAME records in the new DNS server for subdomains, but (as I discovered last week) that doesn't help because Windows Azure Web Sites depends on host headers to locate the sites. But now I have a hosted website with a fixed IP address at GoDaddy, so I can move all the redirection pages from my own web server to this hosted site. As all the domains now point to a single website I'll need to do some fancy stuff to detect the requested domain name and redirect to the correct site on Windows Azure. But I had the forethought to specify a Windows host for the site when I set it up, so I can use ASP.NET for that. Easy!
So I pointed all the root and "www" records in DNS to the fixed IP address of my GoDaddy site and set up a simple default ASP.NET page there that extracts the requested URL as a Uri instance from the Request.Url property, parses out the domain, and does a Response.Redirect to the appropriate page on the matching Windows Azure website. There's no need for visitors to see a "we have moved" redirection page, and Windows Azure gets the correct domain name in the request so that it can find my site.
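As a sketch, the default page is little more than a lookup table; the domain names and Azure site URLs below are invented examples, not my actual sites:

```csharp
// Default.aspx.cs - map the requested host name to the matching
// Windows Azure Web Site. All names here are invented examples.
using System;
using System.Collections.Generic;
using System.Web.UI;

public partial class _Default : Page
{
    static readonly Dictionary<string, string> Sites =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
    {
        { "example.org",         "http://example-org.azurewebsites.net/" },
        { "www.example.org",     "http://example-org.azurewebsites.net/" },
        { "weather.example.org", "http://example-wx.azurewebsites.net/" }
    };

    protected void Page_Load(object sender, EventArgs e)
    {
        // The DNS root and "www" records all point at this one site,
        // so the requested host tells us which Azure site to use.
        string target;
        if (Sites.TryGetValue(Request.Url.Host, out target))
        {
            Response.Redirect(target, true);
        }
    }
}
```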
But there's another problem. Requests to anything other than the root of a domain (such as requests for specific pages from search engine results) don't work because the site can't find the specified page or file. The default page doesn't get called in this case. Instead, the server just sends back a 404 "Not Found" page. However, GoDaddy allows you to specify your own 404 Error page, so I pointed this at an ASP.NET page that parses the original request (the bit after the "404;" in the request string) and builds a new URL pointing to the appropriate Windows Azure site, together with the full path and query string of the requested page or file. It displays a "page moved" message for five seconds (yes, I know this is annoying) and then does a client-side redirect using a META refresh instruction. So that's the problem solved!
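The parsing part of that custom 404 page might be sketched like this; the "404;" prefix is how IIS passes the original URL to a custom error page, while the target host name and the MovedMeta control id are invented placeholders:

```csharp
// NotFound.aspx.cs - rebuild the original request against the Azure site.
// Target host and the MovedMeta control id are placeholders.
using System;
using System.Web.UI;

public partial class NotFound : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // RawUrl looks something like:
        // "/NotFound.aspx?404;http://example.org:80/news/item.htm"
        string raw = Request.RawUrl;
        int marker = raw.IndexOf("404;", StringComparison.Ordinal);
        if (marker < 0) return;

        var original = new Uri(raw.Substring(marker + 4));
        string target = "http://example-org.azurewebsites.net"
                        + original.PathAndQuery;

        // Fill in the <meta runat="server" id="MovedMeta"
        // http-equiv="refresh" /> tag in the page markup so the
        // browser redirects after the five second "page moved" message.
        MovedMeta.Attributes["content"] = "5;url=" + target;
    }
}
```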
Err, not quite. Some of my sites are ASP.NET, and a request for a non-existent page doesn't result in a 404 "not found" error. Instead, the ASP.NET handler creates a 500 "code execution" error. And GoDaddy doesn't allow you to specify the default error page for this. But you can specify the error page in Web.config, or (as I did) just use a global error handler in Global.asax to redirect to a custom ASP.NET error page. My custom error page pulls out the original URL, does the same parsing to build the correct URL on the Windows Azure site, and returns a "page moved" message with a client-side META redirect.
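The global handler itself is only a few lines; in this sketch, Moved.aspx is a hypothetical custom error page that does the same parse-and-META-redirect job, and the original URL is passed to it in the query string:

```csharp
// Global.asax - send ASP.NET errors for missing pages to a custom
// "page moved" page. Moved.aspx is a placeholder name.
void Application_Error(object sender, EventArgs e)
{
    var ex = Server.GetLastError() as HttpException;
    if (ex == null) return;

    // Clear the error so ASP.NET doesn't render its own error page,
    // then hand the originally requested URL to the custom page.
    Server.ClearError();
    Response.Redirect("~/Moved.aspx?original=" +
        Server.UrlEncode(Request.Url.ToString()), true);
}
```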
So that's it! My own web and DNS server is switched off, and everything is going somewhere else. A quick call to my ISP means that my expensive business ADSL connection has been replaced by a much simpler business package at a considerably lower price. I even managed to persuade the nice sales guy to cancel the downgrade fee and give me a single fixed IP address on the new service so I can still run a web server for testing, or whatever else I need, should the situation arise.
Was it worth the effort? The original on-premises connection cost (converted to US dollars and including local taxes) was $2,148, and that doesn't cover the cost of running the server itself, maintenance, upgrades, and other related stuff. At the moment I'm using Shared mode for the Windows Azure sites, which (together with a Windows Azure SQL Database) is free for a year. My new ADSL connection package is $696 per year, and the GoDaddy hosting package (Windows website and premium DNS service) is only $186 per year, so the annual cost at the moment is less than $900 - a saving of almost 60%!
Of course, when Windows Azure Web Sites becomes a chargeable service (see Pricing Details) I'll need to review this, but the stated aim is to provide a competitive platform so even when using SQL Database I should still see a saving. And I can still investigate moving to a shared MySQL hosted database to reduce the cost. Meanwhile I'm finally free of DNS Amplification attacks, web server vulnerability attacks, and all my inbound ports are closed. I also have one less server to run, manage, monitor, maintain, upgrade, and try to keep cool in summer.
All I need to do now is out-source my day job and I can spend the next few years lazing on some remote foreign sun-kissed beach - preferably one that's got Wi-Fi...
It's customary here in England to castigate British Rail for their outlandish non-service excuses. As far back as I can remember we've had "leaves on the line". Then, after they spent several million pounds on special cleaning trains, it morphed into "the wrong kind of leaves on the line". And of course, every winter when the entire British transport system grinds to a halt they blame "the wrong kind of snow." But this week I've been introduced to a new one: "the wrong kind of electricity".
During the summer months I ply my lonely documentation engineering trade using a laptop and enjoying the almost-outdoorsness of the conservatory; soaking up the joys of summer, the birds singing in the trees, the fish splashing around in the pond, and a variety of country wildlife passing by. So when I noticed one of my regular computer suppliers was selling off Windows 7 laptops, no doubt to be ready for the imminent arrival of Windows 8, I thought it would be a good idea to pick up a decent 17" one to replace my aging 14" Dell Latitude. With age gradually degrading my eyesight I reckon I'll soon need all the screen space I can get.
So when my nice new Inspiron arrived I powered it up, worked through the "configuring your computer" wizard, removed all the junk that they insist on installing, and started to list all the stuff I'll need to install. Until I noticed that the battery wasn't charging. So I fiddled with the power settings, dug around in the BIOS, tried a different power pack, and did all the usual pointless things like rebooting several times. No luck.
So I dive into t'Internet to see if there's a known fix. Yes, according to several posts on the manufacturer's site and elsewhere there is. You replace the power supply board inside the computer at a cost of 35 pounds, or – if it's still under guarantee – send it back and they replace the motherboard. Mind you, there were several other suggestions, such as upgrading the BIOS and banging the power supply against a wall, but as I'd only had the machine for two hours none of these seemed to be an ideal solution. So I did the obvious – pack it up and send it back to the supplier as DoA (dead on arrival).
Mind you, when I phoned the supplier and explained the problem the nice lady said that it would be OK if I kept it plugged into the mains socket because then it doesn't need the battery to be charged up. True, but as I pointed out to her, it's supposed to be a portable computer. I'll need a long piece of wire if I decide to use it the next time I'm travelling somewhere by train.
And do I want a replacement? How common is the failure? To have it happen on a brand new machine is worrying. Yet, strangely, only a few weeks ago I noticed one time when I powered up my old Latitude that it displayed a message saying it didn't recognize my power pack, but then decided it did. Yet after wandering around the house I found five Dell laptop power packs and they all seem to be much the same. All 19.6 Volts, either 3.4 Amps or higher current rating. They all have the same two-pole plug, with the positive in the center. The only difference seems to be that the newer ones have 25 certificates of conformance on the label, while the older ones have around 15 (perhaps that's why they seem to get bigger each time - to make room for the larger label).
So how does the computer know which power pack I've plugged in? When I looked in the BIOS it said that the power pack was "65 Watts". Is there some high frequency modulation on the output that the computer can decipher? Or does it do the old electrician's trick of flicking the wires together to see if there's sparks, and measure the effect? Do all computers these days do the same thing? If I buy an unbranded replacement power pack will the computer pop up a window saying "You tight-fisted old miser - you don't really expect me to work with that do you?"
And is all this extra complexity, which can obviously go wrong, really needed? How comfortable will I be with all my computers now if I feel I need to check that the power supply/computer interface is still working every time I switch one of them on? It seems like the usual suspicion most people have that the first thing to die on your computer will be the hard disk is no longer true. Now your computer may decide to stop working just because you're using the wrong kind of electricity...
Here in Britain we always used to refer to a job that was never-ending as "like painting the Forth Bridge." It came about because the people who paint the huge and magnificent railway bridge over the Firth of Forth in Scotland reportedly start at one end and it takes so long that, when they reach the other end, it's time to go back and start all over again.
However, with the new type of paint they used this time it seems they won't need to do it again for 25 years (see this BBC story). So now we need to find another suitable phrase to describe endless tasks that are overtaken by circumstance. After a couple of years working on documentation and guidance for Windows Azure, I propose we replace it with "like documenting Windows Azure".
See, here's the problem. When applications came in a pretty box with some floppy disks or a CD they tended to have a strict release schedule. You could write a book about Microsoft Access 1.0 (in fact, I did) without worrying that the product would be completely different before you could get your draft through edit and sent to the printer. Even when documenting features of the operating system, such as Active Server Pages, you could reckon to hit the streets well before they released a new version of Windows Server.
But now that we are all web-enabled and Internet-driven, it seems like us poor documentation engineers have almost no chance of keeping up. Product teams can drop in new features any time they like, and entirely change the UI so that our step-by-step guides make no sense at all and all our screenshots are out of date by the time we get the pages onto MSDN. I mean, we're just putting the finishing touches to the third update of the p&p guide "Moving Applications to the Cloud" and so much has changed since we released the last update (July 2011) that we've ended up almost totally rewriting it.
As well as adding in new content around Cloud Services, such as updated configuration techniques and our own Transient Fault Handling and Autoscaling Application Blocks, we widened the migration scenarios to include Windows Azure Virtual Machines, Web Sites, hosted SQL Server, and new connectivity options such as Virtual Networks. We also added in more guidance about choosing the appropriate migration path through the ever-increasing list of hosting options. And, of course, we had to rename everything that's now got a shiny new moniker (think "SQL Azure" and "Windows Live ID").
We're really pleased with how the guide has evolved, and confidently expect the new content to be even more useful to architects and developers considering migration of on-premises applications to Windows Azure. But, at the point where we expect to release, there'll obviously be yet another update to the Windows Azure SDK that will impact the sample code. And then in the next portal refresh they'll probably add another bunch of new stuff so we're out of date again, before we can even get it to the printers.
Meanwhile, the "Hybrid Applications" guide we released only six months ago, and the associated Hands-on Labs we released in May this year, are already starting to look a bit out of date because there's a brand new web portal; new services such as Virtual Machines, Web Sites, and Virtual Networks; and a whole new caching mechanism. This guide's on the list for an update, but we have the second guide in the series, "Developing Applications for the Cloud", to update before then.
In fact I only just discovered that I'm now a member of the "sustained engineering team" here at p&p. At first I was a bit concerned it suggested that, up till now, we only engineered occasionally - when we weren't busy doing something else. Yet I've always considered myself to be a "documentation engineer" because creating guidance is a task I do all day every day; and it has just as strong a relationship with "real" engineering as writing code does.
Documentation engineering is like building a luxury motor car. You sculpt an attractive and intuitive exterior and interior, design and build the underlying structure that connects all the parts together, manufacture (or source from third parties) the components that actually make it work in the most cost effective way, and assemble the whole thing into a package that people will want to use every day.
The problem is that, unlike a car, you can't just paint your guidance a different color, put some fancy alloy wheels on it, and stick a new badge on the back every six months. You actually do have to re-engineer it each time. I did suggest to our project manager that he wander over to the Windows Azure development team's offices and hit them with a big stick until they promised to stop changing stuff. But he said that probably contravenes some company policy.
Maybe, instead, we can get hold of some of that 25-year paint in time for the next update...
So I just found out this week why Automobile Association patrolmen didn't need to carry four one-penny coins around at all times. According to an item on the program "QI", in the early days of Britain's acquisition of a nuclear attack capability they worried how they would contact the Prime Minister when he was out of the office, and they needed someone to push the big red button.
The plan, it seems, was for the staff at the Nuclear Attack Headquarters to phone the AA, who would send a radio message to the patrolman that shadowed ministerial convoys at all times - on his motorcycle and sidecar, if the photos they showed were accurate. He would stop the convoy and tell the Prime Minister. They would then drive to the nearest phone box and call HQ to say "go for it" (or whatever the password of the day was).
The AA bosses decided that patrolmen should always carry coins for the phone in case nobody in the Prime Minister's convoy had the correct change. However, the Home Office told the AA that they needn't bother because the Prime Minister could just phone the operator and ask to reverse the charges. Mind you, as they pointed out on QI, it's possible that the operator may be less than convinced when someone phoned and said "Hello, I'm the Prime Minister, please put me through to the Nuclear Attack Bunker and ask them to accept the charges."
Of course, this prompts some obvious questions, such as why didn't they just put a radio in the Prime Minister's car? And were cars so unreliable at that period in history that they needed to have a repair man following them all the time? But as it was Stephen Fry telling the story I suppose it must be true.
What it does reveal is how quickly technology has changed our whole perception of communication. It's only when you stop and think what it was like, even just twenty years ago, that you realize how hard it was to stay in contact with people. For example, I was a travelling salesman in those days and my biggest weekly expense was buying pre-paid phone cards for the new-fangled "cards-only" phone boxes that were appearing all over the country. And maps for each town and city so that I could actually find where I was supposed to be going, plus postage stamps for sending in orders because the cost of calls was far too expensive to spend time reading them out over the phone.
Some years later, when I talked to the young lady who took over my job, she was completely amazed that anyone could survive traveling without a mobile phone and sat-nav; never mind the other fripperies that are standard in cars now such as cruise control and air-conditioning. Of course, there are downsides such as being continually tracked through GPS, and always being contactable on the phone - I guess the spirit we had of a sales rep's life being "a great adventure" has been fatally compromised by technology.
However, what prompted all this unrelated reminiscence was working on our latest guide, Moving Applications to the Cloud, where we discuss testing Windows Azure applications that will be deployed to Virtual Machines. It was only after many attempts to rationalize deployment and test cycles based around scripts and multiple Windows Azure subscriptions that our own test team pointed out that, in actual fact, the process is no different from when you are doing everything on-premises.
The point is that today's communication and connectivity technologies can make the Internet disappear. All of a sudden my schematics containing clouds and dotted lines that represent the on-premises/cloud boundary are deemed to be unrealistic. We need to pretend that the Internet (and, for that matter, Windows Azure) doesn't actually exist at all. Instead, we just plug all the machines into a Virtual Network and they look like they are in the datacenter next door.
So when the test team starts poking around trying to break the app, or when the admin people deign to promote it to the live server, the servers all look exactly the same as before. The scripts, utilities, tools, and commands are no different, and the view through Remote Desktop is indistinguishable from that with the servers on-premises. To repeat the oft-used mantra: "It just works".
And the network admin guys could even get their own back after all the unfavorable comments they suffer at the hands of developers and testers. Toss all the test and production servers into Windows Azure, connect them up with a Virtual Network, and empty your datacenter. Then ask devs and testers to "pop down to the server room and reboot the test server", and see what excuse they come up with for not being able to find it.
A bit like when I was a naïve young lad working in a factory, and the guy who came to mend one of the machines sent me to the stores to get a long external stand. I stood outside in the rain for an hour waiting for the man on the stores counter to find one...
Some days it would be nice if things just did what you expected. Like music coming out of your MP3 player when you press the play button, or nice brown toast coming out of the toaster after the requisite two minutes. Or Windows Media Center quite happily recognizing that, yes, it does have a TV tuner card and a video card installed, instead of just hiding in the corner pretending it wasn't its fault that Coronation Street didn't get recorded.
Last week it was Virgin Media (my cable Internet provider) who decided to do something different by refusing to connect to any website with the word "microsoft" in the URL. When I spoke to the nice man in customer services he said they had a "routing issue in my area", but that it was no problem because almost everything else worked fine. All I had to do was avoid going to Microsoft sites. Though, after some animated discussion, he did reluctantly agree that it might be a problem for me unless I go and work for somebody else for a day or so. Or take a holiday. Thankfully I could use my alternative (ADSL) connection, which quite happily connected to Microsoft, while Virgin fixed their interesting issue.
But while I was playing with the cable modem and router to confirm the routing issue wasn't my fault, I noticed that it has options to block certain types of content. With the recent security scares over Java, I thought it would be useful to block this because I don't use it and - other than my UPS configuration web pages - I haven't found a website that requires it. In fact, other than one machine, Java is not installed anywhere on my network. So blocking it obviously won't cause any problems.
Oh well, I'll just create a rule to allow Facebook to bypass the Java filter. All I need is Facebook's IP address. So here's what you get when you do an nslookup for facebook.com:
    Name:      facebook.com
    Addresses: 2a03:2880:10:8f01:face:b00c:0:25
               2a03:2880:10:cf01:face:b00c::
               2a03:2880:2110:3f01:face:b00c::
               2a03:2880:2110:9f01:face:b00c::
               126.96.36.199
               188.8.131.52
               184.108.40.206
               220.127.116.11
               18.104.22.168
               22.214.171.124
As my router doesn't seem to have any facility to specify IPv6 addresses, and it only allows one IP address per rule, maybe this isn't an option either. But notice the last two groups of hex characters in some of the addresses above. I bet Mark and his pals now wish they'd called it "Facebooc" instead...
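For what it's worth, sorting a mixed lookup result into the addresses the router could and couldn't handle is straightforward; here's a minimal Python sketch using the standard-library ipaddress module (the address list is just a sample from the lookup above, not anything the router itself exposes):

```python
import ipaddress

# A sample of the addresses returned by the nslookup above (mix of IPv6 and IPv4)
addresses = [
    "2a03:2880:10:8f01:face:b00c:0:25",
    "2a03:2880:10:cf01:face:b00c::",
    "126.96.36.199",
    "188.8.131.52",
]

# Partition into the family the router's rules can use (IPv4) and the one it can't (IPv6)
ipv4 = [a for a in addresses if ipaddress.ip_address(a).version == 4]
ipv6 = [a for a in addresses if ipaddress.ip_address(a).version == 6]

print("IPv4 (usable in a router rule):", ipv4)
print("IPv6 (no router support):", ipv6)
```

Of course, even with the IPv4 addresses isolated, a router that only takes one address per rule still leaves you writing half a dozen rules per site.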
It's been a month or so since I swallowed the Azure blue pill and moved all my local and community websites to Windows Azure Web Sites and Windows Azure SQL Database. Into each life, they say, a little rain will fall (well, actually Longfellow said it) but so far it's been pretty much sunshine all the way.
I'm not going to say that the Web Sites feature is perfect by any means. The lack of fixed IP addresses has made setting up DNS more difficult; I've had to use a separate website hosted at GoDaddy to do the redirection I need for the domains and paths (see Fully Cloud-Enabled!). There are new domain routing features available in Windows Azure Web Sites now, but they don't apply to the free shared package I'm using.
I also noticed that the sites take a bit longer to start up than they did on my own server. I guess this is because they need to be loaded from the backing storage if they've been idle for a while, and the initial connection to SQL Database also seems to slow down the appearance of the home page for the first hit. In fact, for one complex site, I added a "Loading..." page so that the first response is a bit faster. Yet users report that the sites do seem to run more quickly, even though they're hosted in North Europe now rather than here in Ye Olde England. Though it's probably safe to assume that Microsoft's datacenter has a thicker piece of wet string connecting it to t'Internet than I do.
As to availability, I certainly have no complaints. My home-made website monitoring service checks each site regularly, and it's showing around 99.96% uptime at the moment. That's better than I could achieve with my own server in my garage. The only time it's been a bit vague was when GoDaddy's websites and DNS fell over during the recent much-publicized attack they suffered.
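As an aside, it's easy to work out what 99.96% uptime actually allows in downtime terms; a quick back-of-the-envelope calculation in Python (assuming a 30-day month purely for illustration):

```python
uptime = 0.9996
minutes_per_month = 30 * 24 * 60  # assuming a 30-day month

# Downtime budget implied by the measured uptime figure
downtime_minutes = (1 - uptime) * minutes_per_month
print(f"{downtime_minutes:.1f} minutes of downtime per month")
```

That comes out at a little over a quarter of an hour a month, which a garage-hosted server behind a domestic power supply would struggle to match.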
I don't do much uploading and modification to the sites, but what bits I needed to do have been easy using Web Matrix's Web Deploy feature. The one exception is the local weather website, which is updated automatically every hour over FTP by the Cumulus software I use. It's suffered occasional connection failures on upload, but they have been very rare - around one every ten days or, with hourly uploads, less than half a percent of attempts.
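That failure rate is simple to sanity-check: Cumulus uploads once an hour, so one failure every ten days works out like this (a trivial Python check):

```python
uploads_per_day = 24          # Cumulus uploads once an hour
failures_per_ten_days = 1

# Failure rate over a ten-day window of hourly uploads
rate = failures_per_ten_days / (uploads_per_day * 10)
print(f"{rate:.2%}")  # 0.42% - indeed less than half a percent
```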
OK, so we're still six months or so from finding out what the future charges will be for the shared Web Sites feature, so my hosting decisions may need to change. But until then I'm well chuffed (as we Northerners say) with the service. I even have an account executive who sends me regular emails; and responded almost instantly to my only contact with them by phoning me back within the hour! A free service that offers great support and customer service...what more could I ask for?