It has been my experience that a great many things are possible in Windows Azure; however, some are relatively easy, some are doable with difficulty, and some are best avoided. There is starting to be a lot of good guidance on how to do things with the platform, as well as some patterns and anti-patterns, but since the platform is relatively new we still need more.
A complete catalog of patterns (and anti-patterns) would list many more than appear here. Fortunately no catalog of design patterns is ever complete, so the category is open ended, and as the Windows Azure platform evolves additional ones will emerge.
What follows is simply JMNSHO.
The use of more than one instance of a web role is a no-brainer; in fact at least two web role instances are required to qualify for the Windows Azure Service Level Agreement. The reason I glorified this with the title Web Farming is that this is essentially what you are doing: whether you build a web farm yourself on-premise with multiple web servers or deploy two instances of a web role, you are creating a web farm. In a web farm, whether on-premise or in Windows Azure, you have to worry about state, since two requests may arrive at different servers/instances and the state might be on the wrong machine. This means you should make sure your applications can work statelessly by storing persistent data externally to the server/instance.
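The idea can be sketched in a few lines of Python. Here `ExternalStateStore` is an in-memory stand-in for external storage such as Windows Azure Table or Blob storage (a hypothetical interface, not the real SDK); because no instance keeps session state locally, a session's second request can land on a different instance and still see the state written by the first.

```python
class ExternalStateStore:
    """In-memory stand-in for shared external storage (e.g. Table storage)."""
    def __init__(self):
        self._data = {}

    def save(self, session_id, state):
        self._data[session_id] = dict(state)

    def load(self, session_id):
        return dict(self._data.get(session_id, {}))


class WebRoleInstance:
    """One instance in the farm; it holds no session state of its own."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def handle_request(self, session_id, item):
        state = self.store.load(session_id)   # fetch shared state
        cart = state.setdefault("cart", [])
        cart.append(item)
        self.store.save(session_id, state)    # write it back externally
        return len(cart)


store = ExternalStateStore()
instance_0 = WebRoleInstance("web_0", store)
instance_1 = WebRoleInstance("web_1", store)

instance_0.handle_request("sess-42", "book")          # first request
count = instance_1.handle_request("sess-42", "pen")   # next request, other instance
# count == 2: instance_1 sees the item added by instance_0
```

The same request sequence would fail on a farm that kept session state in instance memory, which is exactly why the load balancer's freedom to route anywhere forces state out of the instance.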
This is the classic pattern of using a Windows Azure Queue and either Blob or Table Storage to communicate between web and worker role instances. With this pattern a web role stores a blob or a set of table storage entities in Windows Azure storage and enqueues a request that is later picked up by a worker role instance for further processing.
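A minimal sketch of that handoff, using plain Python stand-ins (`queue.Queue` and a dict) in place of the real Azure Queue and Blob services: the web role writes the payload as a "blob" and enqueues only a small pointer message; the worker dequeues the pointer, fetches the blob, and processes it.

```python
import queue
import uuid

blob_store = {}                 # stand-in for Windows Azure Blob storage
work_queue = queue.Queue()      # stand-in for a Windows Azure Queue

def web_role_submit(payload: bytes) -> str:
    """Web role: store the payload as a 'blob', enqueue a pointer to it."""
    blob_name = str(uuid.uuid4())
    blob_store[blob_name] = payload
    work_queue.put(blob_name)   # the queue message carries only the blob name
    return blob_name

def worker_role_poll() -> bytes:
    """Worker role: dequeue a pointer, fetch the blob, process it."""
    blob_name = work_queue.get()
    payload = blob_store[blob_name]
    result = payload.upper()    # stands in for the real processing step
    del blob_store[blob_name]   # clean up only after successful processing
    return result

web_role_submit(b"render this report")
result = worker_role_poll()     # b'RENDER THIS REPORT'
```

Keeping the large payload in blob storage and passing only a name through the queue matters in the real service, where queue messages are limited to a few kilobytes.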
Since SQL Azure is essentially SQL Server in the cloud offered up as a service, this is also a no-brainer, especially since, in most cases, you can change an application or tool from referencing an on-premise database to one in the cloud simply by changing the connection string. That is not to say the move is totally transparent: since one database is located locally and the other in the cloud, there are some differences, mostly dictated by things like the higher latency of the internet compared to an intranet.
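The connection-string change might look like the following sketch (server, database, and credential names are hypothetical, and the code only selects a string rather than opening a connection):

```python
# On-premise SQL Server: trusted connection on the local network.
ON_PREMISE = (
    "Server=sqlbox01;Database=Northwind;"
    "Integrated Security=SSPI;"
)

# SQL Azure: fully qualified server name, SQL authentication,
# and encryption, which SQL Azure requires.
SQL_AZURE = (
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=Northwind;User ID=admin@myserver;"
    "Password=<secret>;Encrypt=True;"
)

def connection_string(use_cloud: bool) -> str:
    # The application code is unchanged; only this string differs.
    return SQL_AZURE if use_cloud else ON_PREMISE
```

Everything downstream of `connection_string` stays the same, which is the point: the switch is configuration, not code.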
If the inter-role-instance communication using queues pattern does not work for you, then you can wire up your own inter-role-instance communication using WCF. You might want to do this if, for instance, you are implementing a map-reduce type of application where one role instance needs to be different from the others, such as a controller instance that fires up and manages a variable set of worker roles cooperating on the partitioned solution of a problem. Of course you could use queues to communicate between them, but that may not be performant enough.
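The controller/worker shape described above can be sketched as follows; the direct method calls stand in for the WCF endpoints between role instances (all class and method names here are illustrative, not a real Azure API):

```python
class WorkerRole:
    """A worker instance exposing an operation the controller invokes directly."""
    def process_partition(self, partition):
        # "Map" step: each worker handles only its own slice of the data.
        return sum(partition)


class ControllerRole:
    """The one-of-a-kind instance that partitions work and gathers results."""
    def __init__(self, workers):
        self.workers = workers

    def run(self, data):
        # Partition the input across the available worker instances...
        n = len(self.workers)
        partitions = [data[i::n] for i in range(n)]
        # ...invoke each worker directly (the WCF call in a real app)...
        partials = [w.process_partition(p)
                    for w, p in zip(self.workers, partitions)]
        # ..."reduce" the partial results into the answer.
        return sum(partials)


controller = ControllerRole([WorkerRole() for _ in range(3)])
total = controller.run(list(range(10)))   # 0 + 1 + ... + 9 = 45
```

The direct call avoids the enqueue/poll round-trip of the queue pattern, at the cost of the controller having to know about and manage its workers.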
In case it has escaped your notice, let me remind you that everything in Windows Azure costs money. You pay for every running instance of every role that you deploy, based on the amount of time it is deployed into Azure. Windows Azure offers a variety of compute instance sizes (Extra Small, Small, Medium, Large, Extra Large) that you can specify in your application, each with a different price. In some cases it may make sense to have fewer instances that do their own multi-threading rather than a larger number of instances. You need to decide, probably by benchmarking your application, where the best approach and best value lie. Of course you can also implement dynamic scaling to optimize the process, but that requires development effort. But then so does multi-threading.
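A back-of-the-envelope sketch of that tradeoff is easy to run. The hourly rates below are made-up illustrative numbers, not actual Azure pricing; the point is only that raw compute cost can come out the same, so benchmarking has to decide which shape delivers more throughput.

```python
# Assumed US$ per instance-hour -- illustrative only, not real pricing.
HOURLY_RATE = {
    "ExtraSmall": 0.05,
    "Small": 0.12,
    "Medium": 0.24,
    "Large": 0.48,
    "ExtraLarge": 0.96,
}

def monthly_cost(size: str, instances: int, hours: int = 730) -> float:
    """Cost of running `instances` roles of `size` for ~one month."""
    return HOURLY_RATE[size] * instances * hours

# Four Small instances vs one multi-threaded Large instance:
four_small = monthly_cost("Small", 4)   # 0.12 * 4 * 730
one_large = monthly_cost("Large", 1)    # 0.48 * 1 * 730
# With these (assumed) rates the two cost the same, so the decision
# rests entirely on which configuration benchmarks faster.
```

If the rates did not scale linearly with size, the same three-line calculation would immediately expose which configuration is cheaper per unit of work.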
These features, coming “soon” to Windows Azure, include: Remote Desktop Support which enables connecting to individual service instances, Elevated Privileges which enables performing tasks with elevated privileges within a service instance and the full Virtual Machine (VM) Role which enables you to build a Windows Server virtual machine image off-line and upload it to Windows Azure.
For the ISV these features make it easier to port a legacy application that might use things like COM+ to Windows Azure. They also make it easier to peek inside of a live role instance to see what is happening.
But beware: the VM Role is definitely not Infrastructure as a Service (IaaS). Managing failures, patching the OS, and maintaining persistence in the face of VM failures are your responsibility. Note that the VM Role is recommended only as a technique for migrating an application to Azure. The real long-term answer is to re-architect your application to be stateless and fit the capabilities of Windows Azure.
Building applications that are split between on-premise and in-cloud components is inherently more difficult than building applications that live entirely in one place or the other, but in many cases there is great value in doing so. On-premise applications can leverage SQL Azure and Windows Azure Blob, Table and Queue Storage directly. Windows Azure applications can also, with a bit more difficulty, do the reverse, leveraging things like SQL Server and other data sources located on-premise in the data center. The use of REST-styled APIs in Azure makes this somewhat easier; however, the features here are still evolving.
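As one small example of the REST-styled access, an on-premise client can read a publicly readable blob with a plain HTTP GET against the Blob service endpoint. This sketch only constructs the request URL (the account, container, and blob names are hypothetical, and no network call is made):

```python
def blob_url(account: str, container: str, blob: str) -> str:
    # Public (anonymous-read) blobs are fetched with a plain HTTP GET
    # against this URL; authenticated access adds a signed header.
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

url = blob_url("myaccount", "reports", "q4.pdf")
# An on-premise client could now issue: GET <url>
```

Because it is just HTTP, any on-premise language or tool with an HTTP client can participate; no Azure SDK is required on that side.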
The Windows Azure AppFabric Service Bus and Access Control Service (together with Active Directory Federation Services) are powerful tools that can be used to build major hybrid cloud/on-premise applications.
The following should be considered anti-patterns for now. Of course things could always change in subsequent releases, since Azure is an evolving platform, but for the present they should be avoided.
The VM Role is not IaaS (yet). Building your own clusters out of VM Roles to run servers like Exchange, SQL Server, SharePoint or an Active Directory Domain Controller is not a good idea even if you could set it up.
Remember that the VM Role still expects stateless apps, so we are discounting the idea of legacy apps running in a VM Role. Most of our server software was not built with Azure in mind. You can run such servers in their own VM Role, but the thought of an Exchange or AD server being turned off and rolled back due to some failure should give you pause. Even if you create your own cluster architecture for this, the difficulty of setting it up and maintaining it in the face of potential state loss puts it in the anti-pattern category.
The VM Role is really for software that has a fairly fragile or custom install experience. If what you are installing requires state to be maintained in the face of a failure, it is really not well suited to the VM Role.
The same holds true for third-party servers such as MySQL, Oracle, DB2, etc. Remember that if you need a relational database like SQL Server, SQL Azure is, in general, the best way to go.
The same holds true here as for running other legacy servers in Azure. However, at PDC we demonstrated a port of TFS running in Windows Azure as a proof of concept, and there was even talk of a CTP early next year of TFS as a service (TFSaaS?).
I realize that the list of anti-patterns is very thin and undoubtedly there are other patterns that you should not try to implement in Windows Azure as it stands today.
Feel free to comment with your suggestions, or send your comments to me (wzack@Microsoft.com).