In an on-premises system, most of us start fulfilling business computing requirements with a buy-or-build decision. If a software package fills the need of the business, then depending on its price and other factors you normally use that. Some of these packages can be extended or adapted (like SAP), so it isn't a purely off-the-shelf decision, but nevertheless you start by running "setup.exe" or "./setup" on a physical server, or more often on a Virtual Machine hosted in an Infrastructure as a Service (IaaS) configuration. In fact, "boxed software" is probably the primary use of an IaaS solution.
If, however, you decide to build software, or perhaps your company actually sells software, the overall system architecture is driven by multiple people and multiple decision points. In the past, the entire IT team worked together to create an architecture: developers selected the language for writing the software, and the infrastructure team configured various physical servers or VMs to run it, each with its own complete environment. Once those decisions were made, the rest of the architecture was often dictated by what servers (and licenses), networks, security, talent, and other "platform" elements, including the operating system, the scaling approach (up or out), High Availability, and so on, were available to the organization.
But with the advent of Platform as a Service (PaaS) systems like Windows Azure and SQL Azure, these decisions change dramatically.
PaaS is not IaaS, meaning that the need to build a VM, configure it with an IaaS provider, and architect in scale, HA, DR, and so on goes away. PaaS already has a system of running components that provide compute, storage, queue messaging, service buses, and many other operations. The PaaS provider monitors and manages these components. Scale is built in (in the case of Windows Azure). Disaster Recovery (DR) is now a shared responsibility between the PaaS provider and the software architect.
The developer now chooses the languages he or she wants to use (.NET or open-source languages like Java) and designs the system from the component level. Since there is no infrastructure team involved, the developers and software architects select the components they want to use and how they want to use them. Licensing changes to a consumption model (you pay for what you use). Because of these factors, the system design selections are pivotal from cost, performance, HA/DR, and many other standpoints. In fact, done properly, the code now drives the way the systems are laid out and used; in effect, this type of computing is a code-based infrastructure.
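To make the consumption model concrete, here is a minimal sketch of how a monthly bill follows usage rather than purchased hardware. All of the rates and usage figures are hypothetical placeholders for illustration, not actual Windows Azure pricing:

```python
# Hypothetical pay-for-what-you-use pricing; none of these rates are real.
HYPOTHETICAL_RATES = {
    "compute_hour": 0.12,        # per instance-hour
    "storage_gb_month": 0.15,    # per GB stored per month
    "per_10k_transactions": 0.01,  # per 10,000 storage/queue transactions
}

def monthly_cost(compute_hours, storage_gb, transactions):
    """Estimate one month's bill under a consumption model."""
    return (
        compute_hours * HYPOTHETICAL_RATES["compute_hour"]
        + storage_gb * HYPOTHETICAL_RATES["storage_gb_month"]
        + (transactions / 10_000) * HYPOTHETICAL_RATES["per_10k_transactions"]
    )

# Two worker instances for a 730-hour month, 50 GB stored, 2 million transactions:
cost = monthly_cost(compute_hours=2 * 730, storage_gb=50, transactions=2_000_000)
print(f"${cost:.2f}")  # $184.70 with these made-up rates
```

The point is that a design choice in the code (how chatty it is, how much it stores, how many instances it spins up) now shows up directly on the bill, which is why design selections become pivotal.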
I’m often asked, “How does IT adapt to distributed (cloud) computing? My team isn’t involved in some of these decisions anymore.” We adapt the way we always have: we look at the technology and understand where it fits. We tool up to make the best use of the technology to move our company or organization forward. Just as we did with the inclusion of PCs and LANs in the mainframe era of the past, we’ll adapt to this new way of computing as well. This time, with the code in the forefront, not the physical (or even virtual) systems.
My recommendation is that you learn the architecture of systems like Azure, and apply the same architecting skills you’ve developed for physical systems. Help developers figure out how to handle large sets of data, code-near or code-far decisions, and more. I’ve got information on these components here: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
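The code-near versus code-far decision mentioned above mostly comes down to round-trip latency: when code and data sit in the same data center, each call is cheap; across the public internet, each call costs far more. A rough back-of-the-envelope sketch, with purely hypothetical latency numbers (not measured Azure figures):

```python
# Hypothetical per-round-trip latencies for illustration only.
CODE_NEAR_MS_PER_TRIP = 2    # code and data in the same data center
CODE_FAR_MS_PER_TRIP = 80    # code calling data across the internet

def total_latency_ms(round_trips, ms_per_trip):
    """Chatty code pays the network cost once per round trip."""
    return round_trips * ms_per_trip

# A chatty page that makes 25 small data calls per request:
near = total_latency_ms(25, CODE_NEAR_MS_PER_TRIP)
far = total_latency_ms(25, CODE_FAR_MS_PER_TRIP)
print(near, far)  # 50 ms vs. 2000 ms for the same workload
```

This is exactly the kind of analysis where traditional architecting skills carry over: the trade-off is old (network distance versus consolidation), only the platform is new.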
DR goes away? Shouldn't your DR plan be what happens when your cloud provider suffers a major outage or goes out of business unexpectedly?
twh - you may want to read the post again. DR does NOT go away - as I stated, DR is now a shared responsibility. You have every right to expect the vendor to keep backups and so on for DR, to have good systems, etc. But you have an equal responsibility not to trust them and to develop your own DR as well. :)