Larry Franks and Brian Swan on Open Source and Device Development in the Cloud
Since this blog will focus on OSS and device development for the Windows Azure platform, we want to develop a shared understanding of what it means to design and build applications for the cloud. To make sure that we start with that common understanding, we’ll propose answers to these three questions in this post:

- What are the benefits of cloud computing?
- What are the principles of cloud computing?
- What are the challenges of cloud computing?
Clearly, we can’t hope to explore possible answers to these questions in great depth in one post, so we’ll attempt to answer them only at a high level here. Our focus will be on exploring answers to these questions for any cloud application (not just Windows Azure applications) with the promise of getting into the details of OSS/device development for the Azure platform in the weeks and months ahead. And, we don’t expect our answers to be definitive…cloud computing is still too new and wide-open to expect that. We’d love to hear what you think about what it means to build an application for the cloud.
What are the benefits of cloud computing?
No matter how you slice it, the “benefits” of cloud computing come down to a single benefit: cost savings.
If you are considering using a cloud platform to run an application, then you are probably thinking about the availability and scalability of your application, or the cost savings of not having to maintain your own datacenter (and if you are not, then you should ask yourself why you are considering the cloud). Aside from development costs, your largest costs (or among the largest) are a data center (possibly several) capable of handling peak usage, plus the people to manage the data center(s). By using a cloud offering instead of a traditional data center, you remove much of the effort required to manage your machines and also allow for rapid provisioning of new machines as you need them. The ability to rapidly provision and remove machines, together with the decreased management effort, is what ultimately saves you money. (That is, of course, over-simplifying things. Different cloud platforms offer different ways to reduce the cost of running a highly available, highly scalable application, but all are ultimately focused on reducing cost.)
The classic example is an ecommerce application whose usage varies with the time of year (e.g., high use during the winter holidays). If you are responsible for the data centers that run the application, you have to make sure you have enough capacity to handle peak traffic. Of course, this means many of your servers sit idle for much of the year – you are paying for more than you use over the course of a year. When you run the same application in the cloud, you can match your capacity much more closely to the demand, thus paying only for what you use.
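To make the cost argument concrete, here is a back-of-the-envelope sketch in Python. All of the numbers (the hourly rate, the monthly demand curve) are made up for illustration – they are not real prices or a real workload – but the shape of the comparison is the point: the in-house data center must be sized for the December peak all year, while elastic capacity tracks demand.

```python
# Hypothetical numbers: comparing a fixed, peak-sized data center with
# elastic cloud capacity that follows monthly demand.

HOURLY_RATE = 0.12   # assumed cost per server-hour; not a real price quote
HOURS_PER_MONTH = 730

# Servers actually needed each month (peaking in Nov/Dec, as in the
# e-commerce example above).
monthly_demand = [20, 20, 25, 25, 30, 30, 30, 35, 40, 50, 90, 100]

# In-house: you must provision for the peak month all year long.
in_house_servers = max(monthly_demand)
in_house_cost = in_house_servers * HOURS_PER_MONTH * 12 * HOURLY_RATE

# Cloud: capacity follows demand month by month.
cloud_cost = sum(n * HOURS_PER_MONTH * HOURLY_RATE for n in monthly_demand)

print(f"In-house (peak-sized): ${in_house_cost:,.2f}")
print(f"Cloud (elastic):       ${cloud_cost:,.2f}")
print(f"Savings:               {1 - cloud_cost / in_house_cost:.0%}")
```

With these invented numbers the elastic approach costs well under half as much, entirely because the idle off-peak capacity is never paid for.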
This diagram was adapted from a presentation by Josh Holmes.
The diagram above helps to illustrate the benefit. Note that with in-house servers, capacity often far exceeds actual demand (and can sometimes fall short of it!). With servers in the cloud, you can rapidly spin up and spin down servers to closely match demand, thus paying only for what you use and always keeping up with demand.
What are the principles of cloud computing?
To take advantage of the benefits of cloud computing, you need to design and build applications with the cloud in mind. The problems you will run into by simply taking an in-house application and deploying it on cloud servers will quickly demonstrate how important this point is.
The basic principles on which good cloud applications are built are the same as those for building a distributed application (with the added twist of building for elastic scalability). If you have been building applications on commodity hardware with availability and scalability in mind, you are likely already familiar with the principles of distributed computing. On the other hand, if you have been solving availability and scalability problems by upgrading hardware (i.e., adding more CPUs and more RAM per server), then you have some work to do in understanding how to design an application for the cloud. In other words, if you understand scale out (as opposed to scale up), then you are well on your way to building applications for the cloud.
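One way to see what scale out demands of an application is a minimal sketch of a stateless web tier. Everything here is illustrative (the `Server` class, the dict standing in for an external store): the point is that because no server holds session state of its own, any request can go to any server, and "adding capacity" is nothing more than adding another server to the pool – which is exactly what rapid cloud provisioning gives you.

```python
import itertools

# Stands in for an external cache or database shared by all servers.
shared_store = {}

class Server:
    """A hypothetical stateless web server."""
    def __init__(self, name):
        self.name = name

    def handle(self, user, action):
        # No per-server session state: read/write the shared store instead,
        # so it doesn't matter which server gets which request.
        cart = shared_store.setdefault(user, [])
        cart.append(action)
        return f"{self.name} handled {action!r} for {user}"

def dispatch(pool, requests):
    # Round-robin load balancing across whatever servers exist right now.
    for server, (user, action) in zip(itertools.cycle(pool), requests):
        print(server.handle(user, action))

pool = [Server(f"web{i}") for i in range(2)]
dispatch(pool, [("alice", "add:book"), ("alice", "add:lamp")])

pool.append(Server("web2"))   # "scale out": demand rose, add a server
dispatch(pool, [("alice", "checkout")])

# Alice's cart is intact even though three different servers touched it.
print(shared_store["alice"])
```

If the session state had lived in a single server's memory instead, adding a third server would not have helped – which is why scale-up habits (bigger boxes, local state) translate poorly to the cloud.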
The following are the principles of distributed computing:
The last principle we’d like to call out here (although we’re not sure this is technically a “principle”) is not necessarily related to distributed computing, but it is related to the value proposition of cloud computing: anticipate scale-up and scale-down needs. To take advantage of the benefits of cloud computing, you have to plan for scaling up and scaling down in advance. Some of your planning can be programmed into your application (i.e., when traffic hits X requests per minute, spin up Y new servers), but other planning may be “manual”. In the context of the e-commerce example, you may want to adjust your programmatic scale rules for November and December (when you expect more traffic). And, in anticipation of Black Friday or Cyber Monday, you may want to spin up new servers ahead of time (spinning up new servers isn’t instantaneous, so it is a good idea to do this in anticipation of a dramatic spike in traffic).
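A rule like “when traffic hits X requests per minute, spin up Y new servers” can be sketched as a small policy function. This is a toy, not a real Azure API – the thresholds, the step sizes, and the `expected_spike` flag for manual Black Friday-style planning are all assumptions made up for illustration.

```python
# Illustrative autoscaling policy; thresholds are assumed, not prescriptive.
SCALE_UP_RPM = 1000    # requests/min per server before we add capacity
SCALE_DOWN_RPM = 300   # requests/min per server before we shed capacity
MIN_SERVERS = 2

def desired_server_count(current, requests_per_minute, expected_spike=False):
    """Return how many servers we want running."""
    per_server = requests_per_minute / current
    if expected_spike:
        # Manual planning: servers take time to provision, so add capacity
        # ahead of a known event (Black Friday) rather than reacting late.
        return current * 2
    if per_server > SCALE_UP_RPM:
        return current + 2                  # "spin up Y new servers"
    if per_server < SCALE_DOWN_RPM and current > MIN_SERVERS:
        return current - 1                  # scale down, but keep a floor
    return current

print(desired_server_count(4, 8000))                       # overloaded
print(desired_server_count(6, 1000))                       # quiet period
print(desired_server_count(4, 2000, expected_spike=True))  # known spike ahead
```

In a real deployment the inputs would come from platform metrics and the output would drive provisioning calls, but the split shown here – reactive rules plus manual overrides for anticipated events – is the planning the paragraph above describes.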
What are the challenges of cloud computing?
Designing according to the principles above may seem like challenge enough for building a cloud application, but there are a few other challenges worth pointing out:
So, those are the benefits, principles, and challenges of cloud computing as we see them. As mentioned earlier, we’d love to hear your thoughts in the comments. We’ll use this post (and your comments) as a reference point as we write and publish content about building OSS and device applications for the Windows Azure platform.