Analysts such as Gartner say the cloud platform business has a PaaS future. Yet when you look at the market today, it’s the IaaS market that is strong, healthy and growing very fast. Let’s stick with the two main protagonists of each approach – Amazon for IaaS and Microsoft for PaaS. Despite the rosy future predicted for PaaS, Amazon continues to show strong growth. How can that be? Is it that PaaS just hasn’t captured the imagination of IT folks yet? I don’t think so. It has more to do with the number of applications that can easily take advantage of IaaS technologies today, compared to PaaS technologies.

There is a continuum from control to abstraction. At the control end, you get complete power over the platform. You decide when and how you will patch, apply security fixes, service packs and so on. You control the network, the OS, the runtimes and middleware (meaning platforms like Java and .Net), the application’s data and the application itself. At the abstraction end, you are denied all but the simplest and most necessary elements of control. You can’t do backup and restore operations, you can’t apply patches no matter how important you think they are – in fact you might not even know what OS, hardware or runtime platforms are sitting under the software you are using. Maybe the provider will let you create your own users, rather than phoning a help-desk to do it for you – more a convenience on the part of the service provider than anything we could really call “control”. This is the SaaS extreme. Let’s place various technologies on that continuum.

[Image: the control-to-abstraction continuum, with IaaS, PaaS and SaaS technologies placed along it]

The above diagram is my personal view of where they fit.


“With great power comes great responsibility” – Uncle Ben, Spider-Man


It doesn’t matter whether you consider a global social networking site or a simple CRM system: the value of the application is in what is delivered to its end-users. There is no inherent value in the behind-the-scenes operational management of it. It’s something that cannot be ignored – a necessary evil. Without that kind of management, applications would go offline, corrupt data, become unreliable and ultimately fail to provide end-users with the service they need. But consider the processes of making sure the servers don’t overheat, taking regular backups to cope with the inevitable failures that will occur, monitoring for gradually increasing disk errors, for memory leaks and for unexplained high CPU utilisation. None of those things actually gives value to the users. They enable the application to give value, but they don’t give value themselves.

But it’s when these things go wrong, or when the processes that look after them aren’t robust enough, that users fail to receive the value the application gives them. The result: they may not be able to communicate with their friends and family on the other side of the world for a day, maybe two. They may not get a large sales order into their system, which skews the quarter-end results and causes investors to retreat. A couple of these well-timed failures could result in the collapse of a company.

The result over the years has been the need for more and more monitoring, more and more control. Installing a piece of hardware which has no knobs, buttons or software controls is fine – until the post-mortem (witch-hunt) after the failure, when it is realised the device was never able to give a continuous commentary on a sustained growth in errors. Had the problem been noticed before it got too bad, it could have been dealt with.

So the growth in instrumentation, telemetry, automatic monitoring and systems management technologies has itself spawned an entire industry, growing applications which themselves deliver value to their users – the value of keeping them in control of the applications that, in their turn, deliver value to the end-users. It’s all about control. It’s almost certainly the lineage of IT that has generated this notion – the “men in white coats” syndrome: back in the early days, IT was a scientific endeavour.

It has become so normal that the default mindset is that in order to deliver good service, IT needs to have this thing called control. I think that is one aspect that is making IaaS platforms such a success. Though the IT departments that deploy their applications to public IaaS providers may not own the equipment, they can still control it, just like they always have. They can set up load-balancers and firewall configurations, they can apply security fixes, patches, service packs, they can set up VPN configurations and so on. I can see how it’s an uncomfortable feeling to let all that go and simply “trust” that somebody else will do as good a job as the IT department.

The other thing is that IaaS, to a large extent, replicates what is in the on-premise data-centre, so there is great familiarity with it. Many applications also grew up in on-premise data-centres that are very close relatives of the IaaS model, so when it comes to moving an existing application, there is a strong symmetry between the two environments.

To take full advantage of a cloud platform (whether IaaS or PaaS) it’s necessary to assume failures will occur and build the application architecture around that assumption. Applications need to run on load-balanced machines, be designed to scale out (not up) to cope with increased load, and be stateless so that the failure of any load-balanced machine can be tolerated. It’s just the way a lot of modern applications are developed. The unit of compute in both environments tends to be the virtual machine, which can be provisioned and de-provisioned within minutes – which is what lets modern applications truly take advantage of the cloud.
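As a concrete illustration of that “expect failure” mindset, here is a minimal sketch in Java of a stateless, idempotent operation wrapped in a retry with exponential backoff. The TransientRetry class and its names are hypothetical, purely for illustration – real cloud SDKs offer richer retry policies, but the shape is the same.

```java
import java.util.concurrent.Callable;

/**
 * A minimal retry helper illustrating the "expect failure" mindset:
 * transient faults (a recycled VM, a dropped connection) are retried
 * with exponential backoff instead of being treated as fatal.
 */
public final class TransientRetry {

    public static <T> T withRetries(Callable<T> operation, int maxAttempts)
            throws Exception {
        long backoffMillis = 200; // initial pause before the first retry
        for (int attempt = 1; ; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                if (attempt >= maxAttempts) {
                    throw e; // give up: the fault is evidently not transient
                }
                Thread.sleep(backoffMillis);
                backoffMillis *= 2; // exponential backoff between attempts
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Example: a stateless operation that may fail transiently.
        // Because it holds no local state, it is safe to retry anywhere.
        String result = withRetries(() -> "order saved", 3);
        System.out.println(result);
    }
}
```

The key point is that the operation carries no local state, so it can be retried on the same machine or re-routed to a different load-balanced instance with the same result.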

It’s when we get to older legacy applications that the symmetry between the on-premise data-centre and the IaaS environment looks attractive. It’s possible to “lift and shift” an older application straight into an IaaS data-centre. There is still a sense of “control”, because a good proportion of the infrastructure management is still offered to the application’s owner, plus the advantage of no longer owning the hardware and other necessary infrastructure (cooling, power, data-centre rack-space and so on). Of course, with that sense of control also comes a never-ending responsibility to monitor, maintain and look after the infrastructure. It’s up to the application-owner to understand the OS image they are running and how to patch, update and service-pack it for the application it hosts.

The application, if it is a traditional, single-instance beast, is just as vulnerable to failure in its new environment as it was in its previous data-centre. Even IaaS cloud operators recommend that applications be built the modern way – to expect failures and deal with them appropriately. The difference between IaaS and PaaS here is that you can move a legacy application to an IaaS data-centre. It comes with the same set of risks as when it was in a private data-centre, but it can be moved.

The level of abstraction is more pronounced in a PaaS cloud data-centre. As said earlier, the unit of compute is the virtual machine, and this is taken much more literally in the PaaS world. The set of machine instructions and the data they operate on (for example, the code of an ASP.Net web site, plus the html, jpg and css files in the site) is the unit of execution. Changes to the local file system are not durable across reboots, for example. In a modern application this is actually a very good thing: the unit of execution will always be instantiated to a known state, rather like creating an instance of a class in a piece of code – the object always appears in a pre-defined, known state. PaaS applies this notion several thousand feet higher up the application stack. Application data itself is stored either in a separate storage sub-system or in a Database-as-a-Service store.
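In practice that means durable data must live outside the instance. The sketch below, again in Java, uses a hypothetical BlobStore interface (with an in-memory stand-in so the example runs) to show the shape of the pattern; a real application would call whatever storage or Database-as-a-Service API its platform provides.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of externalising state. On a PaaS instance, anything written to
 * the local file system disappears when the instance is recycled, so
 * durable data goes to a separate storage service instead.
 */
public class ExternalState {

    /** Hypothetical abstraction over a durable storage service (blob store, DBaaS, etc). */
    interface BlobStore {
        void put(String key, byte[] value);
        byte[] get(String key);
    }

    /** In-memory stand-in so the sketch runs; a real app would call the platform's storage API. */
    static class InMemoryBlobStore implements BlobStore {
        private final Map<String, byte[]> data = new HashMap<>();
        public void put(String key, byte[] value) { data.put(key, value); }
        public byte[] get(String key) { return data.get(key); }
    }

    public static void main(String[] args) {
        BlobStore store = new InMemoryBlobStore();
        // Durable application data goes to the external store, never the local disk:
        store.put("orders/1001", "widget x 3".getBytes());
        System.out.println(new String(store.get("orders/1001")));
    }
}
```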

It heralds a new way of thinking about applications where the truly major components (compute, storage, database etc) can be considered as instances of a class. These major instances are defined in a service model.

Imagine creating an object from, say, the Java or .Net class libraries. You really don’t need to concern yourself with the internals of these platforms. If a bug is discovered, the platform is patched; the next time you create that object you go through exactly the same motions, but you now have an object that doesn’t have the bug. It’s the same with, say, compute instances in a PaaS model. The service model specifies the type of compute instance you need, and the cloud platform itself takes care of its internal make-up. If there is a bug in the OS, it is patched. This is actually done in such a way that even while your application is running it can be patched underneath you, and you need not concern yourself with how your app will continue to run – it just will.
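To make the analogy concrete, here is a toy service model in Java (using records, so Java 16 or later). Everything in it – ComputeRole, the role name, the VM size – is hypothetical and only loosely inspired by the kind of declarative service models PaaS platforms use. It simply shows the application declaring what it needs and leaving the internals to the platform.

```java
/**
 * A toy "service model": a declarative description of the major components
 * the platform should instantiate, analogous to constructing objects from
 * a class. All names here are hypothetical, for illustration only.
 */
public class ServiceModelSketch {

    // The model says *what* you need; the platform decides how to build it
    // and keeps the underlying OS patched without your involvement.
    record ComputeRole(String name, String vmSize, int instanceCount) {}

    public static void main(String[] args) {
        ComputeRole webRole = new ComputeRole("WebFrontEnd", "Small", 3);
        // Asking for one more instance is like new-ing another object:
        // it always arrives in the same pre-defined, known state.
        ComputeRole scaledOut = new ComputeRole(webRole.name(), webRole.vmSize(),
                webRole.instanceCount() + 1);
        System.out.println("Deploy " + scaledOut.instanceCount()
                + " instances of " + scaledOut.name());
    }
}
```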

It does mean, though, that the platform itself has the control, not you. But let’s remember what we said right at the start: the control and management of an application has no inherent value; the value derives from the service the application gives to the user.

So I believe the IT departments moving existing legacy applications to the cloud are the ones mostly using IaaS. And as there are more existing legacy applications in the world than modern or greenfield app projects, there will be a natural skew toward the platform that accepts them with the fewest barriers – what we called “lift and shift” earlier. But as time advances, as more net-new applications are developed and more legacy applications are updated to modern architectures, there will be a greater movement toward PaaS platforms. “With great power comes great responsibility”, and that responsibility exists in perpetuity. We can’t say there is no place for IaaS; there clearly is. It continues to grow and I imagine it will do well, for a time. Then, when all that low-hanging fruit has been picked and the only fruit on the tree is a huge collection of modern applications well suited to PaaS, we’ll see a big change. I think this is really why the analysts say PaaS has the bright future: you only do two things with a PaaS platform – supply the app and the data. The rest is abstracted away. It’s the way we’ll all gradually think, and I believe it’ll come sooner rather than later.

The driver, I’m sure, is the consumerisation of IT. Almost everybody in Generation Y (those born after 1981) is using IT at work, at rest and at play. There is a host of applications we don’t even know we need yet, and Generation Y are going to develop them. They never grew up with the idea of IT being a scientific endeavour. They’ll be great consumers of cloud services to provide the power behind their services. I’m convinced they’ll only want to write great apps. The idea of managing platforms, doing operational stuff, monitoring – it’s just not sexy, and it’s not going to appeal. Especially if somebody else can do it – the PaaS operators of the future.