If you are looking for a good collection of notes regarding the topics covered at the Seattle Conference on Scalability, you can do no better than what James Hamilton put together. Instead, I'll write a quick commentary on what I experienced.
Scalability Is Your Problem Too
The goals of the conference are laudable. Scalability is an issue that almost all practitioners of software engineering face, especially as we move toward offering services both inside and outside the enterprise. Many are caught off guard by the issues that suddenly confront them after wiring up a large-scale services-based environment, especially around distributing load, distributing the data, and writing the data quickly. Sadly, I didn't see too many people from large enterprises there - most attendees were from software companies like Microsoft, Google, MySpace and Amazon.com. The attendance may be a consequence of the subject matter. This was some intense stuff: MPI at Cray and its hopeful successor, Wikipedia redone with a DHT and Erlang, a B-tree vs. hash map debate, and scalable storage issues when dealing with billions of files. A more fun-loving person would have done better going over to Adobe and hanging out at BarCampSeattle, which was going on at the same time.
Despite the intimidating material, these discussions raise real architectural and design issues that should be on the mind of anyone dealing with large datacenters that scale globally or even nationally. GIGA+'s file storage, maidsafe's new computer architecture, and NetWorkSpaces for the R language all took a uniform approach: off-loading responsibility for management of data (meta or otherwise) to all vertices in the deployment graph instead of a central repository. NetWorkSpaces and maidsafe even discussed computational scalability, while Cray's new Chapel language and the discussion around Software Transactional Memory focused on scalability across processing cores as well as machines.
GIGA+'s approach of maintaining a small bitmap file on each node and passing that around - while anticipating and accepting stale data on a few edge nodes - was brilliant in the patterns it hinted at, chief among them that being right all the time isn't as important as being fast. You can be right most of the time and accept the performance hit of not being right some of the time. Many people would cringe at this, but at this point we're going to have to play loose and leave a few balls in the air as we juggle - doing the math on how often one may fall while keeping the rest going as fast as we can.
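As a toy illustration - my own sketch, not GIGA+'s actual protocol - here's the idea in Python: a client routes a filename using a possibly stale partition bitmap, falls back through the split history when its map says a partition doesn't exist yet, and only picks up a fresher bitmap when it guesses wrong. The fixed partition count and names are my simplifications.

```python
import hashlib

NUM_PARTITIONS = 8  # hypothetical fixed partition count for the sketch

def partition_of(name, bitmap):
    """Map a filename to a partition, honoring only partitions the
    (possibly stale) bitmap marks as existing: walk back through the
    split history until we land on a partition the bitmap knows about."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    p = h % NUM_PARTITIONS
    # If our bitmap says this partition hasn't been split off yet,
    # fall back to its parent by clearing the top bit (extendible-
    # hashing style), until we hit a partition we believe exists.
    while p > 0 and not bitmap[p]:
        p -= 1 << (p.bit_length() - 1)
    return p

# A stale client bitmap: only partitions 0 and 1 are known to exist.
client_bitmap = [True, True, False, False, False, False, False, False]
# The server's current bitmap: partition 3 has since been split off.
server_bitmap = [True, True, False, True, False, False, False, False]

name = "some_file"
guess = partition_of(name, client_bitmap)
truth = partition_of(name, server_bitmap)
if guess != truth:
    # The client guessed wrong; the server services the request anyway
    # and hands back its fresher bitmap, so clients converge over time.
    client_bitmap = server_bitmap
```

The point of the pattern: a wrong guess costs one extra hop, not correctness, so nobody has to block on a central metadata server.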
Pay No Attention To The Man Behind The Curtain
Yet if I had to sum up the content of the conference, I would say it was big on strategy and architecture but short on implementation. A lot was hinted at "behind the curtain," but nothing assured hand-raising from the compsci geeks in the room more than hand-waving when you got to the distributed piece of your solution. For instance, one of the big benefits of Chapel - the MPI successor that Bradford Chamberlain of Cray presented - is that you can have distributed arrays and graphs that are automatically sliced up and distributed to parallel cores, or even to other locales if desired. How the language determines where to split these large arrays and graphs and farm them out was not discussed. One of the more interesting slides showed dashed lines drawn across various nodes and vertices of a graph, symbolizing how it would be chopped up and distributed. Chapel was called a "multi-resolution" language, where one can start fairly abstract and then add more detail and control to get the best desired result - something I assume you have to do to get good or intelligent chopping and distribution of the data. Given that one of his slides compared line counts between Fortran using MPI and Chapel, seeing a working code snippet of Chapel would have been helpful.
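Since no working Chapel was shown, here's a rough analogy in Python of my own devising (not Chapel semantics): the programmer just calls `distributed_sum(data)`, and the slicing and farming-out that Chapel promises to automate happen behind the scenes.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(data, n):
    """Naive even slicing - the decision Chapel claims to make for
    you, across cores or even other locales."""
    k, m = divmod(len(data), n)
    pieces, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < m else 0)
        pieces.append(data[start:end])
        start = end
    return pieces

def distributed_sum(data, workers=4):
    """What a 'multi-resolution' language might do implicitly:
    slice the array, farm the pieces out, reduce the partials."""
    pieces = chunk(data, workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, pieces)
    return sum(partials)
```

The interesting (and undiscussed) part is everything this sketch hides: choosing chunk boundaries for irregular graphs, and deciding which pieces go to which machine.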
This was the trend, as all of the presentations had a bit of hand-waving regarding performance metrics and distribution of computation. This was highlighted by the talk from Vijay Menon of Google - whose work at Intel I was familiar with - on Software Transactional Memory. He illustrated the challenges of implementing it in an imperative language, but beyond suggesting an "atomic" keyword to replace "synchronized" in the Java language, there was very little new content for those already familiar with the subject. Concurrent Haskell and other language approaches weren't mentioned. A better introduction and discussion is to be had by watching O'Reilly's OSCON video of Simon Peyton Jones (a lead developer of GHC, now at Microsoft Research) on the subject.
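For those who haven't run into STM before, the core idea behind that hypothetical "atomic" keyword fits in a few lines of Python - a toy of my own, not Menon's design; real STMs are far more sophisticated. Transactions record what they read, run optimistically, and commit only if nothing they read has changed underneath them, retrying otherwise.

```python
import threading

class TVar:
    """Transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomic(txn):
    """Run txn(read, write) optimistically; retry if any variable it
    read was committed by another transaction in the meantime."""
    while True:
        reads, writes = {}, {}
        def read(tv):
            if tv in writes:
                return writes[tv]
            if tv not in reads:
                reads[tv] = (tv.value, tv.version)  # snapshot
            return reads[tv][0]
        def write(tv, val):
            writes[tv] = val
        result = txn(read, write)
        with _commit_lock:
            # Validate: did everything we read stay at the version
            # we saw? If so, publish our writes; otherwise retry.
            if all(tv.version == ver for tv, (_, ver) in reads.items()):
                for tv, val in writes.items():
                    tv.value = val
                    tv.version += 1
                return result

# Usage: the classic transfer that "synchronized" makes painful.
a, b = TVar(100), TVar(0)
def transfer(read, write):
    write(a, read(a) - 10)
    write(b, read(b) + 10)
atomic(transfer)
```

The appeal over locks is composability: two atomic blocks compose into a bigger atomic block without deadlock-prone lock ordering.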
Of course, the real point of these conferences is the discussions that occur during the breaks and at the networking event afterwards - something that I love. I got to sit at the same table as Vijay Menon, Thorsten Schuett, Swapnil Patil, Paul Watson and others. It was great fun.
Summary of the Patterns I Saw
To summarize what I took away from the conference at a high level, here are the patterns I saw:
- Every node must be aware of the state of every other node without a centralized controller.
- To do this, a mechanism should be in place to share state quickly, peer-to-peer.
- It's ok to let some nodes go stale.
- Client and server are now one and the same: Pub/Sub with computation. Every node on the graph should do work.
- As much as possible, each node should maintain its own security and state. You should be able to have anonymous resources appear in your data center and be put to use without much configuration.
- As much as possible, abstract the distribution of processing away from programmers.
- Key/value pairs with hashing are best for scalability and distribution (the approach seems to have won out in all the solutions presented here). Blame MapReduce.
- Ants can be used to demonstrate anything.
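The key/value-with-hashes pattern and the no-central-controller pattern meet in consistent hashing, the technique underlying DHTs like the one in the Wikipedia talk. A minimal sketch of the idea (my own toy code, not any presenter's):

```python
import hashlib
from bisect import bisect_right

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    """Each node hashes onto a ring; a key belongs to the first node
    clockwise from the key's hash. Adding or removing a node remaps
    only the keys in one arc - no central controller required."""
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def node_for(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect_right(hashes, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node1", "node2", "node3"])
owner = ring.node_for("some_key")  # any node can compute this locally
```

Every node can compute the same answer locally from the membership list, which is exactly the "share state peer-to-peer, tolerate a little staleness" posture above.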
I hope everyone had as good a time as I did.