Here, There, Everywhere...
Hey there! I’m Viraj Mody, a developer on the Live Mesh services team. I bet some of you are curious about how the Live Mesh software running on your devices detects when others add files to your shared Live Folders, quickly changes icon colors when devices go from offline to online, or updates your Live Mesh News Feeds as changes occur. This post will give you some insight into the facilities we’ve built in the Live Mesh back-end that enable client applications to do all of the above and much more.
In order to be responsive and perform well, Live Mesh, like most other software + services systems, requires that the cloud (services) be able to send out-of-band messages to clients (software). Most actions initiated by the client (MOE and Live Desktop) are triggered as a result of these messages sent from the Live Mesh services cloud to that specific client. These could be messages informing the client that items in the cloud have changed, that another client wants to initiate a peer-to-peer connection to this client, and so on. Such out-of-band messages that the cloud sends to clients are called Notifications. In order to keep the overall system performing well and to safeguard the privacy of clients, it is important that a client only receives Notifications for items it is interested in and has permissions to access. Each client has the ability to tell the cloud what items it is interested in monitoring. This expression of a client’s interest in changes to specific information in the cloud is referred to as a Subscription. Together, these Subscriptions and Notifications form the building blocks of Live Mesh’s Pub-Sub System.
In this post, I’ll go over the different back-end services that comprise the Pub-Sub System and provide an overview of how the system works end-to-end. In many ways, the Pub-Sub System can be understood by comparing it with a ‘secure and smart postal service’, where the post office only sends you mail that you’re interested in receiving and that the sender says you have permission to receive. A spam-free postal system – wouldn’t that be awesome!
Establishing a Communication Channel
For the Pub-Sub system to be effective there must be some way for the cloud to identify and communicate with each client. Live Mesh exposes facilities that allow clients to create a Queue which is uniquely associated with that particular client and assigns a unique name that is used to identify this queue. When creating a Subscription to a specific item, the client passes along its unique Queue name which allows Notifications to be delivered to that Queue. Going back to the postal system analogy, this Queue is like the mail box outside your home, and the unique name of the Queue is like your mailing address. Similar to how the postal system ensures that mail addressed to you only ends up in your mail box, Live Mesh services ensure that Notifications for a given client end up only in that client’s Queue.
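To make the mailbox analogy concrete, here is a rough Python sketch of a queue registry. All names here are illustrative – the real service exposes this functionality through its HTTP resource model, not through an API like this:

```python
import uuid

class QueueService:
    """Toy sketch of a queue registry: each client gets a Queue with a
    unique name (its 'mailing address') that Notifications are delivered to."""

    def __init__(self):
        self._queues = {}  # queue name -> list of pending notifications

    def create_queue(self):
        # Assign a unique name that identifies this client's Queue.
        name = str(uuid.uuid4())
        self._queues[name] = []
        return name

    def enqueue(self, queue_name, notification):
        # The cloud drops a Notification into the named Queue.
        self._queues[queue_name].append(notification)

    def dequeue_all(self, queue_name):
        # In the real service, only the Queue's owner may retrieve from it.
        pending, self._queues[queue_name] = self._queues[queue_name], []
        return pending
```

A client would create its Queue once, hand the returned name to every Subscription it makes, and then drain the Queue to pick up whatever the cloud has delivered.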
The client is responsible for retrieving Notifications from its Queue. By separating delivery of Notifications from the act of retrieving them, the architecture enables various transport types to be used to retrieve Notifications from a Queue. Currently, the Live Mesh service cloud exposes two transports for retrieving these Notifications – HTTP and TCP. Given their pervasiveness and ease of use, these were the first two we decided to support. As our service offering and scope grows, we can enable more transport types without major changes to the back-end architecture. Using the HTTP resource model exposed by the Live Mesh services, clients can choose to periodically fetch Notifications from their Queue. Using TCP enables clients to establish a long-lived connection to the Live Mesh service cloud so that Notifications can be pushed to clients as soon as they arrive. Different clients have different behavior, requirements and constraints – by providing various transport types we can enable several classes of clients and applications to leverage Live Mesh’s Pub-Sub system. Of course, these communications are encrypted and only the owner of a Queue can retrieve Notifications from it. It’s important to note that as it currently stands, the Queue and related services only support unidirectional messaging. Messages from clients to the Live Mesh service cloud do not flow via the Queue. Also, as of writing this post, the Live Mesh client available for Tech Preview doesn’t yet leverage the TCP transport – it will in a future release.
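The two retrieval styles can be caricatured as follows. This is a hedged sketch: `fetch`, `stop`, `channel`, and `handle` are stand-ins for illustration, not actual Live Mesh client APIs:

```python
import time

def poll_notifications(fetch, interval_s, stop, handle):
    """HTTP-style retrieval: periodically fetch any pending notifications
    from the Queue. 'stop' is e.g. a threading.Event used to end the loop."""
    while not stop.is_set():
        for n in fetch():
            handle(n)
        time.sleep(interval_s)

def push_notifications(channel, handle):
    """TCP-style retrieval: block on a long-lived channel and react as soon
    as each notification arrives (None models the connection closing)."""
    while True:
        n = channel.get()
        if n is None:
            break
        handle(n)
```

The trade-off is the usual one: polling is simple and firewall-friendly but adds latency up to the polling interval, while the long-lived connection delivers Notifications the moment they arrive.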
Besides change Notifications, the client’s Queue is also populated with a special type of Notification when another client wishes to initiate a peer-to-peer connection. Only authorized clients are allowed to send such peer-to-peer invites. The purpose of the peer-to-peer connection might be for file exchange and sync, for Live Remote connections, etc. Details about our peer-to-peer design can be found in this Channel 9 video.
All Queues and Queue-related information are managed by the Queue Service. Like the other services described later in this post, the Queue Service is a transient-state service and is built to scale horizontally. I’ll touch on both of these characteristics later.
Once a client has created its Queue, it can create Subscriptions to items in the Live Mesh cloud that it wants to stay current on by providing its Queue name along with the Subscription. Typically, each Live Mesh device creates Subscriptions for user/device presence, for each Live Folder’s various feeds (Membership, Contents, etc.), for the news feed, and so on. In order to create a Subscription for a specific item, the client must provide proof that it has privileges to access the item. Once these Subscriptions have been created, the system ensures that the client will be informed when any of the items it has subscribed to change – each Notification that is delivered into the client’s Queue identifies which specific resource has changed, in addition to other potentially interesting information about the changed resource.
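A minimal sketch of subscription creation with an access check might look like the following. The permission ‘proof’ is modeled here as a simple set of item IDs; the actual authorization mechanism is of course richer than this:

```python
class PubSubService:
    """Toy sketch: track which client Queues are interested in which items,
    refusing Subscriptions the client isn't authorized to create."""

    def __init__(self):
        self._subs = {}  # item id -> set of interested queue names

    def subscribe(self, item_id, queue_name, permissions):
        # The client must prove it may access the item before subscribing.
        if item_id not in permissions:
            raise PermissionError(f"no access to {item_id}")
        self._subs.setdefault(item_id, set()).add(queue_name)
```

The important invariant is the one the post calls out: a client only ever receives Notifications for items it asked about and is permitted to see.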
Subscriptions created by clients are held in the PubSub Service. It is responsible for maintaining information about which client is interested in what item and for fanning out Notifications to the right clients when an item changes. You can think of it as the central post office of the ‘secure and smart postal system’ which acts as the one location for collecting all mail and then routing it onwards to the right destination. Just like the Queue Service, the PubSub Service is a transient-state service and is built to scale horizontally.
Once a client has created a Queue and all its Subscriptions, it doesn’t need to aggressively keep refreshing information from the cloud. If using the HTTP transport to talk to its Queue, the client periodically polls the Queue to check for any Notifications that may have arrived. If using the TCP transport, the client simply waits for a Notification to be pushed down to it over the TCP channel. When an item changes in the cloud (say, a file was added to a folder that the client has expressed interest in knowing about), the service responsible for that item (in this case, the storage service) informs the PubSub Service that the item has changed. The PubSub Service, which has been keeping track of all clients interested in that particular item, drops an appropriate change Notification in each interested client’s Queue. As soon as clients retrieve these Notifications (or as soon as the Notification is pushed to them) they can react. In the case of a new file being added, they might choose to initiate a peer-to-peer connection in order to sync the new file.
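The fan-out step can be sketched as a standalone function. The data shapes here (plain dicts and sets) are illustrative, not the real wire format:

```python
def notify_change(subs, queues, item_id, detail):
    """When a service reports that item_id changed, drop a change
    Notification into every interested client's Queue (fan-out).

    subs:   item id -> set of queue names subscribed to that item
    queues: queue name -> list of pending notifications
    """
    for queue_name in subs.get(item_id, ()):
        queues.setdefault(queue_name, []).append(
            {"item": item_id, "detail": detail})
```

Clients that poll (or that have the Notification pushed to them) then find the entry in their Queue and react, for example by kicking off a peer-to-peer sync.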
In our ‘secure and smart postal system’ analogy, this is equivalent to a magazine publisher informing the post office that a new edition is available, and the post office dropping a letter in each subscriber’s mail box announcing it. Optimally, the magazine publisher could also deliver one copy of the latest edition to the post office, and the post office could be smart enough to create the right number of replicas and deliver a copy to each subscriber’s mail box. Here’s where the ‘smart’ in ‘secure and smart’ comes in!
[Figure: Pub-Sub System end-to-end flow]
Transient State Services and Scale-out
As I mentioned previously, all services that comprise the Live Mesh Pub-Sub System are transient-state services. Queues created by clients, Notifications that are delivered to specific Queues, and Subscriptions representing a client’s interest in a particular resource are only ever held in memory and never persisted to any kind of store. As you might guess, performance was one of the biggest motivators for this design. Pub-Sub is characterized by short-lived, rapidly changing data that needs to be readily available. Short-lived and changing because Queues, Subscriptions and Notifications are, by nature, transient – once a Notification is delivered to its intended recipient, it’s of no use; once a client is offline or doesn’t care about receiving Notifications, its Queue is of no use; Subscriptions come and go as application state changes. Data must be readily available because subscriber lists can often be huge, so retrieving them from persistent stores can introduce latencies that only increase as the service grows in size. Holding these in memory allows reads and writes to be processed very fast. Since they live in memory, both Queues and Subscriptions have lifetimes associated with them – clients must perform certain actions (some explicit, some implicit) to keep Queues and Subscriptions ‘alive’ and prevent associated resources from being reclaimed by the server. Given the initial wave of Live Mesh experiences and applications, having these be transient-state services definitely helps ensure high throughput and low latency. Of course, the fact that we chose to implement the system as a transient-state system is an implementation detail – as the product evolves and use cases change, there might be reasons to prefer some kind of hybrid approach.
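The lifetime mechanics can be illustrated with a toy TTL store. The semantics here are assumed for illustration – the real renewal protocol (and what counts as an explicit vs. implicit keep-alive action) is more involved:

```python
import time

class TransientStore:
    """In-memory entries with a lifetime: a client must renew its Queue or
    Subscription before the TTL elapses, or the server reclaims it."""

    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self._expiry = {}  # key -> absolute expiry time

    def put(self, key, now=None):
        self._expiry[key] = (now if now is not None else time.time()) + self.ttl_s

    renew = put  # renewing simply pushes the expiry out again

    def sweep(self, now=None):
        """Reclaim (and return) every entry whose lifetime has elapsed."""
        now = now if now is not None else time.time()
        dead = [k for k, exp in self._expiry.items() if exp <= now]
        for k in dead:
            del self._expiry[k]
        return dead
```

A periodic sweep like this is one common way to bound memory in a transient-state service: anything the clients stop caring about simply ages out.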
The system is also designed to enable horizontal scale-out. As the system needs more space to hold Queues and Subscriptions, we can bring up new instances of these services to increase capacity. Using a scheme based on consistent hashing, the Live Mesh services cloud guarantees that there is ever only one specific server instance that can ‘manage’ a given Queue or Subscription. The system also enables routing of messages for specific Queues and Subscriptions to the correct current manager. As new service instances come online and others go offline, the system automatically re-balances the distribution of Queues and Subscriptions to the currently available servers such that every Queue or Subscription is managed by one and only one server instance.
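A classic consistent-hash ring gives exactly the property described above: every Queue or Subscription maps to one and only one server instance, and adding or removing servers moves only a fraction of the keys. This is a generic sketch of the technique, not Live Mesh’s actual routing code:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map each key (e.g. a Queue name) to exactly one managing server."""

    def __init__(self, servers, replicas=100):
        # Each server gets many virtual nodes on the ring so that load
        # spreads evenly and rebalancing moves only a small share of keys.
        self._ring = []  # sorted list of (hash, server)
        for s in servers:
            for i in range(replicas):
                self._ring.append((self._hash(f"{s}#{i}"), s))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def manager_for(self, key):
        # The key's manager is the first virtual node at or after its hash,
        # wrapping around the ring.
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[idx][1]
```

When a server instance goes offline, only the keys that hashed to its virtual nodes get reassigned to neighbors on the ring; everything else stays put.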
One of the obvious concerns with the system being implemented in-memory only is data recovery – when servers go down because of hardware/software issues, are re-booted, or otherwise need to be reset, all data resident in memory on those servers is also gone. For Queue Service instances, this implies that Queues belonging to several clients and potentially interesting Notifications in those Queues might be gone. For the PubSub Service, several subscriber lists might be lost when a server loses state. This is a problem we spent a huge amount of time addressing and designing for, and probably deserves a post of its own at some future time. A short summary of the solution is that in cases where one or several Queue and/or PubSub Servers go down, the system is able to detect exactly what happened and take remedial action to restore state in the cloud in cooperation with clients (because clients were the original source for all the transient data that was resident on those servers before they lost state).
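One plausible shape for the client side of that recovery is to re-create each Subscription and compare per-resource version tags (E-Tags) to discover what changed while state was lost. The function and parameter names here are illustrative, not the real protocol:

```python
def recover_after_loss(local_state, resubscribe):
    """Sketch of client-assisted recovery after the cloud loses state.

    local_state: item id -> E-Tag the client last saw for that item
    resubscribe: callable that re-creates a Subscription and returns the
                 item's current E-Tag in the cloud
    Returns the items that changed while state was lost and need a re-sync.
    """
    stale = []
    for item_id, local_etag in local_state.items():
        cloud_etag = resubscribe(item_id)
        if cloud_etag != local_etag:
            stale.append(item_id)  # only these need to be synced down
    return stale
```

The appealing property is that nothing beyond the clients’ own state is needed to rebuild the cloud’s transient data, since the clients were its original source.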
As the future scope of the Live Mesh experience, services and platform evolves (for instance, potentially allowing third-party services to subscribe to items in Live Mesh, enabling aggregators to leverage Live Mesh Pub-Sub, etc), the current Pub-Sub System architecture will hopefully provide a good scalable foundation which we can leverage to rapidly increase capabilities of the service.
We’re working on exposing Pub-Sub capabilities via the Live Mesh SDK when it becomes available so that developers can leverage the system in innovative ways to build responsive applications. Be sure to visit the Live Mesh Developer page and join the developer waiting list for announcements around the Live Mesh SDK when it’s available.
I hope this post gives a little more insight about how things work behind Live Mesh. Be sure to install Live Mesh on all your devices, give us feedback and report any issues you see!
What if a client is interested in (subscribed to) a particular Live Folder, goes down, and comes back up after a long time? When it sends a subscription request for the folder again, does it receive the updates that happened while it was down? If yes, who manages that? Does the client send the PubSub Service some extra info, like which files are already synced?
"...In cases where one or several Queue and/or PubSub Servers go down, the system is able to detect exactly what happened and take remedial action to restore state in the cloud in cooperation with clients".
Isn't this what you would have to do for a client that goes offline? If I shut down my laptop for a day, then boot it back up, is the process in the quote above what you use to determine what files and other changes have been made so that I can get back in sync? Or is this PubSub system complemented by another system for this scenario?
A client going offline for a long period of time is no different from its Queue and Subscriptions in the cloud being deleted, so yes - the same recovery protocol kicks in.
When the client re-subscribes, it gets back an E-Tag representing the latest state of the resource in the cloud which it can then use to compare with its own E-Tag to determine whether things changed while it was offline.
I am a bit worried about the following design decision:
"once a client is offline or doesn’t care about receiving Notifications, its Queue is of no use"
Similar to the other questions here, I have a use case where I have my Pictures folder synchronized (peer to peer) between my home PC and my laptop on the road. I have 6,000+ files in this folder, and I was wondering what would happen after a flight, when I connected back to the cloud, if my wife had been making modifications to some pictures at home. Would my queue persist until I log in (>5 hours...) or would my laptop client and home PC need to go through a massive resource intensive re-sync process?
When your machine reconnects to the cloud after a long time, its Queue might be gone, but upon re-creation of the Queue no massive re-sync is needed. As part of re-creating Subscriptions, the client is told about any changes to resources it subscribes to (see my comment about E-Tags previously) and only those changes are sync'd down (either via the cloud or P2P - whichever is available and more efficient).