Windows Azure Caching is a distributed, in-memory, scalable, reliable caching solution that can be used to speed up Windows Azure applications and reduce database load. The first production service was released in May 2011, and we are pleased to see several customers using it. Since its release, we have also received valuable feedback from you and have been hard at work incorporating it, along with some deep investments in supporting new patterns of cache usage. We are pleased to announce a preview release of this work with the Windows Azure June 2012 SDK. Let’s talk about what’s in the preview, but before that…

A bit of background

Today, Windows Azure Caching is offered as a Windows Azure service – the service uses machines in Windows Azure that are shared across multiple tenants (henceforth, we will refer to this as Windows Azure Shared Caching). Each customer pays for a certain amount of memory (128 MB to 4 GB), which comes with limits on transaction throughput and network bandwidth based on the memory chosen. This keeps usage “fair” across tenants: cache usage is throttled when these limits are exceeded. While this is a perfectly fine way to use cache for many applications, some customers cannot estimate their exact resource needs (memory, transactions, bandwidth) a priori and would like consistent behavior from machines pre-allocated within their own application. Given that caching is a core component for performance, consistency of that performance is critical in many of these applications.

So...

With the latest Windows Azure SDK, we are introducing a preview version of Windows Azure Caching that is hosted as a part of the customer’s cloud application. Consequently, it has none of the quota constraints and is limited only by the resources assigned to cache by the customer.  

But that is not all…

Windows Azure Caching now delivers better and more consistent performance, plus High Availability. It has a simpler, integrated development experience with Visual Studio and a management & monitoring experience in the Windows Azure portal. Several new features (regions & tags, local cache with notifications) are also available in this version of Caching. Finally, if you have an application using memcached, we provide a migration path with support for memcached protocols – so you can migrate at your convenience while using differentiating features of Windows Azure Caching such as High Availability. If you are as excited as we are and want to know more, read on!

New Deployment Model

As we said earlier, the cache is now a part of the customer application. Caching (Preview) – distributed server and client library – ships as a plug-in with the new Windows Azure SDK. When you build your application to use the cache, the cache libraries are packaged along with your other application artifacts into the cspkg file, and when you deploy your application to Windows Azure, Caching gets deployed into Web or Worker roles that you control.  You get to select how big or small these instances are and the number of instances for the cache, and manage cache capacity at run time by scaling these up or down like other Windows Azure roles.

Deployment modes

You can choose one of two deployment modes for using the cache:

  1. Cache collocated with web/worker role: Some of you may have an application, already deployed on just a couple of nodes, that doesn’t yet use a cache. You can speed it up by adding a cache that uses the spare memory, CPU and bandwidth on those nodes. It’s practically free (actually, you have already paid for it, but this is as close to free as cache gets). All you need to do is enable the web or worker role to also host Caching and you are done. You can now call into Caching from any of your roles within the application.
  2. Consistent cache in isolated nodes: If you have a more serious caching requirement that needs consistent and predictable performance, we’d advise you to use dedicated roles that host Caching. Our capacity planning guide helps you determine the size of the role and the number of instances you need based on the kind of load you expect to put on the cache.
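In either mode, your application code talks to the cache the same way. The actual Windows Azure Caching client is a .NET library, so the snippet below is only a language-neutral sketch of the cache-aside pattern your roles would implement: a plain dictionary stands in for the distributed cache, and `load_from_database` is a hypothetical loader, not a real API.

```python
# Cache-aside sketch: check the cache first, fall back to the backing
# store on a miss, then populate the cache for subsequent readers.
# The dict stands in for the distributed cache cluster; the real
# Windows Azure Caching client is a .NET library.

cache = {}  # stand-in for the distributed cache cluster

def load_from_database(key):
    # Hypothetical expensive lookup (e.g., a SQL Azure query).
    return f"value-for-{key}"

def get_with_cache_aside(key):
    value = cache.get(key)
    if value is None:            # cache miss
        value = load_from_database(key)
        cache[key] = value       # populate for later readers
    return value

first = get_with_cache_aside("user:42")   # miss: hits the database
second = get_with_cache_aside("user:42")  # hit: served from memory
```

Because the pattern lives entirely in your application code, it is the same whether the cache is co-located on your web roles or hosted on dedicated instances.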

So that's it?

Not even close. Aside from the flexibility of using cache in different kinds of deployments, Windows Azure Caching offers a combination of features that differentiates it from other caching solutions in the market. Here are some of the benefits:

  1. It's cost effective: You pay only for the compute instances on which Caching is hosted; there is no additional premium. The co-located mode is practically free and can leverage the spare CPU, bandwidth and memory resources you may have on your compute VMs.
  2. Elastic scale out to 100+ GB of cache: You can increase or decrease the number of instances as your requirements change, and the cache clients will automatically start taking advantage of the new roles without any configuration change. The existing data also gets redistributed among all the nodes to balance the load. You can scale your cache up to 10 XL VMs, which gives you over 100 gigabytes of cached data. We think it can go much higher – so if you have a need for a larger cache, talk to us! Additionally, the throughput of the cache is limited only by the Windows Azure resources you devote to it.
  3. It's fast! We have made multiple improvements to make Caching faster. You can now expect latencies on the order of 1 millisecond for an appropriately configured cache – approximately a 5x improvement! With appropriate use of the local cache feature in Windows Azure Caching, you can bring this down by another order of magnitude! We also support bulk operations, which reduce roundtrips and latency.
  4. New Features: You now have access to a host of other features as well. Some of these were available in the on-premises release of Caching (Microsoft AppFabric 1.1 for Windows Server) and were oft-requested on Windows Azure.
    1. Multiple named caches: You can now logically divide your cache into multiple named caches; each named cache can have a separate set of data policies applied to it, depending on your use case.
    2. Highly available cache: For cache-sensitive applications, you can configure high availability on a cache. This keeps an additional copy of each item you put in the cache on a separate instance, which significantly reduces the chances of data loss and maintains a high cache-hit ratio during node failures. Even where HA is not enabled, Windows Azure Caching works to prevent data eviction due to changes in the Windows Azure environment that impact the underlying cache compute instances.
    3. Use local cache with notifications: Accessing a distributed in-memory cache is an order of magnitude faster than a database access, but accessing data locally is even better. We enable this with the local cache feature, which is useful for read-heavy data where consistency requirements aren’t paramount. The cache client library natively supports a local cache from which data can typically be retrieved extremely fast (no network hops, no serialization/deserialization costs). Further, you can register for notifications to update the local cache at regular intervals from the cache cluster.
    4. Use regions and tags: This feature allows you to keep related objects together. For example, you can put all the objects for a user session in a region so you can query them together. In addition, you can add tags to the objects in the region so that queries can be filtered by these tags using just the API calls.
  5. Run your memcached applications without changing code: Finally, we know that you may already be using one of the caching solutions out there – a popular one being memcached. You may want to move to Windows Azure Caching for one or more of the above features, but are worried about a full rewrite. We heard you – so we made Windows Azure Caching understand memcached clients! You can point a memcached client directly at the Caching cluster to get started. This works for most scenarios – but if you are really particular about performance, you should use our memcached client shim, which gives better performance (the direct route goes through a ‘gateway’ in the cache cluster, which the shim avoids). Either way, you can leverage the features of Caching without having to rewrite your client application!
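The local cache idea in particular is easy to picture in code. The sketch below is not the Azure Caching API (which is .NET and largely configuration-driven); it is a minimal illustration, with invented names, of a per-client local cache that serves reads from process memory and refreshes an entry from the cluster once a staleness window elapses – the trade-off behind the local cache feature.

```python
import time

class LocalCache:
    """Sketch of a per-client local cache layered over a distributed
    cache: entries are served from local memory (no network hop) and
    are re-fetched from the cluster after a timeout elapses."""

    def __init__(self, cluster, timeout_seconds=5.0, clock=time.monotonic):
        self.cluster = cluster          # stand-in for the cache cluster (a dict here)
        self.timeout = timeout_seconds  # how stale a local entry may get
        self.clock = clock              # injectable clock, handy for testing
        self._local = {}                # key -> (value, fetched_at)

    def get(self, key):
        entry = self._local.get(key)
        now = self.clock()
        if entry is not None and now - entry[1] < self.timeout:
            return entry[0]             # fast path: served from local memory
        value = self.cluster.get(key)   # stale or missing: refresh from cluster
        self._local[key] = (value, now)
        return value

cluster = {"config:theme": "dark"}
local = LocalCache(cluster, timeout_seconds=5.0)
local.get("config:theme")   # fetched from the cluster, kept locally
cluster["config:theme"] = "light"
local.get("config:theme")   # still the old value until the window elapses
```

A timeout is the simplest invalidation policy; the notifications support described above lets the client update its local copy based on change notifications polled from the cluster instead of relying on staleness alone.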

Over the next few weeks, we will go deeper into each of these topics and more. We’ll be blogging about how to do capacity planning for your cache, monitoring, debugging and fixing common issues. If there’s something you’d like us to talk about, please leave a comment! Meanwhile, give Windows Azure Caching (Preview) a test drive and let us know how it goes. You can always reach out to us via comments on this blog or post your queries to our forums, which our team regularly responds to.

Happy caching!

- by Prashant Kumar, Sr. Program Manager, Windows Azure