Windows Azure Caching (Preview)
Windows Azure Caching is a distributed, in-memory, scalable, reliable caching solution that speeds up Windows Azure applications and reduces database load. The first production service was released in May 2011, and we are pleased to see several customers using it. Since its release, we have received valuable feedback from you and have been hard at work incorporating that feedback, along with some deep investments in supporting new patterns of cache usage. We are pleased to announce a preview release of this work with the Windows Azure June 2012 SDK. Let's talk about what's in the preview, but before that…
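To see why a cache reduces database load, consider the classic cache-aside pattern: read from the cache first, and only go to the database on a miss. The sketch below is illustrative Python, not the Windows Azure Caching client API (which is a .NET library); `CacheAsideStore` and the in-process dictionary standing in for the distributed cache are hypothetical names for this example only.

```python
# Minimal cache-aside sketch. A plain dict stands in for the distributed
# cache, and db_lookup stands in for a database query.
class CacheAsideStore:
    def __init__(self, db_lookup):
        self._cache = {}              # stand-in for the distributed cache
        self._db_lookup = db_lookup   # stand-in for the database
        self.db_hits = 0              # counts actual database round trips

    def get(self, key):
        if key in self._cache:        # cache hit: no database round trip
            return self._cache[key]
        self.db_hits += 1             # cache miss: load from the database
        value = self._db_lookup(key)
        self._cache[key] = value      # populate the cache for later reads
        return value

store = CacheAsideStore(lambda k: f"row-for-{k}")
first = store.get("user:42")   # miss: fetched from the "database"
second = store.get("user:42")  # hit: served from the cache, db_hits stays 1
```

Repeated reads of a hot key hit only the cache, which is exactly how the database load drops as the cache warms up.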
Today, Windows Azure Caching is offered as a Windows Azure service. The service uses machines in Windows Azure that are shared across multiple tenants (henceforth, we will refer to this as Windows Azure Shared Caching). Each customer pays for a certain amount of memory (128 MB to 4 GB), which comes with limits on transaction throughput and network bandwidth based on the memory chosen. This keeps usage fair across tenants, and cache usage is throttled when these limits are exceeded. While this works well for many applications, some customers cannot estimate their exact resource needs (memory, transactions, bandwidth) a priori and would like consistent behavior backed by machines pre-allocated to their application. Because caching is a core component for performance, consistent performance is critical in many of these applications.
With the latest Windows Azure SDK, we are introducing a preview version of Windows Azure Caching that is hosted as a part of the customer’s cloud application. Consequently, it has none of the quota constraints and is limited only by the resources assigned to cache by the customer.
Windows Azure Caching now has much better performance, more consistent performance, and High Availability. It has a simpler, integrated development experience with Visual Studio, plus management and monitoring through the Windows Azure portal. Several new features (regions and tags, local cache with notifications) have also been made available in this version of Caching. Finally, if you have an application using memcached, we provide a migration path with support for the memcached protocols, so you can migrate at your convenience while using differentiating features of Windows Azure Caching such as High Availability. If you are as excited as we are and want to know more, read on!
As we said earlier, the cache is now a part of the customer application. Caching (Preview), comprising the distributed server and the client library, ships as a plug-in with the new Windows Azure SDK. When you build your application to use the cache, the cache libraries are packaged along with your other application artifacts into the cspkg file, and when you deploy your application to Windows Azure, Caching gets deployed into Web or Worker roles that you control. You select how big or small these instances are and how many instances serve the cache, and you manage cache capacity at run time by scaling these up or down like any other Windows Azure role.
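As a hedged sketch of what the client-side wiring can look like, the fragment below shows a `dataCacheClients` section of the kind used to point a web role at the role hosting the cache. The role name `CacheWorkerRole` is a placeholder, and element and attribute names should be verified against the SDK version you install.

```xml
<!-- Illustrative web.config fragment; "CacheWorkerRole" is a placeholder
     for the name of the role that hosts Caching (Preview). -->
<configSections>
  <section name="dataCacheClients"
           type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core"
           allowLocation="true"
           allowDefinition="Everywhere" />
</configSections>
<dataCacheClients>
  <dataCacheClient name="default">
    <!-- Discover the cache cluster running inside the named role -->
    <autoDiscover isEnabled="true" identifier="CacheWorkerRole" />
  </dataCacheClient>
</dataCacheClients>
```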
You can choose one of two deployment modes for using the cache:
- Co-located: the cache shares each instance's resources (a percentage of memory you choose) with the web or worker role that also runs your application code.
- Dedicated: a worker role whose instances are devoted entirely to caching.
Not even close. Aside from the flexibility of using cache in different kinds of deployments, Windows Azure Caching offers a combination of features that differentiates it from other caching solutions in the market. Here are some of the benefits:
- No usage quotas: the cache is limited only by the resources you assign to it.
- High Availability for cached data.
- Better and more consistent performance, backed by machines you pre-allocate.
- Rich features such as regions and tags, and local cache with notifications.
- memcached protocol support, giving existing memcached applications a migration path.
- An integrated development experience with Visual Studio, and management and monitoring through the Windows Azure portal.
Over the next few weeks, we will go deeper into each of these topics and more. We'll be blogging about how to do capacity planning for your cache, and about monitoring, debugging, and fixing common issues. If there's something you'd like us to talk about, please leave a comment! Meanwhile, give Windows Azure Caching (Preview) a test drive and let us know how it goes. You can always reach us via comments on this blog or post your questions to our forums, which our team responds to regularly.
- by Prashant Kumar, Sr. Program Manager, Windows Azure