Greetings. You might already know that we now and then publish free ebooks on new technologies. (There are 13 here.) Today we’d like to share a preview of a free ebook that will be available in July: Rethinking Enterprise Storage: A Hybrid Cloud Model, by Marc Farley. Below is Chapter 1, which introduces the ebook’s topic. Please note that this material is in DRAFT form. And enjoy!


Chapter 1

Rethinking enterprise storage

The information technology (IT) world has always experienced rapid changes, but the environment we are in today is bringing the broadest set of changes that the industry has ever seen. Every day more people are accessing more data from more sources and using more processing power than ever before. A profound consequence of this growing digital consumption is that the corporate data center is no longer the undisputed center of the computing universe. Cloud computing services are the incubators for new applications that are driving up the demand for data.

IT managers are trying to understand what this means and how they are going to help their organizations keep up. It is abundantly clear that they need the ability to respond quickly, which means slow-moving infrastructures and management processes that were developed for data center-centric computing need to become more agile. Virtualization technologies that provide portability for operating systems and applications across hardware boundaries are enormously successful, but they are exposing the limitations of other data center designs, particularly constraints that hinder storage and data management at scale.

It's inevitable that enterprise storage technologies will change to become more scalable, agile, and portable to reflect the changes to corporate computing. This book examines how storage and data management are being transformed by a hybrid cloud storage architecture that spans on-premises enterprise storage and cloud storage services to improve the management capabilities and efficiency of the organization. The StorSimple hybrid cloud storage (HCS) solution is an implementation of this architecture.

The hybrid cloud management model

As a subset of hybrid cloud computing, hybrid cloud storage has received far less attention from the industry than the larger dialogue about how to enable hybrid applications. However, pragmatic IT leaders are also anticipating new hybrid cloud management tools to help them improve their IT operations. Hybrid cloud storage is an excellent example of this type of hybrid management approach: it uploads data and metadata from on-premises storage to the cloud, fulfilling the roles of a number of storage and data management practices.

Don’t just take it from me

Another example of the power of hybrid management is Hyper-V Recovery Manager, which is described in an article written by John Joyner and published on the TechRepublic website, titled "Hyper-V Recovery Manager on Windows Azure: Game changer in DR architecture." The article can be found by following this link: http://www.techrepublic.com/blog/datacenter/hyper-v-recovery-manager-on-windows-azure-game-changer-in-dr-architecture/6186. In the article, Joyner explains how a cloud-based service controls the operations of on-premises systems and storage.

As a management abstraction, hybrid cloud management can provide centralized monitoring and control for on-premises and in-cloud systems and applications. If there are going to be applications and data that span on-premises and in-cloud resources, it only makes sense that there will be a need for management tools that facilitate those applications. Figure 1-1 depicts a hybrid cloud management model where three separate on-premises data centers exchange management information with resources and management services running in the cloud.


Figure 1-1 Three on-premises data centers exchange management information with cloud resources and management services across the hybrid cloud boundary.


We'll now turn our attention from the general case of hybrid management to focus on storage.

The transformation of enterprise storage with cloud storage services

Storage has been an integral part of information technology from its inception and will continue to be throughout the cloud computing transformation that is underway. That’s because all the data we create, use, and share has to be stored somewhere if it has to have more than fleeting value. A lot of this data is stored in corporate data centers, but a rapidly growing percentage is being stored in the cloud.

It follows that enterprise storage architectures will adapt to this reality and integrate with cloud storage. Just as cloud services have changed the ways we consume data, they will also change how we store, manage, and protect it. It is short-sighted to think of cloud storage merely as big disk drives in the sky when there is so much compute power in the cloud to do interesting things with it. If it is possible to find information needles in data haystacks using data analytics, it is certainly possible to discover new ways to manage all that data more effectively. For example, the implementation of erasure coding in Windows Azure Storage demonstrates how advanced error-correction technology can also be used to effectively manage cloud storage capacity.
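To make the capacity angle concrete, here is a minimal sketch of the single-parity idea that erasure coding generalizes, written in Python purely for illustration. The names are invented for this example, and the codes actually used in Windows Azure Storage are far more sophisticated, tolerating multiple simultaneous failures while minimizing reconstruction cost.

```python
# Illustrative sketch only: single-parity protection via XOR, the simplest
# relative of the erasure codes used in cloud storage. This toy example
# tolerates exactly one lost fragment; real codes tolerate several.

def encode(fragments):
    """Compute a parity fragment as the XOR of all equal-sized data fragments."""
    parity = bytes(len(fragments[0]))
    for frag in fragments:
        parity = bytes(a ^ b for a, b in zip(parity, frag))
    return parity

def reconstruct(surviving_fragments, parity):
    """Rebuild the single missing fragment from the survivors and the parity."""
    missing = parity
    for frag in surviving_fragments:
        missing = bytes(a ^ b for a, b in zip(missing, frag))
    return missing

data = [b"AAAA", b"BBBB", b"CCCC"]                          # three data fragments
parity = encode(data)
assert reconstruct([data[0], data[2]], parity) == data[1]   # fragment 1 lost
```

The capacity point is that protection costs a parity fragment that is only a fraction of the original data, whereas full replication would double or triple the capacity consumed.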

But the advancements in enterprise storage won't all be cloud-resident. In fact, many of the most important changes will occur in on-premises storage management functions that take advantage of hybrid cloud designs. The section "Change the architecture and change the function," later in this chapter, examines how extending traditional storage architectures with the addition of cloud storage services makes familiar storage management functions much more powerful.

The constant nemesis: data growth

IDC's Digital Universe study estimates that the amount of data stored worldwide is more than doubling every two years, so it's no surprise that managing data growth is often listed as one of the top priorities by IT leaders. IT professionals have ample experience with this problem and are well aware of the difficulties managing data growth in their corporate data centers. Balancing performance and data protection requirements with power and space constraints is a constant challenge.

IT leaders cannot surrender to the problems of data growth, so they need a strategy that will diminish its impact on their organizations. The hybrid cloud storage approach discussed in this book offloads data growth pressures from the data center to cloud storage. Storage, which has always had an integral role in computing, will continue to have a fundamental role in the transformation to hybrid cloud computing, for its primary functionality (storing data) as well as for its impact on those responsible for managing it.

Increasing the automation of storage management

Historically, storage management has involved a lot of manual planning and work, but as the amount of data continues to grow, it’s clear that the IT team needs more automated tools in order to work more efficiently. This book describes how hybrid cloud storage enables higher levels of automation for many different tasks. Chapter 2, "Leapfrogging backup with cloud snapshots," for instance, examines how hybrid cloud storage technology virtually eliminates the manual administration of one of the most time-consuming IT practices—backup.

People expect that their data will always be available when they want it and are unhappy when it isn't. Traditional data center solutions that provide high availability with remote data replication are resilient, but they have high equipment, facilities, and management costs, which means there is a lot of data that companies can't afford to replicate. Automated off-site data protection is an excellent example of a storage management function that is much more affordable with hybrid cloud storage. Chapter 3, "Accelerating and broadening disaster recovery protection," explores this important topic.

Virtual systems and hybrid cloud storage

IT teams use virtualization technology to consolidate, relocate, and scale applications to keep pace with the organization's business demands and to reduce operating costs. Hypervisors, such as ESX and ESXi from VMware and Hyper-V from Microsoft, create logical system images called virtual machines, or VMs, that are independent of system hardware, enabling IT teams to work much more efficiently and quickly.

But virtualization creates problems for storage administrators, who need more time to plan and implement changes. The storage resources for ESX and ESXi hypervisors are Virtual Machine Disk Format (VMDK) files, and for Hyper-V hypervisors they are Virtual Hard Disk (VHD) files. While VMs can be moved rapidly from one server to another, moving the associated VMDKs and VHDs from one storage system to another is a much slower process. VMs can be relocated without relocating their VMDKs and VHDs, but load balancing for performance usually involves shifting both.

Data growth complicates the situation by consuming storage capacity, which degrades performance for certain VMs and forces the IT team to move VMDKs/VHDs from one storage system to another, sometimes setting off a chain reaction of VMDK/VHD relocations. Hybrid cloud storage gracefully expands the capacity available for storing data, including VMDKs and VHDs, eliminating the need to move them for capacity reasons. By alleviating the pressures of data growth, hybrid cloud storage creates a more stable environment for VMs.

Data portability for hybrid cloud computing

VM technology is also an essential ingredient of cloud computing. Customers can instantly provision cloud computing resources as virtual machines running in the cloud without spending capital on equipment purchases. This gives the development organization a great deal of flexibility and allows them to test their work in a way they couldn’t afford with their own equipment in their own data centers. The end result is rapid application development that brings innovations to market faster.

Organizations want to develop software in the cloud and deploy it there or in their data centers—or in both places, using the hybrid cloud model. For example, Microsoft Windows Azure provides an environment that allows customers to deploy applications running on Windows Server 2012 with Hyper-V on Azure virtual machines.

If VMs can run either on-premises or in the cloud, companies will want a way to copy data across the hybrid cloud boundary so it can be accessed locally ("local" in the cloud context means both the VM and data are located in the same cloud data center). But if copying data takes too long, the hybrid cloud application might not work as anticipated. This is an area where hybrid cloud storage could play a valuable role by synchronizing data between on-premises data centers and the cloud. Chapter 7, "Imagining the possibilities with hybrid cloud storage," discusses future directions for this technology, including its possible use as a data portability tool.

Reducing the amount of data stored

Considering that data growth is such a pervasive problem, it makes sense for storage systems to run processes that reduce the amount of storage capacity consumed. Most new storage arrays incorporate data reduction technologies, and the hybrid cloud storage design discussed in this book is an example of a solution that runs multiple data reduction processes, both on-premises and in the cloud.

Know your storage math

Many of the advancements in storage and data management today are based on advanced mathematical algorithms for hashing, encoding, and encrypting data. These algorithms tend to assume that enough processing power is available to run them without degrading system performance, and that the data they operate on is stored on devices fast enough not to become a bottleneck. Much of the design work that goes into storage systems today involves balancing the resources used for serving data with the resources used for managing it.

So, if data growth has been a problem for some time, why hasn't data reduction been used more broadly in enterprise storage arrays? The answer is the performance impact it can have. One of the most effective data reduction technologies is deduplication, also known as dedupe. Unfortunately, dedupe is an I/O intensive process that can interfere with primary storage performance, especially when device latencies are relatively high as they are with disk drives. However, enterprise storage arrays are now incorporating low-latency solid state disks (SSDs) that can generate many more I/O operations per second (IOPS) than disk drives. This significantly reduces the performance impact that dedupe has on primary storage. The StorSimple HCS solution discussed in this book uses SSDs to provide the IOPS for primary storage dedupe.
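As a rough sketch of how dedupe works, consider the following Python example, which uses fixed-size chunks and a SHA-256 fingerprint index. All names here are invented for illustration; production dedupe engines use more elaborate variable-size chunking, collision handling, and persistent indexes.

```python
import hashlib

# Illustrative sketch: fixed-size-chunk deduplication with a hash index.
# Only the core idea is shown; real systems keep the index on persistent,
# low-latency media and handle hash collisions.

CHUNK_SIZE = 4096
chunk_store = {}   # fingerprint -> chunk bytes, each unique chunk stored once

def write(data):
    """Store data as a recipe of chunk fingerprints; duplicates cost nothing."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(fp, chunk)   # only new chunks consume capacity
        recipe.append(fp)
    return recipe

def read(recipe):
    """Reassemble the original data from its chunk fingerprints."""
    return b"".join(chunk_store[fp] for fp in recipe)

payload = b"x" * 8192 + b"y" * 4096   # two identical chunks plus one unique
recipe = write(payload)
assert read(recipe) == payload
assert len(chunk_store) == 2          # three chunks written, two stored
```

Notice that every write triggers hash computations and index lookups, which is exactly the kind of small random I/O workload where SSD IOPS make the difference.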

Chapter 4, "Taming the capacity monster," looks at the various ways the StorSimple HCS solution reduces storage capacity problems.

Solid State Disks under the covers

SSDs are one of the hottest technologies in storage. Made with nonvolatile flash memory, they are unencumbered by seek time and rotational latencies. From a storage administrator’s perspective, they are simply a lot faster than disk drives.

However, they are far from being a "bunch of memory chips" that act like a disk drive. The challenge with flash is that individual memory cells can wear out over time, particularly if they are used for low-latency transaction processing applications. So, SSD engineers design in a number of safeguards: metadata tracking for all cells and data; compressing data to use fewer cells; striping data with parity to protect against cell failures; wear-leveling to place dormant data in the cells that have been most active and active data in the cells that have been least active; "garbage collecting" to remove obsolete data; trimming to remove data that was deleted; and metering to indicate when the device will stop being usable.

SSDs manage everything that needs to be managed internally. Users are advised not to use defrag or other utilities that reorganize data on SSDs. They won't perform faster, but they will wear out faster.
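To illustrate just one of those safeguards, the following toy Python allocator always directs the next write to the least-worn erase block. This is a sketch under simplified assumptions; real SSD firmware also migrates dormant data into heavily worn blocks and tracks wear at much finer granularity.

```python
import heapq

# Toy wear-leveling allocator: always write to the block with the fewest
# program/erase cycles. Real firmware also moves cold data into worn
# blocks, which this sketch omits.

class WearLeveler:
    def __init__(self, num_blocks):
        self.heap = [(0, b) for b in range(num_blocks)]  # (erase_count, block)
        heapq.heapify(self.heap)

    def allocate_block(self):
        """Return the least-worn block and charge it one more erase cycle."""
        erase_count, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (erase_count + 1, block))
        return block

wl = WearLeveler(num_blocks=4)
writes = [wl.allocate_block() for _ in range(8)]
assert sorted(writes) == [0, 0, 1, 1, 2, 2, 3, 3]  # wear spreads evenly
```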

Best practices or obsolete practices?

The IT team does a great deal of work to ensure data is protected from threats such as natural disasters, power outages, bugs, hardware glitches, and security intrusions. Many of the best practices for protecting data that we use today were developed for mainframe environments half a century ago. They are respected by IT professionals who have used them for many years to manage data and storage, but some of these practices have become far less effective in light of data growth realities.

Some best practices for protecting data are under pressure because of their costs, the time they take to perform, and their inability to adapt to change. An example of a best practice area that many IT teams find impractical is disaster recovery (DR). DR experts all stress the importance of simulating and practicing recovery, but simulating a recovery takes a lot of time to prepare for and tends to disrupt production operations. As a result, many IT teams never get around to practicing their DR plans.

Another best practice area under scrutiny is backup, due to chronic problems with data growth, media errors, equipment problems, and operator miscues. Dedupe backup systems significantly reduce the amount of backup data stored and help many IT teams successfully complete daily backups. But dedupe systems tend to be costly, and their benefits are limited to backup operations; they don't include the recovery side of the equation. Dedupe also does not change the need to store data off-site on tape, a technology that many IT teams would prefer to do away with.

Many IT teams are questioning the effectiveness of their storage best practices and are looking for ways to change or replace those that aren't working well for them anymore.

Doing things the same old way doesn't solve new problems

The root cause of most storage problems is the large amount of data being stored. Enterprise storage arrays lack capacity "safety valves" to deal with capacity-full scenarios, and they slow to a crawl or crash when they run out of space. As a result, capacity planning can consume a lot of time that could be used for other things. What most IT leaders dislike most about capacity management is the loss of reputation that comes with having to spend money unexpectedly on storage when that money was targeted for other projects. In addition, copying large amounts of data during backup takes a long time, even with dedupe backup systems. Technologies like InfiniBand and Server Message Block (SMB) 3.0 can significantly reduce the amount of time it takes to transfer data, but they can only do so much.

More intelligence and different ways of managing data and storage are needed to change the dynamics of data center management. IT teams that are already under pressure to work more efficiently are looking for new technologies to reduce the amount of time they spend on data and storage management. The StorSimple HCS solution discussed in this book is an answer for existing management technologies and methods that can't keep up.

Introducing the hybrid cloud storage architecture

Hybrid cloud storage overcomes the problems of managing data and storage by integrating on-premises storage with cloud storage services. In this architecture, on-premises storage uses the capacity on internal SSDs and HDDs, as well as the expanded storage resources provided by cloud storage. A key element of the architecture is that the distance over which data is stored extends far beyond the on-premises data center, providing disaster protection. Transparent access to cloud storage from an on-premises storage system is a technology that was developed by StorSimple, called Cloud-integrated Storage, or CiS. CiS is made up of both hardware and software: the hardware is an industry-standard iSCSI SAN array that is optimized to perform automated data and storage management tasks implemented in software.

The combination of CiS and Windows Azure Storage creates a new hybrid cloud storage architecture with expanded online storage capacity that is located an extended distance from the data center, as illustrated in Figure 1-2.


Figure 1-2 In the hybrid cloud storage architecture, CiS accesses the expanded capacity available to it in Windows Azure Storage over an extended distance.

Change the architecture and change the function

CiS performs a number of familiar data and storage management functions that are significantly transformed when implemented within the hybrid cloud storage architecture.

Snapshots

CiS takes periodic snapshots to automatically capture changes to data at regular intervals. Snapshots give storage administrators the ability to restore historical versions of files for end users who need to work with an older version of a file. Storage administrators highly value snapshots for their efficiency and ease of use, especially compared to restoring data from tape. The main limitation of snapshots is that they are restricted to on-premises storage and are susceptible to the same threats that can destroy data on primary storage. Implementing snapshots in a hybrid cloud storage architecture adds the element of extended distance, which makes them useful for backup and disaster recovery purposes. Cloud snapshots are the primary subject of Chapter 2, "Leapfrogging backup with cloud snapshots."
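For a flavor of why snapshots are so efficient, here is a toy pointer-based snapshot in Python. It is purely illustrative and is not how CiS implements snapshots; a cloud snapshot would additionally copy the new or changed data off site to cloud storage.

```python
# Toy pointer-based snapshot: a snapshot freezes the current block map, so
# it shares unchanged blocks with the live volume instead of copying them.
# Purely illustrative; this is not the CiS implementation.

volume = {}       # block number -> block contents
snapshots = []    # each snapshot is a frozen block map

def write_block(block_no, data):
    volume[block_no] = data

def take_snapshot():
    snapshots.append(dict(volume))   # shallow copy: block contents are shared

write_block(0, "version 1 of block 0")
write_block(1, "version 1 of block 1")
take_snapshot()
write_block(1, "version 2 of block 1")   # only block 1 changes afterward
take_snapshot()

assert snapshots[0][1] == "version 1 of block 1"   # old version restorable
assert snapshots[1][1] == "version 2 of block 1"
```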

Data tiering

CiS also transparently performs data tiering, which moves data between the SSDs and HDDs in the CiS SAN system according to the data's activity level, with the goal of placing data on the optimal cost/performance devices. Expanding data tiering with a hybrid cloud storage architecture transparently moves dormant data off site to the cloud so it no longer occupies on-premises storage. This transparent, online "cold data" tier is a whole new storage level that is not available with traditional storage architectures, and it provides a way to keep archived data available online.
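A minimal sketch of activity-based tier selection appears below. The thresholds and tier names are invented for illustration; CiS uses its own internal heuristics for deciding what is hot, warm, and dormant.

```python
from datetime import datetime, timedelta

# Illustrative tier-selection policy: place data on SSD, HDD, or in the
# cloud based on how recently it was accessed. Thresholds are invented.

def select_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=1):
        return "ssd"     # hot data: lowest latency
    if age < timedelta(days=30):
        return "hdd"     # warm data: cheaper on-premises capacity
    return "cloud"       # dormant data: tiered off site

now = datetime(2013, 7, 1)
assert select_tier(now - timedelta(hours=2), now) == "ssd"
assert select_tier(now - timedelta(days=10), now) == "hdd"
assert select_tier(now - timedelta(days=90), now) == "cloud"
```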

Thin provisioning

SAN storage is a multi-tenant environment where storage resources are shared among multiple servers. Thin provisioning allocates storage capacity to servers in small increments on a first-come, first-served basis, as opposed to reserving it in advance for each server. The caveat that is almost always mentioned with thin provisioning is the concern about over-committed resources, running out of capacity, and experiencing the nightmare of system crashes, data corruptions, and prolonged downtime.

However, thin provisioning in the context of hybrid cloud storage operates in an environment where data tiering to the cloud is automated and can respond to capacity-full scenarios on demand. In other words, data tiering from CiS to Windows Azure Storage provides a capacity safety valve for thin provisioning that significantly eases the task of managing storage capacity on-premises.
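The following Python sketch shows how such a safety valve might behave in principle: a thin pool that tiers its coldest data to the cloud instead of running out of space. The class, names, and threshold are all hypothetical and are not the CiS implementation.

```python
# Toy thin-provisioning pool with a "capacity safety valve": when the pool
# runs hot, cold data is tiered to the cloud instead of the array filling
# up. Names and threshold are hypothetical, not taken from CiS.

class ThinPool:
    def __init__(self, physical_capacity, tier_threshold=0.9):
        self.capacity = physical_capacity
        self.used = 0
        self.threshold = tier_threshold

    def allocate(self, amount):
        """Allocate capacity on demand, tiering cold data out when needed."""
        if (self.used + amount) / self.capacity > self.threshold:
            self.tier_to_cloud(amount)
        self.used += amount

    def tier_to_cloud(self, amount):
        # Stand-in for migrating the coldest blocks to cloud storage.
        freed = min(self.used, amount)
        self.used -= freed

pool = ThinPool(physical_capacity=1000)
for _ in range(20):
    pool.allocate(100)        # 2,000 units requested against 1,000 physical
assert pool.used <= pool.capacity
```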

Summary

The availability of cloud technologies and solutions is pressuring IT teams to move faster and operate more efficiently. Storage and data management problems are front and center in the desire to change the way data centers are operated and managed. Some of the existing storage technologies and best practices are being questioned for their ability to support business goals.

A new architecture called hybrid cloud storage improves the situation by integrating on-premises storage with cloud storage resources. This provides the incremental allocation and use of cloud storage as well as remote data protection. Extending the traditional on-premises storage architecture to include cloud storage services enables much higher levels of management automation and expands the roles of traditional storage management functions, such as snapshots and data tiering, by allowing them to be used for backup and off-site archiving.

The rest of the book explores the implementation of the StorSimple HCS solution and the ways it fundamentally changes how data and storage are managed.