Editor’s note: The following post was written by Windows Server – Virtualization MVP Anil Desai
What do driverless cars, invisibility cloaks, and 3D printing have in common? They’re all amazing examples of technology that is just out of reach for most of us. That doesn’t diminish their value or technical benefits, but it does limit the impact they can make (for now, at least). The same seems to happen within IT environments of all sizes. The latest and greatest technology is also often the most difficult to obtain. Often, good solutions are too expensive or difficult to deploy (at least, as broadly as we’d like). The real challenge with technology, then, is in making it readily accessible and affordable to the masses – in essence, to make it a commodity rather than a luxury.
OK, enough abstract philosophy: Perhaps it’s not quite as exotic as the aforementioned high-tech items, but storage-related performance and high-availability features have typically required dedicated SAN systems. Storage protocols such as Fibre Channel (FC), FC over Ethernet (FCoE), iSCSI, and related infrastructure have become the de facto standards for shared storage environments and have traditionally been requirements for setting up highly-available environments. They work well, but there are some issues.
First, these solutions can be quite costly. Indeed, Microsoft has mentioned that in interviewing IT staff while planning for Windows Server 2012, costs related to acquiring, configuring, and managing storage were often a large barrier. Second, they often require expertise in vendors’ management tools and implementation methods. Also, they can involve some vendor lock-in based on detailed implementations of “standard” protocols and storage methods. Finally, they’re not as readily available for many “non-mission-critical” environments (think test / development environments).
Enter Windows Server 2012: A server product that ships with all of the required ingredients to brew your own highly-available storage environment. In this post, I’ll focus on the storage and high-availability-related features that ship as part of Windows Server 2012. Specifically, I’ll discuss what’s required to build and deploy a fault-tolerant Hyper-V deployment using only in-box features. I’ll start with the configuration basics and then list higher-end features that are available for production environments.
While preparing this post, I was pleasantly surprised to find out how many excellent, in-depth resources there are for learning how to implement Windows Server 2012’s many new storage and virtualization features. Rather than try to re-write that content, I’m going to focus on architectural information about these features and why they matter. I’ll leave the procedural details of how to implement them to official documentation and some excellent blog posts that explain the steps (using both GUI and PowerShell methods). Fear not: Though I’ll focus on what to do (rather than how to do it), I’ll include links for more information wherever they’re relevant.
The heart of a highly-available virtualization environment is reliable, fault-tolerant storage. Windows Server 2012 provides all the building blocks that are required as part of the operating system. In a simple configuration, you can configure all of these features to run on a single Hyper-V host server (of course, that won’t protect against many possible types of hardware failures). The process involves installing and configuring the Scale-Out File Server role with active-active file shares. Once that’s configured, you can use the Failover Cluster Manager to add highly-available storage. Windows Server 2012 includes features that support NIC teaming, multi-pathing, and a variety of performance and reliability features that are implemented in the SMB 3.0 protocol. To see the extremely long list of new features in SMB 3.0, see the TechNet article “Server Message Block overview”.
So how do you setup a Scale-Out File Server and create and manage cluster storage? A good starting point is the TechNet article, “Scale-Out File Server for Application Data Overview”. For more in-depth technical details, see the Channel 9 webcast by Claus Jorgensen, “Continuously Available File Server: Under the Hood”.
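To give a feel for the in-box workflow, here’s a rough PowerShell sketch of the setup, run from an elevated prompt on Windows Server 2012. The cluster, node, and share names (HA-CLUSTER, NODE1, NODE2, HA-FS01, CONTOSO) are hypothetical examples; substitute your own, and see the linked documentation for the full procedure.

```powershell
# On each prospective cluster node, install the file server and clustering features
Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering -IncludeManagementTools

# Create the failover cluster (node names are examples)
New-Cluster -Name HA-CLUSTER -Node NODE1, NODE2

# Add the Scale-Out File Server role for active-active file shares
Add-ClusterScaleOutFileServerRole -Name HA-FS01

# Create a continuously available SMB share on a Cluster Shared Volume,
# granting the Hyper-V host computer accounts full access
New-SmbShare -Name VM -Path C:\ClusterStorage\Volume1\VM `
    -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true
```

Note that this assumes cluster storage is already available as a Cluster Shared Volume; the validation and permission steps are covered in the articles above.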
Once you have created and configured your file share, it’s time to put it to use. The process couldn’t be much easier: when you’re creating your virtual hard disks (VHDs), just provide the UNC-based path to your storage (for example, \\HA-FS01\VM\) instead of using a local path. The Hyper-V 3.0 GUI and PowerShell commands both support the process. For procedural steps, the Microsoft TechNet document, “Deploy Hyper-V over SMB” is a great place to start.
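As a quick illustration of how little changes on the Hyper-V side, the following sketch creates a virtual hard disk and a VM directly on the SMB share (the share path and VM name here are hypothetical examples):

```powershell
# Create a dynamically expanding VHDX directly on the SMB file share
New-VHD -Path \\HA-FS01\VM\web01.vhdx -SizeBytes 60GB -Dynamic

# Create a VM whose configuration and disk both live on the share
New-VM -Name web01 -MemoryStartupBytes 2GB `
    -Path \\HA-FS01\VM -VHDPath \\HA-FS01\VM\web01.vhdx
```

The only real difference from a local deployment is the UNC path.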
Jose Barreto’s post Windows Server 2012 Beta – Hyper-V over SMB – Quick Provisioning a VM on an SMB File Share provides the steps required to create your VHDs on the shared folder. If you want even more detailed steps on the end-to-end setup process of a highly-available storage environment, start with the post titled Windows Server 2012 Scale-Out File Server for SQL Server 2012 - Step-by-step Installation. This post covers the process of setting up a highly-available file server configuration from scratch (including a new domain and the required virtual network connections). It can be done on a single computer, and the steps include the use of both the Windows Server 2012 GUI and PowerShell. While the end result of this post is deploying SQL Server as a workload, once you set up the VMs and file server, you can use them for storing Hyper-V virtual machines instead.
In order to implement true high-availability, administrators need to take a layered approach that ensures that every potential point of failure is protected. Most often, this is accomplished through redundancy (multiple devices and paths to limit the effects of a failure). It’s no easy task, and the following figure shows just some of the potential areas that must be considered.
Perhaps the biggest question on most administrators’ minds is whether SMB-based storage can meet the performance, reliability, and availability standards for shared storage. Microsoft has engineered the storage and network improvements in Windows Server 2012 with that goal in mind, and some initial testing has shown that it works in even the most demanding of environments. But, there’s a catch: I’ve seen administrators compare dedicated Fibre Channel connection performance with shared iSCSI or SMB connections. The primary differentiator is shared vs. dedicated connections, not the underlying media or protocols. It’s very important in the real world to have dedicated network bandwidth to ensure high-performance, low-latency connections. If you’re using the same NIC for management, backups, remote administration, and Netflix streaming, you’re asking for trouble.
Windows Server 2012 provides numerous additional features to help protect and improve upon the high-availability configuration that I have discussed so far. Important options for production environments include:
1) NIC Teaming: Windows Server 2012 now provides NIC teaming functionality as part of the base OS. You can use Server Manager or PowerShell scripts to quickly configure groups of NICs to meet fail-over and load-balancing requirements.
2) Multi-path I/O: Production servers often need to survive the failure of switches, in addition to individual NIC ports. To meet this need, create multiple connections and paths to storage through separate switches.
3) Backups: High-availability features are primarily about uptime, and they don’t remove the need for reliable, tested backups. High availability doesn’t necessarily protect against unwanted configuration or data changes and doesn’t serve as a historical record for security, compliance, or archival purposes. Don’t forget the basics when implementing high availability!
4) Bandwidth and network infrastructure: While I used a $20 unmanaged gigabit Ethernet switch for testing while writing this post, you’ll want a higher end device in production. Features such as managed interfaces, support for NIC teaming, VLANs, automatic fail-overs, and dynamic load balancing can help tremendously.
5) Reduce I/O Bottlenecks: Wherever possible, you should invest in hardware that supports Offloaded Data Transfer (ODX) to reduce processing overhead and increase network and storage efficiency. See Windows Offloaded Data Transfers Overview for more information on how this works.
6) Single-Root I/O Virtualization (SR-IOV): By eliminating bottlenecks that can occur between physical NICs and virtual ones, SR-IOV can greatly increase virtual network performance. You’ll have to have hardware that supports SR-IOV, though. You can find more information on SR-IOV Architecture on MSDN.
7) Enable Caching: Windows Server 2012’s Cluster Shared Volumes (CSV) now includes built-in caching capabilities that can improve overall performance, often dramatically. For more information, see How to Enable CSV Cache, written by Elden Christensen and on MSDN Blogs.
8) Enable BitLocker Encryption: Administrators can now encrypt CSV disks to provide added security. For more information, see How to Configure BitLocker Encrypted Clustered Disks in Windows Server 2012.
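Several of the options above can be configured with one-liners. As a hedged sketch (the team name, member NICs, and CSV disk name are hypothetical; check the linked articles for the settings appropriate to your environment):

```powershell
# NIC teaming (item 1): team two NICs for fail-over and load balancing
New-NetLbfoTeam -Name StorageTeam -TeamMembers NIC1, NIC2 `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Multi-path I/O (item 2): install the in-box MPIO feature
Install-WindowsFeature Multipath-IO

# CSV cache (item 7): reserve 512 MB of RAM for CSV read caching,
# then enable the cache on a specific CSV disk
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1
```

As with the earlier examples, these require the relevant roles and cluster to already be in place.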
As you can see, there’s a lot of technology that’s available in Windows Server 2012 (and I certainly didn’t cover it all). For some more details, you can start at “Increasing Server, Storage, and Network Availability: Scenario Overview” on Microsoft TechNet.
In this post, I provided an architectural overview of the features and options that can help you deploy a highly-available Hyper-V installation using features that ship “in the box” with Windows Server 2012. I also provided links for details on how you can understand, deploy, and manage these different features. I hope the information is useful to those of you that are planning to build your own highly-available virtualization environments.
Anil Desai is an independent consultant based in Austin, TX. He has over 15 years of experience in architecting, implementing, and managing IT software and datacenter solutions. He has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a seven-time Microsoft MVP Award (Windows Server – Virtualization) recipient.
Anil is the author of over 20 technical books focusing on the Windows Server platform, virtualization, databases, and IT management best practices. He is also a frequent contributor to IT publications and conferences. For more information, please see http://AnilDesai.net, or e-mail Anil@AnilDesai.net.
About MVP Monday
The MVP Monday Series is created by Melissa Travers. In this series we work to provide readers with a guest post from an MVP every Monday. Melissa is a Community Program Manager, formerly known as MVP Lead, for Messaging and Collaboration (Exchange, Lync, Office 365 and SharePoint) and Microsoft Dynamics in the US. She began her career at Microsoft as an Exchange Support Engineer and has been working with the technical community in some capacity for almost a decade. In her spare time she enjoys going to the gym, shopping for handbags, watching period and fantasy dramas, and spending time with her children and miniature Dachshund. Melissa lives in North Carolina and works out of the Microsoft Charlotte office.
Can I use VMs from other virtualization systems in Hyper-V?
Hi, I keep reading about limitations with running Hyper-V on SMB 3.0 - "Bandwidth limitations make the use of SMB storage impractical for large numbers of VMs." - but I can't get any clear picture of what these limits are. Is this something you have encountered? Thx
Your post looks perfect for what I'm trying to achieve.
I'm looking to virtualise a linux server on two hosts without the SAN.
I'm trying to lab this on two laptops: I have installed 2012, made one a DC, and then built a failover cluster.
But now it's not showing any disks that I can add to the cluster.
Do these need to be special disks?