Failover Clustering and Network Load Balancing Team Blog
Hi Cluster Fans,
The support statement for Windows Server 2008 R2 Failover Clustering has recently changed: the maximum number of Virtual Machines (VMs) that can be hosted on a failover cluster increased from 64 VMs per node to 1,000 VMs per cluster. The new policy is reflected in the TechNet article Hyper-V: Using Hyper-V and Failover Clustering.
Supporting 1,000 VMs per cluster provides greater flexibility to take advantage of hardware that has the capacity to host more VMs per physical server, while keeping the high availability and management capabilities that Failover Clustering provides.
Number of Nodes in Cluster           Max # VMs per Node   Avg # VMs per Active Node   Max # VMs in Cluster
2 Nodes (1 active + 1 failover)      384                  384                         384
3 Nodes (2 active + 1 failover)      384                  384                         768
4 Nodes (3 active + 1 failover)      384                  333                         1000
5 Nodes (4 active + 1 failover)      384                  250                         1000
6 Nodes (5 active + 1 failover)      384                  200                         1000
7 Nodes (6 active + 1 failover)      384                  166                         1000
8 Nodes (7 active + 1 failover)      384                  142                         1000
9 Nodes (8 active + 1 failover)      384                  125                         1000
10 Nodes (9 active + 1 failover)     384                  111                         1000
11 Nodes (10 active + 1 failover)    384                  100                         1000
12 Nodes (11 active + 1 failover)    384                  90                          1000
13 Nodes (12 active + 1 failover)    384                  83                          1000
14 Nodes (13 active + 1 failover)    384                  76                          1000
15 Nodes (14 active + 1 failover)    384                  71                          1000
16 Nodes (15 active + 1 failover)    384                  66                          1000
Note: There is no requirement to have a node without any VMs allocated as a “passive node”. All nodes can host VMs, as long as the equivalent of one node’s capacity is left unallocated (in total, across all the nodes) to allow for placement of VMs if a node fails or is taken out of active cluster membership for activities such as patching or maintenance.
It is important to perform proper capacity planning that takes into consideration the capabilities of the hardware and storage hosting the VMs, and the total resources that the individual VMs require, while still keeping enough reserve capacity to host VMs in the event of a node failure without over-committing memory. The base guidance on Hyper-V configuration and the maximum number of VMs supported per physical server still applies: no node can host more than 384 running VMs at any given time, no VM can have more than 4 virtual processors, and the ratio of virtual processors to logical processors must not exceed 8:1. Review this TechNet article on VM limits and requirements: Requirements and Limits for Virtual Machines in Hyper-V in Windows Server 2008 R2
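The limits above can be combined into a small capacity sketch. This is an illustrative calculation, not a Microsoft tool; the figures (384 running VMs per node, 1,000 VMs per cluster, one node’s worth of capacity held in reserve) come from the limits cited in this post, and any real deployment must also account for RAM, CPU, and storage headroom:

```python
def max_cluster_vms(nodes, per_node_max=384, cluster_cap=1000):
    """Maximum supported VMs for a cluster that keeps the equivalent
    of one node's capacity in reserve for failover."""
    active = nodes - 1  # one node's worth of capacity is held in reserve
    return min(active * per_node_max, cluster_cap)

for n in (2, 3, 4, 16):
    print(n, "nodes ->", max_cluster_vms(n), "VMs")
```

For a 2-node cluster this yields 384 VMs (one active node), for 3 nodes 768, and from 4 nodes upward the 1,000-VM cluster cap becomes the limiting factor.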
Here are some Frequently Asked Questions:
1. Is there a hotfix or service pack required to have this new limit?
a. No. This support policy change is based on additional testing we performed to verify that a cluster retains its ability to detect health problems and fail over VMs at these densities. No changes or updates are required.
2. 64 VMs per node on a 16 node cluster equals 1024 VMs, so aren’t you actually decreasing the density for a 16 node cluster?
a. No. The previous policy was 64 VMs per node plus one node’s equivalent of reserve capacity, so a 16-node cluster supported 15 nodes x 64 VMs = 960 VMs with the spare capacity of a passive node. The new policy slightly increases the density for a 16-node cluster; for an 8-node cluster the density more than doubles, and for a 4-node cluster it is more than four times as high as before.
3. Does this include Windows Server 2008 clusters?
a. This change is only for Windows Server 2008 R2 clusters.
4. Why did you make this change?
a. We are responding to our customers’ requests for flexibility in the number of nodes and the number of VMs that can be hosted. For VMs running workloads that place relatively small demands on VM and storage resources, customers want to place more VMs on each server to maximize their investments and lower their management costs. Other customers want the flexibility of having more nodes and fewer VMs.
5. Does this mean I can go and put 250 VMs on my old hardware?
a. Understanding the resources that your hardware can provide and the requirements of your VMs is still the most important thing in identifying the capacity of your cluster or the specific Hyper-V servers. Available RAM and CPU resources are relatively easy to calculate, but another important part of the equation is capacity of the SAN/Storage. Not just how many GB or TB of data it can store, but can it handle the I/O demands with reasonable performance? 1000 VMs can potentially produce a significant amount of I/O demand, and the exact amount will depend on what is running inside the VMs. Monitoring the storage performance is important to understand the capacity of the solution.
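The density comparison in question 2 can be checked with a quick sketch. The 64-VM-per-node figure is the old policy described above; 384 VMs per node and the 1,000-VM cluster cap are the new limits cited in this post:

```python
def old_policy_max(nodes):
    # Old policy: 64 VMs per active node, one node held in reserve.
    return (nodes - 1) * 64

def new_policy_max(nodes, per_node_max=384, cluster_cap=1000):
    # New policy: per-node Hyper-V limit plus a 1,000-VM cluster cap.
    return min((nodes - 1) * per_node_max, cluster_cap)

for n in (4, 8, 16):
    print(n, "nodes:", old_policy_max(n), "->", new_policy_max(n))
```

A 16-node cluster goes from 960 to 1,000 VMs, an 8-node cluster from 448 to 1,000 (more than double), and a 4-node cluster from 192 to 1,000 (more than four times), matching the answer above.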
Thank you to everyone in the cluster community who provided us with feedback and ideas. We appreciate our community’s involvement, and the Failover Clustering team is dedicated to providing solutions that work for you and the companies that use our technology. We hope that this is one more example of how we listen and respond to your needs.
Steven Ekren
Senior Program Manager
Clustering & High-Availability
Microsoft