Anthony F. Voellm (aka Tony)
This FAQ is titled R2 since it updates the original FAQ as of Windows Server 2008 R2 Hyper-V. Since Hyper-V originally shipped we have continued to improve it. R2 adds new features like Live Migration and support for 64 logical processors, along with many improvements in networking and storage.
· Q: Is there a place that compares features and versions of Hyper-V?
· Q: What is the recommended configuration for performance testing?
· Q: How do I monitor performance?
· Q: Is there any “official” documentation for Hyper-V performance counters?
· Q: Are there any official virtualization benchmarks I can use to compare machines and virtualization solutions?
· Q: Is there any common terminology used to talk about virtual machine configurations?
· Q: How much memory should I reserve for the root?
· Q: Are there any services that should be stopped?
· Q: Is it ok to run applications / processes in the root OS?
· Q: Is there a simple way to disable the hypervisor to run some baseline tests on the native system?
· Q: Should I use passthrough disks or iSCSI attached to the guest for storage?
· Q: Are there ways to reduce overall networking overhead?
· Q: Are there additional knobs for performance nuts?
· Q: Are there additional resources that are useful for understanding Hyper-V?
A: Here are some simple steps:
1. Be sure to have the latest Windows Server Hyper-V build – Windows Server 2008 R2 is the current version.
2. Next, make sure you are running a “Supported OS” with the latest Service Packs installed on the guest. For the latest list of supported OSes and the number of virtual processors each can use, see http://support.microsoft.com/kb/954958
3. Make sure the guest and root OS have integration components installed (http://blogs.msdn.com/tvoellm/archive/2008/04/19/hyper-v-how-to-make-sure-you-are-getting-the-best-performance-when-doing-performance-comparisons.aspx **and** http://blogs.msdn.com/tvoellm/archive/2008/01/02/hyper-v-integration-components-and-enlightenments.aspx )
4. Make sure you are using the “Network Adapter” and not the “Legacy Network Adapter”. The legacy adapter uses emulation which creates additional CPU overhead.
5. Use pass-through disks attached to SCSI for the best performance. Next best is Fixed VHD attached to SCSI. To understand storage better see (http://blogs.msdn.com/tvoellm/archive/2007/10/13/what-windows-server-virtualization-aka-viridian-storage-is-best-for-you.aspx )
6. Follow these tips for avoiding pitfalls http://blogs.msdn.com/tvoellm/archive/2008/04/19/hyper-v-how-to-make-sure-you-are-getting-the-best-performance-when-doing-performance-comparisons.aspx
A: First you need to understand that the clocks in the root and guest Virtual Machines may not be accurate (see http://blogs.msdn.com/tvoellm/archive/2008/03/20/hyper-v-clocks-lie.aspx ). Given an understanding of clocks, you can see why we implemented the “Hyper-V Hypervisor Logical Processor” performance counters (accessed using perfmon), which are not skewed by clock effects. There are other Hyper-V performance counters that are useful; see the following for more details: http://blogs.msdn.com/tvoellm/archive/tags/Hyper-V+Performance+Counters/default.aspx
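Besides perfmon’s UI, these counters can be sampled from the command line with typeperf (built into Windows). A small sketch that builds the typeperf command line for the logical-processor counter set; the helper name is mine, not an official tool:

```python
import subprocess

# Standard counter path for total hypervisor logical-processor utilization
# (requires the Hyper-V role; read it from the root partition).
COUNTER = r"\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time"

def typeperf_command(counter: str, interval_s: int = 1, samples: int = 30):
    """Build a typeperf command line: -si = sample interval in seconds,
    -sc = number of samples to collect."""
    return ["typeperf", counter, "-si", str(interval_s), "-sc", str(samples)]

cmd = typeperf_command(COUNTER)
print(" ".join(cmd))
# On a Windows root partition, actually collect the samples:
# subprocess.run(cmd, check=True)
```

Because these counters are maintained by the hypervisor itself, they avoid the guest clock-skew problem described above.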
There is also a detailed posting on how to monitor Hyper-V performance at http://blogs.msdn.com/tvoellm/archive/2009/04/23/monitoring-hyper-v-performance.aspx
A: As of this writing there is no “official” documentation. However, you can find documentation on my blog here: http://blogs.msdn.com/tvoellm/archive/tags/Hyper-V+Performance+Counters/default.aspx
A: There are no official and widely accepted benchmarks. That said, Intel put together a benchmark called vConsolidate and the SPEC group is working on SpecVirt.
A: Yes – internal to Microsoft we typically use the following:
· Native = System without the Hyper-V role. This means you have no virtual drivers, virtual switch, …
· Root = Most people call this the host; however, we refer to the management partition of Hyper-V as the root because it is technically not a hosted VM solution. You can also think of this as the OS that boots Hyper-V and handles the management of VMs, devices, memory, and other shared VM resources.
· Guest = Guest Virtual Machine.
· 8p.child.2x1p or, better, 8p.child.2VMx1VP = A system with 8 logical processors / cores running 2 Virtual Machines (VMs), each with 1 Virtual Processor (VP)
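The shorthand above is regular enough to parse mechanically. A small illustrative helper (my own, not an official naming scheme) that unpacks the preferred form:

```python
import re

def parse_vm_config(name: str) -> dict:
    """Parse a shorthand like '8p.child.2VMx1VP' into its parts:
    <logical processors>p.child.<VM count>VMx<VPs per VM>VP."""
    m = re.fullmatch(r"(\d+)p\.child\.(\d+)VMx(\d+)VP", name)
    if not m:
        raise ValueError(f"unrecognized config name: {name}")
    lps, vms, vps = (int(g) for g in m.groups())
    return {"logical_processors": lps,
            "virtual_machines": vms,
            "vps_per_vm": vps}

print(parse_vm_config("8p.child.2VMx1VP"))
# → {'logical_processors': 8, 'virtual_machines': 2, 'vps_per_vm': 1}
```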
A: There is a good discussion of how much memory to leave for the root in the “Windows Server 2008 Tuning Guide.” You can find it here: http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx. The relevant section from the guide is as follows:
Correct Memory Sizing
You should size VM memory as you typically do for server applications on a physical machine. You must size it to reasonably handle the expected load at ordinary and peak times because insufficient memory can significantly increase response times and CPU or I/O usage. In addition, the root partition must have sufficient memory (leave at least 512 MB available) to provide services such as I/O virtualization, snapshot, and management to support the child partitions.
A good standard for the memory overhead of each VM is 32 MB for the first 1 GB of virtual RAM plus another 8 MB for each additional GB of virtual RAM. This should be factored into the calculation of how many VMs to host on a physical server. The memory overhead varies depending on the actual load and amount of memory that is assigned to each VM.
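The sizing rule above is easy to turn into arithmetic. A quick sketch (the function names are mine) that applies the guide’s formula, using the 512 MB root reserve mentioned earlier:

```python
def vm_memory_overhead_mb(vm_ram_gb: int) -> int:
    """Per-VM overhead per the tuning guide: 32 MB for the first GB
    of virtual RAM plus 8 MB for each additional GB."""
    if vm_ram_gb < 1:
        raise ValueError("this rule assumes at least 1 GB of virtual RAM")
    return 32 + 8 * (vm_ram_gb - 1)

def root_memory_needed_mb(vm_ram_gbs, root_reserve_mb=512) -> int:
    """Root reserve plus the summed overheads of all planned VMs."""
    return root_reserve_mb + sum(vm_memory_overhead_mb(g) for g in vm_ram_gbs)

# Example: four VMs with 4 GB of virtual RAM each
print(vm_memory_overhead_mb(4))             # 32 + 8*3 = 56 MB per VM
print(root_memory_needed_mb([4, 4, 4, 4]))  # 512 + 4*56 = 736 MB
```

Remember this is a planning estimate only; as the guide notes, the real overhead varies with load.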
A: Not if you are running Server Core, which is the ideal root OS. Regardless of whether you run Server Core or a full server install, you should close the Hyper-V Management Console because it has a noticeable CPU impact. For details see http://blogs.msdn.com/tvoellm/archive/2008/04/19/hyper-v-how-to-make-sure-you-are-getting-the-best-performance-when-doing-performance-comparisons.aspx
A: You should avoid running any Role / Feature or custom service in the root. If you have services you want to run, put them in a guest VM. Running roles in the root can have a negative impact on the guest VMs. This is due to how the hypervisor scheduler handles the root virtual processors.
Note: Performance aside, deploying applications in the root has licensing implications: in a virtualized environment the so-called virtual use rights only allow running virtualization itself (and its associated management, such as management agents) in the physical environment. More information can be found at:
· Windows Server 2008 R2 Licensing overview - look for Virtual Use Rights
· Windows Server 2008 Licensing Overview (Pdf)
A: Yes. Run “bcdedit /set hypervisorlaunchtype off” and reboot the server. You should also consider changing the protocols on the root network device to re-enable TCP/IP and turn off the “Microsoft Virtual Network Switch Protocol”.
To turn it back on, run “bcdedit /set hypervisorlaunchtype on” and reboot the server.
A: The decision depends on what features you need to expose to the guest. In the passthrough case the drive shows up without knowledge of the underlying LUN.
Educated guess: If you are looking for raw performance, passthrough will give you the best result.
Reason: When doing IO from the guest using passthrough, you traverse the guest storage stack plus the disk stack in the root. When doing iSCSI, you traverse the storage and networking stacks in the guest plus the networking stack in the root.
A: The way to reduce storage overhead (i.e., added CPU use) is to use passthrough disks. There are also ways to reduce networking overhead by using two new features in R2: Virtual Machine Queues (VMQ) and Chimney Offload. VMQ reduces overhead through cheaper routing of incoming packets, more optimized copy paths, and better interrupt distribution. Chimney Offload is helpful for long-running connections and also reduces copy-path costs. The challenge is that no networking device I’ve seen supports both of these features; typically they support either VMQ or Chimney Offload.
If you have a device that supports one of these features the following article will help you configure it:
Networking Deployment Guide: Deploying High-Speed Networking Features
A: We are trying to make Hyper-V knob-less. However, we are engineers, so here are some tips.
1. Remove the CDROM drive from the guest if you don’t need it
2. Look into the Caps / Weights / Reserves in the CPU config. You can use these to “balance” workloads.
3. You can use the WMI interfaces to force a VM to a particular NUMA node (http://blogs.msdn.com/tvoellm/archive/2008/09/28/Looking-for-that-last-once-of-performance_3F00_-Then-try-affinitizing-your-VM-to-a-NUMA-node-.aspx). Hyper-V does not guarantee node affinity for VPs, but it does for memory. There is a good chance the VPs will stay on the node because the scheduler is NUMA-aware.
A: Yes – here is a list