Hyper-V and slow guest OS installation

  10 Comments

I have seen a number of reviews and comments noting that while Hyper-V virtual machines appear to be quite fast once they are up and running, operating system installation seems to take quite a while. The reason for this is relatively easy to explain.

With Virtual Server and Virtual PC we only had emulated devices to use, and as a result we spent a lot of time optimizing and tweaking the performance of these emulated devices.  When we implemented the emulated devices under Hyper-V we had to remove many of these optimizations because of Hyper-V's entirely different architecture.  We did not, however, spend much time re-optimizing the emulated devices on Hyper-V, because the new synthetic device architecture is where we have focused our performance-tuning attention.

This means that Hyper-V emulated devices are slower than Virtual Server / Virtual PC emulated devices - but Hyper-V synthetic devices are much faster than Virtual Server / Virtual PC emulated devices.

The catch here is that when you install an operating system you are almost always using our emulated devices - and you do not start using synthetic devices until after you have installed the operating system.

So in conclusion:

  1. Yes - operating system installation on Hyper-V is slower than on Virtual Server / Virtual PC.
  2. No - I do not expect this to change much for the first release of Hyper-V.
  3. Yes - once you are up and running and have integration services installed performance of Hyper-V virtual machines is much better than Virtual Server / Virtual PC.


  • What I would really like to see is a GUEST that is comparable in performance with the actual HOST.

    Why does the VM have to use a pre-defined/fixed set of hardware drivers?

    Why can't the VM use the new/actual device drivers of the hardware available?

  • Been testing Windows 2003 and Windows 2008 as guests with the synthetic devices and it works great :-) Fun to experiment with in my (IT Pro) environment, and I'm considering moving it into the test environment for software developers soon.

    Tested running Windows XP as a guest OS and it was horrible! :-( Simply useless, so back to VMware Server for the client OS. Are there any plans to make XP work as a guest?

  • Slow-as-molasses OS installs are a definite barrier to adoption.

    Might want to seriously consider that before calling any decision "final".

  • Xepol, could you contact me about your experience with OS installs in Hyper-V? I'd like to talk to you about it. I'm at kward@1105media.com.

  • @MAJawed: because the guest's use of the hardware has to be shared with other guests and the host itself. The native driver for the device would expect to have full control of the hardware. The only way that Virtual PC/Virtual Server/Hyper-V can share the device is by intercepting the commands to the emulated or synthetic device and redirecting them to the host operating system, either in user- or kernel-mode APIs.

    However, the drivers for the emulated devices are expecting to talk to real hardware. That means they're using instructions and physical memory areas that are banned from use by user-mode programs. Without a hypervisor, the processor raises a hardware exception which Windows turns into a software exception. Virtual PC or Virtual Server can then emulate the requested operation and dismiss the exception, allowing the guest driver to perform the next step. With hardware virtualization and a hypervisor, the processor instead calls the hypervisor directly, a much faster operation than the exception handling.

    Installing the 'additions' drivers allows the communication between guest and host/hypervisor to be improved, but I believe the device is still emulated.

    The new 'synthetic' devices have a much closer match to the Windows API so they effectively turn a guest I/O request directly into a host I/O request. This cuts out many of the steps where a high-level request is turned into lower-level requests by the guest, which then has to send many more requests through the exception/hypervisor channel.

    You can get a 'cleaner' experience by using SCSI devices on the guest, as the SCSI interface is a better match to the OS file system API. I believe you can get an even better experience if the controller for your hard disks implements the SCSI protocol itself (RAID controllers for SATA drives tend to appear as SCSI adapters to Windows). I believe this is the function of the 'storage bus' driver, to inject I/O requests into the OS at a lower level.

    If you want to improve your disk performance, you can avoid the file system overhead by using a raw physical disk. In Virtual Server 2005, this is done by creating a Linked Virtual Hard Disk. This does mean you need a separate physical hard disk per VM. If you do have a RAID controller it might be able to carve out a separate volume to present to the host OS from one or more physical disks.

    However, if you're looking at doing this anyway, you should be aware of the physical characteristics of hard disks and how they behave when handling random and sequential I/Os. Basically the observed speed of a hard disk is governed by the disk head seek time, which is the reason that sequential I/O is far faster than random I/O. If you require sequential I/O speeds but you share the physical disk with something else doing random I/O, your sequential I/O performance is destroyed.
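    To make the emulated-versus-synthetic cost difference concrete, here is a toy Python model. This is NOT real hypervisor code: the register names and per-request counts are invented purely to illustrate the point that an emulated device pays one guest-to-host transition per register poke, while a synthetic device hands over one message per high-level request.

```python
# Toy model of emulated vs. synthetic device cost (not real hypervisor
# code - register names and counts are invented for illustration).
traps = 0  # number of simulated guest-to-host transitions


def trap(operation):
    """Simulate one intercepted privileged operation (exception or hypercall)."""
    global traps
    traps += 1


def emulated_disk_read(sectors):
    # An emulated IDE-style driver pokes several device registers per
    # sector, and every poke must be intercepted by the host.
    for _ in range(sectors):
        for register in ("sector_count", "lba", "command", "data"):
            trap(("out", register))


def synthetic_disk_read(sectors):
    # A synthetic device sends one high-level message describing the
    # whole request, costing a single transition.
    trap(("read_request", sectors))


traps = 0
emulated_disk_read(8)
emulated_traps = traps  # 8 sectors x 4 register pokes = 32 transitions

traps = 0
synthetic_disk_read(8)
synthetic_traps = traps  # 1 transition for the whole request

print(f"emulated: {emulated_traps} transitions, synthetic: {synthetic_traps}")
```

    The ratio is made up, but the shape of the problem is real: the emulated path scales with the number of low-level device operations, the synthetic path with the number of I/O requests.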
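    The sequential-versus-random point above can be demonstrated with a small, self-contained Python sketch. The file name and sizes are arbitrary; on a spinning disk the random pattern is dominated by head seeks, while an SSD or the OS page cache will shrink the gap considerably.

```python
# Demonstration of sequential vs. random block reads from one file.
import os
import random
import time

PATH = "seek_demo.bin"
BLOCK = 4096
BLOCKS = 256  # 1 MiB test file - deliberately small, purely illustrative

with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))


def timed_reads(order):
    """Read the given block indices in order; return elapsed seconds."""
    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for i in order:
            f.seek(i * BLOCK)
            f.read(BLOCK)
        return time.perf_counter() - start


sequential_time = timed_reads(range(BLOCKS))
shuffled = list(range(BLOCKS))
random.shuffle(shuffled)
random_time = timed_reads(shuffled)

print(f"sequential: {sequential_time:.6f}s, random: {random_time:.6f}s")
os.remove(PATH)
```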

  • As to why the OS installation is slow, it's usually the case that the OS has to install using only the drivers available on its boot disc. There isn't much opportunity to load drivers for any devices that weren't known when the OS boot disc was built, and that certainly applies to Hyper-V's synthetic devices.

    The one place that Windows allows drivers to be added is to load new storage bus drivers, by pressing F6 when Windows 2000/XP/2003 is loading or clicking Load Driver at the 'select volume for installation' prompt in Windows Vista or 2008. For Virtual Server 2005, if the guest OS hard drive is attached to the emulated SCSI controller, there is a virtual floppy you can attach to load the Additions SCSI driver by pressing F6. I would have expected this to be the case for Hyper-V too (note that Additions are now called Integration Components) but I haven't yet installed it.

  • If the speed of guest OS installation from an optical media is a concern, another option is to configure your VM [with emulated NIC] to boot off the network and then do an install via WDS (Windows Deployment Services). That is my preferred primary guest OS install mechanism.

  • You really only need to do an OS install once. After that you can simply copy the installed VM and run Sysprep or NewSID.  Not even an issue.

    Anyway, what sysadmin has time to sit and watch an install?  The VM install is always waiting for me to come back, as I'm doing other things.
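    For anyone taking this route, a minimal sketch of the copy-then-generalize workflow (paths and file names below are hypothetical examples, and the Sysprep switches shown apply to Vista/2008-era guests; older guests would use the Sysprep tool from deploy.cab, or NewSID):

```shell
REM Sketch: reuse one installed VM instead of repeating slow installs.
REM Paths and file names are hypothetical examples.
copy D:\VMs\base\base.vhd D:\VMs\web01\web01.vhd

REM Then, inside the copied guest, generalize it so it picks up a new
REM identity on its next boot:
C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown
```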

  • So the upshot is to not bother with Hyper-V for OSes that don't have Integration Components?

    Best to stick with Virtual Server / VMware Server for these guests?

  • From what I've seen, it seems that the biggest bottleneck is the CD/DVD access speed. It's been one of my few serious gripes about Hyper-V so far - if I'm pointing at an ISO it should be quite fast (OS installs from an ISO are very fast in VMware). Even in a running guest OS, reading ISOs is unusually slow, and incurs significant guest CPU time.

