How to get the best disk performance with Linux on Hyper-V


I was recently reading our documented Best Practices for running Linux on Hyper-V and noticed this section:

Use I/O scheduler NOOP for better disk I/O performance.

The Linux kernel has four different I/O schedulers that reorder requests using different algorithms. NOOP is a first-in, first-out queue that passes the scheduling decision to the hypervisor. It is recommended to use NOOP as the scheduler when running a Linux virtual machine on Hyper-V. To change the scheduler for a specific device, in the boot loader's configuration (/etc/grub.conf, for example), add elevator=noop to the kernel parameters, and then restart.

This is interesting, as people often ask me what they can do to ensure the best performance when running Linux on Hyper-V.

For a bit of background here – Linux utilizes a number of techniques to try to get the best performance out of your storage (you can read all about this if you do a search on "Linux IO elevator").  Unfortunately, all of this logic is completely undone inside of a virtual machine – as we are then responsible for mapping virtual storage to physical storage in a way that is hidden from the guest operating system.  The effect of turning on the NOOP I/O scheduler is that Linux stops trying to be clever about storage activity – and instead relies on the underlying hardware (virtual hardware in this case) to do the right thing.
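The scheduler can be changed on the fly through sysfs, or made permanent through the boot loader setting quoted above. A minimal sketch, assuming the virtual disk shows up as /dev/sda (adjust the device name to match your VM):

```shell
# Show the schedulers available for sda; the active one appears in
# square brackets, e.g. "noop anticipatory deadline [cfq]"
cat /sys/block/sda/queue/scheduler

# Switch the running system to noop (requires root; this change is
# lost at the next reboot)
echo noop > /sys/block/sda/queue/scheduler

# To make the change persistent, append elevator=noop to the kernel
# line in the boot loader configuration, e.g. in /etc/grub.conf:
#   kernel /vmlinuz-2.6.32 ro root=/dev/sda1 elevator=noop
```

Note that elevator=noop sets the default for all block devices at boot, while the sysfs method applies per device.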

In our testing this has always yielded the best results.


  • Thanks for this. Can you be a bit more specific what "best results" mean? I mean, what performance gain should I expect when changing this parameter? 1%? 10%? Thanks.

  • Thanks for pointing this out Ben, does this apply to all versions and distributions of Linux or only particular kernels, i.e. the most recent versions with the integration components?

  • Nejc Škoberne -

    It really depends on the workload.  Some will see ~30% difference, some will see none.

    Paul -

    Anything running kernel 2.6 or later will see this.



  • Does this apply to Linux VMs that have 'direct access' to disks (i.e. have been assigned the physical disk as a device, rather than a VHDX)? In fact, a discussion of what direct access means in the Hyper-V world would be great (i.e. does it benefit from VT-d support, etc.).
