Taken from our Virtualisation with Microsoft® Hyper-V eBook (available to view and download below).

We recently published a post about deciding on a virtualisation scenario for your school. Now we are going to address how to choose your hardware. This means working out how many host servers you will need to buy, and to what specification. The factors governing this decision include:

• Number of virtual servers
• Network bandwidth required
• Memory requirements of virtual servers
• CPU requirements of virtual servers
• Storage

All these will have an impact on your virtualisation design and purchasing so let’s look at each one in more detail.

Number of virtual servers

As we saw in the virtualisation scenario, the number of virtual servers you plan on hosting can have a large effect on your hardware decisions. If you only ever plan on hosting a small number of servers then you may well get away with just one virtualisation host, but remember that this gives you no redundancy if your host dies, and no room for growth. If you plan on hosting a large number of virtual servers, this will force certain decisions about the number of hosts and the storage of the virtual hard drives.

Network bandwidth required

When you are designing your host servers, the network resources required by each virtual server will affect the number of network interface cards (NICs) you have built into the host server. Consider that if you host 5 servers on a host and only have 1 NIC, then all data traffic will be going down that one network card. This may have a detrimental effect on your users' experience. Later, we'll discuss setting up the management side of virtualisation, which will take up at least one NIC for management traffic across the network.
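As a rough sanity check, you can add up the expected peak traffic of the virtual servers and compare it against the NIC capacity. All the figures below are illustrative assumptions, not measurements from the text:

```python
# Rough check of NIC contention: expected peak traffic per NIC when
# several virtual servers share the host's network cards.
# All figures are illustrative assumptions.
vm_peak_mbps = [400, 300, 250, 200, 100]  # assumed per-VM peak traffic
data_nics = 1                             # NICs carrying VM traffic
nic_capacity_mbps = 1000                  # a gigabit NIC

peak_per_nic = sum(vm_peak_mbps) / data_nics
print(f"Peak load per NIC: {peak_per_nic:.0f} Mbps of {nic_capacity_mbps} Mbps")
if peak_per_nic > nic_capacity_mbps * 0.8:
    # Leaving ~20% headroom is a common rule of thumb.
    print("Consider adding NICs or teaming them")
```

With these assumed figures the five virtual servers would saturate the single gigabit card, which is exactly the scenario described above.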

Memory requirements

The amount of available memory has a massive effect on how any computer performs. This is no different for servers, and in some ways it is more important. When designing the memory requirements for your host servers you will need to consider both the memory requirements of the host server itself and those of all the virtual servers. Let's look at the setup.


Let’s assume that each of these servers has 10GB of memory and we are going to virtualise all of them except the Active Directory Server. A quick calculation shows us that for the virtual servers we will need 30GB of memory, and if we give the host the minimum of 4GB then the host server will need 34GB of memory.
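The arithmetic above can be sketched as a quick calculation. The server names below are hypothetical placeholders for the three servers being virtualised in the example:

```python
# Quick host-memory estimate for the example above: three virtual
# servers at 10GB each, plus a 4GB minimum for the host OS.
# The server names are hypothetical placeholders.
virtual_server_memory_gb = {
    "file_server": 10,
    "print_server": 10,
    "application_server": 10,
}
host_os_minimum_gb = 4

total_gb = sum(virtual_server_memory_gb.values()) + host_os_minimum_gb
print(f"Host needs at least {total_gb}GB of memory")  # 34GB
```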

That seems simple, and to some extent it is. But as we’ll see later, if you are planning for redundancy, then that minimum is not enough. Instead, you will need to allow enough total memory on your host servers to cope with the failure of 1 or 2 of them, and the consequent failover of the virtual servers installed on them to the remaining hosts. In Hyper-V this is called failover clustering and is the best way to ensure that your users are not affected if you suffer the failure of one or more virtualisation hosts.
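To see why the simple minimum is not enough once redundancy is in play, you can work out how much memory each host needs so that the survivors can absorb a failed host's virtual servers. The cluster size and failure tolerance below are assumed figures for illustration:

```python
# Sketch: per-host memory needed so the cluster survives host
# failures. The cluster size and failure count are assumptions.
total_vm_memory_gb = 30      # memory of all virtual servers combined
host_os_minimum_gb = 4       # minimum reserved for each host OS
hosts = 3                    # number of virtualisation hosts
tolerated_failures = 1       # hosts we must be able to lose

surviving = hosts - tolerated_failures
# After a failure, the surviving hosts must carry every virtual
# server between them, plus their own host OS overhead.
per_host_gb = total_vm_memory_gb / surviving + host_os_minimum_gb
print(f"Each host needs at least {per_host_gb:.0f}GB")  # 19GB
```

In other words, three hosts of 19GB each can lose one host and still run all 30GB of virtual servers, whereas sizing each host only for its own share would leave nowhere for the failed-over servers to go.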

Central Processing Unit (CPU) requirements

Designing the CPU requirements for your host servers works in much the same way as the memory requirements. You need to consider what CPU load each of your virtual servers will generate, plus the load required by the host operating system. You also need to consider what will happen in the event of a host failure and the failover of its virtual servers to the remaining hosts.



Storage

How you connect to your storage solution is also a key factor. Two of the main choices are iSCSI and Fibre Channel. Which you choose can be affected by a number of factors, but the common choice in education is iSCSI, mainly because of its lower cost.

Once you have chosen your method of connectivity, you need to consider how much traffic will be travelling between your hosts and the storage system. You also need to consider redundancy: you could introduce a single point of failure by connecting all your hosts to your SAN or NAS through a single network cable and switch.

A simple scenario is shown in the diagram below: each host has two routes to the storage system, which removes the single point of failure in connectivity.



Throughout your planning and implementation of virtualisation, you need to have an eye on the future. This means knowing whether, and how, you can readily expand the capacity of your system.

This, of course, includes your storage solution, and that’s where what’s called “Dynamic LUN expansion” comes in.

When setting up your storage system, you will purchase two distinctly different items: the physical hard drives and the ‘housing’ for them to go in. The housing is the piece of equipment that manages your hard drives, the iSCSI connections and the partitioning of the hard drives into what are called LUNs, which you can then connect to your hosts. Here’s where, if you haven’t considered how your environment will grow, you could lock yourself into a situation where, if your storage system becomes full, you may need to remove all the current LUNs before you can increase the amount of available storage space.

That’s because some of the low-end storage solutions will have the basics, such as RAID array capability, dual controllers and dual iSCSI connections, but will not have dynamic LUN expansion. This means that once you set the size of a LUN on the storage box it is fixed, and if you ever want to increase it you will need to back up all the data, delete the LUN and then rebuild it at the larger size. Some of the more expensive storage solutions, though, do have dynamic LUN expansion: the ability to increase the size of a LUN as you insert more physical hard drives. As so often, the major factor in this decision is cost. The more features in a storage solution, the higher the price.
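One way to judge whether dynamic LUN expansion is worth paying for is to project how long a fixed-size LUN would last. The figures below are assumed purely for illustration:

```python
# Sketch: projecting when a fixed-size LUN will fill up, to help
# decide whether dynamic LUN expansion matters for you.
# All figures are assumed for illustration.
lun_size_gb = 2000          # LUN size fixed at purchase
used_gb = 1200              # current usage
growth_gb_per_year = 300    # estimated annual growth in data

years_left = (lun_size_gb - used_gb) / growth_gb_per_year
print(f"Roughly {years_left:.1f} years until the LUN is full")  # 2.7 years
```

If the projection falls within the expected lifetime of the hardware, either budget for dynamic LUN expansion up front or plan for the backup-delete-rebuild exercise described above.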


You can view and download our Virtualisation with Microsoft® Hyper-V eBook below.