[Last updated on: Jan 3, 2011]

As you plan to adopt VS Lab Management 2010 for managing your labs, you will probably have several questions on your mind such as:

  • How many servers do I need?
  • What kinds of servers should I buy?
  • Can I use a SAN for storage?
  • How much storage capacity do I need?
  • Can I set up everything on one big machine?
  • How do I set up an isolated lab?

This article provides guidance on these questions and points you to other resources you can tap into. These guidelines apply when you plan to set up a real lab beyond demo use; for demo purposes, most of the rules can be relaxed :)

 

In addition to Team Foundation Server and the test and build controllers, setting up a lab requires an SCVMM server, Hyper-V hosts, and (optionally) library servers.

 

SCVMM Server

 

1. Machine configuration: We recommend installing SCVMM Server on a machine with the following configuration for a lab with fewer than 50 VMs:

  • 64-bit processor
  • 4 GB memory
  • 300 GB hard disk
  • Windows Server 2008 R2 operating system
  • All the latest Windows updates

These requirements will be higher if the number of VMs is higher.

 

2. Library: An SCVMM server installation also acts as the default library server, so the following recommendations assume that the SCVMM server machine doubles as a library server.

 

3. Storage for library: Make sure there is enough space on the drive you plan to use for the library. By default, the library share created by SCVMM is on the C: drive, so you need either more than 200 GB of free space on C: or a different drive in the machine to host the library share.
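As a quick sanity check before pointing the library share at a drive, a short script can verify the free space. This is a minimal sketch, not part of the original guidance; the drive path is a placeholder.

```python
# Sketch: check that the drive intended for the SCVMM library share has
# enough free space. The 200 GB figure comes from the guidance above; the
# drive letter in the example is hypothetical.
import shutil

REQUIRED_GB = 200

def library_drive_ok(drive, required_gb=REQUIRED_GB):
    """Return True if `drive` has at least `required_gb` GB free."""
    free_gb = shutil.disk_usage(drive).free / 1024**3
    return free_gb >= required_gb

# Example (hypothetical drive letter):
# library_drive_ok("D:\\")
```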

 

4. Disk types for library: For small labs that will not use the library heavily, a single fast disk is sufficient. For larger labs with heavier library usage, RAID 5 configured disks are highly recommended; for even better performance in large labs, use multiple library servers. The library storage can be direct-attached or come from a SAN. When using a SAN, create a LUN dedicated to the library machine. Clustering of library machines is not supported.

 

5. Shared machine: If you plan to install SCVMM alongside other software on the same machine, first ensure that SCVMM server can still get the resources described above after deducting what the other software consumes. For instance, if you want to install SCVMM on the TFS machine, add the above requirements to those of TFS, and then verify that the machine has enough capacity. All of the following additional considerations assume that you have met this basic requirement.

 

For SCVMM and TFS to be installed on the same machine, TFS must run under a regular domain user account rather than the Network Service account. If that is not feasible in your setup, you cannot put TFS and SCVMM on the same machine.

 

For SCVMM to be installed on a Hyper-V host, it is highly recommended that the disk used for storing the Hyper-V virtual machines be different from the disk used for the library. For instance, use C: from one disk for the library and D: from another disk for the virtual machines. In this case the SCVMM server runs in the parent OS of Hyper-V, so when the parent OS is under load, all guest OSes (VMs deployed on Hyper-V) take a performance hit. To reduce this impact, configure the host reserves for that machine by adding the Hyper-V host reserves (described in the next section) to the SCVMM machine requirements mentioned earlier. Host reserves can be configured using the SCVMM administration console.
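The resource-stacking check in point 5 amounts to simple arithmetic, sketched below. The SCVMM figures come from the configuration listed earlier; the TFS figures are hypothetical placeholders, not official requirements.

```python
# Sketch: verify that a shared machine covers the summed needs of all of
# its workloads. Only memory and disk are modeled here; CPU is omitted.
def can_colocate(machine, workloads):
    need_mem = sum(w["memory_gb"] for w in workloads)
    need_disk = sum(w["disk_gb"] for w in workloads)
    return machine["memory_gb"] >= need_mem and machine["disk_gb"] >= need_disk

scvmm = {"memory_gb": 4, "disk_gb": 300}   # from the guidance above
tfs = {"memory_gb": 4, "disk_gb": 100}     # hypothetical TFS footprint
machine = {"memory_gb": 8, "disk_gb": 500}

print(can_colocate(machine, [scvmm, tfs]))  # True
```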

 

6. Running in a VM: Do not run SCVMM in a virtual machine, especially if you also use it as a library server.

 

7. Networking: SCVMM should have line-of-sight visibility to TFS, the hosts, and any other library servers.

 

SCVMM should be connected to hosts through a gigabit network.

 

The SCVMM machine should ideally be on a network where Windows updates can be applied automatically. If that is not feasible, plan to keep track of Windows and SCVMM updates and apply them manually as they become available.
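The line-of-sight requirement can be spot-checked with a small script that attempts TCP connections from the SCVMM machine to the servers it must reach. The machine names and ports below are assumptions; substitute the ones used in your deployment.

```python
# Sketch: quick "line of sight" probe to the servers SCVMM must reach.
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical machine names and ports):
# for name, port in [("tfs01", 8080), ("hyperv01", 5985)]:
#     print(name, reachable(name, port))
```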

 

8. Domain: The SCVMM machine should be joined to a domain that has a two-way trust with the domain of the TFS machine and the hosts.

Hyper-V hosts

1. Machine configuration: The number of Hyper-V hosts and the capacity of each host depend on the number of VMs you need to host in your lab. Use the simple capacity planner tool for guidance on sizing your hosts. If you decide to set up a small lab, we recommend installing the Hyper-V role on machines with the following configuration:

  • Two dual-core 64-bit processors that are Hyper-V compatible
  • 16 GB memory
  • 300 GB hard disk space
  • Windows Server 2008 R2 operating system
  • All the latest Windows updates

 

If you have a large number of VMs and decide to set up a small number of ‘big’ hosts, the following configuration is recommended for each host:

  • Two quad-core 64-bit processors that are Hyper-V compatible
  • 64 GB memory
  • 1 TB hard disk space
  • Windows Server 2008 R2 operating system
  • All the latest Windows updates

 

2. Host reserves: Out of the host capacity requirements listed above, you must set aside the following resources just for the smooth functioning of the hypervisor. For a 16 GB host, set aside:

  a. 20% CPU
  b. 2 GB memory

For a 64 GB host, set aside:

  a. 30% CPU
  b. 4 GB memory

These host reserves must be configured on the host using the SCVMM administration console; look for these settings under the host properties. Only the resources left after deducting the host reserves can be used for virtual machines.
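The reserve arithmetic above can be expressed directly. This sketch covers only the two host sizes mentioned in the text; sizes in between are an interpolation you would have to make yourself.

```python
# Sketch: compute what remains for VMs after deducting the hypervisor's
# host reserve. Reserve figures (20%/2 GB for a 16 GB host, 30%/4 GB for
# a 64 GB host) come from the guidance above.
def usable_capacity(host_mem_gb):
    """Return (usable CPU %, usable memory GB) after host reserves."""
    if host_mem_gb >= 64:
        cpu_reserve_pct, mem_reserve_gb = 30, 4
    else:
        cpu_reserve_pct, mem_reserve_gb = 20, 2
    return 100 - cpu_reserve_pct, host_mem_gb - mem_reserve_gb

print(usable_capacity(16))  # 80% CPU and 14 GB left for VMs
print(usable_capacity(64))  # 70% CPU and 60 GB left for VMs
```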

 

3. Storage for virtual machines: It is highly recommended that the partition used for virtual machine storage be different from the primary partition of the Hyper-V server; for example, use D: for virtual machine storage and C: for the primary partition. Once you decide on the virtual machine storage location, configure it in Hyper-V Manager or through the SCVMM administration console: in Hyper-V Manager, change the Virtual Hard Disks folder and the Virtual Machines folder; in the SCVMM administration console, change the Placement Path under the host properties.

 

4. Disk types for host: A fast disk is necessary, and RAID 5 configured disks are highly recommended. The host storage can be direct-attached or come from a SAN. With regard to SAN support, Visual Studio Lab Management 2010 does not support or leverage clustering, which means you cannot use VMM host clustering or create cluster-aware VMs. However, if you decide to back your hosts' disks with a SAN for space and reliability, you must map a separate LUN to each host. Even if the LUNs are managed by the same controller, because Lab Management 2010 does not leverage any SAN functionality, the underlying BITS copy during a virtual machine deployment still travels all the way from the library to the host over your LAN.

 

5. Shared machine: Do not install additional software such as TFS on your host. If you have sufficiently powerful hosts (exceeding the aggregate needs of the hypervisor and the virtual machines), you can co-locate SCVMM or a library server on the host, provided you account for the resource needs of those servers as well. For instance, if you want to install SCVMM on a Hyper-V host, add the host's requirements, the virtual machine requirements, and the SCVMM requirements, and then verify that the machine has enough capacity. All of the following additional considerations assume that you have met this basic requirement.

 

For SCVMM to be installed on a Hyper-V host, it is highly recommended that the disk used for storing the Hyper-V virtual machines be different from the disk used for the library. In this case the SCVMM server runs in the parent OS of Hyper-V, so when the parent OS is under load, all guest OSes (VMs deployed on Hyper-V) take a performance hit. To reduce this impact, configure the host reserves for that machine by adding the Hyper-V machine's host reserves to the SCVMM machine requirements mentioned earlier. Host reserves can be configured using the SCVMM administration console.

 

For a Hyper-V host to also serve as a library server, the machine must have multiple disks: use separate disks for the host's virtual machines and for the library storage.

 

6. Networking: Hyper-V hosts should have line-of-sight visibility to TFS, SCVMM, and the library servers.

 

Hosts should be connected to SCVMM and library servers through a gigabit network.

 

Hyper-V hosts should ideally be on a network where Windows updates can be applied automatically. If that is not feasible, plan to keep track of Windows and SCVMM updates and apply them manually as they become available.

 

7. Domain: Hyper-V hosts should be joined to a domain that has a two-way trust with the domain of the SCVMM server.

 

Additional tools and resources

 

1. Topologies: You may have complex networking requirements that restrict the networks in which TFS, SCVMM, Hyper-V hosts, and the virtual machines running the application under test can be located. Or you may want to configure network load balancing for your TFS. Use the following four topologies as examples of what you can set up.

  • Topology #1 – Multiple ATs, a load balancer, and a test network with firewall settings controlling test traffic in and out of the corporate network.
  • Topology #2 – Multiple ATs and DTs without load balancers, and a test network with a SAN-based library and host.
  • Topology #3 – TMG, Windows NLB, and test applications with a DB tier outside of the LE.
  • Topology #4 – Multiple ATs and DTs, load balancers, and environments joined to a different domain.

 

2. Simple capacity planner: See attachment.
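Since the attachment is not reproduced here, the following is a minimal stand-in for the kind of estimate such a planner performs: packing VMs onto hosts by memory alone. The per-VM memory figure is an assumption, and CPU and disk are ignored, so treat the result as a rough lower bound.

```python
# Sketch: estimate how many Hyper-V hosts a lab needs from the VM count.
# Only memory is modeled; the reserve figures match the host-reserve
# guidance above, and the 2 GB per-VM footprint is a placeholder.
import math

def hosts_needed(vm_count, mem_per_vm_gb=2, host_mem_gb=16, reserve_gb=2):
    usable_gb = host_mem_gb - reserve_gb        # capacity left after reserves
    vms_per_host = max(1, usable_gb // mem_per_vm_gb)
    return math.ceil(vm_count / vms_per_host)

print(hosts_needed(50))                                # 16 GB hosts
print(hosts_needed(50, host_mem_gb=64, reserve_gb=4))  # 64 GB hosts
```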