Support for greater than 64 virtual machines in x64 VS 2005 R2 SP1


Another change that comes with Virtual Server 2005 R2 Service Pack 1 is a higher limit on the number of virtual machines that you can run on a single computer.  Previously, Virtual Server was limited to running only 64 virtual machines at any given time; now, if you are running on a 64 bit host operating system, we allow you to run up to 512 virtual machines at the same time.

Obviously you need some fairly hefty hardware in order to do this, and the 64 virtual machine limit still applies to 32 bit host operating systems.
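
If you want a quick way to see where a host stands against these limits, the sketch below queries the Virtual Server COM API from Python.  It is a minimal sketch, assuming pywin32 is installed and the API's "VirtualServer.Application" ProgID is registered on the host; the architecture check is simplistic, and the 512/64 figures are just the limits described above.

    # Count registered VMs and compare against the per-architecture limit.
    # Note: the limit applies to concurrently *running* VMs, and
    # platform.machine() may report "x86" under a 32-bit Python on WOW64.
    import platform
    import win32com.client

    VM_LIMIT = 512 if platform.machine().endswith("64") else 64

    vs = win32com.client.Dispatch("VirtualServer.Application")
    vm_count = vs.VirtualMachines.Count

    print(f"{vm_count} registered VMs; host limit is {VM_LIMIT} running at once.")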

Cheers,
Ben

  • Ben,

    When you mention "hefty hardware", do you have an example of what would be needed to run, say, 40+ VMs on VS 2005 R2?

    Is this overkill or insufficient?

    - Quad Core Intel® Xeon® Processor E5345 (2.33GHz, 2 x 4MB L2, 1333MHz FSB)

    - Windows® XP Professional, x64 Edition SP2

    - 16GB, DDR2 SDRAM FBD Memory, 667MHz, ECC (8 DIMMS)

    - All SATA drives, Non-RAID, 3 drive configuration

    - 80GB SATA, 10K RPM Hard Drive with 16MB DataBurst Cache™ (Windows Partition)

    - 500GB SATA II, 7200 RPM Hard Drive with NCQ, 16MB DataBurst Cache™ (will contain the *.VHDs)

    - 500GB SATA II, 7200 RPM Hard Drive with NCQ, 16MB DataBurst Cache™ (backup drive)

    I tried looking online for benchmarks, but I'm having trouble finding anything that specifically shows the relationship between hardware specifications and the number of virtual machines.

    Thanks for any help! (Keep up the great work, I love opening my RSS reader and seeing updates from your site.)

  • The guest OSes used will limit how many VMs you can usefully run at full speed.  For example, you will be able to run far more MS-DOS 6.22 VMs at full speed than Windows Vista Ultimate VMs.

  • I'm planning on running a combination of Win98, 2k (inc server), XP (32bit & 64bit) and Vista (32bit & 64bit).

    I would of course assign the appropriate amount of "acceptable" RAM based on the OS.

  • Honestly, running more than a couple of VMs on IDE drives (SATA included) will not be efficient; running 40 on them would be suicide.  It's not the throughput that is the problem, it is the number of I/O operations IDE can handle vs. SCSI.  If you change that system to use SAS instead of SATA, then your limitation would be the amount of RAM: 16GB for 40 systems only allows about 400MB per guest (see the rough sketch below)...
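
    As a rough illustration of that arithmetic (a Python sketch; the 1GB host reserve is an assumption):

        # Back-of-the-envelope RAM math for the configuration above.
        host_ram_mb = 16 * 1024     # 16GB total
        host_reserve_mb = 1024      # assumed allowance for the host OS + Virtual Server
        guests = 40

        per_guest_mb = (host_ram_mb - host_reserve_mb) // guests
        print(f"~{per_guest_mb} MB per guest")  # ~384MB each, before any other overhead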

  • Oops, I forgot to add: Virtual Server 2005 does not support the 64-bit guests you listed in your last post either. :(

  • You can generally assume 3-6 VMs per CPU core.  In your case, with 40 VMs, you'd definitely want more than one quad core CPU.

    Also, as Mike mentioned, there's no way you'll get 40 VMs running on one hard drive.  Ideally you want one VM per spindle; with more than 3 or 4 per spindle you'll have so much drive contention that the VMs will be unusable.  To put these rules of thumb together, see the sketch below.
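
    Here is a hypothetical estimator (the per-core, per-spindle, and per-guest RAM defaults are assumptions drawn from this thread, not measured figures):

        # Return the tightest of the CPU, disk, and RAM ceilings.
        def estimate_max_vms(cores, spindles, host_ram_mb,
                             per_guest_mb=512, vms_per_core=4,
                             vms_per_spindle=3, host_reserve_mb=1024):
            cpu_limit = cores * vms_per_core
            disk_limit = spindles * vms_per_spindle
            ram_limit = (host_ram_mb - host_reserve_mb) // per_guest_mb
            return min(cpu_limit, disk_limit, ram_limit)

        # The quad core, 16GB box above, with 2 spindles left for VHDs:
        print(estimate_max_vms(cores=4, spindles=2, host_ram_mb=16 * 1024))  # 6 (disk-bound)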

  • I've come up with the following configuration to host our 48 virtual machines. If we bought two of the following, do you think it would suffice? Or will I be stuck with a very expensive, useless virtual server?

    - Quad Core Intel® Xeon® Processor E5320 (1.86GHz, 2 x 4MB L2, 1066MHz FSB)

    - Windows® XP Professional, x64 Edition SP2

    - 16GB, DDR2 SDRAM FBD Memory, 667MHz, ECC in Riser (8 DIMMS)

    - 128MB PCIe x16 nVidia Quadro NVS 285, Dual DVI or Dual VGA Capable

    - All SAS drives, Non-RAID, 3 drives total

    - 1st Hard Drive (Boot Drive / Windows): 73GB SAS Hard Drive, 1 inch (15,000 rpm)

    - 2nd Hard Drive: 146GB SAS Hard Drive, 1 inch (15,000 rpm) - 12 VMs hosted

    - 3rd Hard Drive: 146GB SAS Hard Drive, 1 inch (15,000 rpm) - 12 VMs hosted

    The whole point of this is to consolidate our QA environment, so the virtual machines will not all be running at 100% at the same time. They do, however, need to be up and running and able to perform tasks immediately upon request.

    Thanks again, you guys are amazing.

  • I have several HP ProLiant DL385/585 systems running Virtual Server on Windows 2003 x64, and it would be a stretch to run more than 40 instances of Windows 2000/XP/2003 on a 585G1 with 48GB of RAM and a 14-disk RAID 10 array. My 385G1 systems are all dual-socket/dual-core with 16GB of RAM, 2 x 36GB 15K SCSI disks for the OS, and 4 x 146GB 10K drives (RAID 10) for the VHDs.

    If you really want to run 48 VMs, I'd recommend you get several dual-socket, dual- or quad-core systems (HP DL380G5 or equivalent from Dell etc.), load each one up with 16GB of RAM, and attach a small SAN (NetApp StoreVault, HP MSA 1500 etc.) for back-end storage.

  • When guest machines communicate with each other over a Microsoft Loopback virtual network adapter, are they constrained by limits of the physical host?  If the physical host's OS imposes a limit of 10 network connections, will that prevent connections from its own guests?

  • With Virtual PC 2007 on Vista x86, the "Virtual Machine Network Services" component causes very high CPU usage. Any suggestions?

  • Rob -

    Your second configuration will be tight - but it should work.  That is of course assuming load on all VMs at the same time - which you indicate is not the case.  If the load is low enough you will have no problems at all.

    Dan -

    DOS is actually not a good guest for this example.  While DOS does use less memory - it has no support for idling the processor - so it uses a lot more CPU than Windows does.

    Norman -

    No - virtual machines are not limited by the host's network adapter or its connection limits when communicating with each other.

    Peter -

    Look for an updated network driver / try a different network adapter.

    Cheers,

    Ben

  • When are we going to see x64 guest virtual machines available in either Virtual PC or Virtual Server?

  • May I have a clearer picture on the following:

    1. Given that Virtual Server 2005 R2 SP1 only presents a single processor to each VM (assume a quad core with no HT), does a VM utilise 1/4 of the processor or the full processor? Would two sockets of dual core processors perform better in the same environment? Any recommendations on which processor to buy?

    2. Is there a general calculation for Virtual Server 2005 R2 SP1 on the preferred CPU vs. RAM ratio? E.g. what kind of server spec should I expect for 8 "Windows 2003 R2 Standard Edition" VMs with 3 users logged in per VM?

    3. As I am new to Virtual Server 2005 R2 SP1, I would like to invest in a server at minimum cost that provides the performance above (point 2). What hardware specs (CPU, RAM, disk type, etc.) would you recommend for maximum utilisation?

    Help most appreciated.

  • We are looking at running about 9 VMs on the machine specced below:

    2 * Quad Core Intel® Xeon E5335 (2 x 4MB Cache, 2.0GHz, 1333MHz FSB)

    10GB of RAM

    2 * 250GB SATA 7,200 RPM Hard Drives - For the OS (Windows 2003 64-bit)

    4 * 250GB SATA 7,200 RPM Hard Drives - For the VMs (Single drives - no RAID)

    We are planning to run 2000 Pro, 2000 Server, XP, 2003 Server, and Vista as the VMs in a testing environment - on Microsoft Virtual Server 2005 64-bit.

    Do you think this would run the VMs well?

    Do you think we should RAID the VM drives for better performance?

    Do you think we should pay a bit extra and get SAS instead of SATA?

    Do you think we should get faster drives? e.g. 10,000rpm or 15,000rpm

    Do you think we could run more VMs with a bit more memory?

    Any help would be greatly appreciated.

    Thank you.

  • Sizing virtual machines is difficult because some applications use far more memory than others. If you run SQL on your guest machines and want fast response for a large volume of transactions, you may need 3712MB (the maximum). A box used for application development with low transaction volume may get by with 512MB. The average in production is 1024MB, but that is most likely generous. More separate CPUs give you more slots for memory, so you can use cheaper sticks; that can save BIG $ when you're talking about going to 64GB. We use quad-socket, dual-core 585s with 32GB and run 35+ XP boxes or 10-20 W2K3 servers, and we are only starting to hit hardware limits. We use one NIC port per server role (web, SQL, application, infrastructure...). The biggest issue that limits the number of servers is what a nightmare it is to schedule outages when you need to, for instance, update the host. (For how quickly those per-role memory figures add up, see the tally at the end of this comment.)

    The most common limit in a non-SAN environment is hard drive access time. It does you no good to have a 500GB hard drive that more than 2 guests are writing to. Drive contention is the issue; faster is better, but more spindles is the answer. Buy a SCSI-attached disk array, fill it with 18GB or 36GB drives (because they are cheap), and RAID it so that all the guest machines can use all the spindles. Without a SAN or an external drive box, a 385 will only run 4 or 5 at a time before it starts to bog down. If you don't use RAID you don't really get the speed advantage you could out of multiple drives.
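
    To see how quickly those per-role figures add up, here is a hypothetical tally (the workload mix is invented; the per-role allocations are the ones above):

        # Total guest RAM for a mixed workload, using the per-role figures above.
        ROLE_RAM_MB = {
            "sql": 3712,        # the maximum a Virtual Server guest can be given
            "app_dev": 512,
            "production": 1024,
        }

        planned = {"sql": 4, "app_dev": 10, "production": 20}  # invented mix
        total_mb = sum(ROLE_RAM_MB[role] * n for role, n in planned.items())
        print(f"Guests alone need ~{total_mb / 1024:.1f} GB")  # ~39.5GB before host overhead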
