Hyper-V in my House - 2013


A while ago I talked about how I was using Hyper-V in my house.  These days I have a quite different configuration in place.  I updated my household deployment immediately after Windows Server 2012 was released.

I had a couple of goals with my new architecture:

  1. I wanted to minimize downtime due to hardware servicing.

    My Hyper-V server handles DHCP, DNS, Internet connectivity, hosts the kids' movies, etc…  All this means that if I need to turn it off for any reason – I have to do it after everyone else has gone to bed.  Not fun.

  2. I wanted to minimize the frequency with which I needed to service hardware.

    The leading cause for me needing to work on hardware has been hard drive failure.  So logically, more hard drives means more weekends working on servers.  Fewer hard drives means fewer weekends working on servers.

  3. I need high levels of data protection.

    My server has all the family photos on it – data loss is not an option!

  4. I need lots of storage.

    At this point in time I have about 5TB of data on my family file server.  So realistically I need 7-8 TB of storage for my file server and all other virtual machines.

  5. I want to keep the cost down.

    Hey, no one likes to waste money!

With all of this in mind – here is what I ended up deploying:


 
I set up two Hyper-V servers.  Each server has a single quad-core processor (I do not use a lot of CPU), 16 GB of RAM and three 1-gigabit network adapters.  Each server also has six hard disks.  The first two disks are configured in RAID1 using the onboard Intel RAID.  The next four disks are configured as a 6TB parity space to give me the most capacity possible (note – in practice these four disks are a mix of 2TB and 3TB disks).
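
Building a parity space like this is just a couple of PowerShell cmdlets.  This is a rough sketch rather than my exact commands – the pool and disk names are made up, and it assumes the four data disks are the only ones available for pooling:

    # Pool the four data disks (the RAID1 boot pair is handled by the Intel
    # controller, so it does not show up as poolable).
    $disks = Get-PhysicalDisk -CanPool $true

    New-StoragePool -FriendlyName "DataPool" `
                    -StorageSubSystemFriendlyName "Storage Spaces*" `
                    -PhysicalDisks $disks

    # A parity space gives the most usable capacity for the disk count,
    # at the cost of write performance.
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" `
                    -FriendlyName "DataSpace" `
                    -ResiliencySettingName Parity `
                    -UseMaximumSize

    # Bring the new space online as a formatted volume.
    Get-VirtualDisk -FriendlyName "DataSpace" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"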

I then run half of my virtual machines on each server, and use Hyper-V Replica to replicate them to the other server.
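
Setting up the replication itself is also straightforward from PowerShell.  Here is a simplified sketch (the server and virtual machine names are placeholders – since the hosts are domain joined, Kerberos over HTTP is the easy option):

    # On the server that will receive replicas (run on both hosts, since each
    # one is the replica server for the other).
    Set-VMReplicationServer -ReplicationEnabled $true `
                            -AllowedAuthenticationType Kerberos `
                            -ReplicationAllowedFromAnyServer $true `
                            -DefaultStorageLocation "D:\Replicas"

    # On the primary server: enable replication for a virtual machine and
    # start the initial copy.
    Enable-VMReplication -VMName "FileServer" `
                         -ReplicaServerName "hyperv2.mydomain.local" `
                         -ReplicaServerPort 80 `
                         -AuthenticationType Kerberos

    Start-VMInitialReplication -VMName "FileServer"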

This configuration gives me a high level of data protection (both from a single disk failure and an entire server failure).  It also means that if I have to replace a physical disk / fix a hardware problem with one of the servers – I just move all the virtual machines to the working server, and get to take my time fixing the broken server.
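
One way to move the virtual machines over with Hyper-V Replica is a planned failover.  Roughly, it looks like this (the virtual machine name is just an example):

    # On the host that is about to be serviced: shut the VM down and prepare
    # the planned failover (this sends any remaining changes to the replica).
    Stop-VM -Name "FileServer"
    Start-VMFailover -VMName "FileServer" -Prepare

    # On the other host: fail over to the replica copy, reverse the
    # replication direction, and start the VM.
    Start-VMFailover -VMName "FileServer"
    Set-VMReplication -VMName "FileServer" -Reverse
    Start-VM -Name "FileServer"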

I have been running this configuration for almost a year now – and for the most part it has worked just fine.  I have had three separate occasions where I needed to work on a server, and my family did not notice (other than the general cursing and complaining I was making while working).  That said, there have been some lessons learned for me:

  1. Low storage IOPS can really hurt sometimes.

    When I built this system I knew that the storage throughput of the 6TB data disk would not be great, but I accepted this as a reasonable trade-off in the space / price / performance matrix.  90% of the time it has not been an issue – but there have been a couple of times when I have been surprised by how long operations took.

  2. I need more memory.

    My previous setup was a single server with 8GB of memory.  So two servers with 16GB each should be huge – right?  This was my thinking when I built the system – but I was wrong.  First, I need to make sure that I do not oversubscribe my memory, so that I can run all my virtual machines off of one server when I need to.  Thankfully dynamic memory makes this easy to do, and ensures that when my virtual machines are spread across both servers I get to use all the memory (see the sketch after this list).  Second though, as soon as I had the memory available I started firing up new virtual machines simply because I could – and it was not long until I was at my limit again.
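
To give a sense of what I mean by not oversubscribing – each virtual machine gets a dynamic memory range whose minimums all fit on a single 16GB host, while the maximums let it use spare memory when the load is spread across both servers.  A sketch with made-up numbers:

    # Keep the sum of the minimums low enough that every VM can run on a
    # single 16GB host, but let each VM balloon up when memory is free.
    Set-VMMemory -VMName "FileServer" `
                 -DynamicMemoryEnabled $true `
                 -MinimumBytes 512MB `
                 -StartupBytes 1GB `
                 -MaximumBytes 4GB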

Anyway – now that I have gotten this all written down, I am planning to blog about some of the experiences I have had with this setup over the last year, and the lessons learned in the process.

Cheers,
Ben 

Comments
  • Can you elaborate a bit about the thinking behind the 3xNIC setup?

    Are they all teamed together to boost bandwidth or are they divided between VMs?

    Happy Holidays

  • Hi Ben,

    I notice that your AD is on VM.

    How do you manage your physical machines? With local account?

    Regards

  • Ran -

    I actually do not team them at the moment.  One network is private between the two servers, one is for my household network, and one is my public internet-facing network.

    Azize -

    I have the hosts joined to the domain.  A bit risky I know - but as long as the DCs are on separate servers and are configured to start automatically it works most of the time.
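
    In case it helps, the per-host configuration boils down to something like this (the adapter and virtual machine names here are illustrative):

        # One external virtual switch per physical adapter; the private
        # replication network is left for the hosts themselves.
        New-VMSwitch -Name "Household" -NetAdapterName "Ethernet 2" -AllowManagementOS $true
        New-VMSwitch -Name "Internet" -NetAdapterName "Ethernet 3" -AllowManagementOS $false

        # Make sure a domain controller comes up on its own after a host
        # reboot, so the domain-joined host can log on normally.
        Set-VM -Name "DC1" -AutomaticStartAction Start -AutomaticStartDelay 0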

    Cheers,

    Ben

  • Ben, will you be talking about the hardware that you have everything set up on?  I'm interested in your configuration, as I too have a memory issue, and the price gap between 16 GB and 32 GB is pretty steep for my budget!

  • Hi Ben,

    Very nice article!  Like Art I'm curious about the hardware you are using (motherboard, CPU, memory mainly).  Also, did you consider using SSDs at some point?

  • This is similar to the setup I used to have.  I used ESXi Hypervisor as my virtualization platform, and didn't have the high availability configuration you do, but otherwise it was pretty similar (2x DCs, Fileserver, WSUS, WDS, VPN, Firewall).  I also have two servers at home, but the second is used exclusively for backups.

    I set this all up using software available through my Technet account.  I used this for testing many things before I implemented them for work, including running Windows Updates first, testing new server software, etc.  I spun up new Windows Server instances (thanks to WDS and WSUS) to run SQL Server, Exchange, etc. when I needed them, and just added them to my existing domain.

    Unfortunately, Microsoft decided to discontinue TechNet.  In order to continue running my farm legitimately, I would have needed to purchase at least 3x Windows Server 2012 Standard licenses at over $800 CAD each.  I'd also need SA to get updates every time a new version came out.  There was no way I was going to do that, so my home services have been replaced by two Linux servers running Ubuntu 12.04.3.  I then replaced my laptop with a MacBook Air.  After I rebuild my two desktops with PUL licenses of Windows 8.1 (I already have the licenses, just waiting for the RTM bits which are now apparently delayed for TechNet and Volume Licensing users), I will be 100% TechNet free.

    The lack of ability to spin up Windows VMs and test software and patches makes me less effective at my job.  I can no longer check things out at home first and show up to work as an expert.  I have to do all of that testing and evaluation at work, leaving me less time to do everything else.  But, I guess that's the price we have to pay for progress.

  • Yup, with the mind-numbing decision to end TechNet - which is really, solely about money and MS wanting to wring every last penny out of each license they sell - those of us who actually run the systems have no real way to learn the systems in a reduced-scale real-world environment.  Maybe when we stop clamoring for the latest new version of whatever - because historically it was easy to acquire and use, but now, not so much - they'll re-launch what we commonly know as TechNet under some other name - because, as we all know, MS is never one to admit they got it wrong... unless it involves Mr. B running across the stage spraying pit-sweat from his shirt, and yelling the word "developers" over and over...

  • Hi Ben, I've got a similar setup to what you describe here. As I also like to make backups of my VMs, I've written a PowerShell script that makes a backup of the replicated image. In order to do that correctly it pauses the replication. You can find it here:  www.servercare.nl/.../Post.aspx

  • Hi Ben,

     I've been listening to your videos from TechEd on the way to work.  Very enlightening stuff!

     Question for you: when are we Hyper-V and VMM fanboys going to get our own cool title like the VMware guys have?  I mean 'vExpert' is pretty cool!  

     We should have our own name, Microsoft Certified Virtual Engineer/Expert?  MCVE?  Sounds cool to me!

  • Seems risky joining the hosts to the domain.  There are a lot of "what ifs" – probably unlikely, but possible – that would cause you to be locked out.  In a "for real" implementation, what exactly is the best practice for that?  A separate administrative domain?  Forego domains for the hosts entirely?

    I also call into question the loss of TechNet.  I've had a subscription (on and off but mostly on) for like 8 years now.  In that time it's allowed me to run sandbox domains at home and perfect cool ideas from smart guys (like Keith Combs) that I've used "for real".  Things I never in my right mind would have tried otherwise.  Seems like a good way to crush creativity and experimenting.  For instance, what would be the cost of running a slick setup like you have?

  • Thank you for sharing your setup. I had a similar setup to this with the addition of a VM on my laptop running an additional DC. My company allows our team to have the retired servers so I have 2 Dell PowerEdge 2950s for my environment. I used external USB drives connected directly to the servers for backups.

    I am currently running those same 2 servers connected to a Synology DiskStation DS1813+ via iSCSI. With the servers in a Failover Cluster it is more like my environment at work. The 2950s have 2 onboard NICs plus 3 additional dual-port NICs for a total of eight 1 Gb ports. Two are teamed for the management network, two are dedicated for iSCSI, three are teamed for the VMs and one is for Live Migration (crossover cable).

  • So as I understand it you have 1 NIC in each machine dedicated for replication/management, one for internal LAN and one for an internet/DMZ, is this correct?

  • I am looking at setting up something similar myself, but was wondering if you could post a diagram of the layout with the virtual switches used. What type of OS are you using to do the different functions?

    Thanks
