Virtualization - Some of my hints and tips
Being on vacation this week, I've had a chance to catch up on some of the home network maintenance I've been putting off for a while. One of the items on my list was to implement Hyper-V in my lab at home. Before I get into my practices, I thought I'd spend a little time framing why I use virtualization. Too frequently I see virtualization treated as a silver bullet for all SMB IT problems.
- Consolidated Environment - I frequently move builds, migrate between products and test different scenarios. Previously I had up to 6-10 machines in my server room (yep, sad isn't it, I have a server room in my house), and my wife really didn't appreciate the noise, heat, etc. it generated. Now I can consolidate down to 4.
- Dev and Test - As mentioned above, I do lots of heavy migration and testing work. The snapshot functionality allows me to test a function, then roll back if it fails or if I want to try another scenario.
- I don't require high levels of resiliency - Something everyone should consider: how do you ensure that your environment meets the required up-time levels? Especially with workloads like Active Directory, where snapshots can really upset a multi-DC configuration. Does this mean that Hyper-V reduces up-time? No, but it does mean that as admins we need to think carefully about what workload support practices we have in place.
- Portability - I regularly move hardware. Being virtualized means I can rapidly move between servers without having to rebuild my lab from one environment to the next.
Anyway, getting off my soapbox :) Here are a few tips and tricks I use to keep performance sound in my environment.
Disk Management - Well, what I really mean is spindle management. All the flexibility virtualization gives you also introduces complexity. VHDs exhibit similar properties to normal hard disks: under load, writes can fragment, especially if you combine differing write profiles on one disk. The first trick is to isolate each core VHD onto an independent disk set. I like to RAID 1 my Hyper-V parent partition on 2x36GB drives, then create additional RAID 1 or RAID 1+0 sets for each core OS in my environment. Finally, I usually create a RAID 5 set for less active VHDs, pooled files, etc. Over time this leads to better perf by reducing fragmentation, as well as reducing IO contention at the spindle level (only on the isolated sets that is, but I'm willing to take the hit on my RAID 5 set to enable more machines in my environment).
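To put rough numbers on that layout, here's a quick sketch of the usable capacity each RAID level leaves you with. The 2x36GB parent mirror is from my setup above; the other disk counts and sizes are hypothetical examples, not my exact hardware:

```python
# Rough usable-capacity arithmetic for the RAID sets described above.

def raid_usable_gb(level, disks, size_gb):
    """Usable capacity in GB for a simple RAID set."""
    if level in ("RAID1", "RAID10"):   # mirroring halves the raw capacity
        return disks * size_gb / 2
    if level == "RAID5":               # one disk's worth of parity overhead
        return (disks - 1) * size_gb
    raise ValueError("unknown level: " + level)

layout = [
    ("parent partition", "RAID1",  2, 36),   # 2x36GB mirror, as above
    ("core OS VHDs",     "RAID10", 4, 72),   # hypothetical
    ("pooled/idle VHDs", "RAID5",  4, 146),  # hypothetical
]

for name, level, disks, size in layout:
    usable = raid_usable_gb(level, disks, size)
    print(f"{name:18} {level:6} {disks}x{size}GB -> {usable:.0f}GB usable")
```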
Understand VHD types - I usually use dynamically expanding disks to conserve storage, although you'll get better perf from pre-allocating the entire VHD as a fixed disk. I also use differencing disks when spawning large numbers of sysprepped client machines. Net net, understand the IO impact of each disk type and choose accordingly.
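As a side note, if you're ever unsure what type a VHD file actually is, the footer in its last 512 bytes records it. Here's a minimal sketch based on the published Virtual Hard Disk Image Format Specification (the file path is just a placeholder):

```python
import struct
import sys

# The VHD footer is the last 512 bytes of the file; the disk type is a
# big-endian 32-bit field at offset 60 within the footer.
DISK_TYPES = {2: "Fixed", 3: "Dynamic", 4: "Differencing"}

def vhd_type(path):
    with open(path, "rb") as f:
        f.seek(-512, 2)                    # seek to the footer
        footer = f.read(512)
    if footer[:8] != b"conectix":          # footer cookie per the spec
        raise ValueError("not a valid VHD footer")
    (disk_type,) = struct.unpack(">I", footer[60:64])
    return DISK_TYPES.get(disk_type, f"unknown ({disk_type})")

if __name__ == "__main__":
    print(vhd_type(sys.argv[1]))           # e.g. python vhdtype.py core-dc.vhd
```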
Don't join your parent machine to a DC located in Hyper-V - OK, I'm sure you'd never do this, but I've seen it happen: the virtualized DC goes down and you lose access to the other machines in your environment. Keeping the parent in a workgroup also allows easier access to other domains that might be running, especially if you're running domain isolation within AD.
Don't scrimp on RAM - Forcing a RAM-constrained VM to page is one of the most common perf hits your environment will suffer. I like to give each machine the recommended or optimum amount of RAM so that paging stays light rather than becoming a page-file disk bottleneck. Trust me, spend a little extra money on more RAM and your virtual environment will perform well.
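A back-of-the-envelope check I find useful before adding another VM: sum what you've assigned to each child plus a reserve for the parent, and make sure it fits in physical RAM. The per-VM overhead and parent reserve below are rough planning assumptions of mine, not official figures:

```python
# Back-of-the-envelope RAM budget for a Hyper-V host. The overhead and
# reserve numbers are rough assumptions for planning, not official figures.

PHYSICAL_RAM_MB = 16 * 1024    # hypothetical 16GB host
PARENT_RESERVE_MB = 2 * 1024   # keep a healthy slice for the parent partition
PER_VM_OVERHEAD_MB = 32        # assumed per-VM virtualization overhead

vms = {                        # RAM assigned to each child VM (hypothetical lab)
    "dc01": 1024,
    "exchange": 4096,
    "sql": 4096,
    "client1": 1024,
    "client2": 1024,
}

committed = sum(vms.values()) + len(vms) * PER_VM_OVERHEAD_MB + PARENT_RESERVE_MB
print(f"committed: {committed}MB of {PHYSICAL_RAM_MB}MB")
if committed > PHYSICAL_RAM_MB:
    print("over budget -- expect the parent or the children to page")
```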
Synthetic vs legacy drivers - Not all OSes and environments support the new synthetic drivers. If you're unsure, set up the child VM using the legacy adapters. You can always switch to the better-performing synthetic drivers post-install, and save yourself a lot of pain trying to inject drivers during setup.
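If you want to audit which adapter types your children ended up with, the WS2008-era root\virtualization WMI namespace exposes separate setting-data classes for each. A sketch assuming that namespace and the third-party Python "wmi" package, run on the parent:

```python
# List child VM network adapters by type: synthetic vs. legacy (emulated).
# Assumes the Hyper-V v1 (WS2008) root\virtualization WMI namespace and the
# third-party "wmi" package (pip install wmi); run on the parent partition.
import wmi

c = wmi.WMI(namespace=r"root\virtualization")

for cls, label in [("Msvm_SyntheticEthernetPortSettingData", "synthetic"),
                   ("Msvm_EmulatedEthernetPortSettingData", "legacy")]:
    for nic in getattr(c, cls)():
        # InstanceID ties the adapter back to the owning VM's GUID
        print(f"{label:9} {nic.ElementName} ({nic.InstanceID})")
```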
Check your network setup - Double-check the network setup of every machine you create, especially where multiple DHCP servers are concerned; you don't want to be responsible for a rogue DHCP server. On that note, always name your machines clearly. If you do create problems on the broader network, you want whoever is troubleshooting to be able to track down your machines and tell you what impact your environment is having.
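One way to sanity-check a segment for rogue DHCP servers is to broadcast a DHCPDISCOVER yourself and see who answers; you should only hear from servers you recognize. A rough sketch (needs rights to bind UDP port 68; it's a probe, not a full DHCP client):

```python
# Broadcast a DHCPDISCOVER and print every server that answers. On a healthy
# segment you should only see the DHCP servers you expect.
import os
import socket

xid = os.urandom(4)                        # random transaction ID
mac = b"\x00\x11\x22\x33\x44\x55"          # hypothetical client MAC

# Minimal BOOTP/DHCP discover: op=1, htype=1 (ethernet), hlen=6,
# broadcast flag set so servers reply to 255.255.255.255.
packet = (
    b"\x01\x01\x06\x00" + xid +
    b"\x00\x00" + b"\x80\x00" +            # secs, flags (broadcast bit)
    b"\x00" * 16 +                         # ciaddr/yiaddr/siaddr/giaddr
    mac + b"\x00" * 10 +                   # chaddr (16 bytes)
    b"\x00" * 192 +                        # sname + file
    b"\x63\x82\x53\x63" +                  # DHCP magic cookie
    b"\x35\x01\x01" +                      # option 53: DHCPDISCOVER
    b"\xff"                                # end option
)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("", 68))                           # DHCP client port
s.settimeout(5)
s.sendto(packet, ("255.255.255.255", 67))

try:
    while True:
        data, (ip, _) = s.recvfrom(1024)
        if data[4:8] == xid:               # match our transaction ID
            print(f"DHCP offer from {ip}")
except socket.timeout:
    print("done listening")
```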
Simplicity - Another mistake I see is over-complicating the virtual environment. Creating four NICs and multi-processor machines just because you can doesn't make sense. Take the time to settle on the simplest architecture that meets your needs; it will pay dividends in the future.
Get those additions in - With Hyper-V at RC, and support for some devices still tied to Beta 2, the sooner you bring all of your additions up to RC the better. It's as simple as connecting to Windows Update on WS2k8, or installing the integration services on WS2k3 or Vista.
Well, there you have it: my simple formula for running a small virtual lab.