Hyper-V, Live Migration, and the upgrade to 10 gigabit Ethernet
Comments
Anonymous
January 01, 2003
When looking for lab servers, one thing to keep in mind is that memory expandability is often tied to the number of CPUs. For instance, the T7500 has its additional memory slots on the second CPU riser, and maps that memory to the second CPU. So in order to maximize memory, you have to make sure you get a dual-CPU config. Adding a second CPU later can cost more than the whole server.
Anonymous
January 01, 2003
@Merlus - The disks presented to the cluster in my lab are all on a single "storage server". It is essentially composed of 3 SSDs connected to the Intel motherboard SATA controller, plus a RAID0 array of 4 15k SAS disks. I then created an iSCSI disk on each of the 4 "drives" (3 SSDs, one spinning-disk array). Each cluster node mounts the iSCSI volumes, which are presented as Cluster Shared Volumes. I put my critical VMs on the SSDs, and all the ancillary VMs on the larger RAID0 array. If I could afford some 512GB SSDs, I'd do away with the spinning disks entirely. Hyper-V storage migration makes adding/removing/changing/upgrading disks really easy. If you didn't want to use iSCSI, you could easily create a single-node Scale-Out File Server and do the same thing without the complexity of iSCSI, which I am planning to transition to once I upgrade everything to WS 2012 R2. And yes, gigabit is FINE for the storage connection. With 35 VMs I never come close to saturating it. The latency on the network is near zero. You'd have to start all 18 VMs at the same time to even see the network become a bottleneck.
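For anyone wanting to replicate a layout like this, a minimal sketch using the built-in iSCSI Target Server role on Windows Server 2012 or later looks roughly like the following. All names, paths, IQNs, and addresses here are placeholders, not the actual lab values:

```powershell
# --- On the storage server ---
# Install the iSCSI Target Server role
Install-WindowsFeature FS-iSCSITarget-Server

# Carve a VHDX-backed LUN out of one of the SSD volumes
New-IscsiVirtualDisk -Path "S:\iSCSI\SSD1-LUN1.vhdx" -SizeBytes 200GB

# Create a target and allow the cluster nodes' initiators (placeholder IQNs)
New-IscsiServerTarget -TargetName "LabCSV" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv-node1","IQN:iqn.1991-05.com.microsoft:hv-node2"
Add-IscsiVirtualDiskTargetMapping -TargetName "LabCSV" -Path "S:\iSCSI\SSD1-LUN1.vhdx"

# --- On each Hyper-V node ---
# Connect to the target persistently
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Once the disk is initialized, formatted, and added as a cluster disk,
# promote it to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

With the disks exposed as CSVs, storage migration (Move-VMStorage) can then shuffle VMs between the SSD-backed and SAS-backed volumes without downtime.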
Anonymous
January 01, 2003
Thanks for sharing.
Anonymous
January 01, 2003
Anonymous
January 01, 2003
I think Marnix summed it up pretty well. Sorry... don't have better news: thoughtsonopsmgr.blogspot.com/.../a-farewell-to-old-friend.html
Anonymous
January 01, 2003
Dell Broadcom RK-375
Anonymous
January 01, 2003
Great content, Kevin. I really need to switch to some Xeon lab servers; thanks for the heads-up on the T7500s. eBay search starts in 3, 2, 1...
Anonymous
July 02, 2013
Kevin: Now that MS has ditched the TechNet subscriptions, what options are there for testing out software? --Tracy
Anonymous
September 07, 2013
Thanks for the links, Kevin. I look forward to the day when we get vRSS for management OS vNICs. Then we might see the all-virtual converged network getting better Live Migration results.
Anonymous
September 12, 2013
I too have lab envy. I am looking for the best storage option for a lab. How many disks and RAID volumes do you have in the third workstation? How are you connecting your hosts to the storage? Is gigabit enough for 18 VMs?
Anonymous
October 16, 2013
Hi Kevin, I have a query: if a particular service runs under a specific service account and that service fails, will SCOM be able to restart it?
Anonymous
October 16, 2013
Or will it only restart a service that is running under the Local System or Local Service account?
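For what it's worth: SCOM only restarts a service if a recovery is defined for the monitor. When it runs, the recovery typically executes under the agent's action account on the monitored machine, not under the service's own logon account, so a service running as a specific service account can still be restarted as long as the action account has the necessary rights. A recovery script body can be as small as this sketch ("MyAppService" is a placeholder name, not a real monitor configuration):

```powershell
# Minimal recovery action: restart the failed service.
# The recovery executes under the agent's action account, so the
# logon account of the service itself does not matter here.
Restart-Service -Name "MyAppService" -ErrorAction Stop
```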
Anonymous
March 12, 2014
@Kevin - have you re-created this lab setup with 2012 R2? With vRSS, I'd like to hear how much of the link it can saturate now.
Anonymous
March 18, 2014
@Kevin - can you give some more specific info on the 10Gb Ethernet cards that you bought?
Anonymous
June 28, 2014
Hi there. I'm just wondering how you got the network cards so cheap? Were they second-hand?
Anonymous
August 29, 2014
This is one more reason why I like hardware-based converged networking so much, like Cisco UCS. There you create converged network interfaces, each of which supports RSS (Receive Side Scaling). Of course that's not something for your lab, but good to know.
Anonymous
January 05, 2015
Great walkthrough! Do you know if any progress has been made on the ~3Gb/s speed limitation for virtual network adapters on a 10Gb/s converged virtual switch? Thanks!
Anonymous
April 10, 2017
Hi Kevin, it has been a few years since you did this testing... Is there a solution now? A quick Google search shows that vRSS is available and supported by MS now. Have you tried it, and more importantly, have you seen it working? If not, I think I may want to use the CNA hardware to present virtual NICs instead of using Hyper-V's converged networking.
PS: I am experiencing the same problem in a new Hyper-V cluster I set up using converged networking: it pushes less than 1 Gbps during live migrations. The main production Hyper-V cluster I built a few years back uses discrete 10Gb NICs and is able to push 7-8 Gbps easily while doing live migrations.
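For context, the all-virtual converged setup under discussion is typically built like this sketch (2012 R2-era cmdlets; the switch name, vNIC names, and QoS weights are placeholders). The root of the throughput cap is that without vRSS on a management-OS vNIC, all receive traffic for that vNIC is processed on a single CPU core; vRSS arrived for guest vNICs in 2012 R2 but only reached host vNICs in Windows Server 2016:

```powershell
# Converged switch on the 10GbE pNIC, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "10GbE-1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Management-OS vNICs carved out of the same switch
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

# Give Live Migration a healthy minimum share of the pipe
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
```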
Anonymous
April 10, 2017
Wow... haven't thought about this in a long time. Unfortunately I am no longer in a role where I work with this stuff, and I no longer have a lab with Hyper-V clusters and 10GbE. So I am not current, nor is my equipment. Sorry.
Anonymous
April 10, 2017
Ok, no problem. I will continue your research and post on my own blog - rajdude.com
By the way:
1. Just checked: vRSS, pRSS, and VMQ are enabled by default on all the pNICs and vNICs on my servers.
2. I noticed that the send and receive numbers are different: 725 Mbps sending LMs from HV1 to HV2, but only 350 Mbps receiving LMs at HV1 from HV2.
So something is really off in my Hyper-V converged networking setup.
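A few checks that help narrow down this kind of asymmetry (adapter names below are placeholders); a single pegged core on the receiving host during a live migration is the classic signature of all traffic landing on one VMQ:

```powershell
# RSS state on the physical NIC and on a host vNIC
# (host vNICs appear in the management OS as "vEthernet (<name>)")
Get-NetAdapterRss -Name "10GbE-1" | Format-List Name, Enabled, NumberOfReceiveQueues
Get-NetAdapterRss -Name "vEthernet (LiveMigration)" | Format-List Name, Enabled

# VMQ state on the physical NIC, and which queues are actually allocated
Get-NetAdapterVmq -Name "10GbE-1"
Get-NetAdapterVmqQueue -Name "10GbE-1"

# During a live migration, watch per-core CPU on the receiving host
Get-Counter '\Processor(*)\% Processor Time' -SampleInterval 2 -MaxSamples 5
```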