Networking configurations for Hyper-V over SMB in Windows Server 2012 and Windows Server 2012 R2

One of the questions regarding Hyper-V over SMB that I get the most relates to how the network should be configured. Networking is key to several aspects of the scenario, including performance, availability and scalability.

The main challenge is to provide a fault-tolerant and high-performance network for the two clusters typically involved: the Hyper-V cluster (also referred to as the Compute Cluster) and the Scale-out File Server Cluster (also referred to as the Storage Cluster).

Not too long ago, the typical configuration for virtualization deployments would call for up to 6 distinct networks for these two clusters:

  • Client (traffic between the outside and VMs running in the Compute Cluster)
  • Storage (main communications between the Compute and Storage clusters)
  • Cluster (communication between nodes in both clusters, including heartbeat)
  • Migration (used for moving VMs between nodes in the Compute Cluster)
  • Replication (used by Hyper-V replica to send changes to another site)
  • Management (used for configuring and monitoring the systems, typically also including DC and DNS traffic)

These days, it’s common to consolidate these different types of traffic, with the proper fault tolerance and Quality of Service (QoS) guarantees.

There are certainly many different ways to configure the network for your Hyper-V over SMB deployment, but this blog post will focus on two of them:

  • A basic fault-tolerant solution using just two physical network ports per node
  • A high-end solution using RDMA networking for the highest throughput, highest density, lowest latency and low CPU utilization.

Both configurations presented here work with Windows Server 2012 and Windows Server 2012 R2, the two versions of Windows Server that support the Hyper-V over SMB scenario.

Configuration 1 – Basic fault-tolerant Hyper-V over SMB configuration with two non-RDMA ports

 

The solution below uses two network ports for each node of both the Compute Cluster and the Storage Cluster. NIC teaming is the main technology used for fault tolerance and load balancing.

[Diagram: Configuration 1]
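As a rough sketch of how this converged setup might be built on a Hyper-V host with PowerShell, assuming two physical ports named NIC1 and NIC2 and hypothetical team, switch and VNIC names (adjust the teaming mode and load-balancing algorithm to match your switch configuration):

    # Team the two physical ports (names and teaming settings are assumptions).
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Create a virtual switch on top of the team, with weight-based QoS enabled.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add two host VNICs for SMB storage traffic (SMB Multichannel spreads load
    # across them, since a host VNIC does not support RSS), plus VNICs for
    # management and Live Migration traffic.
    Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

-AllowManagementOS is set to $false here only so that the host VNICs can be created and named explicitly in the following commands.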

Notes:

  • A single dual-port network adapter per host can be used. Network failures are usually related to cables and switches, not the NIC itself. If the NIC does fail, failover clustering on the Hyper-V or Storage side will kick in. Two network adapters, each with one port, are also an option.
  • The 2 VNICs on the Hyper-V host are used to provide additional throughput for the SMB client via SMB Multichannel, since a VNIC does not support RSS (Receive Side Scaling, which helps spread the CPU load of networking activity across multiple cores). Depending on the configuration, increasing this to up to 4 VNICs per Hyper-V host might further increase throughput.
  • You can use additional VNICs that are dedicated to other kinds of traffic like migration, replication, cluster and management. In that case, you can optionally configure SMB Multichannel constraints to limit the SMB client to a specific subset of the VNICs, as shown in the sketch after these notes. More details can be found in item 7 of the following article: The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0
  • If RDMA NICs are used in this configuration, their RDMA capability will not be leveraged, since the physical port capabilities are hidden behind NIC teaming and the virtual switch.
  • Network QoS should be used to tame each individual type of traffic on the Hyper-V host. In this configuration, it’s recommended to implement the network QoS at the virtual switch level (see the sketch after these notes). See https://technet.microsoft.com/en-us/library/jj735302.aspx for details (the above configuration matches the second one described in the linked article).
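To make the notes above more concrete, here is a minimal PowerShell sketch of the per-VNIC QoS weights and the optional SMB Multichannel constraint. The weights, the file server name "SOFS" and the interface aliases are placeholders for illustration, not recommendations:

    # Weight-based QoS on the host VNICs (values are illustrative only).
    Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -MinimumBandwidthWeight 40
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 15
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 5

    # Optionally restrict SMB client traffic to the storage VNICs when talking
    # to the Scale-Out File Server (hypothetical name "SOFS").
    New-SmbMultichannelConstraint -ServerName "SOFS" `
        -InterfaceAlias "vEthernet (SMB1)","vEthernet (SMB2)"

With -MinimumBandwidthMode Weight, these values are relative shares of the teamed bandwidth rather than hard caps, so they only take effect when the traffic types compete for bandwidth.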

Configuration 2 – High-performance fault-tolerant Hyper-V over SMB configuration with two RDMA ports and two non-RDMA ports

 

The solution below requires four network ports for each node of both the Compute Cluster and the Storage Cluster, two of them being RDMA-capable. NIC teaming is the main technology used for fault tolerance and load balancing on the two non-RDMA ports, but SMB Multichannel covers those capabilities for the two RDMA ports.

[Diagram: Configuration 2]
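Before putting load on the RDMA path, it is worth checking that RDMA is active and that the SMB client is actually using it. A quick verification, assuming the RDMA-capable ports are named RDMA1 and RDMA2, might look like this:

    # Confirm RDMA is enabled on the RDMA-capable ports (adapter names are assumptions).
    Get-NetAdapterRdma -Name "RDMA1","RDMA2"

    # Check that the SMB client sees RDMA-capable interfaces and, once traffic is
    # flowing to the file server, that multichannel connections have been established.
    Get-SmbClientNetworkInterface
    Get-SmbMultichannelConnection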

Notes:

  • Two dual-port network adapters per host can be used: one RDMA and one non-RDMA.
  • In this configuration, Storage, Migration and Clustering traffic should leverage the RDMA path. The client, replication and management traffic should use the teamed NIC path.
  • In this configuration, if using Windows Server 2012 R2, Hyper-V should be configured to use SMB for Live Migration. This is not the default setting (see the sketch after these notes).
  • The SMB client will naturally prefer the RDMA paths, so there is no need to specifically configure that preference via SMB Multichannel constraints.
  • There are three different types of RDMA NICs that can be used: iWARP, RoCE and InfiniBand, and step-by-step configuration instructions are available for each of them.
  • Network QoS should be used to tame traffic flowing through the virtual switch on the Hyper-V host. If your NIC and switch support Data Center Bridging (DCB) and Priority Flow Control (PFC), there are additional options available as well. See https://technet.microsoft.com/en-us/library/jj735302.aspx for details (the above configuration matches the fourth one described in the linked article).
  • In most environments, RDMA provides enough bandwidth without the need for any traffic shaping. If using Windows Server 2012 R2, SMB Bandwidth Limits can optionally be used to shape the Storage and Live Migration traffic, as shown in the sketch after these notes. More details can be found in item 4 of the following article: What’s new in SMB PowerShell in Windows Server 2012 R2. SMB Bandwidth Limits can also be used with configuration 1, but they are more commonly applied here.
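Tying the notes above together, here is a hedged PowerShell sketch covering the Live Migration setting, the optional DCB/PFC tagging for RoCE and the optional SMB Bandwidth Limit. The adapter names, the ETS bandwidth percentage and the 2GB/s limit are assumptions for illustration only:

    # Use SMB for Live Migration (Windows Server 2012 R2; not the default).
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

    # Optional, RoCE only: tag SMB Direct traffic (port 445) with a DCB priority,
    # enable Priority Flow Control for it and reserve bandwidth via ETS. The
    # physical switches must be configured to match.
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    Enable-NetQosFlowControl -Priority 3
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetAdapterQos -Name "RDMA1","RDMA2"

    # Optional (Windows Server 2012 R2): cap Live Migration traffic with an SMB
    # Bandwidth Limit (requires the FS-SMBBW feature); 2GB/s is just an example.
    Add-WindowsFeature FS-SMBBW
    Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB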

 

I hope this blog post helps with the network planning for your Private Cloud deployment. Feel free to ask questions via the comments below.

Comments

  • Anonymous
    January 01, 2003
    @Steve Houser: There are reasons to keep a separate set of switches for your RDMA traffic:
      • If you use InfiniBand, you will need a different type of switch just for that.
      • If you use RoCE, you will need a switch with DCB/PFC, which is not necessarily required on the other set.
      • If you have a full rack of compute nodes and storage nodes, you would need a much more expensive switch to provide the required number of ports, so having a separate set will reduce your acquisition costs.
    Having said all that, there are situations where a single set of switches will do for everything.
  • Anonymous
    January 01, 2003
    Your timing is perfect. I have four new servers arriving any day now that I'm going to use for this exact scenario.  Getting the networking right and being able to leverage RDMA has been my main concern. You've had some very useful posts, but this one cuts right to the heart of the matter. Thanks.
  • Anonymous
    October 10, 2013
    Great post - nice to see in one spot the two primary network config options. A question on config two, which is showing two additional switches: is it not possible to connect the RDMA enabled ports into switch 1 and 2?
  • Anonymous
    March 30, 2014
    In this post, I'm providing a reference to the most relevant content related to Windows Server 2012