Hyper-V : Network Design, Configuration and Prioritization : Guidance

 

1. Network Design. How many NICs do we need in a production environment for high availability?

  • 1 for management. Microsoft recommends a dedicated network adapter for Hyper-V server management.
  • 2 (teamed) for virtual machines. Virtual network configurations of the external type require a minimum of one network adapter.
  • 2 (MPIO) for iSCSI. Microsoft recommends that IP storage communication have a dedicated network, so one adapter is required and two or more are necessary to support multipathing.
  • 1 for the failover cluster. A Windows failover cluster requires a private network.
  • 1 for live migration. This new Hyper-V R2 feature supports the migration of running virtual machines between Hyper-V servers. Microsoft recommends configuring a dedicated physical network adapter for live migration traffic. This network should be separate from the network for private communication between the cluster nodes, from the network for the virtual machines, and from the network for storage.
  • 1 for CSV. Microsoft recommends a dedicated network to support the communications traffic created by this new Hyper-V R2 feature. In the network adapter properties, Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks must be enabled to support SMB.
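Before wiring anything up, it is worth confirming that the host actually exposes the expected adapters. A minimal sketch using WMI, which also works on Windows Server 2008 R2 (where the newer NetAdapter cmdlets do not exist):

# List the enabled network adapters with their connection names and link speed,
# to confirm the host has the seven NICs the design above calls for.
Get-WmiObject Win32_NetworkAdapter -Filter "NetEnabled = TRUE" |
    Format-Table NetConnectionID, Name, Speed -AutoSize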

But what about production environments where the blades have only four physical NICs?

Option 1. If your vendor supports NPAR technology (Broadcom, QLogic), you can create up to 4 "virtual logical NICs" per physical NIC (with VLAN/QoS). Although this solution is not supported by Microsoft, it is the best option in terms of performance and it is supported by the vendors. It gives you 100% HA, as you can have up to 16 logical NICs.

Option 2. Supported by Microsoft. Allocate two NICs for iSCSI using MPIO, and then:

Host configuration: 2 network adapters with 10 Gbps
  • Virtual machine access: virtual network adapter 1
  • Management: virtual network adapter 1, with bandwidth capped at 1%
  • Cluster and Cluster Shared Volumes: network adapter 2
  • Live migration: network adapter 2, with bandwidth capped at 50%
  • Comments: supported

Note that the QoS configuration is applied per port, and Windows only allows you to specify caps, not reserves. This solution, although supported by Microsoft, does not give you 100% HA.
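For reference, if the hosts run Windows Server 2012 or later, the built-in Hyper-V module can apply caps like those in the table directly on the parent partition's virtual adapters. A sketch only; the adapter names are assumptions, and MaximumBandwidth is expressed in bits per second:

# Cap the management vNIC at 1% of a 10 Gbps link (100 Mbps).
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MaximumBandwidth 100000000

# Cap the live migration vNIC at 50% of a 10 Gbps link (5 Gbps).
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MaximumBandwidth 5000000000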

2. Network Configuration. What needs to be enabled/disabled?

Management Network (Parent Partition): 1 network card
  • Make sure this card is listed first in the Adapters and Bindings connection order.
  • In Failover Cluster Manager, make sure that the NIC is configured to allow cluster network communication on this network (scripted below); it will act as a secondary connection for the heartbeat.
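That setting can also be scripted through the cluster network's Role property. A minimal sketch, assuming the cluster network is named "Management" (names vary per environment):

Import-Module FailoverClusters
# Role 3 = allow cluster network communication on this network
# AND allow clients to connect through it.
( Get-ClusterNetwork "Management" ).Role = 3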
iSCSI Storage: 2 network cards, not teamed
  • Enable MPIO.
  • Disable NetBIOS on these interfaces
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that each NIC is NOT set to register its connection in DNS
  • Remove File and Printer sharing
  • Do not remove Client for Microsoft Networks if you are using NetApp SnapDrive with RPC authentication
  • In Failover Cluster Manager, select "Do not allow cluster network communication on this network" (a scripted equivalent is sketched below)
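A minimal scripted equivalent of the bullets above, assuming the adapters are named "iSCSI1" and "iSCSI2" and the cluster network is named "iSCSI" (all three names are assumptions):

Import-Module FailoverClusters

foreach ($nicName in "iSCSI1", "iSCSI2") {
    $adapter = Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = '$nicName'"
    $config  = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "Index = $($adapter.Index)"
    $config.SetTcpipNetbios(2)                        | Out-Null  # 2 = disable NetBIOS over TCP/IP
    $config.SetDynamicDNSRegistration($false, $false) | Out-Null  # do not register this connection in DNS
}

# Role 0 = do not allow cluster network communication on this network.
( Get-ClusterNetwork "iSCSI" ).Role = 0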
VM Network (Parent Partition): 2 network cards, 1 for dynamic IPs and 1 for reserved IPs
  • Disable NetBIOS on these interfaces
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that each NIC is NOT set to register its connection in DNS
  • Remove File and Printer Sharing and Client for Microsoft Networks
  • In Failover Cluster Manager, select "Do not allow cluster network communication on this network".
Cluster Heartbeat: 1 network card
  • Disable NetBIOS on this interface
  • Do not configure a Gateway
  • Do not configure a DNS server
  • Make sure that this NIC is NOT set to register its connection in DNS
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager make sure that the NIC is configured to allow cluster network communication on this network.
  • In Failover Cluster Manager, clear the "Allow clients to connect through this network" check box. This setting has nothing to do with the host/parent partition; it controls which NICs the cluster resources can be accessed through.
Cluster Shared Volume (CSV): 1 network card
  • Disable NetBIOS on this interface
  • Make sure that this NIC is NOT set to register its connection in DNS
  • Make sure that Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks are enabled to support Server Message Block (SMB), which is required for CSV.
  • In Failover Cluster Manager, clear the "Allow clients to connect through this network" check box. This setting has nothing to do with the host/parent partition; it controls which NICs the cluster resources can be accessed through. This is more relevant for other workloads, e.g. a file cluster; it has no impact on the communication with the host partition or on the VMs themselves.
  • By default the cluster automatically chooses the NIC to be used for CSV communication (see the sketch after this list). We will change this later.
  • This traffic is not routable and has to be on the same subnet for all nodes.
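To see which network the cluster has currently picked, list the networks sorted by metric (the metric logic is explained in section 3):

Import-Module FailoverClusters
# Lowest metric = Cluster & CSV traffic; second lowest = live migration.
Get-ClusterNetwork | Sort-Object Metric | Format-Table Name, Metric, Role -AutoSize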
Live Migration: 1 network card
  • Disable NetBIOS on this interface
  • Make sure that this NIC is NOT set to register its connection in DNS.
  • In Failover Cluster Manager, clear the "Allow clients to connect through this network" check box. This setting has nothing to do with the host/parent partition; it controls which NICs the cluster resources can be accessed through. This is more relevant for other workloads, e.g. a file cluster; it has no impact on the communication with the host partition or on the VMs themselves.
  • By default the cluster automatically chooses the NIC to be used for live migration. You can select multiple networks for live migration and give them a preference order.

 

3. Network Prioritization. How does the cluster decide which network carries which traffic?

By default, all internal cluster networks have a metric value starting at 1000 and incrementing by 100. The first internal network that the cluster sees when it comes online has a metric of 1000, the second has a metric of 1100, and so on.

When you create CSVs, the failover cluster automatically chooses the network that appears to be the best for CSV communication. The network with the lowest metric value carries the Cluster and CSV traffic; the second lowest carries live migration. Additional networks with a metric below 10000 are used as backup networks if the "Cluster & CSV Traffic" or "Live Migration Traffic" networks fail. The lowest network with a value of at least 10000 is used for "Public Traffic". Consider giving the highest possible values to the networks that you do not want any cluster or public traffic to go through, such as the "iSCSI Traffic" network, so that they are never used, or used only when no other networks are available.

To view the networks, their metric values, and if they were automatically or manually configured, run the clustering PowerShell cmdlet:
PS > Get-ClusterNetwork | ft Name, Metric, AutoMetric

To change the value of a network metric, run:
PS > ( Get-ClusterNetwork "Live Migration" ).Metric = 800

If you want the cluster to resume automatically assigning the metric for the network named "Live Migration":
PS > ( Get-ClusterNetwork "Live Migration" ).AutoMetric = $true
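Putting this together, a sketch that pins each network to the role intended by the design in section 1. The network names are assumptions, and only the relative order of the values matters:

Import-Module FailoverClusters
( Get-ClusterNetwork "CSV" ).Metric            = 900     # lowest: carries Cluster & CSV traffic
( Get-ClusterNetwork "Live Migration" ).Metric = 1000    # second lowest: carries live migration
( Get-ClusterNetwork "Management" ).Metric     = 10000   # >= 10000: public traffic
( Get-ClusterNetwork "iSCSI" ).Metric          = 20000   # highest: keep cluster traffic off iSCSI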

How to override Network Prioritization Behavior?

Option 1. Change the network's properties. If you select "Do not allow cluster network communication on this network", then it will not be possible to send any "Cluster & CSV Traffic" or "Live Migration Traffic" through this network, even if the network has the lowest metric value. The cluster will honor this override and find the network with the next lowest value to send this type of traffic (a scripted equivalent follows the steps below):

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Expand Networks, right-click the network that you want to configure, and then click Properties.
  3. Change the radio buttons or check boxes as required.
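These radio buttons map to the cluster network's Role property, so Option 1 can also be scripted. The network name here is an assumption:

# Role 0 = do not allow cluster network communication on this network,
# Role 1 = allow cluster network communication only,
# Role 3 = allow cluster network communication and client connections.
( Get-ClusterNetwork "Heartbeat" ).Role = 1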

Option 2 (exclusively for "Live Migration Traffic"):

To configure a cluster network for live migration:

  1. In the Failover Cluster Manager snap-in, if the cluster that you want to configure is not displayed, in the console tree, right-click Failover Cluster Manager, click Manage a Cluster, and then select or specify the cluster that you want.
  2. Expand Services and applications.
  3. In the console tree (on the left), select the clustered virtual machine for which you want to configure the network for live migration.
  4. Right-click the virtual machine resource displayed in the center pane (not on the left), and then click Properties.
http://virtualisationandmanagement.files.wordpress.com/2011/07/virtualmachine-properties.png
  5. Click the Network for live migration tab, and select one or more cluster networks to use for live migration. Use the buttons on the right to move the cluster networks up or down to ensure that a private cluster network is the most preferred. The default preference order is as follows: networks that have no default gateway should be located first; networks that are used by cluster shared volumes and cluster traffic should be located last. Live migration will be attempted in the order of the networks specified in the list of cluster networks. If the connection to the destination node using the first network is not successful, the next network in the list is used until the complete list is exhausted, or there is a successful connection to the destination node using one of the networks.

Note: You don't need to perform this action on a per-VM basis. When you configure a network for live migration for a specific virtual machine, the setting is global and therefore applies to all virtual machines.
http://virtualisationandmanagement.files.wordpress.com/2011/07/livemigration.jpg

Some other interesting articles:

http://technet.microsoft.com/en-us/library/dd446679(WS.10).aspx

http://www.hyper-v.nu/archives/hvredevoort/2011/03/windows-server-2008-r2-sp1-and-hp-network-teaming-testing-results/

http://blogs.technet.com/b/vishwa/archive/2011/02/01/tuning-scvmm-for-vdi-deployments.aspx

http://blogs.msdn.com/b/clustering/archive/2011/06/17/10176338.aspx
