Hyper-V 3.0 Failover Cluster
This is a baseline document on how to configure a Hyper-V 3.0 failover cluster on Windows Server 2012 / Windows Server 2012 R2.
A few points before we start
- Make sure that the server firmware is fully updated to the latest available version. This is very important, as my first installation ran into issues that pointed to outdated firmware.
- The hardware I used was an HP ProLiant DL580 G7, and I did the firmware updates before I started the installation of Windows Server 2012. One major reason is that the standalone firmware installers available on the HP download site will not run on Windows Server 2012; they pop up the error "The software is not supported for installation on this system. The OS is not supported". If there is an option to update firmware from a bootable CD, it should work irrespective of the OS version. To be on the safer side, do the firmware updates prior to installing the Windows Server 2012 OS. HP DL580 G7 drivers/firmware for Windows Server 2012 are now available on the HP website. Please ensure all drivers and firmware are updated before starting with the Hyper-V configuration.
- A Brocade 425/825 4G/8G FC HBA was used to connect the servers to the SAN storage. Though the Windows Server 2012 installation includes drivers for Brocade, the latest version can be downloaded from the Brocade website and installed as an update.
- I was not able to upgrade the NIC drivers, but the default drivers that come with the Windows Server 2012 OS seem to be fine. Out of the box, 2012 Q1-Q2 drivers are what I see on my server for the Broadcom BCM5708S as well as the NC375i.
Configuration
Once the OS installation is done, here is the sequence I followed.
- On each server, the NICs are configured (a PowerShell sketch of these settings follows this block)
- Configured 1 NIC for Management with a dedicated IP from the management VLAN
- IP Address, Default Gateway, DNS
- Configured 1 NIC for Cluster Shared Volume with a dedicated IP from the CSV VLAN
- IP Address and subnet only
- Configured 1 NIC for Heartbeat with a dedicated private IP
- IP Address and subnet only
- Left the remaining 3 NICs for Hyper-V virtual machine traffic; these are teamed in a later step
- Disabled NetBIOS over TCP/IP on all interfaces except the Management NIC.
- If IPv6 is not used, disable IPv6 on each NIC.
- Unchecked DNS registration - "Register this connection's address in DNS" - for all NICs except the Management NIC.
- Renamed each network card according to its role - I used Management, Data-1, Data-2, Data-3, Heartbeat and CSV. Labeling will help to select the right interface while configuring NIC teaming.
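For reference, here is a minimal PowerShell sketch of the NIC settings above, assuming the adapter names used in this document; the original adapter names and all IP addresses are placeholders, so adjust them to your VLANs:

    # Rename adapters according to their role (the original names are examples)
    Rename-NetAdapter -Name "Ethernet"   -NewName "Management"
    Rename-NetAdapter -Name "Ethernet 2" -NewName "CSV"
    Rename-NetAdapter -Name "Ethernet 3" -NewName "Heartbeat"

    # Management NIC: IP, default gateway and DNS (placeholder addresses)
    New-NetIPAddress -InterfaceAlias "Management" -IPAddress 10.0.10.11 -PrefixLength 24 -DefaultGateway 10.0.10.1
    Set-DnsClientServerAddress -InterfaceAlias "Management" -ServerAddresses 10.0.10.5

    # CSV and Heartbeat NICs: IP address and subnet only
    New-NetIPAddress -InterfaceAlias "CSV"       -IPAddress 192.168.20.11 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "Heartbeat" -IPAddress 192.168.30.11 -PrefixLength 24

    # Disable DNS registration on everything except the Management NIC
    Set-DnsClient -InterfaceAlias "CSV","Heartbeat" -RegisterThisConnectionsAddress $false

    # Disable NetBIOS over TCP/IP (2 = disabled) on all IP-enabled NICs except Management
    $mgmtIndex = (Get-WmiObject Win32_NetworkAdapter -Filter "NetConnectionID = 'Management'").Index
    Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
        Where-Object { $_.Index -ne $mgmtIndex } |
        ForEach-Object { $_.SetTcpipNetbios(2) | Out-Null }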
- I had three NICs available for Hyper-V virtual machines and hence configured a team.
- Windows Server 2012 comes with an inbuilt teaming feature.
- To configure teaming, go to Server Manager -> Local Server -> NIC Teaming
- If teaming is not enabled, click on "Disabled" so that a new window will pop up for configuring teaming
- Team creation is very simple. Click on the "Tasks" drop-down menu and select "New Team"
- Choose the adapters which you want to include in the team
- Click on Additional properties
- Choose the Teaming mode - "Switch Independent" is recommended
- Choose the Load balancing mode - "Hyper-V Port" is recommended if the host OS is Windows Server 2012
- Choose the Load balancing mode - "Dynamic" is recommended if the host OS is Windows Server 2012 R2
- Refer to http://social.technet.microsoft.com/wiki/contents/articles/14131.windows-2012-server-nic-teaming-for-hyperv.aspx if you are trying out "Switch Independent" mode with the Address Hash load balancing mode.
- Click on OK
- A new NIC with the team name will be listed along with the other interfaces
- Configure the Team NIC with an IP address which will be used for Hyper-V virtual machines.
- IP Address and subnet only
- I used LACP as the Teaming mode and Address Hash as the Load balancing mode
- Disabled NetBIOS over TCP/IP on the Team NIC.
- Unchecked DNS registration - "Register this connection's address in DNS" - on the Team NIC (a scripted version of the teaming steps follows).
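The same team can be built with the inbuilt NetLbfo cmdlets. A sketch assuming the Data-1/2/3 adapter names; the team name and IP address are examples:

    # Create the team from the three data NICs. This uses the Switch Independent /
    # Hyper-V Port combination recommended above for Windows Server 2012;
    # on Windows Server 2012 R2, use -LoadBalancingAlgorithm Dynamic instead.
    New-NetLbfoTeam -Name "VM-Team" -TeamMembers "Data-1","Data-2","Data-3" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false

    # The team shows up as a new interface; give it an IP address and subnet only
    New-NetIPAddress -InterfaceAlias "VM-Team" -IPAddress 10.0.40.11 -PrefixLength 24

    # Same hygiene as the physical NICs on the team interface
    Set-DnsClient -InterfaceAlias "VM-Team" -RegisterThisConnectionsAddress $false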
- Configure priority for the network adapters (a scripted alternative follows this list)
- Control Panel\Network and Internet\Network Connections
- Press the ALT key -> select Advanced from the menu
- Click on Advanced Settings
- Arrange the network adapters as below
- Management - Top of the list
- Heartbeat - Second on the list
- CSV - Third on the list
- Team Interface - Last on the list
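As far as I know there is no single cmdlet for the GUI binding order on Windows Server 2012. A related (not identical) way to express the same preference from PowerShell is interface metrics, where a lower metric wins; the metric values below are arbitrary examples:

    # Mirror the GUI ordering above with interface metrics (lower = preferred)
    Set-NetIPInterface -InterfaceAlias "Management" -InterfaceMetric 10
    Set-NetIPInterface -InterfaceAlias "Heartbeat"  -InterfaceMetric 20
    Set-NetIPInterface -InterfaceAlias "CSV"        -InterfaceMetric 30
    Set-NetIPInterface -InterfaceAlias "VM-Team"    -InterfaceMetric 40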
- Allow ICMP on the Windows firewall on all nodes.
- From one server, ping the Management IP, Heartbeat IP, CSV IP and Team IP of all other servers and ensure that communication is fine within the configured networks (a sketch follows).
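Both steps can be scripted; the firewall rule below is the built-in ICMPv4 echo request rule, and the target addresses are placeholders for another node's IPs:

    # Enable the built-in ICMPv4 echo request rule on each node
    Enable-NetFirewallRule -DisplayName "File and Printer Sharing (Echo Request - ICMPv4-In)"

    # From one node, check every address of every other node
    $targets = "10.0.10.12","192.168.30.12","192.168.20.12","10.0.40.12"
    foreach ($ip in $targets) {
        if (Test-Connection -ComputerName $ip -Count 2 -Quiet) {
            Write-Host "$ip reachable"
        } else {
            Write-Warning "$ip NOT reachable"
        }
    }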
- Install MPIO, PowerPath, or any other multipathing utility. I have tested MPIO as well as PowerPath 5.5 (build 289).
- Go to Computer Management -> Disk Management.
- Ensure that the disks from the SAN are listed properly.
- Bring the disk online.
- Initialize the disk.
- Format the disk as NTFS without assigning a drive letter if you are planning to configure it as a CSV disk (a scripted version follows).
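If you go with the native MPIO, the feature installation and disk preparation can also be scripted. A sketch assuming the SAN LUN shows up as disk number 1; check the actual number in Disk Management first:

    # Install the native multipathing feature (skip if using PowerPath)
    Install-WindowsFeature -Name Multipath-IO

    # Bring the SAN disk online, initialize it, and format it NTFS
    # without assigning a drive letter (ready for CSV)
    Set-Disk -Number 1 -IsOffline $false
    Initialize-Disk -Number 1 -PartitionStyle GPT
    New-Partition -DiskNumber 1 -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV01" -Confirm:$false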
- Install the Failover Clustering feature on all nodes
- Make use of the new Server Manager dashboard to do this remotely
- From Server Manager -> Dashboard, select the fourth option - Create a server group
- Name the group and click on DNS
- Type the name of each server to be added to this group and click on Search
- If the search was successful, the server will be listed
- Select the server from the result and click on the > button to add the computer to the group
- Add each server to the list one by one and finally click on OK
- Now you will see this group in Server Manager
- Select the newly created group.
- Manage -> Add Roles and Features -> Role-based or feature-based installation -> Select the server from the pool -> Click on the server name one at a time -> Next.
- From the Features selection page, select Failover Clustering -> Next.
- Adding the Failover Clustering feature does not require a reboot, so just click on Install (a remote-install sketch follows).
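Alternatively, the feature can be pushed to all nodes at once from one machine; the node names below are placeholders:

    # Install Failover Clustering (plus management tools) on every node remotely
    $nodes = "HV-Node1","HV-Node2","HV-Node3"
    foreach ($node in $nodes) {
        Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
    }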
- Once the installation is done on all nodes, open the Failover Cluster Manager MMC.
- Right Click on Failover Cluster Manager and select Create Cluster.
- Select the servers you want to add to this cluster.
- Run the Validation.
- On the first validation test, I noticed errors and warnings. View the report and go through the errors first.
- The error listed from all nodes was on the "Validate Disk Failover" test
- From a TechNet forum, I got a suggestion to reformat the disk, which fixed the issue for me
- Reran the validation, this time selecting only the Storage tests, and confirmed that the issue no longer exists
- Another warning which came up on the first validation was on the "Validate All Drivers Signed" test
- The unsigned driver was the Brocade HBA driver which I had installed, so I was confident in ignoring this warning
- Make sure that you fix all errors and rerun the validation to confirm they are really fixed before proceeding to cluster creation.
- On the next step, give the cluster name. If your account doesn't have rights to create computer objects, you can use a prestaged object with the appropriate security permissions assigned prior to cluster creation (cmdlet equivalents for validation and creation follow).
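Validation and cluster creation have direct cmdlet equivalents in the FailoverClusters module; the node names, cluster name, and IP address below are placeholders:

    # Run the full validation report; add -Include "Storage" to rerun only the storage tests
    Test-Cluster -Node "HV-Node1","HV-Node2","HV-Node3"

    # Create the cluster once the validation is clean
    New-Cluster -Name "HV-Cluster" -Node "HV-Node1","HV-Node2","HV-Node3" -StaticAddress 10.0.10.20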
- Once the cluster is created, click on Networks in Failover Cluster Manager.
- Networks will be grouped based on subnets.
- Rename the network groups if you think that makes them easier to identify.
- For the Management network group - enable "Allow cluster network communication on this network". This will act as a secondary connection for the heartbeat.
- For the Heartbeat network group - enable "Allow cluster network communication on this network".
- For the Heartbeat network group - disable "Allow clients to connect through this network".
- For the Live Migration network configuration, go to Failover Cluster Manager -> Cluster Name -> Networks. Right-click on "Networks" and choose Live Migration Settings. Select the networks which should be used for Live Migration. (Cmdlet equivalents for the network roles follow.)
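These settings map to the Role property of each cluster network (3 = cluster and client, 1 = cluster only, 0 = none). A sketch assuming the group names used above:

    Import-Module FailoverClusters

    # Management: cluster and client traffic (also the secondary heartbeat path)
    (Get-ClusterNetwork -Name "Management").Role = 3

    # Heartbeat and CSV: cluster communication only, no client access
    (Get-ClusterNetwork -Name "Heartbeat").Role = 1
    (Get-ClusterNetwork -Name "CSV").Role = 1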
- Click on Failover Cluster Manager -> Storage -> Disks.
- Confirm that the SAN disk is already listed.
- Right-click on the disk name and select "Add to Cluster Shared Volume" (a cmdlet version follows this block).
- Go to Computer Management -> Disk Management.
- Ensure that the SAN disk is now showing as "Reserved".
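The cmdlet version, assuming the default name Failover Cluster Manager gave the disk (check yours with Get-ClusterResource):

    # Turn the clustered disk into a Cluster Shared Volume;
    # it then appears under C:\ClusterStorage\ on every node
    Add-ClusterSharedVolume -Name "Cluster Disk 1"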
- Now it is time to install the Hyper-V role on each node (a scripted version follows this block).
- Manage -> Add Roles and Features -> Role-based or feature-based installation -> Select the server from the pool -> Click on the server name one at a time -> Next.
- On the Server Role selection page, select Hyper-V and proceed with the installation.
- Select the Team network interface, which will be used for the Hyper-V virtual switch.
- A reboot is required for the Hyper-V role installation, so it is better to select automatic reboot along with the installation.
- Configure priority for the network adapters as described in the adapter priority step above, with vEthernet (Microsoft Network Adapter Multiplexor Driver - Virtual Switch) as the least priority.
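The role installation and the external switch on the team can also be scripted per node; the switch name is an example:

    # Install Hyper-V with the management tools and reboot automatically (run on each node)
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # After the reboot, bind an external virtual switch to the team interface;
    # sharing with the management OS creates the vEthernet adapter mentioned above
    New-VMSwitch -Name "VM-Switch" -NetAdapterName "VM-Team" -AllowManagementOS $true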
- Once the Hyper-V role is installed, on each server set the default location of VHDs and virtual machines to C:\ClusterStorage\Volume (the CSV volume); a cmdlet version follows.
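From PowerShell (the Volume1 folder name is an assumption; use the actual volume folder under C:\ClusterStorage):

    # Point new VHDs and VM configuration files at the CSV (run on each node)
    Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1" `
               -VirtualMachinePath "C:\ClusterStorage\Volume1"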
- Create a virtual machine from any of the nodes.
- After installation, go to Failover Cluster Manager -> Roles (PowerShell equivalents for the next steps follow this block).
- Right Click and Select Configure Role.
- Select Virtual Machine as the role and click Next.
- Select the Virtual Machines which require high availability and proceed with confirmation.
- Test Live Migration and Quick Migration between the nodes.
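Making a VM highly available and testing a live migration can also be done from PowerShell; the VM and node names are placeholders:

    # Register the VM as a clustered role
    Add-ClusterVirtualMachineRole -VMName "TestVM"

    # Live-migrate it to another node to verify the configuration
    Move-ClusterVirtualMachineRole -Name "TestVM" -Node "HV-Node2" -MigrationType Live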
- My 3-node Hyper-V cluster works fine with the above configuration.
I am sure this is only a baseline document. Please amend it as required.
Good luck
Shaba
insidevirtualization.com