
Private Cloud lab setup guide 3

This is the step-by-step guide for adding a node running either the free Hyper-V Server 2008 R2 SP1 or a paid Windows Server 2008 R2 Server Core installation to the existing Hyper-V cluster.

- On NODE1, shrink the existing drive to create a second partition that will host the Hyper-V Server. To differentiate the current installation from the future Hyper-V Server on the boot menu, run this command (Run as Administrator):

bcdedit /set {current} description "WS08R2 Full OS"

Check the new setting with bcdedit or Computer Properties/Advanced/Startup and Recovery.
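The shrink itself can be scripted with diskpart; a minimal sketch, assuming the OS volume is C: and you want to free 20 GB for the new partition (the size is an assumption, adjust it for your disk):

```shell
rem Run in an elevated CMD window on NODE1.
rem 20480 MB (20 GB) is an assumption -- size the new partition for your lab.
(
  echo select volume C
  echo shrink desired=20480
) > shrink.txt
diskpart /s shrink.txt
```

Leave the freed space unallocated; the Hyper-V Server setup will create its partition there.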

- Install the Hyper-V Server

- Change hostname to NODE3, set IP to 192.168.1.13, join domain
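On Hyper-V Server these settings are normally made through the sconfig menu; if you prefer the command line, the equivalents look roughly like this (the interface name, subnet mask, domain name, and account are assumptions for this lab):

```shell
rem Rename the computer (takes effect after restart)
netdom renamecomputer %COMPUTERNAME% /newname:NODE3 /force

rem Static IP -- "Local Area Connection" and the /24 mask are assumptions
netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.13 255.255.255.0

rem Join the domain -- replace CONTOSO.COM and the account with your own
netdom join NODE3 /domain:CONTOSO.COM /userd:Administrator /passwordd:*
shutdown /r /t 0
```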

- Enable Remote Desktop

- Select 4: Configure Remote Management, then select 2: Enable Windows PowerShell, restart

- Select 4: Configure Remote Management, then select 3: "Allow Server Manager Remote Mgmt"

- Select 4: Configure Remote Management, then select 1: "Allow MMC Remote Mgmt" (firewall exceptions will be enabled, Virtual Disk Service allowed)

- Remote Desktop to NODE3

- Check installed roles/features with oclist; the Hyper-V role is already installed
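oclist prints the full component tree, which is long; to confirm just the Hyper-V role, filter the output:

```shell
oclist | findstr /i "Hyper-V"
```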

- From HN-SRV-01, in Server Manager, Features/Add Features: add Hyper-V Tools and Failover Clustering Tools (under Remote Server Administration Tools), then connect to the NODE3 Hyper-V host.

- From Server Manager (connected to NODE3), go to the Hyper-V node, create a Virtual Network connecting to the physical NIC of the NODE3. Name it ProdLAN.

- From Server Manager (connected to NODE3), go to Services node, set "Microsoft iSCSI" service to Automatic, and start it
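The same change can be made directly from a NODE3 command prompt (msiscsi is the service name of the Microsoft iSCSI Initiator; note the space after "start="):

```shell
sc config msiscsi start= auto
net start msiscsi
```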

- From Remote Desktop (connected to NODE3), run iscsicpl from the Command Prompt and connect to the SAN storage at 192.168.1.1.
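iscsicpl opens the Initiator GUI; the connection can also be scripted with iscsicli. A sketch, assuming you list the targets first and substitute the IQN your SAN actually reports:

```shell
rem Register the SAN portal and list the targets it exposes
iscsicli QAddTargetPortal 192.168.1.1
iscsicli ListTargets

rem Log in to a listed target -- the IQN below is a placeholder
iscsicli QLoginTarget iqn.example-target-name
```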

- From Remote Desktop (connected to NODE3), select 11 to install Failover Clustering on NODE3

- From Server Manager (connected to NODE3), go to the Disk Management node. After about 2 minutes, the error "The RPC Server is unavailable" is displayed. Resolution: on both the managing and managed servers, make sure all three "Remote Volume Management..." firewall rules are enabled (in my case, the managing server HN-SRV-01 was at fault). If they are not, run this in a CMD window: netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes. Then close Server Manager.

- From the SCVMM SSP portal, stop all running VMs then delete them. Double-check using the SCVMM console.

- From Server Manager (connected to NODE3), go to the Disk Management node again. After about 4 minutes, the disk configuration of NODE3 appears. Change the quorum disk's letter to Q: and remove the drive letter of Storage01. Note: both disks are in RAW format.
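The drive-letter changes can also be scripted with diskpart on NODE3. A sketch; the volume numbers below are assumptions, so run "list volume" first and substitute the numbers it reports for the quorum disk and Storage01:

```shell
rem Volume numbers are assumptions -- verify with "list volume" first
(
  echo select volume 1
  echo assign letter=Q
  echo select volume 2
  echo remove
) > disks.txt
diskpart /s disks.txt
```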

- HN-SRV-01: Launch Failover Cluster Manager (from Administrative Tools), connect to the PRIVATE-CLOUD cluster, right-click, Add Node, select NODE3, and choose to run All Tests. You may need to restart NODE3 if it cannot be accessed. The validation will report the configuration as not suitable; we choose to proceed and add the node anyway.
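The same add-node step can be driven from the command line on HN-SRV-01 via the FailoverClusters PowerShell module that ships with 2008 R2; a sketch:

```shell
rem Validate and then add NODE3 to the existing cluster
powershell -Command "Import-Module FailoverClusters; Test-Cluster -Cluster PRIVATE-CLOUD -Node NODE3"
powershell -Command "Import-Module FailoverClusters; Add-ClusterNode -Cluster PRIVATE-CLOUD -Name NODE3"
```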

- From Server Manager (connected to NODE3), add the svcacct account to the local Administrators group of NODE3 so that SCOM agent push installation works. Then go to the Services node, set the "Windows Installer" (MSI) service of NODE3 to Automatic, and start it (this is used for SCVMM agent installation)
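Both changes can be made from a NODE3 command prompt as well (msiserver is the service name of Windows Installer; DOMAIN is a placeholder for your lab domain):

```shell
rem DOMAIN is an assumption -- use your lab domain name
net localgroup Administrators DOMAIN\svcacct /add
sc config msiserver start= auto
net start msiserver
```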

- From SCVMM console, delete any existing PRIVATE-CLOUD host cluster, and use Add Host menu item again to add NODE2 and NODE3

- For troubleshooting purposes, you can disable the firewall on NODE3 using Server Manager (connected to NODE3)
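For reference, the firewall can also be switched off (and back on) from a NODE3 command prompt; only do this temporarily while troubleshooting:

```shell
netsh advfirewall set allprofiles state off
rem ...troubleshoot, then re-enable:
netsh advfirewall set allprofiles state on
```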

- From the SCVMM SSP portal, provision 2 VMs. Test Live Migration and PRO Tips
