Private Cloud lab setup guide

* 3 physical machines:

  • 1 DC + System Center (SCVMM, SCVMM SSP 2.0, SCOM) + Storage Server (using Microsoft iSCSI Software Target 3.3), named DC-SC
  • 2 Hyper-V members, named NODE1 & NODE2

Note: this guide is meant to illustrate the concepts only, so it may not follow best practices and official guidance.

Part 1. DC-SC domain promotion, VMM and SSP installation steps

1. Windows Server 2008 R2 with SP1
- Activate
- Rename to DC-SC
- Set the IP address (192.168.1.1) and time zone (a scripted alternative is sketched below)
- Promote to DC
- Install SQL Server 2008, then update to SP1
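
For reference, the static IP can also be set from a script instead of the GUI. A minimal VBScript sketch using the standard Win32_NetworkAdapterConfiguration WMI class (the single-NIC layout and the 192.168.1.1/255.255.255.0 addressing are assumptions taken from this lab; adjust before use):

' set-ip.vbs - assign the lab's static IP to the first IP-enabled adapter (sketch)
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set nics = wmi.ExecQuery("SELECT * FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = TRUE")
For Each nic In nics
 nic.EnableStatic Array("192.168.1.1"), Array("255.255.255.0")
 nic.SetDNSServerSearchOrder Array("192.168.1.1")  ' DC-SC will host DNS once promoted
 Exit For  ' single-NIC lab: configure only the first adapter
Next
WScript.Echo "Static IP applied."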

2. Install SCVMM 2008 R2 server and console; choose to use a supported version of SQL Server, and choose to create a new database (ports 8100, 80, 443)
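
A quick way to confirm the VMM service is up after setup completes is to query Win32_Service; the service short name vmmservice below is an assumption, so verify it against services.msc on your install:

' check-vmm.vbs - report the state of the VMM service (sketch; service name assumed)
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set svcs = wmi.ExecQuery("SELECT Name, State FROM Win32_Service WHERE Name = 'vmmservice'")
If svcs.Count = 0 Then
 WScript.Echo "VMM service not found - check the name in services.msc."
Else
 For Each svc In svcs
  WScript.Echo svc.Name & " is " & svc.State
 Next
End If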

3. Install the SSP 2.0 portal on DC-SC

3.a. Prerequisites installation:

3.a.1. MSMQ Server installation on the DC
In AD Users & Computers, enable View/Advanced Features, select the domain controller's computer object, open Properties, Security, Advanced, Add, type NETWORK SERVICE (Check Names),
and tick the Allow box for "Create MSMQ Configuration objects". Then, in Server Manager, Features, install MSMQ Server and MSMQ Directory Service Integration.
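
After the feature install, a short check script (a sketch using Win32_Service; the service short name MSMQ is assumed) confirms the Message Queuing service is present and running:

' check-msmq.vbs - verify the Message Queuing service after the feature install (sketch)
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set svcs = wmi.ExecQuery("SELECT Name, State, StartMode FROM Win32_Service WHERE Name = 'MSMQ'")
If svcs.Count = 0 Then
 WScript.Echo "MSMQ service not found - the feature install may not have completed."
Else
 For Each svc In svcs
  WScript.Echo "MSMQ is " & svc.State & " (start mode: " & svc.StartMode & ")"
 Next
End If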

3.a.2. Install the IIS 7 role; check ASP.NET, Windows Authentication, and IIS 6 Metabase Compatibility

3.b. SSP installation:
- Database server: DC-SC
- account for the server component: svcacct (must be a member of the local Administrators group, or an "incorrect username/password" error message will appear)
- list of data center admins: mycompany\administrator
- application pool's identity: svcacct

3.c. Open the SSP portal at https://DC-SC and add this site to the Trusted Sites zone

3.d. SSP initial configuration:
- Settings/DataCenter mgmt, Configure Data Center resources, VMMServer: DC-SC.mycompany.com.vn; click Add Network, enter ProdLAN in both “Network Name” and “Hyper-V Network Name” boxes, click Submit; AD domain: mycompany.com.vn; Env: My Demo Environment
- Settings/VM Templates, Import templates: you will not see any VMM server to search. Remedy: in the VMM 2008 R2 console, Administration tab, User Roles, Administrator, Properties, Members: add svcacct to that role

4. Virtual storage on the DC-SC
- install MS iSCSI Software Target 3.3.16554
- right click iSCSI Targets, Create iSCSI Target, name PRIVATE-CLOUD. In iSCSI Initiators Identifiers screen, click Advanced, Add, choose IP Address, enter 192.168.1.11 then 192.168.1.12 and say Yes when asked to allow multiple initiators.
- right click Devices, Create Virtual Disk, File: c:\VHD\quorum.vhd, size 1000 MB (1G), desc: Quorum, Access: PRIVATE-CLOUD.
- repeat for storage01.vhd and storage02.vhd, size 45000 MB (45G) each
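
All of the above is done in the Microsoft iSCSI Software Target console; as a basic sanity check, the resulting .vhd files can be listed from a script. A sketch with FileSystemObject (the C:\VHD path and file names come from the steps above):

' check-vhd.vbs - confirm the target VHD files exist and report their sizes (sketch)
Set fso = CreateObject("Scripting.FileSystemObject")
names = Array("quorum.vhd", "storage01.vhd", "storage02.vhd")
For Each n In names
 path = "C:\VHD\" & n
 If fso.FileExists(path) Then
  WScript.Echo n & ": " & FormatNumber(fso.GetFile(path).Size / 1048576, 0) & " MB"
 Else
  WScript.Echo n & ": missing"
 End If
Next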

Part 2. Node1 & Node2 installation

1. Windows Server 2008 R2 with SP1
- Activate
- Rename to NODE1, NODE2
- Rename the network card to NIC, set the IP (192.168.1.11 & .12) and time zone
- Install the Hyper-V role
- Create a Virtual Network named “ProdLAN”, connected to External (a physical NIC), and remember to check “Allow management OS to share this NIC” (on real servers with multiple NICs, this box does not need to be checked)
- In “Network Connections”, switch to Details view and rename the newly created connection to ProdLAN. Check the NIC properties (only Microsoft Virtual Network Switch Protocol is checked; IPv4 is not). Check the ProdLAN properties (IPv4 is now 192.168.1.11 & .12)
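
To confirm the ProdLAN virtual network exists on each node without opening Hyper-V Manager, the Hyper-V WMI provider can be queried. A sketch assuming the WS08R2 (v1) root\virtualization namespace and its Msvm_VirtualSwitch class:

' list-vswitch.vbs - list the Hyper-V virtual networks on this node (sketch)
Set wmi = GetObject("winmgmts:\\.\root\virtualization")
Set switches = wmi.ExecQuery("SELECT ElementName FROM Msvm_VirtualSwitch")
For Each sw In switches
 WScript.Echo "Virtual network: " & sw.ElementName  ' expect ProdLAN
Next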

2. Connect to the shared storage
- In NODE1, Control Panel/iSCSI initiator, choose to start the service automatically (a scripted equivalent follows this step), Target: 192.168.1.1, click Quick Connect, status should be Connected. Click the “Volumes and Devices” tab, click “Auto Configure”; there should be 3 volumes listed.
- In NODE1, Server Manager, Storage, Disk Mgmt: bring online and initialize the 3 new disks. Create and format a volume named Quorum for the quorum disk and assign the Q: drive letter. Create and format Storage01 and Storage02 but choose the “Do not assign a drive letter…” option (newly supported in WS08R2)
- In NODE2, configure the iSCSI initiator as above, bring the disks Online, and change the quorum disk's drive letter to Q:
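
The iSCSI Initiator service auto-start setting mentioned above can also be applied from a script on each node; a sketch using Win32_Service (the service short name MSiSCSI is assumed to match this OS version):

' start-iscsi.vbs - set the iSCSI Initiator service to auto-start and start it (sketch)
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set svcs = wmi.ExecQuery("SELECT * FROM Win32_Service WHERE Name = 'MSiSCSI'")
For Each svc In svcs
 svc.ChangeStartMode "Automatic"
 If svc.State <> "Running" Then svc.StartService
 WScript.Echo "MSiSCSI set to Automatic; start requested."
Next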

3. Cluster installation
- NODE1: add Failover Clustering feature
- NODE2: add Failover Clustering feature

- NODE1: in Failover Cluster Manager, Validate a Configuration, Browse, select NODE1;NODE2, then choose Run All Tests (takes about 5 minutes), click View Report. There is a Warning sign in Network (IPConfig warning: no default gateway info; Network Comm: nodes are reached by only one pair of interfaces because only a single network card is used)
- NODE1: Create a Cluster, Name: PRIVATE-CLOUD, IP: 192.168.1.51, takes 1 min, View Report, should be no warning/error. Quorum type should be: Node and Disk Majority (Cluster Disk 1). (The Quorum device is auto selected as Cluster Disk 1 )
- NODE1: Enable Cluster Shared Volumes, the c:\ClusterStorage will be auto created on both nodes. Click CSV node, Add storage, add Storage01  & 02.  The Volume1 and Volume2 subfolders will be auto created in c:\ClusterStorage
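
Once the cluster is up, node and resource state can also be read from the root\MSCluster WMI namespace on either node. A sketch (the numeric state codes are interpreted per the cluster WMI provider; cross-check against Failover Cluster Manager):

' check-cluster.vbs - list cluster nodes and resources with their state codes (sketch)
Set wmi = GetObject("winmgmts:\\.\root\MSCluster")
For Each nd In wmi.ExecQuery("SELECT Name, State FROM MSCluster_Node")
 WScript.Echo "Node " & nd.Name & " state: " & nd.State          ' 0 = Up
Next
For Each res In wmi.ExecQuery("SELECT Name, State FROM MSCluster_Resource")
 WScript.Echo "Resource " & res.Name & " state: " & res.State    ' 2 = Online
Next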

Part 3. Live Migration testing - Using SCVMM to manage and deploy VMs

1. Create a VM template in the SCVMM library
- DC-SC, in SCVMM console: Add Host

- NODE1: create or import a reference VM (the VM files should be placed in C:\ClusterStorage\Volume1), 512 MB of memory, set processor compatibility, networking, etc. You can test Live Migration at this point if needed.
IMPORTANT: the reference VM (WS08R2) should use a fixed virtual disk of 15 GB. If the default dynamically expanding virtual disk (default size is 127 GB) is used, the portal will not be able to create the VM due to insufficient storage (a quick size check is sketched at the end of this step).

- DC-SC: in SCVMM console: Virtual Machines tab, right click ref VM, choose “New template” command (the source VM will be generalized (sysprep’ed) and deleted), Browse to select “\\dc-sc.mycompany.com.vn\MSSCVMMLibrary” as the Path
- DC-SC: in SSP portal, Settings, Configure VM templates, Import templates, select DC-SC as Library server, MSSCVMMLibrary, then click Search, select the listed VM template, “Add Selected”, Next, and click “Submit Request”
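
Because of the fixed-disk note above, it is worth checking the reference VM's VHD before templating it. A rough FileSystemObject sketch (the path and file name are hypothetical examples; a fixed 15 GB disk shows up as a roughly 15 GB file, while a freshly created dynamic disk is only a small file):

' check-refvhd.vbs - rough size check of the reference VM's VHD (sketch; path is an example)
vhdPath = "C:\ClusterStorage\Volume1\RefVM\RefVM.vhd"  ' adjust to the actual reference VM disk
Set fso = CreateObject("Scripting.FileSystemObject")
If fso.FileExists(vhdPath) Then
 sizeGB = fso.GetFile(vhdPath).Size / 1073741824
 WScript.Echo "VHD size: " & FormatNumber(sizeGB, 1) & " GB"
 If sizeGB < 14 Then WScript.Echo "Looks like a dynamic disk - the portal needs a fixed 15 GB disk."
Else
 WScript.Echo "VHD not found - adjust vhdPath."
End If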

2. Create infrastructure in SSP portal
- Requests/Register business unit (sample data: CoreBankingUnit, CBU01, staff1@mycompany.com.vn; Administrators: mycompany\administrator, mycompany\staff1). Make sure to create staff1 first and allow Domain Users to log on locally to DC-SC via the Default Domain Controllers Policy GPO. Click Requests again, and Approve.
- Requests/Create Infrastructure Request: CoreBankingInfra, Expected Decommission Date, Memory: 1G, Storage: 45G, Next to the “Service and Service Roles” page, CoreBankingService, My Demo Environment, Memory: 1G, Storage: 45G, click “Request for Network”, select ProdLAN and click Add, click “Add Service Roles”, CoreBankingServiceRole, add ProdLAN, Save and Close, Next to the “VM template” tab, select the available VM template, Save and Close
- Requests, select the Infra Request, click CoreBankingService, in Template Library section, click “Assign Library”, select DC-SC as Library Server and MSSCVMMLibrary as Share, Submit, enter the same info for “Stored Virtual Machine Location” section, click Save and Close. Click CoreBankingService, click Save and Close. Click the selected VM template, click Save and Close, then click Approve.

3. Create BusinessUnitUser:
- DC-SC, in SSP portal, click the User Roles tab, select BUITAdmin, click View/Edit Members (both administrator and staff1 are included); select BusinessUnitUser, View/Edit Members, select Business Unit, Infra, Service…, click Add Members, enter mycompany\staff2 (previously created), Save and Close

4. VM Provisioning: Request and Approve
- Close the SSP portal
- Shift + right-click IE, Run as different user, mycompany\staff1 (as BUIT admin), add https://DC-SC to the Favorites bar. Notice that the Settings tab is missing. Another way to change the user is to do it over Remote Desktop (you need to enable Remote Desktop in Computer Properties, add Domain Users to the “Allow log on through Remote Desktop Services” Group Policy item, and run gpupdate /force)
- Click the Virtual Machines tab, click Create virtual machine, enter 2 as the number of VMs, enter “CloudDemo” as the Computer Name and 001 as the Index suffix, then under Template, choose the desired template, click “View Properties” to make sure the storage is under the 45G limit, then click Create
- In NODE1 Hyper-V Manager, CloudDemo001 will be created; in NODE2 Hyper-V Manager, CloudDemo002 will be created; and in Failover Cluster Manager/PRIVATE-CLOUD/Services and applications: SCVMM CloudDemo001 Resources and SCVMM CloudDemo002 Resources will be created.
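
Instead of opening Hyper-V Manager on each node, VM placement can be confirmed by listing the VMs over WMI. A sketch assuming the v1 root\virtualization namespace; change strComputer to check the other node:

' list-vms.vbs - list the virtual machines registered on a given host (sketch)
strComputer = "NODE1"  ' change to NODE2 to check the other node
Set wmi = GetObject("winmgmts:\\" & strComputer & "\root\virtualization")
Set vms = wmi.ExecQuery("SELECT ElementName FROM Msvm_ComputerSystem WHERE Caption = 'Virtual Machine'")
For Each vm In vms
 WScript.Echo strComputer & ": " & vm.ElementName  ' expect CloudDemo001 / CloudDemo002
Next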

Part 4. PRO Tips implementation

- Install SCOM 2007 R2 with default options (SQL 2008 Standard with SP1, with just the Database Engine and Analysis Services)

- IMPORTANT: Install the SCOM Agent on NODE1, and NODE2 (note: add mycompany\svcacct to Domain Admins for Agent Push Installation to work)

- Import the required MPs for SCVMM integration
 + Windows Server Internet Information Services 2003
 + Windows Server Internet Information Services 2008
 + Windows Server Internet Information Services Library
 + SQL Server Core Library

 To do that, download and install these files: "Windows Server Base OS System Center Operations Manager 2007 MP.msi", "Internet Information Services MP.msi" & "SQL Server Operations Manager 2007 MP.msi", then import the above MPs.

- Insert the SCVMM 2008 R2 media and select the "Configure Operations Manager" option.
  Setup will ask you to remove the SCVMM console. Once that is completed, select the "Configure Operations Manager" option again. This will reinstall the SCVMM console and configure SCOM (adding the SCVMM MP to SCOM).

- Relaunch the SCVMM console, Administration tab, System Center, Operations Manager Server, and type the SCOM server name

- In the SCVMM console, click Diagram (2nd line from the top, right below the menu bar) --> the respective SCOM Diagram View of the whole Private Cloud will be shown (NODE1, NODE2, VM1, VM2, etc.)

- In the SCVMM console, right-click the Private-Cloud host, click the PRO tab, deselect the "Inherit PRO settings..." box, then select "Enable PRO..." and "Automatically implement PRO tips"

- Open Admin Tools/Performance Monitor, delete all existing counters. Click Add, browse to select NODE1, then choose "Hyper-V Hypervisor Logical Processor - % Guest Run Time", click OK. Do the same for NODE2. Make the lines thicker and give them different colors.
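
The same counter can be sampled from a script via the WMI performance classes. In the sketch below, the class name Win32_PerfFormattedData_HvStats_HyperVHypervisorLogicalProcessor and its PercentGuestRunTime property are assumptions mapped from the "Hyper-V Hypervisor Logical Processor" counter set; verify the exact names with wbemtest before relying on it:

' guest-runtime.vbs - sample hypervisor logical processor guest run time (sketch; class name assumed)
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set ctrs = wmi.ExecQuery("SELECT Name, PercentGuestRunTime FROM Win32_PerfFormattedData_HvStats_HyperVHypervisorLogicalProcessor")
For Each c In ctrs
 WScript.Echo "LP " & c.Name & " guest run time: " & c.PercentGuestRunTime & "%"
Next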

- In VM1 and VM2, create and run cpubusy.vbs (see the Appendix; remember to right-click it and choose Open with Command Prompt). In Hyper-V Manager, CPU Usage will be around 48%, but in Task Manager of the host it is still 0%. In Performance Monitor, the Guest Run Time lines will be around 50%

- Use Live Migration in the SCVMM console to move all VMs to NODE2 --> NODE2 Hyper-V Manager will show 2 VMs, each with CPU usage around 48% (Task Manager: still 0%); the Performance Monitor counter for NODE2 will be around 99%, and the counter for NODE1 around 1%.

- Wait a little and a PRO Tip will be displayed in the SCVMM console as well as in the SCOM alert view. The PRO Tip will also be executed automatically to balance the VM load.

Appendix. cpubusy.vbs file content:

' cpubusy.vbs - keeps one CPU core busy so the PRO CPU-pressure tip fires
Dim goal
Dim before
Dim x
Dim y
Dim i
goal = 2181818   ' iterations per pass

Do While True
 before = Timer                 ' start time of this pass
 For i = 0 To goal              ' tight loop of floating-point work
  x = 0.000001
  y = Sin(x)
  y = y + 0.00001
 Next
 y = y + 0.01
 WScript.Echo "I did three million sines in " & Int(Timer - before + 0.5) & " seconds!"
Loop

Part 5. SCVMM SSP Dashboard installation

- server name: DASHBOARD

- install Windows SharePoint Services 3.0 x64 with SP2 (Microsoft download), using the Advanced option, then Stand-alone

- install SQL 2008 with SP1

- run the Dashboard setup:

+ VMM SSP Dashboard screen
. app pool identity: mycompany\svcacct
. DB server name: DC-SC (which is the SSP server name)
. VMM SSP database name: DITSC (fixed)

+ WSS 3.0 info screen
. site owner: mycompany\administrator
. SharePoint DB server name: DASHBOARD ("Session Database Name" will be auto created)
. accept the default URL, which is https://dashboard:12345/
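
As a final check that both portals respond, an HTTP probe can be scripted. A sketch using MSXML2.ServerXMLHTTP (the setOption call ignores the self-signed certificate errors typical of a lab; the URLs are the ones used above):

' check-portals.vbs - probe the SSP portal and the Dashboard site (sketch)
urls = Array("https://DC-SC", "https://dashboard:12345/")
For Each u In urls
 Set http = CreateObject("MSXML2.ServerXMLHTTP.6.0")
 http.setOption 2, 13056  ' ignore self-signed certificate errors (lab only)
 On Error Resume Next
 http.Open "GET", u, False
 http.Send
 If Err.Number = 0 Then
  WScript.Echo u & " -> HTTP " & http.Status
 Else
  WScript.Echo u & " -> error: " & Err.Description
 End If
 On Error GoTo 0
Next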

References

- How to Integrate Operations Manager with VMM 2008 R2  https://technet.microsoft.com/en-us/library/ee236428.aspx

-------------- to be continued