Deployment–Installing SQL clusters with PDT

In the introductory post to PDT, I mentioned that PDT can deploy a highly available configuration. This is the first of three posts explaining how PDT can be configured to do so: this post covers SQL clusters, and the other posts will cover VMM clusters and scale-out roles.

System Center 2012 SP1 supports traditional shared-storage SQL clusters as well as SQL AlwaysOn.  PDT can install traditional shared-storage SQL clusters with SQL Server 2012.  The shared storage used for SQL can be either SMB or SAN based.

The cluster to which SQL will be installed must be created in advance; PDT does not do that for you.  All nodes must be joined to a Windows failover cluster, and the storage to be used must be configured.  In the case of SMB, that just means the shares to be used must be created – preferably on a Windows Server 2012 continuously available scale-out file server – and the SQL service account must have Full Control permissions on those shares.  For SAN storage, the storage must be added as available storage to the cluster.

Once your cluster is created, you can enter the information about the cluster in variable.xml.  The first example below shows the information needed for a SQL cluster using SMB.  Note that in the <Role> entry for the database server, we specify SQLCluster="True".  You can install multiple instances on the same cluster; just remember that instance names must be unique within the cluster.  Next, in the <SQL> section, instead of specifying an <Instance> you specify a <Cluster> and provide all required information as variables.  The variables in the example specify the SQL service account, the SQL Agent service account, the UNC locations for database files, log files, and tempdb files, the IP address and cluster network information, and finally the nodes in the cluster and which nodes should be set as preferred for the SQL cluster group being installed.  You can specify as many nodes as Windows and SQL clustering allow.

<Roles>
<Role Name="System Center 2012 SP1 Orchestrator Database Server" Server="ORDB.contoso.com" Instance="MSSQLSERVER" SQLCluster="True"></Role>
</Roles>
<SQL>
<Cluster Cluster="ORDB.contoso.com" Version="SQL Server 2012">
<Variable Name="SQLAdmins" Value="CONTOSOSQL Admins" />
<Variable Name="SQLAgtServiceAccount" Value="CONTOSOsql" />
<Variable Name="SQLAgtServiceAccountPassword" Value="password" />
<Variable Name="SQLServiceAccount" Value="CONTOSOsql" />
<Variable Name="SQLServiceAccountPassword" Value="password" />
<Variable Name="SQLInstallSQLDataDir" Value="\FS01CLDB1ORDB" />
<Variable Name="SQLUserDBDir" Value="\FS01CLDB1ORDBMSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLUserDBLogDir" Value="\FS01CLDB2ORDBMSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLTempDBDir" Value="\FS01CLDB3ORDBMSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLTempDBLogDir" Value="\FS01CLDB3ORDBMSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLClusterIPAddress" Value="192.168.1.226" />
<Variable Name="SQLClusterNetwork" Value="Cluster Network 1" />
<Variable Name="SQLClusterIPSubnet" Value="255.255.255.0" />
<Node Server="CLDB1A.contoso.com" Preferred="1"></Node>
<Node Server="CLDB1C.contoso.com" Preferred="2"></Node>
<Node Server="CLDB1B.contoso.com"></Node>
</Cluster>
</SQL>
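Before kicking off a deployment, it can be worth sanity-checking a fragment like the one above programmatically. The short Python sketch below is my own illustration, not part of PDT: it parses a <Cluster> section and confirms that the variables used in the SMB example are present and that no node is listed twice.

```python
# Hypothetical pre-flight check for a <Cluster> section of variable.xml.
# The variable names checked here are the ones from the SMB example above.
import xml.etree.ElementTree as ET

REQUIRED = [
    "SQLAdmins", "SQLServiceAccount", "SQLServiceAccountPassword",
    "SQLAgtServiceAccount", "SQLAgtServiceAccountPassword",
    "SQLInstallSQLDataDir", "SQLUserDBDir", "SQLUserDBLogDir",
    "SQLTempDBDir", "SQLTempDBLogDir",
    "SQLClusterIPAddress", "SQLClusterNetwork", "SQLClusterIPSubnet",
]

def check_cluster(xml_text: str) -> list[str]:
    """Return a list of problems found in a <Cluster> element (empty = OK)."""
    cluster = ET.fromstring(xml_text)
    names = {v.get("Name") for v in cluster.findall("Variable")}
    problems = [f"missing variable: {n}" for n in REQUIRED if n not in names]
    nodes = [n.get("Server") for n in cluster.findall("Node")]
    if len(nodes) != len(set(nodes)):
        problems.append("duplicate <Node> entries")
    return problems

# A deliberately incomplete fragment to show the check in action:
sample = """<Cluster Cluster="ORDB.contoso.com" Version="SQL Server 2012">
<Variable Name="SQLClusterIPAddress" Value="192.168.1.226" />
<Node Server="CLDB1A.contoso.com" Preferred="1"></Node>
</Cluster>"""
print(check_cluster(sample))  # reports the variables this fragment is missing
```

A check like this does not replace PDT's own validation, but it catches typos in variable names before you sit through a failed SQL setup.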

Our next example is for SQL clusters using SAN storage.  The difference from the SMB example above is that instead of UNC paths we specify drive paths, and we also need to specify which cluster disks those drives correspond to.

<Roles>
<Role Name="System Center 2012 SP1 Orchestrator Database Server" Server="ORDB.contoso.com" Instance="MSSQLSERVER" SQLCluster="True"></Role>
</Roles>
<SQL>
<Cluster Cluster="ORDB.contoso.com" Version="SQL Server 2012">
<Variable Name="SQLAdmins" Value="CONTOSOSQL Admins" />
<Variable Name="SQLAgtServiceAccount" Value="CONTOSOsql" />
<Variable Name="SQLAgtServiceAccountPassword" Value="password" />
<Variable Name="SQLServiceAccount" Value="CONTOSOsql" />
<Variable Name="SQLServiceAccountPassword" Value="password" />
<Variable Name="SQLInstallSQLDataDir" Value="H:MSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLUserDBDir" Value="H:MSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLUserDBLogDir" Value="I:MSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLTempDBDir" Value="J:MSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLTempDBLogDir" Value="J:MSSQL11.$InstanceMSSQLDATA" />
<Variable Name="SQLClusterIPAddress" Value="192.168.1.226" />
<Variable Name="SQLClusterNetwork" Value="Cluster Network 1" />
<Variable Name="SQLClusterIPSubnet" Value="255.255.255.0" />
<Variable Name="SQLClusterDisks">
<Value>`"Cluster Disk 1`" `"Cluster Disk 2`" `"Cluster Disk 3`"</Value>
</Variable>
<Node Server="CLDB1A.contoso.com" Preferred="1"></Node>
<Node Server="CLDB1C.contoso.com" Preferred="2"></Node>
<Node Server="CLDB1B.contoso.com"></Node>
</Cluster>
</SQL>
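For reference, MSSQL11 in these paths is the SQL Server 2012 (version 11) directory prefix, and $Instance is a placeholder for the instance name. The sketch below is my own illustration (how PDT performs the substitution internally is its own detail): it expands the placeholder and collects the distinct drive letters the SAN example uses, which is handy for cross-checking against the SQLClusterDisks list.

```python
# Illustration only (not part of PDT): expand the $Instance placeholder from
# the example above, and collect the distinct drive letters the SAN paths
# reference so they can be compared against the cluster disks you assigned.
def expand_instance(path: str, instance: str) -> str:
    return path.replace("$Instance", instance)

paths = [
    r"H:\MSSQL11.$Instance\MSSQL\DATA",  # SQLUserDBDir
    r"I:\MSSQL11.$Instance\MSSQL\DATA",  # SQLUserDBLogDir
    r"J:\MSSQL11.$Instance\MSSQL\DATA",  # SQLTempDBDir / SQLTempDBLogDir
]
drives = sorted({p[0] for p in paths})
print(drives)  # three distinct drive letters -> three cluster disks needed
print(expand_instance(paths[0], "MSSQLSERVER"))
```

In the example above, H:, I:, and J: map to the three entries in SQLClusterDisks.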

PDT uses SQL Server setup's advanced cluster preparation and completion options – all nodes are prepared simultaneously, and then one node is used to complete the cluster configuration. In my lab, when installing multiple clustered SQL instances, the first instance takes about 30 minutes and each additional instance takes about 15 minutes.

Clustering can be complex, mainly because of the network and storage configuration involved.  However, once that is configured correctly, PDT makes it simple to deploy a SQL cluster as part of a highly-available configuration of System Center 2012.

A couple of notes here, specific to PDT deployment of SQL and SQL clusters:

  • In PDT, Configuration Manager, App Controller, and Service Provider Foundation must use the default SQL instance, MSSQLSERVER.
  • The Data Protection Manager database cannot be clustered.
  • PDT validates that all nodes are members of the same cluster before deployment can begin.
  • All other information provided for a cluster is not validated before installation – if it is entered incorrectly, the SQL install will fail.  However, as with other PDT failures, the information can be corrected and the deployment restarted.
  • Be sure to use the latest version of PDT – currently 2.4 – since clusters failed operating system version validation in previous versions.
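The node-membership check mentioned in the notes above reduces to a simple invariant. This is my own approximation of the logic, not PDT's code; on a real system the node-to-cluster mapping would come from querying each node (for example with PowerShell's Get-Cluster), but here it is just a dictionary.

```python
# Sketch of the pre-deployment membership check (my own approximation):
# every node listed under <Cluster> must belong to the same failover cluster.
def all_in_same_cluster(node_to_cluster: dict[str, str]) -> bool:
    """True when every node reports the same cluster name."""
    return len(set(node_to_cluster.values())) == 1

nodes = {
    "CLDB1A.contoso.com": "CLDB1",
    "CLDB1B.contoso.com": "CLDB1",
    "CLDB1C.contoso.com": "CLDB1",
}
print(all_in_same_cluster(nodes))  # True
```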

Comments

  • Anonymous
    January 01, 2003
    This is due to an error in Workflow.xml.  We will release an updated Workflow.xml that resolves this and a couple of other issues this week.  In the meantime, you can delete lines 1309-1313 from Workflow.xml - this should be the <Switch> section under <Install Name="SQL Server 2012 Cluster Complete"> that specifies the PID switch.

  • Anonymous
    January 01, 2003
    never mind my previous post, i found it....


  • Anonymous
    January 01, 2003
    I'm not sure what I did wrong.. SQL Server version for cluster... Failed SQL Server version not specified

  • Anonymous
    January 01, 2003
    Stanislav - Correct.  There is only one SQL instance per cluster resource group, so you don't need to specify the instance for the cluster.  For reporting services, when you specify a cluster for the DB and a standalone server for RS (since it can't be clustered), PDT will automatically configure the RS instance to put its databases on the clustered DB instance.  It does this for both OM and SM - note that PDT does not install RS for CM.

  • Anonymous
    May 22, 2013
    HI - We are using PDT to deploy VMM and a supporting SQL Server Cluster with a standard license. When we add the license to variable.xml we get the following error during the "SQL Server 2012 Cluster Complete" phase: The setting 'PID' is not allowed when the value of setting 'ACTION' is 'CompleteFailoverCluster' If we do not specify a product key the installation finishes however it installed in evaluation mode with the edition set to Enterprise, problem is I cannot downgrade an enterprise license to standard when its already clustered. Any help would be greatly appreciated. -Nick

  • Anonymous
    May 22, 2013
    Thanks Rob - I am posting a new comment because it will not let me reply to the previous. A coworker of min helped me out this morning and that is exactly what we found out. Thank you for your response! -Nick

  • Anonymous
    September 20, 2013
    Hi Rob, Why when using Cluster syntax there is no instance name value? And what about if you want to deploy a separate SQL Server with different Reporting Services instances. I basically want to deploy all database on cluster and for reporting to use other server. This is a bit confusing how it will happen especially with SCCM.

  • Anonymous
    September 25, 2013
    One more clarification Rob. So in the role of SCOM database I state instance name MSSQLSCOMDB and point to cluster name for a server value. In the SQL section the installation of SQL on cluster will take MSSQLSCOMDB as name for instance, right? Also some feedback for future versions: If you can point different SQL instances for the SCSM Data Warehouse databases will be good. Example: One SQL instance for DWStagingAndConfig and DWRepository, one SQL instance for DWDataMart, DWASDataBase, OMDWDataMart and CMDWDataMart.

  • Anonymous
    October 06, 2014
    Hey Rob,

    Instead of assigning a static IP address and subnet (SQLClusterIPAddress, SQLClusterIPSubnet), how I would I configure it to use DHCP instead?


    Andreas