SQL Server 2012 AlwaysOn – Part 4 – SAP configuration in Geo-Cluster configuration

[Edit] You can find the summary of the complete series here.

In the fourth part of the series, we look at a last reference configuration before moving on to the actual implementation. With regard to terms and basics, it might be very helpful to read the first part of the series here: https://blogs.msdn.com/b/saponsqlserver/archive/2012/02/07/sql-server-2012-alwayson-what-is-it.aspx

As a last reference architecture, we want to look at the underlying WSFC aspects of a configuration where the productive SAP ERP system is distributed over two sites that are close enough to allow AlwaysOn to run in Synchronous Mode. Current technologies easily allow for data centers more than 50 kilometers (30 miles) apart when using dedicated Dark Fibre (DWDM) links.

In the past, a case like this was typically resolved with a so-called Geo-Cluster configuration, which required storage replication or the use of SQL Server Database Mirroring on the SQL Server side.

A typical configuration as deployed with customers looked like this:

[Image: Geo-Cluster configuration using storage replication]

Or, with some customers who used Database Mirroring in such scenarios, we found this type of configuration:

[Image: Geo-Cluster configuration using SQL Server Database Mirroring]

In both cases there is a WSFC configuration that spans the two sites and, in most cases, a third site that keeps a quorum vote for the WSFC configuration. In some cases the WSFC was set up as documented in the SAP installation documentation, with SAP and SQL Server sharing two nodes. We also have customer configurations where two independent WSFC configurations were used: one for the SAP single points of failure and another for SQL Server. Sometimes two different SAN frames are involved for SAP and SQL Server; in other customer cases the SAP data is located in the same SAN frame in each of the datacenters. In some cases, as shown above, customers also use Database Mirroring, with the witness SQL Server instance ideally running in the third site.

What the different customer configurations have in common is that the two sites are both active, with the SAP single points of failure clustered between both sites. SQL Server is either clustered as well, or synchronous/asynchronous Database Mirroring is used to synchronize both sites. In case an automatic failover is desired, the ideal configuration for Database Mirroring would be to have a third site with a witness instance, as shown in the configuration before.

Application servers in both data centers are active/active. The application servers in the data center without the active SQL Server instance access the database via the high-speed network between the sites. Customers have deployed this extensively with previous versions of SQL Server without any issues, provided the links perform well.

Considerations for using AlwaysOn in such a configuration

The usual implementations are more or less combinations or variations of what we discussed as the first reference architecture, except that we look at two sites instead of one. In order for these configurations to work, the SAP and SQL Server components use a single subnet or a virtual private network. As mentioned in Part 1, WSFC and now SQL Server 2012 can handle different subnets, but the SAP ASCS and application servers cannot handle the Message Server IP address changing without completely restarting all application servers. Hence, to run SAP components, especially a clustered ASCS/SCS with nodes in both sites, one subnet or one VLAN needs to be provided.
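As a quick sanity check, the cluster networks can be listed with the FailoverClusters PowerShell module to verify that both sites really share one stretched subnet/VLAN for the SAP cluster roles (the cluster name below is a hypothetical example):

  Import-Module FailoverClusters
  # List the networks the cluster sees; with a stretched VLAN the SAP/client network
  # should show up as a single subnet ("SAPPRDCL" is a hypothetical cluster name)
  Get-ClusterNetwork -Cluster "SAPPRDCL" | Format-Table Name, Address, AddressMask, Role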

In multi-site configurations, we recommend that the WSFC configuration leverage a Node and File Share Majority quorum. The discussion often circles around where to place the file share quorum. Our response over the years has been that it is best to place the file share or another node in a third site, as shown in the first graphic. That is the best and safest way of handling the quorum of such a cluster configuration. The same was true for Database Mirroring configurations, where the witness instance ideally ran out of a third site. All in all, if three sites are available, the WSFC configuration ideally contains an even number of nodes across the two main sites and a File Share Witness in the third site.
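A minimal sketch of configuring such a quorum with the FailoverClusters PowerShell module (the cluster name and the file server share in the third site are hypothetical names):

  Import-Module FailoverClusters
  # Node and File Share Majority with the witness share hosted in the third site
  Set-ClusterQuorum -Cluster "SAPPRDCL" -NodeAndFileShareMajority "\\witness-srv\SAPPRD-FSW"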

Reducing such a configuration down to two sites would require a decision on where to place the deciding quorum vote, independent of whether it is a file share or a whole node. Having that uneven vote on either one of the sites will result in a situation where one needs to intervene manually when one of the sites goes down. More about this can be read here: https://technet.microsoft.com/en-us/library/dd197575(WS.10).aspx
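For completeness, a sketch of what such a manual intervention could look like if the surviving site no longer holds a majority of votes (the node name is hypothetical; forcing quorum is a deliberate administrative decision, not something to automate blindly):

  Import-Module FailoverClusters
  # Start the cluster service with forced quorum on a surviving node
  Start-ClusterNode -Name "SAPNODE1" -FixQuorum
  # Legacy equivalent: net start clussvc /forcequorum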

To move to a configuration like the one shown above and have the ability to work out of both sites simultaneously, the distance between the sites is important. The SAP NetWeaver architecture is what we would call a chatty application from a DBMS point of view. This means hardly any logic is executed as procedures in the DBMS; all logic runs in the SAP NetWeaver application tier. In order to get the data into that layer, very often thousands of SQL statements are issued against the DBMS layer per second. These days we measure the latency per statement execution in the low single-digit millisecond range, and we rely on such small response times in order to fulfill the SLAs towards the user community. Hence any millisecond of latency added due to geographical distance between application tier and DBMS has an impact on the user experience. Therefore the distance should ideally be small or restricted to a few miles. From exercises with different customers, one can say for sure that a distance of 20 miles adds measurable latency to every single query compared to application servers hosted in the same datacenter as the active DBMS server. Hence most customers using these kinds of configurations keep the two sites in closer distance, either within the same campus or within the same city/town perimeter. Under such conditions, it is a given that for non-shared storage HA configurations we will only consider the synchronous Availability Mode.
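To get a feel for the order of magnitude, here is a rough back-of-the-envelope calculation; the distance, the fiber latency, and the number of database calls per dialog step are assumed values, not measurements:

  # Rough estimate of latency added by site distance; all numbers are assumptions
  $distanceKm   = 30       # one-way distance between the two sites (~20 miles)
  $fiberKmPerMs = 200      # light travels roughly 200 km per millisecond in fiber
  $roundTripMs  = (2 * $distanceKm) / $fiberKmPerMs      # ~0.3 ms per round trip, before switch/DWDM overhead
  $dbCallsPerDialogStep = 50                             # hypothetical sequential DB calls in one SAP dialog step
  $addedMsPerStep = $roundTripMs * $dbCallsPerDialogStep
  "{0:N2} ms added per round trip, ~{1:N0} ms added per dialog step" -f $roundTripMs, $addedMsPerStep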

As in our first reference architecture, we would use the existing Windows Cluster configuration, which we still need for the SAP side anyway, as the basis for our AlwaysOn configuration. For geographically dispersed or multi-site clusters we usually recommend the following WSFC quorum configurations:

  • Node Majority quorum
  • Node and File Share Majority quorum

In the first configuration shown in this article, the ideal scenario was accomplished with a Node and File Share Majority cluster where the File Share Witness is located in a third site. In this case one complete site can vanish without any impact on the availability of the cluster configuration.

In case of a File Share Witness, we need to be aware of a possibility called ‘partition in time’. The cluster configuration is stored on each of the nodes participating in the cluster. In a Node and Disk Majority configuration, the cluster configuration also gets stored on the shared disk. However, this is not true for the File Share Witness: when using a Node and File Share Majority quorum, the cluster configuration is not stored on the file share. This means that in such cases the cluster configuration is only persisted on the two nodes. The file share, however, stores data about which node keeps the most recent cluster configuration status. This can lead to a situation which is known as ‘partition in time’. Imagine the following scenario:

  • One of the nodes is down, e.g. node #2
  • The remaining node (#1) is running since the cluster configuration still has 2 votes out of three
  • Now we perform a change which affects the cluster configuration.
  • Next, the remaining node (#1) is brought down before the other node (#2) has come back up
  • When node #2 is finally started, the cluster will not come up, since the file share indicates that node #2 does not hold the most recent cluster configuration

Please be aware that such a change of the cluster configuration is not necessarily a change made via the cluster administrator. It can just as well be a change in the properties/configuration of a SQL Server Availability Group, since SQL Server AGs use the mechanisms of the WSFC cluster to store some of their configuration data as well. When using a file share to achieve majority in cluster configurations, one should:

  • Try to keep downtimes on the nodes short in order not to encounter situations where cluster configurations could be changed while one node is down.
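Before planned maintenance or changes to an Availability Group, it can also help to quickly verify the quorum model and that all nodes are up; a minimal sketch:

  Import-Module FailoverClusters
  Get-ClusterQuorum                               # shows the quorum type and witness resource
  Get-ClusterNode | Format-Table Name, State      # all nodes should be 'Up' before configuration changes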

Moving to AlwaysOn with a non-shared disk configuration, we could leverage the WSFC configuration which already exists. So far, SQL Server was mostly running clustered on shared disks within that WSFC configuration. Instead, we would install SQL Server 2012 as non-clustered instances and would need to move the database onto non-shared and non-replicated storage on the nodes which so far ran SQL Server and the SAP ASCS/SCS/CI.
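After installing the non-clustered instances, the AlwaysOn feature needs to be enabled on each of them. A sketch using the SQLPS module shipped with SQL Server 2012 (instance names are hypothetical; enabling the feature restarts the SQL Server service):

  Import-Module SQLPS -DisableNameChecking
  Enable-SqlAlwaysOn -ServerInstance "SAPNODE1" -Force
  Enable-SqlAlwaysOn -ServerInstance "SAPNODE2" -Force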

All in all we look at 4 different possibilities:

  • Construct a separate WSFC configuration for AlwaysOn. Its File Share Witness would be placed on the same server that hosts the File Share Witness of the ASCS/SCS cluster configuration.
  • Stay with SQL Server on the same two nodes the SAP ASCS/SCS/CI is running on. As described in Part 2 of the series, the change would simply be to move from a clustered SQL Server 2008 (R2) or SQL Server 2005 instance to two non-clustered SQL Server 2012 instances. The storage would also need to be changed from a configuration replicated with storage replication to independent LUNs in the two sites.
  • Add the two DBMS nodes to the existing WSFC configuration of the SAP ASCS/SCS. This would initially require the DBMS nodes to be able to access the shared volumes used for the SAP side of the house. However, this has the advantage that one could use the additional DBMS nodes as potential manual failover nodes for the SAP ASCS/SCS instances (see OSS note 1634991).
  • Another possibility when adding two additional nodes is to build an asymmetric storage cluster, which allows excluding some nodes from being attached to the shared storage. The condition is that at least two nodes of the cluster need to be able to access the shared disks used for an application like SAP ASCS/SCS/CI. This means adding two nodes for SQL Server 2012 AlwaysOn will not require any changes to the storage infrastructure (see the sketch after this list).
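As a minimal sketch of the last two options (cluster and node names are hypothetical), the DBMS nodes could be joined to the existing SAP cluster without attaching them to the shared storage:

  Import-Module FailoverClusters
  # Join the two SQL Server nodes to the existing SAP cluster without adding storage
  Add-ClusterNode -Cluster "SAPPRDCL" -Name "SQLNODE1" -NoStorage
  Add-ClusterNode -Cluster "SAPPRDCL" -Name "SQLNODE2" -NoStorage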

All in all, we can end up with these types of configurations:

[Image: Two-node configuration running SAP and SQL Server components with a File Share Witness in a third site]

In this case we have two cluster nodes which run SAP and SQL Server components. As before, we can fail over crosswise and run SAP and SQL Server on the very same node. The File Share Witness is located in the third site. Storage replication is only used for the shared disks of the SAP components. Changes to the SAP databases are replicated with AlwaysOn.
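The next part of the series builds out the Availability Group in detail; just as a rough preview, the AlwaysOn part of such a configuration could be sketched with the SQL Server 2012 PowerShell cmdlets as follows (all node, endpoint, database, and AG names are hypothetical):

  Import-Module SQLPS -DisableNameChecking
  # Two synchronous replicas with automatic failover, one per site
  $primary   = New-SqlAvailabilityReplica -Name "SAPNODE1" -EndpointUrl "TCP://sapnode1.contoso.local:5022" `
                   -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
  $secondary = New-SqlAvailabilityReplica -Name "SAPNODE2" -EndpointUrl "TCP://sapnode2.contoso.local:5022" `
                   -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
  New-SqlAvailabilityGroup -Name "SAP_PRD_AG" -Path "SQLSERVER:\SQL\SAPNODE1\DEFAULT" `
      -AvailabilityReplica @($primary, $secondary) -Database "PRD"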

Creating an asymmetric storage cluster

In order to have an asymmetric cluster working on Windows Server 2008, apply the latest Service Pack and the QFE described in the following KB article on all nodes of the cluster: https://support.microsoft.com/default.aspx?scid=kb;en-US;976097

For Windows Server 2008 R2, please install SP1 only; this Service Pack already includes the necessary logic for asymmetric cluster configurations.

This QFE allows cluster disk resources which can be connected from only two of the nodes. This means a 3rd or 4th node can be added to the cluster even though it cannot access the shared disk(s) leveraged by the existing two nodes. Or one could add storage to a cluster configuration of three or more nodes where the disk resources cannot be accessed or seen by some of the nodes, as long as two of the nodes do see the storage.
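A sketch of how the possible owners of the SAP disk resources could be restricted to the two SAP nodes (resource and node names are hypothetical):

  Import-Module FailoverClusters
  # Only the two SAP nodes can own the shared disk; the SQL Server nodes never see it
  Get-ClusterResource "SAP ASCS Disk" | Set-ClusterOwnerNode -Owners "SAPNODE1","SAPNODE2"
  Get-ClusterResource "SAP ASCS Disk" | Get-ClusterOwnerNode      # verify the possible owners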

NOTE: If you run the Cluster Validation Report on such an asymmetric storage configuration, there will be a warning stating that the particular disk does not have all cluster nodes listed as possible owners. Another warning will be shown under ‘List Potential Cluster Disks’, where one will find a message that there is a disk which is not cluster-able by all cluster nodes. Again, this is normal in this case.
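The validation itself can be run with the FailoverClusters PowerShell module; a sketch with hypothetical node names, limited to the categories relevant here:

  Import-Module FailoverClusters
  # Expect the storage warnings described above in an asymmetric configuration
  Test-Cluster -Node "SAPNODE1","SAPNODE2","SQLNODE1","SQLNODE2" -Include "Inventory","Storage"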

Such an asymmetric storage configuration would look like this:

[Image: Asymmetric storage cluster with four nodes and a File Share Witness in a third site]

Above you see a WSFC configuration consisting of four nodes plus the File Share Witness as quorum. As displayed, only two of the nodes have access to the shared disks for the SAP ASCS/SCS/CI, whereas the two nodes running the SQL Server 2012 AlwaysOn configuration do not have any access to those disks, nor do they need it. The advantage of this configuration is that it can sustain the loss of two quorum votes. E.g., if one of the servers is down for maintenance and another goes down unexpectedly, the configuration would still work, assuming it is not both database nodes or both SAP nodes that are down at the same time.

Using two independent WSFC configurations

There might be a preference to separate the SAP side WSFC configuration from the SQL Server 2012 AlwaysOn cluster configuration. This does not change a lot in terms of recommendations. There ideally should be a third site where one runs a File Share Witness. One certainly can combine the witness of the SAP side WSFC with the witness of the AlwaysOn WSFC configuration on the same server. However, if one combines more and more File Share Witnesses on this one server in the third datacenter, one needs to think about clustering this server and placing the file shares on a shared disk of that cluster. The quorum model for such a WSFC configuration hosting the many File Share Witnesses (think along the lines of 6-10 SAP applications with two WSFC configurations each, resulting in 12-20 file shares) would be a Node and Disk Majority configuration.
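A sketch of that quorum setting for the witness-hosting cluster in the third site (cluster and disk resource names are hypothetical):

  Import-Module FailoverClusters
  # The file server cluster in the third site uses Node and Disk Majority
  Set-ClusterQuorum -Cluster "FSWCL" -NodeAndDiskMajority "Cluster Disk 1"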

As a result, the configuration could look like this:

[Image: Two independent WSFC configurations per SAP system with File Share Witnesses hosted on a clustered file server in a third site]

What is radically different in this case is how we deal with the File Share Witnesses. Assuming that this is not the only productive SAP system we want to run between the two datacenters, and that we separate the WSFC configurations for the SAP single points of failure and for AlwaysOn, we need a larger number of File Share Witnesses on the one hand, and on the other we need to deal with the fact that each of the clusters has only three quorum votes. Hence each cluster can sustain only one missing quorum component and still be fine. Due to the concentration of that many quorum votes of independent WSFC configurations in the third datacenter, using just one server there to keep all the shares is too risky. Therefore we establish a WSFC configuration with a Node and Disk Majority quorum to host this larger number of File Share Witnesses.

That is it with regard to the configurations for now. In the next part we go into building our first Availability Group for an SAP ERP database.

Comments

  • Anonymous
    May 07, 2013
    Hi Juergen, when moving to AlwaysOn - does SAP need to be installed on its own server, and not installed on the DB server?

  • Anonymous
    January 07, 2014
    Will site become the single point of failure in this case?