Example, Clustered Service or Application in a Multi-Site Failover Cluster

Applies To: Windows Server 2008

In this example, the fictitious company A. Datum needs to make files available to thousands of employees doing critical work for the company. These files need to be available 99.99% of the time, that is, with no more than about an hour (roughly 53 minutes) of downtime per year. In addition, A. Datum has investigated creating a disaster recovery option in case its main datacenter becomes unavailable for a significant amount of time. A. Datum decides that the cost of maintaining the servers needed to provide the disaster recovery option is a worthwhile investment for the company.
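
As a quick sanity check on that availability figure, the downtime budget implied by an availability percentage is simple arithmetic; the short Python sketch below (not part of the original guidance) shows how 99.99% works out to roughly 53 minutes per year.

```python
# Rough downtime-budget arithmetic for an availability target.
# This is a generic illustration, not taken from the original guide.

HOURS_PER_YEAR = 365.25 * 24  # about 8,766 hours

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Return the maximum yearly downtime (in minutes) allowed by the target."""
    unavailable_fraction = 1.0 - (availability_percent / 100.0)
    return unavailable_fraction * HOURS_PER_YEAR * 60.0

if __name__ == "__main__":
    # 99.99% availability allows roughly 53 minutes of downtime per year,
    # which is why the A. Datum target is described as "about an hour".
    print(f"{downtime_minutes_per_year(99.99):.0f} minutes per year")
```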

Important

  • This topic outlines the process of failover and failback occurring in a multi-site cluster with a specific configuration. For additional information, see Requirements and Recommendations for a Multi-Site Failover Cluster and the links at the end of this topic.

  • The four-node design shown in this topic does not apply to multi-site failover clusters running Exchange Server 2007. These clusters use the Cluster Continuous Replication (CCR) feature of Microsoft Exchange Server 2007, and have a maximum of two nodes. For information about CCR and clustering, see the CCR topics at https://go.microsoft.com/fwlink/?Linkid=129111 and https://go.microsoft.com/fwlink/?Linkid=129112.

A. Datum starts with a design for a two-node failover cluster that provides shared folders for clients, as shown in Example, Clustered File or Print Server. It expands the design to create a multi-site cluster with four nodes, as described in this topic.

This topic illustrates the following:

  • Four-node, multi-site cluster under normal conditions

  • Four-node, multi-site cluster when main site is unavailable

  • Four-node, multi-site cluster returning to normal operation

  • Four-node, multi-site cluster when communication between sites is lost

For examples that illustrate other designs, see Evaluating Failover Cluster Design Examples.

Four-node, multi-site cluster under normal conditions

For this example, A. Datum creates an expanded design for a multi-site cluster. As in the simple failover cluster design, the expanded design starts with Node 1, Node 2, and a clustered file server called FileServer1. The expanded design also includes Node 3 and Node 4, located at a secondary site 100 miles from the site where Node 1 and Node 2 are located. The following diagram shows the multi-site cluster used by A. Datum when there are no problems with any of the servers or networks.

To keep data consistent between the two sites, A. Datum has established an appropriate file replication method, and most of the time the main site is the source of the data that is copied to the secondary site. In addition, the expanded design explicitly calls for the quorum option called Node and File Share Majority, which uses a witness file share as part of the configuration, as shown in the diagram. With this quorum option, the elements counted when determining whether a majority is running and in communication (which the cluster requires in order to keep functioning) are the nodes plus a file share witness created by the cluster administrator.

The Node and File Share Majority quorum option works well for multi-site designs because these designs work best with an even number of nodes. When network communication is lost, a subset of the nodes must determine whether it has quorum and can therefore function as a cluster, and with an even number of nodes that determination requires a tie-breaker. The witness file share is a good tie-breaker for the multi-site design. For more information and diagrams illustrating quorum options, see Four-node, multi-site cluster when communication between sites is lost, later in this topic, and Appendix F: Reviewing Quorum Configuration Options for a Failover Cluster, later in this guide.
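
As a rough sketch of how the element counting works under Node and File Share Majority (the function below is illustrative arithmetic, not an actual Windows Server clustering API): with four nodes plus a witness there are five voting elements, and three votes are required for quorum.

```python
# Illustrative counting of quorum elements under Node and File Share Majority.
# This helper is hypothetical; Windows Server clustering implements this logic
# internally, not through code like this.

def majority_threshold(node_count: int, has_file_share_witness: bool) -> int:
    """Votes required for quorum: more than half of all voting elements."""
    total_votes = node_count + (1 if has_file_share_witness else 0)
    return total_votes // 2 + 1

# Four nodes plus a file share witness gives five voting elements,
# so three votes are needed for the cluster to keep running.
print(majority_threshold(node_count=4, has_file_share_witness=True))  # 3
```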

In most situations, failover and failback happen only between Node 1 and Node 2, just as in the simpler design in Example, Clustered File or Print Server. Node 1 and Node 2 are in the same subnet, and Node 3 and Node 4 might use that same subnet or a different one. In either case, failover from Node 1 to Node 2 or back again (the common failover pattern) stays within one subnet, so it does not change the IP address and therefore does not change the DNS information for the clustered service or application. In other words, for any failover within one subnet, neither the DNS servers nor the clients using the clustered services or applications need to adjust the DNS information. This means that the common failover pattern is simpler and usually does not depend on the replication or refreshing of DNS information.
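
The reasoning above boils down to a simple decision: a failover that stays within one subnet keeps the same IP address, so DNS does not change, while a cross-subnet failover uses a different IP address, so DNS must be updated. The sketch below is a hypothetical illustration of that decision (the subnet values are made up); it is not cluster or DNS server code.

```python
import ipaddress

# Hypothetical illustration of the subnet reasoning above; a real failover
# cluster makes this decision internally, not through code like this.

def dns_update_needed(current_node_subnet: str, target_node_subnet: str) -> bool:
    """A cross-subnet failover forces a new IP address, so DNS must change."""
    return (ipaddress.ip_network(current_node_subnet)
            != ipaddress.ip_network(target_node_subnet))

# Node 1 -> Node 2 (same subnet): the IP and DNS entry for FileServer1 are unchanged.
print(dns_update_needed("10.0.1.0/24", "10.0.1.0/24"))  # False
# Node 1 -> Node 3 (different subnet at the secondary site): DNS must be updated.
print(dns_update_needed("10.0.1.0/24", "10.0.2.0/24"))  # True
```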

Unlike a single-site, two-node cluster, this multi-site cluster can respond to situations where the main site (Nodes 1 and 2) is affected by serious problems that leave only the secondary site (Nodes 3 and 4) available to clients. The multi-site cluster must also be able to respond appropriately to situations in which most or all nodes are functioning correctly but some part of the network is not (for example, when the network connecting the main site to the secondary site stops functioning). For more information, see Four-node, multi-site cluster when communication between sites is lost, later in this topic.

Four-node, multi-site cluster when main site is unavailable

As shown in the previous section, most of the time, FileServer1 is owned by a cluster node at the main site (either Node 1 or Node 2). The nodes at the secondary site, Node 3 and Node 4, are usually in a passive state, not in use but ready if needed. However, if the main site becomes unavailable, a failover process can begin so that Node 3 or Node 4 can begin supporting FileServer1. The following diagram shows the cluster after failover to the secondary site.

The main difference between this failover and any other failover is that action must be taken, either by an administrator or automatically, so that the storage at the secondary site shifts to read-write instead of remaining a read-only copy of the data stored by FileServer1. After the failover, it is a good idea for an administrator to check the exact state that the replicated data was in when failover occurred. Assuming that the replication process was working correctly at that point, the set of data matches the data from the main site as it existed at some recent moment in time (no data corruption). However, depending on how replication was carried out, the replica might not include the most recent changes made at the main site, in which case users should probably be notified so that they can review the data that they changed most recently.
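
A hedged sketch of those post-failover steps is shown below. The Replica type, its fields, and the five-minute lag threshold are all hypothetical stand-ins for whatever replication product and policies A. Datum uses; they are not part of Windows Server failover clustering.

```python
# Hypothetical post-failover runbook sketch. The Replica object and its
# attributes stand in for a real replication product's management interface.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Replica:
    mode: str                  # "read-only" or "read-write"
    last_replicated: datetime  # time of the last change copied from the main site

def promote_secondary(replica: Replica, failover_time: datetime,
                      acceptable_lag: timedelta = timedelta(minutes=5)) -> None:
    """Shift the secondary copy to read-write and flag possible data lag."""
    replica.mode = "read-write"
    lag = failover_time - replica.last_replicated
    if lag > acceptable_lag:
        # The replica may be missing the most recent changes; notify users
        # so they can review the data they changed just before the failure.
        print(f"Warning: replica is {lag} behind the main site at failover.")

# Example: the last replicated change is 15 minutes older than the failover time.
replica = Replica(mode="read-only", last_replicated=datetime(2008, 1, 1, 11, 45))
promote_secondary(replica, failover_time=datetime(2008, 1, 1, 12, 0))
```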

Four-node, multi-site cluster returning to normal operation

When a multi-site cluster performs the failover operation shown in the previous section, it changes the storage at the secondary site from read-only to read-write. To return to normal operation (that is, fail back to the main site), the cluster must return to using the main site and the storage at the main site. Before this can happen, any changes in the data must be replicated back to the main site, and the storage must return to its initial mode of operation: read-write at the main site and read-only at the secondary site. The following diagram illustrates this part of the failback process.
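
The key point is the ordering: replicate the secondary site's changes back first, then restore the storage roles, then move FileServer1 back to the main site. The outline below is purely illustrative; the printed steps stand in for administrative actions and do not correspond to any real clustering or replication commands.

```python
# Illustrative failback ordering; these print-only steps stand in for the
# administrative and replication actions described above.

def fail_back_to_main_site() -> None:
    # 1. Copy any changes made at the secondary site back to the main site.
    print("Replicate changes: secondary site -> main site")
    # 2. Restore the normal storage roles.
    print("Set main-site storage to read-write")
    print("Set secondary-site storage to read-only")
    # 3. Move FileServer1 back to a node at the main site (Node 1 or Node 2).
    print("Move FileServer1 to a main-site node")

fail_back_to_main_site()
```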

Four-node, multi-site cluster when communication between sites is lost

When network problems interfere with communication in a multi-site cluster, the nodes at one site might be able to communicate with each other but not with the nodes at the other site. This can cause serious issues. In this “split” situation, at least one of the sets of nodes must stop running as a cluster.

To prevent the issues that are caused by a split in the cluster, the cluster software requires that any set of nodes running as a cluster use a voting algorithm to determine whether, at a given time, that set has quorum. Because a given cluster has a specific set of nodes and a specific quorum configuration, the cluster knows how many “votes” constitute a majority (that is, a quorum). If the number of votes drops below the majority, the set of nodes stops running as a cluster. The nodes still listen for the presence of other nodes, in case another node appears again on the network, but they do not begin to function as a cluster again until quorum exists.

The following diagram shows how a multi-site failover cluster design uses the Node and File Share Majority quorum option to avoid a “split” situation. With this quorum option, each node can vote, and a file share witness can also vote to break a tie. As the diagram shows, the main site has a majority of the votes (three out of five), with Node 1, Node 2, and the witness file share all in communication. The secondary site has a minority of the votes, with just Node 3 and Node 4 in communication. Nodes 1 and 2 therefore continue to function as a cluster, and Nodes 3 and 4 do not.
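
A small sketch of the vote counting in this scenario follows (it mirrors the arithmetic described above and is not the actual cluster algorithm): the main-site partition holds three of the five votes and keeps quorum, while the secondary-site partition holds two and does not.

```python
# Illustration of the vote count in the "split" scenario above. This mirrors
# the Node and File Share Majority arithmetic; it is not the actual cluster code.

TOTAL_VOTES = 5  # Node 1 through Node 4 plus the file share witness

def has_quorum(votes_in_partition: int, total_votes: int = TOTAL_VOTES) -> bool:
    """A partition keeps running as a cluster only with a strict majority."""
    return votes_in_partition > total_votes // 2

# Main site: Node 1, Node 2, and the witness file share -> 3 of 5 votes.
print(has_quorum(3))  # True  -> Nodes 1 and 2 continue to function as a cluster
# Secondary site: Node 3 and Node 4 only -> 2 of 5 votes.
print(has_quorum(2))  # False -> Nodes 3 and 4 stop running as a cluster
```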


Additional references