Clustering Technologies

Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2


The clustering technologies in products in the Microsoft Windows Server 2003 operating system are designed to help you achieve high availability and scalability for applications that are critically important to your business. These applications include corporate databases, e-mail, and Web-based services such as retail Web sites. By using appropriate clustering technologies and carefully implementing good design and operational practices (for example, configuration management and capacity management), you can scale your installation appropriately and ensure that your applications and services are available whenever customers and employees need them.

High availability is the ability to provide user access to a service or application for a high percentage of scheduled time by attempting to reduce unscheduled outages and mitigate the impact of scheduled downtime for particular servers. Scalability is the ability to easily increase or decrease computing capacity. A cluster consists of two or more computers working together to provide a higher level of availability, scalability, or both than can be obtained by using a single computer. Availability is increased in a cluster because a failure in one computer results in the workload being redistributed to another computer. Scalability tends to be increased, because in many situations it is easy to change the number of computers in the cluster.
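
As a rough illustration of how availability is usually expressed, the following sketch computes the availability percentage from scheduled time and unplanned downtime. The figures and the function name are hypothetical examples for this discussion, not measurements from any particular cluster.

    # Availability is commonly expressed as the percentage of scheduled time
    # during which a service was actually reachable. Hypothetical figures.

    def availability_percent(scheduled_hours, downtime_hours):
        """Return availability as a percentage of scheduled time."""
        return 100.0 * (scheduled_hours - downtime_hours) / scheduled_hours

    # Example: a service scheduled for 24x7 operation over one year (8,760
    # hours) that suffers about 8.8 hours of unplanned downtime reaches
    # roughly 99.9 percent availability.
    print(round(availability_percent(8760, 8.76), 2))   # 99.9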

Windows Server 2003 provides two clustering technologies: server clusters and Network Load Balancing (NLB). Server clusters primarily provide high availability; Network Load Balancing provides scalability and at the same time helps increase availability of Web-based services.

Your choice of cluster technologies (server clusters or Network Load Balancing) depends primarily on whether the applications you run have long-running in-memory state (a brief sketch contrasting the two application types follows the list):

  • Server clusters are designed for applications that have long-running in-memory state or frequently updated data. These are called stateful applications. Examples of stateful applications include database applications such as Microsoft SQL Server 2000 and messaging applications such as Microsoft Exchange Server 2003.

    Server clusters can combine up to eight servers.

  • Network Load Balancing is intended for applications that do not have long-running in-memory state. These are called stateless applications. A stateless application treats each client request as an independent operation, and therefore it can load-balance each request independently. Stateless applications often have read-only data or data that changes infrequently. Web front-end servers, virtual private networks (VPNs), and File Transfer Protocol (FTP) servers typically use Network Load Balancing. Network Load Balancing clusters can also support other TCP- or UDP-based services and applications.

    Network Load Balancing can combine up to 32 servers.
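
The following minimal sketch illustrates the stateless-versus-stateful distinction drawn in the list above. It is a conceptual example only; the handler and session names are invented for illustration and do not correspond to any Windows Server 2003 API.

    # Conceptual contrast between a stateless and a stateful request handler.
    # All names are hypothetical; no Windows API is involved.

    # Stateless: the reply depends only on the request (and on read-mostly
    # data), so any Network Load Balancing host can answer any request
    # independently.
    def handle_stateless(request):
        return "page for " + request

    # Stateful: the reply depends on long-running in-memory state, so requests
    # belonging to one session must reach the node that holds that state, and
    # the state itself must be protected (for example, by a server cluster)
    # if the node fails.
    sessions = {}   # in-memory state held by one particular node

    def handle_stateful(session_id, request):
        history = sessions.setdefault(session_id, [])
        history.append(request)
        return "reply %d for %s" % (len(history), session_id)

    print(handle_stateless("/catalog"))
    print(handle_stateful("alice", "add item"))
    print(handle_stateful("alice", "check out"))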

In addition, with Microsoft Application Center 2000 Service Pack 2, you can create another type of cluster, a Component Load Balancing cluster. Component Load Balancing clusters balance the load between Web-based applications distributed across multiple servers and simplify the management of those applications. Application Center 2000 Service Pack 2 can be used with Web applications built on either the Microsoft Windows 2000 or Windows Server 2003 operating systems.

Multitiered Approach for Deployment of Multiple Clustering Technologies

Microsoft does not support the configuration of server clusters and Network Load Balancing clusters on the same server. Instead, use these technologies in a multitiered approach.
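
A minimal sketch of the multitiered idea follows: stateless front-end requests are spread across a Network Load Balancing tier, while the stateful back end is reached through a single clustered name. The host and server names are placeholders for illustration, not a supported configuration script.

    # Conceptual two-tier layout: an NLB front end for stateless Web traffic
    # and a separate server cluster for the stateful back end. Names are
    # placeholders.
    import itertools

    nlb_web_tier = ["WEB1", "WEB2", "WEB3", "WEB4"]   # NLB hosts (stateless)
    clustered_sql_name = "SQLCLUSTER\\INSTANCE1"      # virtual name owned by
                                                      # whichever cluster node
                                                      # currently hosts SQL

    web_picker = itertools.cycle(nlb_web_tier)        # stand-in for NLB's own
                                                      # request distribution

    def serve(request):
        web_host = next(web_picker)                   # front tier: any host
        return "%s handled %r using data from %s" % (
            web_host, request, clustered_sql_name)

    for r in ["/cart", "/checkout"]:
        print(serve(r))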

Clustering Technologies Architecture

A cluster consists of two or more computers (servers) working together. For server clusters, the individual servers are called nodes. For Network Load Balancing clusters, the individual servers are called hosts.

Basic Architecture for Server Clusters

The following diagram shows a four-node server cluster of the most common type, called a single quorum device cluster. In this type of server cluster, multiple nodes are attached through a connection device (bus) to one or more cluster disk arrays (often called the cluster storage). Each disk in the array is owned and managed by only one node at a time. The quorum resource on the cluster disk array provides node-independent storage for cluster configuration and state data, so that each node can obtain that data even if one or more other nodes are down.

Four-Node Server Cluster Using a Single Quorum Device

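The sketch below models, in a very simplified way, the ownership rule described above: each cluster disk is owned by exactly one node at a time, and the quorum disk carries the configuration data that a surviving node picks up when the current owner goes down. It is a conceptual illustration with invented names, not the server cluster algorithm itself.

    # Simplified model of single-quorum-device ownership: each disk (including
    # the quorum disk) has exactly one owning node at a time; when the owner
    # fails, ownership moves to a surviving node, which can then read the
    # cluster configuration stored on the quorum disk. Purely illustrative.

    nodes = ["NODE1", "NODE2", "NODE3", "NODE4"]
    disk_owner = {"Quorum": "NODE1", "DataDisk1": "NODE2", "DataDisk2": "NODE3"}
    quorum_data = {"cluster_name": "CLUSTER1", "resource_groups": ["SQL", "FILE"]}

    def fail_node(failed, nodes, disk_owner):
        """Move disks owned by the failed node to the first surviving node."""
        survivors = [n for n in nodes if n != failed]
        for disk, owner in disk_owner.items():
            if owner == failed:
                disk_owner[disk] = survivors[0]
        return survivors

    fail_node("NODE1", nodes, disk_owner)
    # The new owner of the quorum disk can read the configuration and state data.
    print(disk_owner["Quorum"], "now reads", quorum_data)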

Basic Architecture for Network Load Balancing Clusters

The following diagram shows a Network Load Balancing cluster with eight hosts. Incoming client requests are distributed across the hosts, and each host runs a separate copy of the desired server application, for example, Internet Information Services. If a host fails, incoming client requests are redirected to the remaining hosts in the cluster. If the load increases and additional hosts are needed, you can add them to the cluster dynamically.

Network Load Balancing Cluster with Eight Hosts

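To illustrate the behavior described above, the following sketch distributes client requests across eight hosts with a simple hash and re-converges on the surviving hosts when one host drops out. Network Load Balancing's actual filtering algorithm is more involved; this is only a conceptual stand-in, and the host names are invented.

    # Conceptual stand-in for NLB request distribution: each incoming client
    # is mapped to one of the hosts; when a host fails, the remaining hosts
    # absorb its share. This is not NLB's actual algorithm.
    import zlib

    hosts = ["NLB%d" % i for i in range(1, 9)]        # eight hosts

    def pick_host(client_ip, live_hosts):
        """Deterministically map a client address to one live host."""
        index = zlib.crc32(client_ip.encode()) % len(live_hosts)
        return live_hosts[index]

    clients = ["10.0.0.%d" % i for i in range(1, 6)]
    print([pick_host(c, hosts) for c in clients])

    # Host NLB3 fails: the same clients are now spread over the seven survivors.
    survivors = [h for h in hosts if h != "NLB3"]
    print([pick_host(c, survivors) for c in clients])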

Clustering Technologies Scenarios

This section describes the most common scenarios for using server clusters and Network Load Balancing.

Scenarios for Server Clusters

This section provides brief descriptions of some of the scenarios for server cluster deployment. The scenarios cover three different aspects of server cluster deployment:

  • The applications or services on the server cluster.

  • The type of storage option: SCSI, Fibre Channel arbitrated loops, or Fibre Channel switched fabric.

  • The number of nodes and the ways that the nodes can fail over to each other.

Applications or Services on a Server Cluster

Server clusters are usually used for services, applications, or other resources that need high availability. Some of the most common resources deployed on a server cluster include:

  • Printing

  • File sharing

  • Network infrastructure services. These include the DHCP service and the WINS service.

  • Services that support transaction processing and distributed applications. These services include the Distributed Transaction Coordinator (DTC) and Message Queuing.

  • Messaging applications. An example of a messaging application is Microsoft Exchange Server 2003.

  • Database applications. An example of a database application is Microsoft SQL Server 2000.

Types of Storage Options

A variety of storage solutions are currently available for use with server clusters. As with all hardware that you use in a cluster, be sure to choose solutions that are listed as compatible with Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition. Also be sure to follow the vendor’s instructions closely.

The following table provides an overview of the three types of storage options available for server clusters running Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition:

Storage Options for Server Clusters

Storage Option                   Maximum Number of Supported Nodes
SCSI                             Two
Fibre Channel arbitrated loop    Two
Fibre Channel switched fabric    Eight
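
As a small consistency check against the table above, the sketch below records the per-interconnect node limits and flags a design that exceeds them. The dictionary simply restates the table; the function name is made up for this example.

    # The per-storage-option node limits from the table above, restated in code.
    MAX_NODES = {
        "SCSI": 2,
        "Fibre Channel arbitrated loop": 2,
        "Fibre Channel switched fabric": 8,
    }

    def check_design(storage_option, planned_nodes):
        """Return True if the planned node count fits the storage option."""
        limit = MAX_NODES[storage_option]
        if planned_nodes > limit:
            print("%d nodes exceeds the %d-node limit for %s"
                  % (planned_nodes, limit, storage_option))
            return False
        return True

    check_design("SCSI", 4)                           # flagged: limit is 2
    check_design("Fibre Channel switched fabric", 8)  # fits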

Number of Nodes and Failover Plan

Another aspect of server cluster design is the number of nodes used and the plan for application failover (a conceptual sketch of these policies follows the list):

  • N-node Failover Pairs. In this mode of operation, each application is set to fail over between two specified nodes.

  • Hot-Standby Server/N+I. Hot-standby server mode reduces the overhead of failover pairs by consolidating the “spare” (idle) node for each pair into a single node, providing a server that is capable of running the applications from each node pair in the event of a failure. This mode of operation is also referred to as active/passive.

    For larger clusters, N+I mode provides an extension of the hot-standby server mode where N cluster nodes host applications and I cluster nodes are spare nodes.

  • Failover Ring. In this mode of operation, each node in the cluster runs an application instance. In the event of a failure, the application on the failed node is moved to the next node in sequence.

  • Random. For large clusters running multiple applications, the best policy in some cases is to allow the server cluster to choose the failover node at random.
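
The sketch below encodes the four policies in the list above as simple target-selection rules, purely to make the differences concrete. The node names and the selection logic are illustrative; the cluster service's own placement decisions take many more factors into account.

    # Illustrative target selection for the four failover plans listed above.
    # Node names and logic are examples only, not the cluster service's
    # algorithm.
    import random

    def failover_pair(failed, pairs):
        """Failover pairs: each node fails over to its designated partner."""
        return pairs[failed]

    def hot_standby(failed, spare_nodes):
        """Hot standby / N+I: move work from the failed node to a spare node."""
        return spare_nodes[0]

    def failover_ring(failed, ring):
        """Failover ring: move work to the next node in the ring."""
        return ring[(ring.index(failed) + 1) % len(ring)]

    def random_policy(failed, nodes):
        """Random: let any surviving node be chosen."""
        return random.choice([n for n in nodes if n != failed])

    ring = ["NODE1", "NODE2", "NODE3", "NODE4"]
    print(failover_pair("NODE1", {"NODE1": "NODE2", "NODE2": "NODE1"}))
    print(hot_standby("NODE1", spare_nodes=["SPARE1"]))
    print(failover_ring("NODE4", ring))
    print(random_policy("NODE2", ring))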

Scenarios for Network Load Balancing

This section provides brief descriptions of some of the scenarios for deployment of Network Load Balancing. The scenarios cover two different aspects of Network Load Balancing deployment:

  • The types of servers or services in Network Load Balancing clusters.

  • The number and mode of network adapters on each host.

Types of Servers or Services in Network Load Balancing Clusters

In Network Load Balancing clusters, some of the most common types of servers or services are as follows:

  • Web and File Transfer Protocol (FTP) servers.

  • ISA servers (for proxy servers and firewall services).

  • Virtual private network (VPN) servers.

  • Windows Media servers.

  • Terminal servers.

Number and Mode of Network Adapters on Each Network Load Balancing Host

Another aspect of the design of a Network Load Balancing cluster is the number and mode of the network adapter or adapters on each of the hosts (a small decision sketch follows the list):

  • Single network adapter in unicast mode. Suitable for a cluster in which ordinary network communication among cluster hosts is not required and in which there is limited dedicated traffic from outside the cluster subnet to specific cluster hosts.

  • Multiple network adapters in unicast mode. Suitable for a cluster in which ordinary network communication among cluster hosts is necessary or desirable. This configuration is also appropriate when you want to separate the traffic used to manage the cluster from the traffic occurring between the cluster and client computers.

  • Single network adapter in multicast mode. Suitable for a cluster in which ordinary network communication among cluster hosts is necessary or desirable but in which there is limited dedicated traffic from outside the cluster subnet to specific cluster hosts.

  • Multiple network adapters in multicast mode. Suitable for a cluster in which ordinary network communication among cluster hosts is necessary and in which there is heavy dedicated traffic from outside the cluster subnet to specific cluster hosts.
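
To summarize the guidance above in one place, the following sketch turns the four configurations into a simple decision helper. It only restates the list; the function and its parameters are invented for this example and are not a Network Load Balancing configuration tool.

    # Restates the adapter-count and mode guidance above as a decision helper.
    # Purely illustrative; this is not an NLB configuration interface.

    def suggest_configuration(inter_host_traffic_needed, heavy_dedicated_traffic):
        """Suggest adapter count and cluster operation mode for an NLB host."""
        if not inter_host_traffic_needed and not heavy_dedicated_traffic:
            return "single adapter, unicast mode"
        if inter_host_traffic_needed and heavy_dedicated_traffic:
            return "multiple adapters, multicast mode"
        if inter_host_traffic_needed and not heavy_dedicated_traffic:
            # Either configuration fits; multiple unicast adapters also help
            # separate management traffic from client traffic.
            return "multiple adapters, unicast mode (or single adapter, multicast mode)"
        # The list above does not explicitly cover this combination.
        return "not covered by the guidance above"

    print(suggest_configuration(inter_host_traffic_needed=True,
                                heavy_dedicated_traffic=True))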