

Appendix 1: Cluster Network Topologies for Workstation Nodes

 

Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2

Workstation nodes can be added to any HPC cluster network topology that is supported by HPC Pack (topologies 1-5). In network topologies 1-4 there are two options for adding workstation nodes to a cluster that has dedicated compute nodes:

  • Add workstation nodes that have the same network connections as the dedicated compute nodes in the cluster

  • Add workstation nodes that have only an enterprise network connection

This appendix summarizes the impact on cluster performance and functionality of adding workstation nodes to each cluster network topology, and recommends additional configuration steps that may be needed in each environment. Some topologies may reduce application performance on the workstation nodes or may limit network connectivity between the workstation nodes and other nodes in the cluster.

Important

Topology 1 and topology 3 are generally not recommended for adding workstation nodes that have only an enterprise network connection to a cluster with dedicated compute nodes. In these scenarios, topologies 2, 4, and 5 provide better network connectivity between the cluster nodes.

For an overview of the HPC cluster network topologies that are supported by HPC Pack and general considerations for selecting each topology, see Appendix 1: HPC Cluster Networking.

Topology 1: Compute nodes isolated on a private network

Adding workstation nodes to a topology 1 cluster is generally not recommended, because the compute nodes are not connected to the enterprise network, whereas in many environments the workstation nodes connect only to the enterprise network. However, for evaluation purposes you may want to add workstation nodes to the private network of a topology 1 cluster. Workstation nodes that connect only to the enterprise network can also be used in this topology if the compute nodes and the workstation nodes do not need to communicate.

Add workstation nodes with a private network connection

Topology 1 - Workstations same as compute nodes

In this topology, the workstation nodes have the same private network connection as the compute nodes. Neither the workstation nodes nor the compute nodes connect to the enterprise network.

The HPC Management Service adds all discovered addresses for the private network, including the private network addresses of workstation nodes, to the hosts file on each compute node. Therefore, the compute nodes can communicate with the workstation nodes over the private network. However, because the HPC Management Service does not maintain the hosts files on workstation nodes, additional configuration is needed so that the workstation nodes can communicate with the compute nodes in return. NetBIOS can be enabled on the private network to allow communication between workstation nodes and compute nodes. Alternatively, the cluster administrator can set up a DNS server on the private network to provide name resolution among all nodes.
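
For example, if neither NetBIOS nor a DNS server is available on the private network, a minimal manual workaround is to add the compute nodes' private network addresses to the hosts file on each workstation node. The node names and addresses shown here are placeholders for illustration only; substitute the private network addresses of your own compute nodes, and run the commands at an elevated command prompt on each workstation node.

rem Example only: COMPUTENODE01/02 and the 10.0.0.x addresses are placeholders.
echo 10.0.0.11 COMPUTENODE01>> %SystemRoot%\System32\drivers\etc\hosts
echo 10.0.0.12 COMPUTENODE02>> %SystemRoot%\System32\drivers\etc\hosts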

Add workstation nodes with only an enterprise network connection

Topology 1 - Workstations on enterprise network

In this topology, the compute nodes are isolated from the enterprise network and communication between workstation nodes and compute nodes is generally not possible unless a router is added to route traffic between the private and enterprise networks. However, doing so will expose the compute nodes to all entities on the enterprise network.

One approach for evaluating workstation nodes using this topology is to schedule jobs separately on the compute nodes and on the workstation nodes. To enable this, separate job templates must be created to target either the compute nodes or the workstation nodes.
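
For example, assuming that job templates named "Compute Nodes Only" and "Workstation Nodes Only" have been created to restrict jobs to the corresponding node groups (the template names are placeholders; HPC Pack creates the ComputeNodes and WorkstationNodes node groups by default), jobs can be directed to one set of nodes or the other at submission time:

rem Example only: the template names and MyApp.exe are placeholders.
job submit /jobtemplate:"Compute Nodes Only" /nodegroup:ComputeNodes MyApp.exe
job submit /jobtemplate:"Workstation Nodes Only" /nodegroup:WorkstationNodes MyApp.exe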

To run service-oriented architecture (SOA) services, the broker node must have a network path defined for all services. Since the network routes to the compute nodes and the workstation nodes differ, SOA services must be configured to run on either the workstation nodes or the compute nodes. One approach to manage this is to create job templates for SOA sessions that target the chosen nodes. If the workstation nodes are chosen for SOA services, then the WCF_NETWORKPREFIX environment variable must be set to the enterprise network. This can be configured by running the following cluscfg command at an elevated command prompt:

cluscfg setenvs WCF_NETWORKPREFIX=Enterprise
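
To verify the setting, you can list the cluster-wide environment variables and confirm that WCF_NETWORKPREFIX is set to Enterprise:

cluscfg listenvs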

To run Message Passing Interface (MPI) jobs on the workstation nodes, the MPI network mask must be set appropriately by using the CCP_MPI_NETMASK environment variable, since workstation nodes and compute nodes do not have the same network connections. The recommendation is to set the subnet mask to 0.0.0.0. For more information about configuring the MPI network subnet mask, see Review or Adjust the Network That Is Used for MPI Messages.
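
For example, the following command sets the MPI network mask cluster-wide so that MPI traffic is not restricted to a particular subnet. The network/netmask form shown here follows the examples in the topic referenced above.

rem Example only: 0.0.0.0/0.0.0.0 matches any network.
cluscfg setenvs CCP_MPI_NETMASK=0.0.0.0/0.0.0.0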

Note

Moving MPI traffic may impact cluster performance, because communication between the nodes will occur over a slower enterprise network rather than the private network. You can choose instead to isolate MPI jobs to the dedicated compute nodes. To do this, create job templates for MPI jobs that target only the compute nodes.

Topology 2: All nodes on enterprise and private networks

Add workstation nodes with both private and enterprise connections

Topology 2 - Workstations same as compute nodes

In this topology, the workstation nodes have the same private network connection and enterprise network connection as the compute nodes. The HPC Management Service adds all discovered addresses for the private network to the hosts file for each compute node. Therefore, the compute nodes are able to communicate with the workstation nodes over the private network or the enterprise network. Because the HPC Pack services do not maintain the hosts files on the workstation nodes, communication from workstation nodes to compute nodes will occur only over the enterprise network, which may be slower.

Add workstation nodes with only an enterprise network connection

Topology 2 - Workstations on enterprise network

In this topology, all communication between the workstation nodes and the dedicated compute nodes occurs over the enterprise network.

For SOA services, the broker node must have a network path defined for all services. Because the network route to the compute nodes is not the same as the route to the workstation nodes, the cluster administrator has two options for running SOA services:

  • Route SOA services over the enterprise network   This will allow SOA services to run on both the workstation nodes and the dedicated compute nodes, but performance may be impacted because traffic is no longer routed over the private network.

    To run SOA services over the enterprise network, the WCF_NETWORKPREFIX environment variable must be set to the enterprise network. This can be configured by running the following cluscfg command at an elevated command prompt:

    cluscfg setenvs WCF_NETWORKPREFIX=Enterprise
    
  • Run SOA services only on the dedicated compute nodes   This may offer better network performance for SOA services, because all network traffic for SOA jobs will route over the private network. To enable this option, one approach is to create job templates for SOA sessions that target only the dedicated compute nodes, as shown in the example below.
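
    As a minimal check of the second option, you can submit a simple test job through such a restrictive template and confirm that it runs only on a dedicated compute node. The template name below is a placeholder for a job template that limits jobs to the ComputeNodes node group; the SOA client then specifies that template when it creates the session.

    rem Example only: "SOA Compute Only" is a placeholder job template name.
    job submit /jobtemplate:"SOA Compute Only" hostname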

To run Message Passing Interface (MPI) jobs on the workstation nodes, the MPI network mask must be set appropriately by using the CCP_MPI_NETMASK environment variable, since workstation nodes and compute nodes do not have the same network connections. The recommendation is to set the subnet mask to 0.0.0.0. For more information about configuring the MPI network mask, see Review or Adjust the Network That Is Used for MPI Messages.

Note

Moving MPI traffic may impact cluster performance, because communication between the nodes will occur over a slower enterprise network rather than the private network. You can choose instead to isolate MPI jobs to the dedicated compute nodes. To do so, create job templates for MPI jobs that target only the compute nodes.

Topology 3: Compute nodes isolated on private and application networks

Topology 3 - Workstations same as compute nodes

Topology 3 - Workstations on enterprise network

The considerations for adding workstation nodes in topology 3 are the same as those in Topology 1: Compute nodes isolated on a private network. Topology 3 differs only in the presence of an additional application network that may have higher bandwidth and lower latency than the private network.

Note

Because topology 3 includes a higher performing application network and topology 1 does not, the performance impact of routing SOA services to run on workstation nodes in the enterprise network instead of on the dedicated compute nodes may be proportionately greater.

Topology 4: All nodes on enterprise, private, and application networks

Topology 4 - Workstations same as compute nodes

Topology 4 - Workstations on enterprise network

The considerations for adding workstation nodes in topology 4 are the same as those in Topology 2: All nodes on enterprise and private networks. Topology 4 differs only in the presence of an additional application network that may have higher bandwidth and lower latency than the private network.

Note

Because topology 4 includes a higher performing application network and topology 2 does not, the performance impact of routing SOA services to run on workstation nodes in the enterprise network instead of on the dedicated compute nodes may be proportionately greater.

Topology 5: All nodes on the enterprise network

Topology 5 - Workstations on enterprise network

In topology 5, the considerations for adding workstation nodes are the same as those for adding compute nodes. For more information, see Appendix 1: HPC Cluster Networking.