CCS v1 - How does the MPI network actually work without name resolution?

Technorati Tags: Compute Cluster Server, CCS

As you may already know, with the release of Windows Compute Cluster Server 2003 (CCS) we included the Microsoft Message Passing Interface (MS-MPI), an implementation that is compatible with the reference MPICH2. MS-MPI integrates with Active Directory, which enables role-based security for administrators and users, and it works with the Microsoft Management Console (MMC), which provides a familiar administrative and scheduling interface.

Microsoft CCS can use Gigabit Ethernet (GbE), InfiniBand (IB), Myrinet, Quadrics, or other high-speed fabrics as interconnects for high-performance computing. The majority of high-performance computing clusters use GbE, but more and more customers these days prefer the high speed and low latency of interconnects such as InfiniBand or other specialty hardware. Our implementation of CCS supports all Winsock Direct (WSD)-compatible fabrics.

This is one of those things where you wake up some days wondering, “How does this thing actually work?” It seems like a simple question, but after a couple of discussions with the developers you realize, “Hmm, actually it is not very clear,” or you catch yourself saying that some magic is happening somewhere. If you are trying to find answers to the following questions, then listen up…

  • What magic happens during MPI initialization?
  • What are business cards, and how do MPI apps get these for other nodes?
  • How does the MPI network work without name resolution?

What’s more interesting is that when we checked the test clusters with IB cards, we found that DNS and default gateway settings are not configured on the IB network interface cards (NICs). There was no name resolution mechanism on the MPI network at all. So how do we force MPI traffic onto the right fabric using the MPICH_NETMASK subnet mask without name resolution…

After thoroughly discussing this with Mr. MPI, here is a brief summary of how the magic happens…

Myth: The subnet manager running on the IB switch does name resolution on the MPI network.
Wrong!

  1. The user submits an MPI job.
  2. The Job Scheduler allocates the number of nodes (or processors) requested for the MPI job.
  3. The first allocated node runs mpiexec with all of the required parameters passed in by the Job Scheduler (CCP_NODES, CCP_MPI_NETMASK, etc.).
  4. mpiexec kicks off and forms a tree: it talks first to the msmpi service running on its own node, which spawns the smpd manager; the smpd manager then talks to the msmpi services running on the other allocated compute nodes. This is the one place where we need name resolution, because the smpd manager on the first node has to reach the msmpi service/smpd on each allocated node by name.
  5. Each MPI application starts up and queries all of the LOCAL addresses on its node. It then registers this information as a “business card” in a shared database inside the smpd tree; the business card lists every available interface on the node.
  6. When the MPI application with rank x, running on node X, needs to connect to rank y, running on node Y, it gets y’s business card from the smpd tree and connects directly to y using the address list in that business card.
  7. MPI app x filters y’s addresses using the MPICH_NETMASK environment variable (see the sketch after this list). mpiexec sets this variable by reading CCP_MPI_NETMASK, which in turn is set as a cluster-wide variable by the CCP management services. That cluster variable is set when you select the networks in the To Do List: it points to the MPI network if one is selected, or to the Private network if no MPI network is present.
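
To make step 7 concrete, here is a minimal sketch in plain C of what netmask-based address filtering looks like. This is illustrative only, not the actual MS-MPI source: the business-card addresses, the subnet values, and the on_subnet helper are all invented for the example.

    /* Illustrative sketch only -- NOT the MS-MPI source. Shows how a rank
     * could keep just the peer addresses that fall inside the subnet named
     * by MPICH_NETMASK (for example 10.0.0.0/255.0.0.0). Builds with any
     * C compiler on a POSIX box; on Windows, swap <arpa/inet.h> for
     * <winsock2.h> and call WSAStartup first. */
    #include <stdio.h>
    #include <arpa/inet.h>

    /* 1 if addr is on the net/mask subnet, 0 otherwise. All three values
     * come back from inet_addr() in network byte order, so the bitwise
     * AND comparison is byte-order safe. */
    static int on_subnet(const char *addr, const char *net, const char *mask)
    {
        return (inet_addr(addr) & inet_addr(mask)) ==
               (inet_addr(net)  & inet_addr(mask));
    }

    int main(void)
    {
        /* Pretend this is rank y's business card: one address per
         * interface on node Y (values invented for the example). */
        const char *bizcard[] = { "192.168.5.20",   /* private network  */
                                  "10.0.0.20",      /* MPI (IB) network */
                                  "157.59.1.20" };  /* public network   */

        /* Parsed from MPICH_NETMASK, e.g. "10.0.0.0/255.0.0.0". */
        const char *net  = "10.0.0.0";
        const char *mask = "255.0.0.0";

        /* Rank x keeps only the addresses on the MPI subnet and dials
         * them directly -- no DNS lookup is ever needed. */
        for (int i = 0; i < 3; i++)
            if (on_subnet(bizcard[i], net, mask))
                printf("connect to %s\n", bizcard[i]);
        return 0;
    }

Running this prints only “connect to 10.0.0.20”, which is exactly why the IB NICs can get away with no DNS and no default gateway: the connection is made straight to the IP address that the netmask selected.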

So the bottom line is that we do not need name resolution on the MPI network, as long as the nodes can resolve each other’s names through the private or public network.
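
If you ever need to steer the filter yourself, the standard smpd-style mpiexec switch -env lets you set MPICH_NETMASK for a single job, and, as we recall from the CCS documentation (please verify the exact syntax on your own head node), cluscfg setenvs sets CCP_MPI_NETMASK cluster-wide. The subnet value below is just an example:

    rem Per job (standard mpiexec "-env <name> <value>" syntax):
    mpiexec -env MPICH_NETMASK 10.0.0.0/255.0.0.0 -n 8 mympiapp.exe

    rem Cluster-wide (verify with "cluscfg /?" on your head node):
    cluscfg setenvs CCP_MPI_NETMASK=10.0.0.0/255.0.0.0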

For additional information regarding Microsoft Compute Cluster Server, please visit our Windows HPC (High Performance Computing) Community forums.

RELATED RESOURCE REFERENCE(S):

· Message Passing Interface (MPI) Documentation

· Using Microsoft Message Passing Interface (MS-MPI) Documentation

Mike Rosado
Senior Support Engineer
Microsoft Enterprise Platforms Support