Get started with on-premises Linux compute nodes

 

Applies To: Microsoft HPC Pack 2012 R2

Starting in HPC Pack 2012 R2 Update 3, you can add on-premises Linux compute nodes to an HPC Pack cluster. This article shows you how to set up an on-premises Linux cluster consisting of an HPC Pack head node and several Linux compute nodes. You can use this cluster to run Linux HPC workloads.

In this article:

  • Prerequisites

  • Deploy the head node

  • Prepare to install Linux nodes

  • Install Linux compute nodes

  • Verify the configuration

  • Considerations and limitations

Prerequisites

  • One computer with Windows Server installed - To deploy an HPC Pack cluster, you need to install a supported edition of Windows Server 2012 R2 or Windows Server 2012 on the computer (or computers) that will act as the head node.

  • Active Directory domain - The head node of the HPC Pack cluster must be domain joined. Prepare the Active Directory domain and choose an existing domain account with enough privileges to add the head node to the cluster. For guidance on choosing the Active Directory domain for your cluster and a domain account for adding nodes, refer to sections 1.6 and 1.7 in Step 1: Prepare for Your Deployment in the Getting Started guide.

    For steps to deploy a new Active Directory domain, refer to Deploy Active Directory Domain Services (AD DS) in Your Enterprise.

  • HPC Pack 2012 R2 Update 3 - The HPC Pack 2012 R2 Update 3 installation package contains installation files for on-premises Linux compute nodes. For the location of Linux node installation binaries and steps to install them, see the remaining sections in this article.

  • Computers running a supported Linux OS distribution - HPC Pack currently validates and supports the following Linux distributions: CentOS 6.6, CentOS 7.0, Red Hat Enterprise Linux 6.6, Red Hat Enterprise Linux 7.1, and Ubuntu 14.04.2 on x64 platforms.

Deploy the head node

To deploy and install the head node, see Step 2: Deploy the Head Node in the Getting Started guide.

To configure the head node, follow the procedures in Step 3: Configure the Head Node in the Getting Started guide.

Note

Currently, when you select the cluster network topology in Configure your network in the Deployment To-do List, we recommend Topology 5: All nodes only on an enterprise network.

Prepare to install Linux nodes

This section covers the steps to prepare for installing the Linux compute nodes:

  1. Fetch Linux compute node installation binaries

  2. Set up a file share to share installation binaries to Linux compute nodes

  3. Prepare the certificate used for communication between the head node and Linux compute nodes

Step 1. Fetch Linux compute node installation binaries

After deploying the head node, find the on-premises Linux node installation binaries in the following folder: %CCP_DATA%InstallShare\LinuxNodeAgent.

The files hpcnodeagent.tar.gz and setup.py are the binaries required to install on-premises Linux compute nodes.
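
For example, you can list these files in a Windows PowerShell window on the head node. This is only a quick check, and it assumes the default installation, in which the CCP_DATA environment variable ends with a trailing backslash:

PS > dir "${env:CCP_DATA}InstallShare\LinuxNodeAgent"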

Step 2. Set up a file share to share installation binaries to Linux compute nodes

You have several choices for moving data between the head node and the Linux compute nodes, including an SMB share or an NFS share. The following steps set up an SMB share on the head node to share the binaries with the Linux compute nodes.

You can also copy the binaries to a centralized share. Just make sure that the installation files are accessible from the Linux compute nodes and are executable by the users who need to log in to those nodes.

Tip

You can skip the following instructions if you know how to deploy the binaries to a file share that is accessible from the Linux compute nodes.

To set up an SMB share on the head node

  1. Create a folder on the head node and share it to Everyone with the Read/Write permission level. For example, share C:\SmbShare on the head node as \\<HeadNodeName>\SmbShare (in this article, \\LN15-UB14-HN1\SmbShare).

  2. Mount the SMB share on each Linux node. For example, use the following commands to mount the share on the path /smbshare:

    mkdir -p /smbshare
    
    mount -t cifs //LN15-UB14-HN1/SmbShare /smbshare -o vers=2.1,domain=<domainname>,username=<username>,password='<password>',dir_mode=0777,file_mode=0777
    

    Note

    You must use cifs-utils to mount the SMB share from the Linux compute nodes. On CentOS and Red Hat distributions, install the cifs-utils package by running yum install (example commands follow this procedure).

  3. Copy the binaries hpcnodeagent.tar.gz and setup.py into \\LN15-UB14-HN1\SmbShare on the head node, and check that the files can be seen in the path /smbshare from the Linux compute nodes.
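
If cifs-utils is not yet present on a Linux node, commands similar to the following install it (a minimal sketch; run them as root or with sudo, and pick the line for your distribution):

# CentOS / Red Hat Enterprise Linux
yum install -y cifs-utils
# Ubuntu
apt-get install -y cifs-utils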

Step 3. Prepare the certificate used for communication between the head node and Linux compute nodes

For security reasons, HPC Pack uses HTTPS to communicate between Linux compute nodes and the head node. Use the following steps to prepare the certificate used for communication. During head node installation, HPC Pack generates a self-signed certificate in the Local Computer\Personal store named Microsoft HPC Linux Communication, which you can use for test purposes. You can replace it with your own certificate in a production environment.
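
To locate the generated certificate on the head node, you can query the Local Computer\Personal store in a Windows PowerShell window. This sketch matches on the certificate name appearing in either the subject or the friendly name, since either may apply:

PS > dir Cert:\LocalMachine\My | where { $_.Subject -match "HPC Linux Communication" -or $_.FriendlyName -match "HPC Linux Communication" }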

The certificate used for communication must have the following attributes:

  • The Subject Name is the same as the FQDN of the head node, or the Subject Alternative Name contains the FQDN of the head node

  • The certificate contains a private key

  • The certificate is exportable

  • If the certificate is self-signed, it must contain Key Usage: Digital Signature, Non-Repudiation, Key Encipherment, Data Encipherment, and Certificate Signing; and it must contain Enhanced Key Usage (also expressed as extendedKeyUsage in openssl): Server Authentication and Client Authentication
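
To check these attributes on a certificate you already have, one option is to inspect the PFX file with openssl on any Linux machine (a sketch; my.pfx is a hypothetical file name, and you are prompted for the PFX password):

# Print the subject, subject alternative name, key usage, and enhanced key usage
openssl pkcs12 -in my.pfx -nokeys | openssl x509 -noout -text | grep -E -A1 "Subject:|Subject Alternative Name|Key Usage"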

To configure your own certificate, run the following commands in a Windows PowerShell window:

PS > add-pssnapin Microsoft.HPC

PS > Set-HpcLinuxCertificate -FilePath <My.pfx>

Note

Because a password is not specified in the previous command, you are prompted to enter the password for the certificate. For more information about Set-HpcLinuxCertificate, type get-help Set-HpcLinuxCertificate.

To use the certificate generated by HPC Pack, run the following commands in a Windows PowerShell window to export the certificate to the share that is accessible from the Linux compute nodes. For example, to export the certificate to the shared folder as C:\SmbShare\hpclinuxagent.pfx, type:

PS > add-pssnapin Microsoft.HPC

PS > Export-HpcLinuxCertificate -FilePath C:\SmbShare\hpclinuxagent.pfx

Note

Because a password is not specified in the previous command, you are prompted to enter the password for the certificate. For more information about Export-HpcLinuxCertificate, type get-help Export-HpcLinuxCertificate.

The PFX file (hpclinuxagent.pfx in this example) can now be seen in the path /smbshare from the Linux compute nodes.

Install Linux compute nodes

Install the Linux compute nodes by running the Python script setup.py. Ensure that Python is installed on the Linux nodes, and install it if it is not. For detailed usage of setup.py, type python setup.py -help.
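
A quick way to check for Python, and to install it if it is missing, is shown below (a sketch; run as root or with sudo, and use the command for your distribution):

# Check whether Python is already installed
python --version
# CentOS / Red Hat Enterprise Linux
yum install -y python
# Ubuntu
apt-get install -y python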

For example, to add a Linux node to the cluster, type a command similar to the following command in a Bash shell on each Linux node:

python setup.py -install -clusname:<FQDN of head node> -certfile:'<path to PFX certificate>'
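
For instance, with the example share and certificate from the previous sections, and assuming the head node's FQDN is LN15-UB14-HN1.contoso.com (the domain suffix here is hypothetical), the command to run as root on each node would look like this:

cd /smbshare
python setup.py -install -clusname:LN15-UB14-HN1.contoso.com -certfile:'/smbshare/hpclinuxagent.pfx'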

Verify the configuration

After you successfully install the Linux nodes, open HPC Cluster Manager to check the status of the HPC Pack cluster. You manage and monitor Linux compute nodes in many of the ways that you work with Windows nodes:

  • In Resource Management, list Linux nodes by clicking By Node Template > LinuxNode Template.

  • View a heat map of the Linux nodes by switching to the Heat Map view in Resource Management.

  • Submit jobs to the Linux nodes by using the actions in Job Management.
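
You can also query the Linux nodes from HPC PowerShell on the head node. The following sketch assumes the nodes were placed in the default LinuxNodes node group:

PS > add-pssnapin Microsoft.HPC

PS > Get-HpcNode -GroupName LinuxNodes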

To submit a test parametric sweep job to Linux nodes

  1. After selecting the Linux nodes in Resource Management, pivot to Job Management, and click New Parametric Sweep Job.

  2. In the New Parametric Sweep Job dialog box, specify a simple command line, such as hostname. Accept default values for the remaining settings, and then click Submit.

  3. After the job finishes, double-click the item to view the output of each task. In this example, each Linux node returns its hostname.
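
As another quick test from the head node, clusrun can run the same command on the Linux nodes directly (again assuming the default LinuxNodes node group):

PS > clusrun /nodegroup:LinuxNodes hostname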

For more information about how to move data and submit jobs to the cluster, see Get started with Linux compute nodes in an HPC Pack cluster in Azure. The general procedures are identical in an on-premises cluster with Linux compute nodes.

Considerations and limitations

  • Linux distributions - See Prerequisites for Linux distributions that are currently tested for compatibility with HPC Pack.

  • Single head node configuration - Currently HPC Pack supports only a single head node in a cluster with Linux compute nodes. A head node configured for high availability can't be used.

  • MPI - To run MPI applications on the Linux nodes, you must install your own MPI distribution on the nodes. Microsoft MPI (MS-MPI), which is included with HPC Pack, runs only on Windows nodes. The scheduler must also set up mutual trust between the Linux nodes. For an example, see Run NAMD with Microsoft HPC Pack on Linux compute nodes in Azure.

  • GPU and SOA workloads not supported - At this time HPC Pack does not support scheduling to GPGPUs or running SOA workloads on the Linux nodes.

 

See Also

Microsoft HPC Pack: Node Deployment
Get started with Linux compute nodes in an HPC Pack cluster in Azure
Run NAMD with Microsoft HPC Pack on Linux compute nodes in Azure
Run OpenFoam with Microsoft HPC Pack on a Linux RDMA cluster in Azure