Configure HPC Pack with Azure Files

This article explains how to configure HPC Pack with Azure Files, configure identity and authentication, and achieve your performance goals.

Because Azure file shares are serverless, deploying for production scenarios doesn't require managing a file server or network-attached storage (NAS) device. Azure Files also has built-in redundancy for high availability. This means organizations don't have to apply software patches or stripe multiple disks to meet cost and performance needs for their high-performance computing (HPC) clusters.

Azure Files premium file shares satisfy typical customer performance and identity requirements, enabling easily configured, cost-effective, performant lift-and-shift scenarios. Azure Files supports different identity configurations according to customers' needs.

This article focuses on how to bring your existing on-premises HPC Pack workload to Azure. It covers the most commonly reported configuration for this scenario: premium file shares with an on-premises Active Directory Domain Services (AD DS) instance configured with default share-level permissions.

For example, if you're using HPC Pack for financial services, your company might have a policy not to synchronize your identity to the cloud. In that case, default share-level permissions would likely fulfill your needs. A default share-level permission grants all AD DS authenticated users a baseline level of access to your storage account. You can then apply fine-grained access control at the file and directory level by using Windows access control lists (ACLs), also known as NTFS permissions.

A default share-level permission assigned to your storage account applies to all file shares contained in the storage account. You can then use on-premises Active Directory for file-level and directory-level permissions without having to sync Active Directory to the cloud.

Planning for using Azure Files with HPC Pack

The following sections describe how to plan and execute your lift and shift of an on-premises HPC Pack solution by using Azure Files as storage.

Calculate performance targets

Azure Files premium file shares mounted by using Server Message Block (SMB) are a good fit for Windows-based applications like HPC Pack workloads.

After you determine the performance needs of your HPC Pack environment, you can calculate the performance targets for various file share sizes:

  • To calculate baseline input/output per second (IOPS), use this formula:

    3,000 + 1 IOPS per GiB

    For example, for a 10 TiB premium file share, the calculation is 3,000 + 10,240 GiB = 13,240 IOPS.

  • To calculate throughput (total ingress and egress), use this formula. Use the CEILING function, because rounding up to the next whole number affects the result for a given provisioned size.

    100 + CEILING(.04 * GiB) + CEILING(.06 * GiB)

    For example, for a 10 TiB (10,240 GiB) premium file share, the calculation is 100 + CEILING(.04 * 10,240) + CEILING(.06 * 10,240) = 100 + 410 + 615 = 1,125 MiB/sec.
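
If you'd rather script the arithmetic than do it by hand, the following PowerShell sketch simply reproduces the two formulas above for an example provisioned size. The variable names are illustrative only.

    # Reproduce the baseline IOPS and throughput formulas for a provisioned size in GiB.
    $provisionedGiB = 10240   # 10 TiB example

    $baselineIops    = 3000 + $provisionedGiB   # 3,000 + 1 IOPS per GiB = 13,240
    $throughputMiBps = 100 + [math]::Ceiling(0.04 * $provisionedGiB) + [math]::Ceiling(0.06 * $provisionedGiB)

    "Baseline IOPS: $baselineIops"
    "Throughput: $throughputMiBps MiB/sec"    # 1,125 MiB/sec for this example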

After you know the target share size that provides the expected IOPS and throughput values, you can move on to choosing an identity scheme.

Choose an identity scheme

Next, you need to decide whether to use Azure AD DS or on-premises AD DS as an identity scheme. You also need to decide if you'll apply a default share-level permission. For more information, see Overview of Azure Files identity-based authentication options for SMB access.

A common pattern with HPC Pack is that an organization doesn't want synchronization of Active Directory to the cloud. If this is the case, and you can't sync your on-premises AD DS instance to Azure AD, use default share-level permissions to set the default access level for all authenticated identities, irrespective of their sync status. Then you can use Windows ACLs for granular permission enforcement on your files and directories.
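
Assigning the default share-level permission is a one-time setting on the storage account. Here's a minimal Azure PowerShell sketch, assuming the Az.Storage module, a signed-in session, placeholder resource names, and a storage account already enabled for AD DS authentication. The built-in role shown grants read/write access at the share level; choose the role that matches the access level you need.

    # Placeholder names; replace with your resource group and storage account.
    $resourceGroup  = "hpc-rg"
    $storageAccount = "hpcfiles01"

    # Grant all AD DS authenticated users contributor (read/write) access at the
    # share level. Fine-grained control is then enforced with Windows (NTFS) ACLs.
    Set-AzStorageAccount -ResourceGroupName $resourceGroup -AccountName $storageAccount `
        -DefaultSharePermission StorageFileDataSmbShareContributor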

Configure Azure Files for HPC Pack

  1. Create and configure Azure file shares:

    1. Create a storage account. To create a FileStorage storage account, ensure that the Performance option is set to Premium and that File shares is selected in the Premium account type dropdown list. The storage account name must be 15 characters or fewer.
    2. Create a file share with a size that meets your performance needs, as indicated by the earlier calculation.
    3. Enable SMB Multichannel. You'll learn about SMB Multichannel benefits later in this article.
    4. Configure identity by enabling Azure AD DS authentication on Azure Files or enabling AD DS authentication for Azure Files on the storage account.
    5. Set a default share-level permission.
    6. Mount the Azure file share by using your storage account key, as shown in the sketch that follows these steps.
    7. Configure Windows ACLs.
  2. Configure and use HPC Pack file shares. For a list of default HPC Pack file shares, see Build a high-availability HPC Pack cluster in Azure. Note that the default shares are required only for certain user scenarios. To move the default shares to Azure file shares, follow these steps:

    1. Create Azure file shares and configure Windows ACLs according to the original file share (for example, SOA runtime share).
    2. Change the related cluster setting (for example, cluscfg setenvs CCP_SERVICEREGISTRATION_PATH=\\<AzureFiles>\HpcServiceRegistration).
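
The following PowerShell sketch consolidates several of the preceding steps: creating the premium storage account and file share, mounting the share with the storage account key, applying a Windows ACL, and changing the HPC Pack cluster setting. It assumes the Az PowerShell modules, a signed-in session, and placeholder names (resource group, storage account, share, region, drive letter, and AD DS group); adjust sizes, names, and permissions for your environment.

    # Placeholder names; replace with values for your environment.
    $resourceGroup  = "hpc-rg"
    $storageAccount = "hpcfiles01"              # 15 characters or fewer
    $shareName      = "hpcserviceregistration"
    $location       = "westeurope"

    # Steps 1-2: Create a FileStorage (premium) account and a share sized for your targets.
    New-AzStorageAccount -ResourceGroupName $resourceGroup -Name $storageAccount `
        -Location $location -SkuName Premium_LRS -Kind FileStorage
    New-AzRmStorageShare -ResourceGroupName $resourceGroup -StorageAccountName $storageAccount `
        -Name $shareName -QuotaGiB 10240

    # Step 6: Mount the share on a node by using the storage account key.
    $key = (Get-AzStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount)[0].Value
    cmdkey /add:"$storageAccount.file.core.windows.net" /user:"Azure\$storageAccount" /pass:$key
    net use Z: "\\$storageAccount.file.core.windows.net\$shareName" /persistent:yes

    # Step 7: Apply Windows (NTFS) ACLs to mirror the original share. The group name is a placeholder.
    icacls Z:\ /grant "CONTOSO\HpcUsers:(OI)(CI)M"

    # Point HPC Pack at the new share (run on the head node).
    cluscfg setenvs "CCP_SERVICEREGISTRATION_PATH=\\$storageAccount.file.core.windows.net\$shareName"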

High performance with Azure Files

SMB Multichannel

Azure Files supports SMB Multichannel on premium file shares (file shares in the FileStorage type of storage account). There is no additional cost for enabling SMB Multichannel in Azure Files. SMB Multichannel is disabled by default on the FileStorage resource.

Maximum performance of a single VM client is still bound to VM limits. For example, Standard_D32s_v3 can support a maximum bandwidth of 16,000 Mbps (or 2 GBps). Egress from the VM (writes to storage) is metered, but ingress (reads from storage) is not. File share performance is subject to machine network limits, CPUs, internal storage available, network bandwidth, I/O sizes, parallelism, and other factors. For more information, see SMB Multichannel performance.
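
To confirm that a compute node is actually using multiple channels to the share, you can use the built-in Windows SMB client cmdlets. The following sketch assumes the Azure file share is already mounted from that node and that some I/O is running against it.

    # Confirm that the SMB client allows multichannel.
    Get-SmbClientConfiguration | Select-Object EnableMultichannel

    # List active multichannel connections. With SMB Multichannel working, you
    # should see more than one connection to the storage account endpoint.
    Get-SmbMultichannelConnection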

Typical HPC Pack performance

Typical HPC Pack usage consists of reading from and writing to several large files stored in Azure Files, with large block sizes and, on average, 60 percent reads and 40 percent writes. This kind of usage should see the best performance, in line with the published I/O and throughput expectations for your identity configuration.

Atypical usage might be millions of small files and small block sizes. In those cases, organizations need to test other configurations to evaluate optimal performance.

Measure performance

To test performance, you can use DiskSpd.exe. It's a configurable tool that emulates various workloads and measures read and write I/O, latency, and throughput.
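
For example, a run that roughly matches the typical HPC Pack profile described earlier (large blocks, about 60 percent reads and 40 percent writes) might look like the following sketch. The drive letter, test file size, and duration are placeholders to adjust for your own environment.

    # Illustrative DiskSpd run against a mounted Azure file share (Z: is a placeholder).
    # -b512K : 512 KiB block size           -w40 : 40 percent writes (60 percent reads)
    # -d120  : run for 120 seconds          -t8  : 8 threads
    # -o8    : 8 outstanding I/Os per thread
    # -Sh    : disable software caching and hardware write caching
    # -c50G  : create a 50 GiB test file    -L   : capture latency statistics
    .\DiskSpd.exe -b512K -d120 -o8 -t8 -w40 -Sh -L -c50G Z:\diskspd-test.dat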

Optimizing and troubleshooting performance

Here are some resources and tips:

  • Optimizing performance
  • Troubleshoot Azure file share performance issues
  • Indications of poor storage performance in an HPC Pack environment:
    • Long startup times for nodes accepting tasks and beginning calculations.
    • Windows performance counters (Avg. Disk sec/Read, Avg. Disk sec/Transfer, Avg. Disk sec/Write, and especially Avg. Disk Queue Length) show high or capped values on compute nodes. A sample counter query follows this list.
    • Latency on output locations. In some calculations, the designated output location (the RUNTIME$ share or other SMB shares) might show signs of saturation.
  • Advanced SMB client troubleshooting
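
To spot-check the counters listed above from a compute node, a short Get-Counter sample like the following can help. The counter instances and sample count are only a starting point; adapt them to your nodes and workload.

    # Sample the disk latency and queue-length counters mentioned above
    # (all disk instances, five samples at the default one-second interval).
    $counters = @(
        '\PhysicalDisk(*)\Avg. Disk sec/Read',
        '\PhysicalDisk(*)\Avg. Disk sec/Write',
        '\PhysicalDisk(*)\Avg. Disk sec/Transfer',
        '\PhysicalDisk(*)\Avg. Disk Queue Length'
    )
    Get-Counter -Counter $counters -MaxSamples 5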