What's New in HPC Pack 2012 R2 Update 3


Updated: November 14, 2015

Applies To: Microsoft HPC Pack 2012 R2

This document lists the new features and changes that are available in Microsoft® HPC Pack 2012 R2 Update 3. For late-breaking issues and getting started guidance, see Release Notes for HPC Pack 2012 R2 Update 3.

In this topic:

  • Support for Linux nodes on-premises

  • GPU support

  • Burst to Azure Batch

  • Scheduler improvements

  • SOA improvements

  • Other improvements

Support for Linux nodes on-premises

In Update 2, we introduced Azure Linux VM support for HPC Pack. With this update, HPC Pack also supports Linux on on-premises compute nodes. Customers running HPC clusters on Linux can now use HPC Pack deployment, management, and scheduling capabilities, and the user experience is very similar to that of Windows nodes.

  • Deployment – In Update 3, we include the Linux agent binaries with HPC Pack. After the head node installation, you can install these binaries from the head node share with the provided setup script. In this release, we support CentOS (versions 6.6 and 7.0), Red Hat Enterprise Linux (versions 6.6 and 7.1), and Ubuntu (version 14.04.2).

  • Management and scheduling – The management and scheduling experience for Linux nodes in HPC Cluster Manager is similar to that for the Windows nodes we already support. You can see the Linux nodes' heat maps, create jobs with Linux node-specific cmdlets, monitor job, task, and node status, and so on. Additionally, the handy clusrun tool works with Linux nodes, as in the sketch below.
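
    For example, a minimal clusrun sketch (the node group name LinuxNodes is an assumption; substitute your own node group or an explicit /nodes list):

      clusrun /nodegroup:LinuxNodes uname -a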

  • Support for MPI and SOA workloads – Note that Microsoft MPI (MS-MPI) is available only on Windows compute nodes, so you need to install an MPI distribution yourself on the Linux nodes. We provide guidance on how to submit Linux MPI jobs in HPC Pack. To run SOA workloads on Linux nodes, you need our Java Service Host, which is released as an open source project on GitHub. We will publish more details on this later.

    For more information about Linux support in Update 3, see Get Started with Linux compute nodes with HPC Pack.

GPU support

GPUs are becoming more popular in technical computing. With this update, we offer initial support for GPUs in HPC Pack. In this update, we support only NVIDIA GPUs with CUDA capability (for example, Tesla K40).

  • Management and monitoring – Our management service detects whether a Windows compute node has a GPU installed and configured. If so, we collect GPU metrics so that they can be monitored on the heat map.

  • Scheduling – In this release, you can specify GPU as the unit type, in addition to Core/Socket/Node, for your job or task. Additionally, the job scheduler provides the assigned GPU index information as an environment variable for your task to use exclusively, as in the sketch below.
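
    For example, a task command might map the assigned index onto the CUDA runtime. A minimal PowerShell sketch, assuming the variable is named CCP_GPUIDS (a name taken from later HPC Pack documentation; verify it on your installation) and an illustrative application path:

      # Pin the CUDA runtime to the GPU index assigned by the scheduler,
      # then launch the application (path is hypothetical).
      $env:CUDA_VISIBLE_DEVICES = $env:CCP_GPUIDS
      & "C:\apps\MyGpuApp.exe"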

For more information about this GPU support in Update 3, see Get started with HPC Pack GPU support.  

Burst to Azure Batch

The Azure Batch service is a cloud platform service that provides highly scalable job scheduling and compute management. Starting with this update, HPC Pack can deploy an Azure Batch pool from the head node and treat the pool as a single “fat” node in the system. Batch jobs and tasks can then be scheduled on the pool nodes.

For more information, see Burst to Azure Batch from Microsoft HPC Pack.

Scheduler improvements

  • Azure auto grow shrink with SOA workloads – The auto grow shrink service can now grow nodes based on the outstanding calls in a SOA job instead of only on task numbers. You can also set the new SoaJobGrowThreshold and SoaRequestsPerCore properties for auto grow shrink, as in the sketch below.
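
    A minimal sketch, assuming these properties are set through Set-HpcClusterProperty like other cluster settings (the values are illustrative, not recommendations):

      # Grow for SOA jobs with at least 1 outstanding call, targeting
      # roughly one core per 10 outstanding requests.
      Set-HpcClusterProperty -SoaJobGrowThreshold 1 -SoaRequestsPerCore 10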

  • Customizable idle detection logic – Until this release, workstation nodes and unmanaged server nodes were treated as idle based on keyboard or mouse detection, or on CPU usage for processes other than those of HPC Pack. Now we add these capabilities:

    • You can whitelist processes to exclude from the node CPU usage calculation by adding them to the registry value (type REG_MULTI_SZ) HKLM\Software\Microsoft\HPC\CpuUsageProcessWhiteList, as in the sketch below.
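
      A minimal PowerShell sketch (the process names are examples only):

        # Exclude these processes from the node CPU usage calculation.
        New-ItemProperty -Path "HKLM:\Software\Microsoft\HPC" `
            -Name "CpuUsageProcessWhiteList" -PropertyType MultiString `
            -Value @("backup.exe", "indexer.exe")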

    • When you don’t specify keyboard or mouse detection or a CPU usage threshold, you can provide your own node idleness logic by creating a file named IdleDetector.notidle in the %CCP_HOME%Bin folder. HPC Pack checks whether this file exists and reports to the scheduler every 5 seconds; see the sketch below.
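
      A minimal PowerShell sketch, assuming the file's presence marks the node as not idle (as the file name implies):

        # Signal "not idle" while custom work is in progress.
        $flag = Join-Path $env:CCP_HOME "Bin\IdleDetector.notidle"
        New-Item -ItemType File -Path $flag -Force | Out-Null
        # ... later, allow the node to be reported as idle again:
        Remove-Item $flag -ErrorAction SilentlyContinue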

  • Previous task exit code – The environment variable CCP_TASK_PREV_EXITCODE records the exit code from the previous run of a task when the task is retried, as in the sketch below.
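
    A minimal PowerShell sketch of a task script that reacts to a prior failed run (the cleanup step is hypothetical):

      # On a retry, CCP_TASK_PREV_EXITCODE holds the previous exit code.
      if ($env:CCP_TASK_PREV_EXITCODE -and $env:CCP_TASK_PREV_EXITCODE -ne "0") {
          Write-Host "Retrying after exit code $env:CCP_TASK_PREV_EXITCODE; cleaning up first"
          # hypothetical cleanup before redoing the work
      }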

  • Scheduler REST API improvements:

    • Added the new task properties ExecutionFailureRetryCount and AutoRequeueCount

    • Added OData filter and sort-by ability on GetNodeList, GetJobList, and GetTaskList. For example, for GetJobList:

      Specify OData filters for this request in the format "$filter=<filter1>%20and%20<filter2>…". To filter the jobs by a valid job state, use the filter "JobState%20eq%20<JobState>". To retrieve jobs whose state changed after a certain date and time, use the filter "ChangeTimeFrom%20eq%20<DateTime>". A null value is ignored. The minimum version that supports this URI parameter is Microsoft HPC Pack 2012. To use this parameter, also set the Render URI parameter to RestPropRender and an api-version value of at least 2012-11-01. Example: "$filter=JobState%20eq%20queued%20and%20ChangeTimeFrom%20eq%202015-10-16&Render=RestPropRender&api-version=2012-11-01".
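
      A minimal PowerShell sketch of such a request (the base URI assumes the default HPC web service path; substitute your own head node name):

        # Query queued jobs using an OData filter.
        $uri = "https://myheadnode/WindowsHPC/myheadnode/Jobs" +
               "?`$filter=JobState%20eq%20queued&Render=RestPropRender&api-version=2012-11-01"
        Invoke-RestMethod -Uri $uri -UseDefaultCredentials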

  • Unlimited number of parametric sweep tasks in a job – Before this release, the limit was 100.

  • CCP_NEW_JOB_ID environment variable for the job new command – With this variable, you no longer need to parse the command output in your batch scripts. See the sketch below.
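
    A minimal batch-script sketch (the job name and task command are illustrative):

      rem Create a job, then reference it via CCP_NEW_JOB_ID instead of
      rem parsing the output of "job new".
      job new /jobname:MyJob
      job add %CCP_NEW_JOB_ID% echo hello
      job submit /id:%CCP_NEW_JOB_ID%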


SOA improvements

  • Better mapping of broker worker logs to sessions - To enable per-session broker logs, add the attribute PerSessionLogging="1" to the shared listener “SoaListener” in HpcBrokerWorker.exe.config on the broker nodes.

  • SessionStartInfo supports %CCP_Scheduler% as the head node name if one is not specified – It also accepts any predefined environment variable, such as %HPC_IaaSHN%.

  • Support for Excel running under a console session by default - If you have many cores on a compute node and you want to start many instances of Excel, run Excel under a console session to prevent possible issues. In HPC Cluster Manager, click Node Management. In the node list, select all the compute nodes for Excel, right-click, and choose Remote Desktop from the context menu. Connect to all the nodes using the user credentials under which the Excel workbook will run. This creates an active interactive session in which the Excel work can launch, and you can observe it when the Excel job runs on the nodes.

  • SOA job view in web portal - The HPC web portal now shows additional list views for SOA jobs and My SOA jobs, with default columns for request counters, including total and outstanding requests. In the job details view, progress bars show the progress of the tasks and requests.

  • EchoClient.exe for the built-in Echo service - EchoClient.exe is located in the %CCP_HOME%Bin folder. It can be used as a simple SOA client for the built-in Echo service, CcpEchoSvc. For example, EchoClient.exe -h <headnode> -n 100 creates an interactive SOA session with 100 echo requests. Type EchoClient.exe -? for more help.

Other improvements

  • HPC version tool - We introduce a Windows PowerShell tool to list the HPC Pack version and installed updates. Run Get-Help Get-HPCPatchStatus.ps1 -Full on any computer where HPC Pack is installed to get detailed help and examples.

  • Per-instance heat map - Until this release, HPC Pack provided only an aggregated instance heat map; now you can view an individual heat map for each instance through the overlay view.

  • MS-MPI v7 - We integrate with the latest Microsoft MPI version, MS-MPI v7, available from the Microsoft Download Center.