What's New in Microsoft HPC Pack 2012 R2

 

Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2

This document lists the new features and changes that are available in Microsoft® HPC Pack 2012 R2.

In this topic:

  • Deployment

  • Windows Azure integration

  • Job scheduling

  • Runtimes and development

Deployment

  • Operating system requirements are updated.   HPC Pack 2012 R2 adds support for Windows Server® 2012 R2 and Windows® 8.1 for certain node roles, as shown in the following table.

    Role | Operating system requirement
    Head node | Windows Server 2012 R2, Windows Server 2012
    Compute node | Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
    WCF broker node | Windows Server 2012 R2, Windows Server 2012
    Workstation node | Windows 8.1, Windows 8, Windows 7
    Unmanaged server node | Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
    Windows Azure node | Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2
    Client (computer with only the client utilities installed) | Windows Server 2012 R2, Windows 8.1, Windows Server 2012, Windows 8, Windows Server 2008 R2, Windows 7, Windows Server 2008, Windows Vista

    For more information, see System Requirements for Microsoft HPC Pack 2012 R2 and HPC Pack 2012.

Windows Azure integration

  • Stop or remove selected Windows Azure nodes.   HPC Pack 2012 R2 allows you to stop or remove selected nodes from Windows Azure, giving you finer control over the size and cost of your Windows Azure node deployments. You can use HPC Cluster Manager or the new Stop-HpcAzureNode and Remove-HpcAzureNode Windows HPC PowerShell cmdlets. With this feature, you can scale down the number of deployed nodes to match the workload in the job queue, or remove selected Windows Azure nodes in a deployment (or across deployments) that are idle for long periods or are in an error state. In previous versions of HPC Pack, you could only stop or remove the entire set of nodes deployed with a particular node template.

  • Additional compute instance sizes are supported in Windows Azure node deployments.    HPC Pack 2012 R2 introduces support for the A5 compute instance (virtual machine) size in Windows Azure node deployments.

    To run parallel MPI applications in Windows Azure, HPC Pack 2012 R2 will also support the A8 and A9 compute instances, which are scheduled for general availability in selected geographic regions in early 2014. These compute instances provide high-performance CPU and memory configurations and connect to a low-latency, high-throughput network in Windows Azure that uses remote direct memory access (RDMA) technology. For more information about running MPI jobs on the A8 and A9 instances in your Windows Azure burst deployments, see https://go.microsoft.com/fwlink/?LinkID=389594.

    For details about the supported instance sizes, see Virtual Machine and Cloud Service Sizes for Windows Azure and Azure Feature Compatibility with Microsoft HPC Pack.

Job scheduling

  • The performance of graceful preemption in Balanced scheduling mode is improved for HPC SOA jobs.   HPC Pack 2012 R2 improves the “waiting-for-task-finishing” mechanism in Balanced job scheduling mode, which previously was not optimized for HPC service-oriented architecture (SOA) jobs. As a result, tasks in HPC SOA jobs are now preempted gracefully and with better performance when the cluster runs in Balanced scheduling mode.

Runtimes and development

  • MS-MPI adds mechanisms to start processes dynamically.   MS-MPI now supports a “connect-accept” scenario in which independent MPI processes establish communication channels without sharing a communicator. This functionality can be useful in MPI applications that consist of a master scheduler launching independent MPI worker processes, or that follow a client-server model in which the server and the clients are launched independently (see the sketch after this item).

    Additionally, MS-MPI introduces interfaces that allow job schedulers other than the one in HPC Pack to start MS-MPI processes.
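
    The following is a minimal sketch, in C, of the connect-accept pattern described above. It assumes that the server and client sides are started independently, each with a single rank (for example, mpiexec -n 1), and that the port string is exchanged through a plain text file; the file name, the role selection by command-line argument, and the omission of error handling are illustrative assumptions, not MS-MPI requirements.

      /* Minimal connect-accept sketch: one source file, the role chosen by the
         first command-line argument ("server"; anything else acts as the client).
         Assumes each side runs as a single rank and publishes/reads the port
         string through a plain text file; no synchronization or error handling. */
      #include <mpi.h>
      #include <stdio.h>
      #include <string.h>

      int main(int argc, char **argv)
      {
          char port[MPI_MAX_PORT_NAME];
          MPI_Comm inter;   /* intercommunicator that links the two jobs */

          MPI_Init(&argc, &argv);

          if (argc > 1 && strcmp(argv[1], "server") == 0) {
              MPI_Open_port(MPI_INFO_NULL, port);      /* system-chosen port  */
              FILE *f = fopen("port.txt", "w");        /* publish out of band */
              fprintf(f, "%s\n", port);
              fclose(f);
              MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
              MPI_Close_port(port);
          } else {
              FILE *f = fopen("port.txt", "r");        /* read published port */
              fgets(port, MPI_MAX_PORT_NAME, f);
              fclose(f);
              port[strcspn(port, "\r\n")] = '\0';
              MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
          }

          /* The two jobs can now exchange messages over 'inter' with the usual
             point-to-point calls, addressing ranks on the remote side. */
          MPI_Comm_disconnect(&inter);
          MPI_Finalize();
          return 0;
      }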

  • MS-MPI can be updated independently of HPC Pack.   Starting with HPC Pack 2012 R2, future updates to Microsoft MPI (MS-MPI) no longer require you to update the HPC Pack services. Windows HPC cluster administrators can update MS-MPI functionality simply by installing an updated MS-MPI redistributable package on the cluster.

    As in previous versions, MS-MPI is automatically installed when HPC Pack 2012 R2 is installed on the cluster nodes, and it is updated when you upgrade your existing HPC Pack 2012 with SP1 cluster to HPC Pack 2012 R2. However, HPC Pack 2012 R2 now installs MS-MPI files in different locations than in previous versions, as follows:

    • MS-MPI executable files are installed by default in the %PROGRAMFILES%\Microsoft MPI\Bin folder, not in the %CCP_HOME%\Bin folder. The new environment variable MSMPI_BIN is set to the new installation location (see the sketch after this item).

    • MS-MPI setup files for cluster node deployment are organized separately from setup files for other HPC Pack components in the remote installation (REMINST) share on the head node.

    The new locations may affect existing compute node templates or existing MPI applications. For more information, see Release Notes for Microsoft HPC Pack 2012 R2.
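
    As an illustration of the new file layout, the following C sketch resolves the path to mpiexec.exe by checking the new MSMPI_BIN environment variable first and then falling back to the legacy %CCP_HOME%\Bin location that earlier HPC Pack versions used. The fallback order and the hard-coded executable name are assumptions made for illustration; they are not prescribed by HPC Pack.

      /* Locate mpiexec.exe: prefer the MSMPI_BIN variable introduced in
         HPC Pack 2012 R2, and fall back to the legacy %CCP_HOME%\Bin folder.
         The fallback order is an illustrative assumption only. */
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          char path[1024];
          const char *msmpi_bin = getenv("MSMPI_BIN");  /* new in HPC Pack 2012 R2 */
          const char *ccp_home  = getenv("CCP_HOME");   /* set by earlier versions */

          if (msmpi_bin != NULL) {
              /* A trailing backslash in MSMPI_BIN, if present, is harmless here. */
              snprintf(path, sizeof(path), "%s\\mpiexec.exe", msmpi_bin);
          } else if (ccp_home != NULL) {
              snprintf(path, sizeof(path), "%s\\Bin\\mpiexec.exe", ccp_home);
          } else {
              fprintf(stderr, "Neither MSMPI_BIN nor CCP_HOME is set.\n");
              return 1;
          }

          printf("MPI launcher: %s\n", path);
          return 0;
      }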

See also