Release Notes for Microsoft HPC Pack 2012

 

Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2

These release notes address late-breaking issues and information about Microsoft® HPC Pack 2012.

In this topic:

  • Download and install Microsoft HPC Pack 2012

  • Install the Microsoft HPC Pack 2012 web components

  • Install the HPC soft card key storage provider

  • Uninstall HPC Pack 2012

  • Known issues

Download and install Microsoft HPC Pack 2012

HPC Pack 2012 and updates are available for download from the Microsoft Download Center. The latest update is Microsoft HPC Pack 2012 SP1. After the download, save the installation files to installation media or to a network location.

For general steps to plan and create a new HPC cluster by installing HPC Pack 2012, see the Getting Started Guide for Microsoft HPC Pack 2012 R2 and HPC Pack 2012.

Install the Microsoft HPC Pack 2012 web components

To install the HPC Pack 2012 web components, you must run an installation program (HpcWebComponents.msi) and then the included configuration script (Set-HPCWebComponents.ps1). The installation program is included in the HPC Pack 2012 download.

Important

The HPC Pack 2012 web components can only be installed on the head node or head nodes of the cluster.

For additional information and step-by-step procedures, see Install the Microsoft HPC Pack Web Components.
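
For example, the following commands show one way to run the installation program and then the configuration script from an elevated HPC PowerShell window on the head node. This is a minimal sketch: the C:\HpcSetup folder is a placeholder for where you saved the installation files, the script location under %CCP_HOME%\bin and the -Service REST -enable parameters are assumptions for enabling the REST API, and the script may prompt you to select a certificate.

msiexec /i C:\HpcSetup\HpcWebComponents.msi
cd $env:CCP_HOME\bin
.\Set-HPCWebComponents.ps1 -Service REST -enable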

Install the HPC soft card key storage provider

To enable soft card authentication when submitting jobs to the HPC Pack 2012 cluster, you must separately install the HPC soft card key storage provider (KSP) on the following computers:

  • The head node of your cluster

  • The compute nodes, workstation nodes, and unmanaged server nodes of your cluster

To install the KSP, you must separately run the version of the installation program that is appropriate for the operating system on each computer: HpcKsp_x64.msi or HpcKsp_x86.msi. The installation programs are included in the HPC Pack 2012 installation files.
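
For example, the following command shows one way to push an unattended installation of the 64-bit KSP to all compute nodes by using clusrun. This is a sketch: the \\fileserver\HpcPack2012 share is a placeholder for the location of your installation files, and the ComputeNodes node group is assumed.

clusrun /nodegroup:ComputeNodes msiexec /i \\fileserver\HpcPack2012\HpcKsp_x64.msi /qn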

For more information about installing the KSP, see Enable and Configure Soft Card Authentication on a Windows HPC Cluster.

Uninstall HPC Pack 2012

To completely uninstall HPC Pack 2012, uninstall the features in the following order:

  1. HPC Pack 2012 Web Components (if they are installed)

  2. HPC Pack 2012 Key Storage Provider (if it is installed)

  3. HPC Pack 2012 Services for Excel 2010

  4. HPC Pack 2012 Server Components

  5. HPC Pack 2012 Client Components

  6. HPC Pack 2012 MS-MPI Redistributable Pack

Important

Not all features are installed on all computers. For example, HPC Pack 2012 Server Components is not installed when you choose to install the client components only.
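
Before you uninstall, you can list the HPC Pack 2012 features that are present on a computer. The following HPC PowerShell command is a sketch that reads the uninstall information in the registry; on 64-bit computers, 32-bit components may appear under the Wow6432Node key instead.

Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*" | Where-Object { $_.DisplayName -like "*HPC Pack 2012*" } | Select-Object DisplayName, DisplayVersion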

When HPC Pack 2012 is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2012, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2010 SP1

  • Microsoft SQL Server 2012 Express

    Note

    This program also includes Microsoft SQL Server 2008 Setup Support Files.

  • Microsoft SQL Server 2012 Native Client

Additionally, the following server roles and features might have been added when HPC Pack 2012 was installed, and they can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • File Server Resource Manager role service

  • Routing and Remote Access Service server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature

Known issues

The following issues are known to affect this release of HPC Pack 2012:

  • Adding a head node configured for high availability may cause loss of security accounts from the HpcUsers and Administrators groups

  • iSCSI deployment is not currently enabled

  • Setup requires .NET Framework 3.5

  • Cluster management tools are not supported on Windows XP or Windows Server 2003

  • English (United States) locale must be set for remote SQL Server database instance

  • Bare metal deployments with a large number of nodes may fail

  • The Windows Azure VM role is retired

  • Windows Azure Connect is not supported in Microsoft HPC Pack 2012

  • Windows Azure HPC Scheduler Web Portal is not available immediately after deployment

  • Heat map and metrics may stop updating

  • Cluster metrics collection may affect performance-sensitive applications

  • Task output shows most recent 4000 characters of output

  • Capture Image operation may cause HPC Cluster Manager to hang

  • Capture Image operation may incorrectly report errors

  • Activation and submission filters must be accessible to all head nodes

  • Custom HPC Pack 2008 R2 diagnostic tests must be updated to work with HPC Pack 2012

  • Apostrophe cannot be used in a node template name

  • Remote connection credentials to cluster nodes may not be stored

  • Node preparation task may run repeatedly 

  • SOA diagnostics can fail after installation of HPC Pack 2012 using nondefault folders

Adding a head node configured for high availability may cause loss of security accounts from the HpcUsers and Administrators groups

Adding a head node to an HPC cluster in which the head node is configured for high availability can cause the loss of security accounts from the HPC cluster user and administrator groups that are configured on the cluster. If accounts are removed and precautions were not taken before disconnecting from an HPC cluster management tool session (HPC Cluster Manager or Windows HPC PowerShell) on a high availability head node, a cluster administrator may be unable to reconnect to the cluster by using the HPC cluster management tools.

To work around or avoid this problem, do the following in your high availability configuration:

  1. As a best practice, create and maintain domain user groups for the HPC cluster users and administrators, and use standard processes in your organization to manage the user groups.

  2. Use the HPC cluster management tools to add these domain user groups to the HPC cluster users and HPC administrators groups (the HpcUsers group and the local Administrators group) on each head node. (A sketch of this step and step 3 appears after this list.)

  3. When you add a head node to an HPC cluster that is configured for high availability, use the local user and group management tools on the computer to add these domain groups to the HpcUsers and local Administrators groups before you run Setup for HPC Pack 2012 on the failover cluster node. (You may need to create the HpcUsers group on a new head node.) If you do not, the domain groups will be removed from the HPC cluster configuration.

  4. If you are unable to connect as an HPC administrator to a high availability head node by using HPC Cluster Manager or Windows HPC PowerShell, log on locally to the computer. Then, use the local user and group management tools to add the appropriate domain groups to the HpcUsers and local Administrators groups. After you do this, you can connect to the head node by using the HPC cluster management tools.

  5. As an additional best practice, ensure that only one HPC cluster administrator works on the cluster at a time when head nodes are being added or removed.
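
The following sketch shows steps 2 and 3 of this procedure from an elevated HPC PowerShell window. The CONTOSO\HpcClusterUsers and CONTOSO\HpcClusterAdmins domain groups are hypothetical names for the groups described in step 1.

# Step 2: add the domain groups at the HPC cluster level
Add-HpcMember -Name "CONTOSO\HpcClusterUsers" -Role User
Add-HpcMember -Name "CONTOSO\HpcClusterAdmins" -Role Administrator

# Step 3: add the same groups to the local groups on a new head node
net localgroup HpcUsers CONTOSO\HpcClusterUsers /add
net localgroup Administrators CONTOSO\HpcClusterAdmins /add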

iSCSI deployment is not currently enabled

Because of a known issue in this HPC Pack 2012 release, the deployment of iSCSI boot nodes on your HPC cluster is not supported. The iSCSI deployment features in the HPC Pack 2012 cluster management tools (such as iSCSI Deployment, under Configuration, in HPC Cluster Manager) are not currently enabled.

iSCSI deployment features are enabled in clusters that are created with HPC Pack 2008 R2.

Setup requires .NET Framework 3.5

SQL Server 2012 Express requires .NET Framework 3.5 to install successfully. If you do not already have a database set up for use with HPC Pack 2012, or if your head node does not have Internet access during setup, install .NET Framework 3.5 manually before you attempt to install HPC Pack 2012. You can install .NET Framework 3.5 by using the Add Roles and Features Wizard or a command-line tool. For more information, see Install .NET Framework 3.5 and other features on-demand.
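
For example, on Windows Server 2012 you can add the feature from an elevated PowerShell window. This is a sketch that assumes the installation media is mounted as drive D:

Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs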

Cluster management tools are not supported on Windows XP or Windows Server 2003

You cannot run Setup for HPC Pack 2012 to install the client utilities on a computer that is running the Windows XP or the Windows Server 2003 operating system. The HPC Pack 2012 management tools are not supported on those operating systems.

You can use the job submission APIs to submit jobs to an HPC Pack 2012 cluster from a computer that is running the Windows XP or the Windows Server 2003 operating system. To enable this, install the HPC Pack 2012 Client Redistributable (HpcClient_x64.msi or HpcClient_x86.msi, depending on the operating system) on the computer. The installation files are available from the Microsoft Download Center, and they are also installed in the Reminst file share on an HPC Pack 2012 head node.

Note

The HPC Pack 2012 Client Redistributable requires that the Visual C++ 2010 SP1 runtime is installed on the client computer. Download and install the Visual C++ runtime from the Microsoft Download Center.

English (United States) locale must be set for remote SQL Server database instance

When you prepare the databases for an HPC Pack 2012 head node on a remote server that is running Microsoft® SQL Server, the account for the HPC cluster database instance must be configured to use English (United States) as the system locale. If a different locale is configured (such as an English locale for a region other than the United States), the deployment of the head node can fail.

To configure the English (United States) locale for the SQL Server login for the HPC cluster database instance, use the SQL Server management tools. The SQL Server login should be a domain account that you will use for the installation of HPC Pack 2012 on the head node.
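
For example, the following command is a minimal sketch that uses the sqlcmd utility to set the default language for the login. The server and instance name SQLSERVER01\HPCINSTANCE and the login CONTOSO\HpcSetupUser are placeholders, and this assumes that the us_english default language corresponds to the English (United States) locale.

sqlcmd -S SQLSERVER01\HPCINSTANCE -Q "ALTER LOGIN [CONTOSO\HpcSetupUser] WITH DEFAULT_LANGUAGE = us_english"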

Bare metal deployments with a large number of nodes may fail

Under certain conditions when deploying compute nodes from bare metal, the deployment of some nodes can fail. This is more likely to occur if you are deploying a large number of nodes at one time. You might see an error message similar to “The data is invalid,” or “The file or directory is corrupted and unreadable.”

To work around this problem, redeploy the affected nodes, or try deploying a smaller number of nodes at one time.

The Windows Azure VM role is retired

The VM role feature (beta) in Windows Azure is being retired on May 15, 2013. The settings in Microsoft HPC Pack 2008 R2 and Microsoft HPC Pack 2012 that deploy a custom VHD to VM role nodes from a Windows HPC cluster are also deprecated. After the retirement date, VM role deployments from an HPC cluster will fail or become inaccessible. To add Windows Azure nodes to an HPC cluster, use the Windows Azure worker role.

Windows Azure Connect is not supported in Microsoft HPC Pack 2012

Windows Azure Connect features are not supported on Windows Azure node deployments with HPC Pack 2012. The Windows Azure Connect page has been removed from the Create Node Template Wizard and Node Template Editor. Windows Azure Connect settings will be removed from Windows Azure node templates that are imported from HPC Pack 2008 R2.

HPC Pack 2012 supports Windows Azure Virtual Network in deployments where Windows Azure Virtual Network is available.

Windows Azure HPC Scheduler Web Portal is not available immediately after deployment

After the initial deployment of the Windows Azure HPC Scheduler is completed, if you immediately attempt to reach the Web Portal, you might see a “Permission denied” error message. The Web Portal can take up to 10 minutes to start functioning after the deployment is completed.

Heat map and metrics may stop updating

If a network using IPsec has Connection Security Rules enabled in Windows Firewall, communication between a head node and a compute node may be blocked after the HPC Monitoring Server Service is restarted on the head node. Because of this, the heat map in HPC Cluster Manager and other cluster metrics may stop updating.

To work around this problem, restart the HPC Monitoring Client Service on each affected compute node. To restart the service on all nodes, you can run the following clusrun commands:

clusrun net stop HpcMonitoringClient
clusrun net start HpcMonitoringClient

Cluster metrics collection may affect performance-sensitive applications

Collection of cluster metrics, in particular the counters for HpcNetwork, can negatively affect job throughput for performance-sensitive applications, such as compute-intensive MPI applications.

To avoid this problem, disable the HpcNetwork usage counter, if it is in use. To do this, first export the current metric definition list to an XML file, in case you want to add it back later, by running the following HPC PowerShell cmdlet:

Export-HpcMetric -Name HpcNetwork -Path hpcnetwork.xml

Then, remove the HpcNetwork metric, as follows:

Remove-HpcMetric -Name HpcNetwork
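
If you decide later to restore the metric, you can import the exported definition again:

Import-HpcMetric -Path hpcnetwork.xml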

Task output shows most recent 4000 characters of output

HPC Pack 2012 caches and shows the most recent 4000 characters of output per task, not the first 4000 characters as in previous versions of HPC Pack. This change can break existing scripts that monitor task output. The new behavior makes it easier to see the current status or exit status of the tasks in your HPC jobs.
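
If a script needs the complete output of a task, one option is to redirect the task output to a file instead of reading the cached output. For example, the following sketch redirects the output of a job submitted from the command line; the share path and application name are placeholders.

job submit /stdout:\\fileserver\output\mytask.txt myapp.exe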

Capture Image operation may cause HPC Cluster Manager to hang

The Capture Image operation in HPC Cluster Manager may continue running without completing, and HPC Cluster Manager may appear to hang, if there is not sufficient disk space on the head node to store the disk image. The operations log may show errors or warnings that are related to the Robocopy command and the .wim file.

To work around this problem, use Task Manager to exit HPC Cluster Manager. Ensure that there is sufficient hard disk space on both the head node and the computer on which HPC Cluster Manager is running. If the head node is configured for high availability, check that there is sufficient space on the shared disk in the failover cluster. Then, try the operation again.

Capture Image operation may incorrectly report errors

On a cluster with a head node configured for high availability in the context of a failover cluster, the Capture Image operation may complete successfully but report one or more errors. For example, you might see a message similar to the following:

“[Error] Could not find a part of the path '<WIMFilePath>'. [Error] The operation failed and will not be retried. [Warning] The operation failed due to errors during execution.”

To determine if the image was captured successfully, review the list of images in the image store.

Activation and submission filters must be accessible to all head nodes

In an HPC cluster configured for high availability of the head node in the context of a failover cluster, any activation or submission filter program files (.exe files and scripts) must be accessible to all of the head nodes in the cluster. It is recommended that you install the filters to a folder in the shared storage of the failover cluster. Alternatively, you can create a local folder on each head node to store the filters. If you use duplicated local folders, ensure that you synchronize them whenever you change the filters.
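
For example, the following command is a sketch that mirrors a local filter folder from one head node to another after a change; the folder path and head node name are placeholders.

robocopy C:\HpcFilters \\HEADNODE2\C$\HpcFilters /MIR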

Custom HPC Pack 2008 R2 diagnostic tests must be updated to work with HPC Pack 2012

Because of a change in the supported .NET Framework version, any custom diagnostic test binaries that you created for HPC Pack 2008 R2 and that reference HPC assemblies need to be updated for HPC Pack 2012.

To update an HPC Pack 2008 R2 diagnostic test to work with HPC Pack 2012, recompile the diagnostic test source code by using the HPC Pack 2012 API and .NET Framework 4. Then, retest the diagnostic test to ensure full functionality.

Alternatively, as a workaround, you can create a new application configuration file for each HPC Pack 2008 R2 diagnostic test executable file, or update the existing configuration file, to enable HPC Pack 2012 to recognize and run the existing test. Each configuration file has a name of the form TestName.exe.config, where TestName.exe is the name of the diagnostic test executable file, and it is located in the same folder as the test executable. For example, the following sample configuration file enables an HPC Pack 2008 R2 diagnostic test to support .NET Framework 4 and to bind the HPC Pack 2012 dependent assembly microsoft.hpc.diagnostics.helpers.

<?xml version="1.0"?>
<configuration>
    <startup>
        <supportedRuntime version="v4.0"/>
        <requiredRuntime version="v4.0" safemode="true"/>
    </startup>
    <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
                <assemblyIdentity name="microsoft.hpc.diagnostics.helpers"
                                  publicKeyToken="31bf3856ad364e35"
                                  culture="neutral" />
                <bindingRedirect oldVersion="3.0.0.0"
                                 newVersion="4.0.0.0"/>
            </dependentAssembly>
        </assemblyBinding>
    </runtime>
</configuration>

Note

Ensure that you add an additional <dependentAssembly> element for each additional HPC assembly that the diagnostic test binary references.

Apostrophe cannot be used in a node template name

The apostrophe (single quote) character cannot be used in the name of a node template. Using the apostrophe character in a template name can cause HPC Cluster Manager to crash or may lead to unexpected results when attempting to filter nodes by template name.

Remote connection credentials to cluster nodes may not be stored

If you previously stored domain credentials for remote desktop connections to cluster nodes from HPC Cluster Manager, and those credentials recently expired or changed, you will be prompted to change the credentials the next time that you start a remote connection by using Remote Desktop in HPC Cluster Manager. However, if you enter the new credentials and select the Remember my credentials option, HPC Cluster Manager does not store the new credentials. You will be prompted for credentials again the next time that you make a remote desktop connection to that node or to any other node by using HPC Cluster Manager.

To update the remote connection credentials to the cluster nodes so that they are stored by HPC Cluster Manager, click Change password in the HPC Cluster Manager dialog box that appears when you use the Remote Desktop action. Then, in the Windows Security dialog box that appears, enter your remote connection credentials, and select Remember my credentials.

Node preparation task may run repeatedly 

A job may repeatedly attempt to run a node preparation task, and continue to run instead of failing, if certain types of errors occur. If you notice that a job keeps creating a node preparation task that fails immediately, manually cancel the job. Verify that you have specified a valid working directory for the job. If the job runs on graphics processing units (GPUs), or is another job type that runs in a console session created on the compute nodes, also ensure that no user is currently logged on to the console session of the nodes on which you are attempting to run the job.
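
To check for logged-on console sessions before you resubmit the job, you can run the quser command on the nodes through clusrun; the node names are placeholders. Note that quser returns an error on nodes where no user is logged on, which in this context indicates that the node is ready.

clusrun /nodes:COMPUTE01,COMPUTE02 quser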

SOA diagnostics can fail after installation of HPC Pack 2012 using nondefault folders

The HPC SOA Diagnostics Monitoring Service (HpcSoaDiagMon) can fail to start if the data directory configured on the head node during installation of HPC Pack 2012 is set to a folder that is not under the installation folder for HPC Pack. In this case, the path for the CCP_DATA environment variable is not under the path for the CCP_HOME environment variable. This problem will prevent the monitoring of SOA jobs by HPC Pack 2012.

If the data directory is not located under the default installation folder for HPC Pack, you can create a file system junction to resolve the problem. From an elevated command prompt, type the following command:

mklink /j "%CCP_HOME%\Data" "%CCP_DATA%"
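
To verify that the junction was created, you can list the reparse points in the installation folder:

dir /aL "%CCP_HOME%"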
