What's New in Windows HPC Server 2008 R2

Updated: December 2010

Applies To: Windows HPC Server 2008 R2

This document lists the new features that are available in Windows® HPC Server 2008 R2. For what’s new in Service Pack 1, see What's New in Windows HPC Server 2008 R2 Service Pack 1.

Note
If you want to test some of the new features listed in this document, see the New Feature Evaluation Guide for Windows HPC Server 2008 R2, and the Windows HPC Server 2008 R2: Developer Resources.

In this topic

  • Deployment

  • Cluster management

  • Job scheduling and runtime

  • SOA scheduling and runtime

  • HPC Services for Microsoft Excel 2010

Deployment

The following features are new for deployment:

  • Upgrade from Windows HPC Server 2008 to Windows HPC Server 2008 R2. Upgrading from Windows HPC Server 2008 is now supported, including the upgrade of a head node that is configured in a failover cluster. For more information, see Upgrade Guide for Windows HPC Server 2008 R2 and Migrating a Failover Cluster Running Windows HPC Server 2008 to Windows HPC Server 2008 R2 Step-by-Step Guide.

  • Editions of Microsoft® HPC Pack 2008 R2. There are two editions of HPC Pack 2008 R2:

    • HPC Pack 2008 R2 Express: Includes the core features that you need to deploy and run an HPC cluster, without requiring you to acquire an additional license. Features include: deployment and management tools, a job scheduler that supports service-oriented architecture (SOA) and Message Passing Interface (MPI) jobs, and support for high-speed networking.

    • HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation: Includes a superset of the features in the Express edition, with additional features for your HPC cluster. This edition requires an additional license, but can be evaluated in RC 1 without a license. Additional features include: the ability to run Microsoft Excel workbooks and User Defined Functions (UDFs) on your cluster, and the ability to add workstation nodes to your cluster.

    During the installation of HPC Pack 2008 R2, you can select the edition that you want to install in the installation wizard. If you are performing an unattended installation, you can run setup.exe with the -express parameter to install the HPC Pack 2008 R2 Express edition. If you do not specify the -express parameter, the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition is installed by default.

  • Deployment Environment Validator. In HPC Cluster Manager, from the Deployment To-do List in Configuration, and from Diagnostics, you can run a new set of diagnostic tests that help you to find common problems that can affect node deployment. Among other things, this new set of diagnostic tests verifies connectivity with the Active Directory® domain controller, checks the availability of the DHCP server and the DNS server, determines whether IPsec is enabled on the enterprise network, and verifies that the provided installation credentials have the appropriate privileges to perform the node deployment tasks.

  • Workstation nodes. You can now add computers that are running the Windows 7 operating system to your HPC cluster. These computers are added as workstation nodes, and you can use them to run cluster jobs. Workstation nodes do not need to be dedicated cluster computers, and can be used for other tasks. They can automatically become available to run cluster jobs according to a weekly availability policy that you configure (for example, every night on weekdays and all day on weekends), or they can be brought online manually, depending on the configuration that you choose, as in the sketch that follows.
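
    If you prefer to script manual availability, you can use the Get-HpcNode and Set-HpcNodeState cmdlets in HPC PowerShell. The following is a minimal sketch, assuming the built-in WorkstationNodes node group:

      # Bring every workstation node online so that it can accept cluster jobs.
      Get-HpcNode -GroupName WorkstationNodes |
          ForEach-Object { Set-HpcNodeState -Name $_.NetBiosName -State Online }

      # Take the workstation nodes back offline at the start of the workday.
      Get-HpcNode -GroupName WorkstationNodes |
          ForEach-Object { Set-HpcNodeState -Name $_.NetBiosName -State Offline }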

  • iSCSI deployment. In Windows HPC Server 2008 R2, you can deploy nodes on your cluster that boot over the network by using an iSCSI connection. This new feature helps you to centralize your storage, and to deploy diskless nodes (that is, computers that do not run the operating system from a local hard disk drive or that do not have a hard disk drive installed).

  • Capture node image. You can now create a WIM file of an existing node, which can in turn be used to deploy other nodes in your cluster.

  • Deployment at scale. This release provides efficient deployment of up to 1,000 nodes.

  • Remote database configuration. During the installation process of HPC Pack 2008 R2, you can configure Microsoft® SQL Server™ 2008 SP1 databases on computers that are not the head node of your HPC cluster, and then use them to host cluster management, job scheduling, reporting, and diagnostics information.

Cluster management

The following features are new for cluster management:

  • New wizard for software updates. The new Add Software Updates Wizard helps you to search for software updates based on a node template, update the template with any available updates, and optionally install the software updates on the nodes that require them. To start the Add Software Updates Wizard: in HPC Cluster Manager, in Configuration, in Node Templates, right-click a node template that has already been used to add nodes to your cluster, and then click Add Software Updates.

  • Dynamic node groups. In this release, changes to node groups immediately impact the jobs that are queued. This change enables the creation of tools that automatically move nodes between groups in order to handle computational loads differently, or that move nodes based on the time of day.
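
    Because queued jobs see group changes immediately, a scheduled script can shift nodes between groups by time of day. The following sketch assumes that the Add-HpcGroup and Remove-HpcGroup cmdlets accept piped nodes; the NightBatch group name is hypothetical:

      # After 6 P.M., add the compute nodes to a (hypothetical) NightBatch
      # group; queued jobs that request that group can use them immediately.
      $nodes = Get-HpcNode -GroupName ComputeNodes
      if ((Get-Date).Hour -ge 18) {
          $nodes | Add-HpcGroup -Name NightBatch
      } else {
          $nodes | Remove-HpcGroup -Name NightBatch
      }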

  • Enhanced node state and node health view. In this release, node state and node health are represented separately in Node Management, using different sets of icons. Also, nodes that are in the Offline state are no longer marked with a warning by default.

  • Location-based node management. In HPC Cluster Manager, you can quickly view compute nodes based on their location information, both in the list and heat map views. In the heat map view, you can now choose to display nodes grouped by their location information. For more efficient viewing, you can specify up to three levels of location detail for each node: primary, secondary, and tertiary. To see the new location-based view: in Node Management, in the Navigation Pane, under Nodes, click By Location. To see nodes in the heat map view grouped by their location information, click the Group by location icon at the bottom of the heat map tab.

  • Heat map enhancements for large clusters. This release includes improvements to the heat map view in HPC Cluster Manager to support the display of large clusters. You can see the status of up to 1,000 nodes without scrolling. Additionally, you can configure color-coded overlays of information in the heat map view, and display metrics in a prioritized order.

  • Tabbed views. You can create multiple customizable tabs in HPC Cluster Manager. This enables you to have several list and heat map views available at the same time, each one displaying the metrics that you select.

  • Diagnostics extensibility. You can create and add your own custom diagnostic tests to HPC Pack 2008 R2. These tests are run by the HPC Job Scheduler Service, and you can create and manage them by using HPC Cluster Manager and HPC PowerShell. You can also specify parameters for tests. For more information about creating and adding custom diagnostic tests, see the Diagnostics Extensibility in Windows HPC Server 2008 R2 Step-by-Step Guide on MSDN.
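
    As a rough sketch of the workflow (the cmdlet and parameter names below are assumptions; see the step-by-step guide for the exact syntax), a custom test defined in an XML file might be registered and run like this:

      # Register the custom diagnostic test from its XML definition file
      # (the file name and test alias are hypothetical).
      Add-HpcTest -Path .\MyCustomTest.xml

      # Queue a run of the test; the HPC Job Scheduler Service executes it.
      Invoke-HpcTest -Alias MyCustomTest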

  • Reporting extensibility. This release includes a rich reporting database and the ability to create and add custom reports to HPC Pack 2008 R2. For more information about creating custom reports, see the Reporting Extensibility in Windows HPC Server 2008 R2 Guide on MSDN.

Job scheduling and runtime

The following features are new for job scheduling and runtime:

  • Service tasks. A new method to conclude service tasks is now available. This method tells the job scheduler to stop creating new instances of the service task. Additionally, the method includes an option that indicates whether existing instances of the task should be allowed to complete or should be canceled.

  • More granular priority levels for jobs. Cluster administrators who need finer control over job priorities can now specify job priority values between 0 (lowest) and 4000 (highest). The five priority levels that were available in previous releases are still available in this release, and map to the new priority values: Lowest (0), BelowNormal (1000), Normal (2000), AboveNormal (3000), Highest (4000). You can specify job priority in terms of a priority level, a priority value, or a combination of the two. To specify job priority based on the new priority values, use the -Priority parameter for the Set-HpcJob cmdlet in HPC PowerShell. For example: Set-HpcJob -Id 123 -Priority 2550.

  • E-mail notifications. Cluster users can now choose to receive e-mail notifications when their jobs start or complete.

  • Enhanced activation filters. You can now implement additional exit codes in your activation filters, to block the queue until the job can start, reserve resources for the job without blocking the queue, put the job on hold, or reject the job.
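
    An activation filter is a program that the job scheduler runs with the path to a job description file as its argument; the filter's exit code tells the scheduler what to do with the job. The following sketch is illustrative only; the exit-code values and the job XML field are placeholders, not the documented ones:

      # ActivationFilter.ps1 (sketch). Exit-code values are placeholders.
      param($jobXmlFile)

      [xml]$job = Get-Content $jobXmlFile

      # Hypothetical policy: hold jobs from the 'batch' owner until 6 P.M.
      if ($job.Job.Owner -like '*batch*' -and (Get-Date).Hour -lt 18) {
          exit 2    # placeholder exit code for "put the job on hold"
      }
      exit 0        # placeholder exit code for "start the job"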

  • Node exclusion listing. In this release, specific nodes can be excluded from running a job. This new feature can help you avoid nodes that have a particular configuration or other characteristics that make them inappropriate for the job, or that are known to have intermittent problems running specific types of jobs. The list of excluded nodes is defined in the ExcludedNodes property of each job, and is empty by default. The exclusion list can be modified while the job is running. To modify the list of excluded nodes for an existing job, modify the job by running the Set-HpcJob cmdlet in HPC PowerShell using the new -AddExcludedNodes, -ClearExcludedNodes, or -RemoveExcludedNodes parameters. You can also modify the job by running the job modify command-line tool with the new /addexcludednodes, /clearexcludednodes, or /removeexcludednodes parameters. The HPC API also includes new interfaces for node exclusion listing.
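
    For example, using the HPC PowerShell parameters named above (the node names are hypothetical):

      # Exclude two nodes from job 123, and later allow one of them again.
      Set-HpcJob -Id 123 -AddExcludedNodes NODE05,NODE07
      Set-HpcJob -Id 123 -RemoveExcludedNodes NODE07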

  • Service-balanced scheduling. This release includes a new scheduling mode that optimizes the process of starting jobs and balances the resources that are assigned to jobs in real time, according to their priority. To enable this functionality, run the following command on the head node of your HPC cluster: cluscfg setparams schedulingmode=balanced.

  • Dynamic parametric tasks. This feature improves the performance of large parametric task sweeps by creating tasks when they are needed, instead of creating them when the job is submitted.
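
    For example, the following HPC PowerShell sketch submits a large parametric sweep. It assumes that Add-HpcTask accepts ParametricSweep for its -Type parameter, with -Start and -End values and an asterisk placeholder in the command line:

      # Submit a 100,000-step sweep; with dynamic parametric tasks, each step
      # is instantiated when it is needed rather than at submission time.
      $job = New-HpcJob -Name "LargeSweep"
      Add-HpcTask -Job $job -Type ParametricSweep -Start 1 -End 100000 `
          -CommandLine "process.exe input*.dat"
      Submit-HpcJob -Job $job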

  • Job progress viewing and reporting. A redesigned user interface in HPC Cluster Manager displays job progress. Also, changes to the application programming interface (API) help you to report more detailed job progress information for your HPC applications.

  • Improved job and task troubleshooting. The user interface for job and task troubleshooting has been improved to make it easier to review errors. To see detailed information about a failed job, double-click that job in HPC Cluster Manager or HPC Job Manager.

  • Task cancellation grace period. In this release, when you cancel a task that is running, the HPC Node Manager Service stops the task by sending a CTRL_BREAK event to the application. The application then has a preset period of time to exit gracefully. The application can use this time to save state information, write a log message, create or delete files, or finish the current service call. You can configure the task cancellation grace period by changing the TaskCancelGracePeriod cluster property.
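
    For example, you can set the property by following the cluscfg setparams pattern shown elsewhere in this topic, or as sketched below, assuming that the property is also exposed through the Set-HpcClusterProperty cmdlet (the value shown is illustrative; confirm the units in the cluster property reference):

      # Set the task cancellation grace period (illustrative value).
      Set-HpcClusterProperty -TaskCancelGracePeriod 15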

  • New node preparation and node release tasks for jobs. These new job tasks are available by using HPC Cluster Manager, the API, HPC PowerShell, and the command-line tools. A node preparation task runs on a node before any other tasks, and you can use it for creating local folders, copying files onto the node, or performing other initialization steps. A node release task runs on a node when the node is ready to be released from the job, and you can use it to delete temporary files, copy local files to a central shared folder, or perform other cleanup steps.
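
    The following is a minimal sketch, assuming that Add-HpcTask accepts NodePrep and NodeRelease for its -Type parameter; the share path and commands are hypothetical:

      $job = New-HpcJob -Name "JobWithSetupAndCleanup"
      # Copy input files to each node before any other task runs on it.
      Add-HpcTask -Job $job -Type NodePrep -CommandLine "xcopy /y /i \\headnode\inputs %TEMP%\inputs"
      # The main computation.
      Add-HpcTask -Job $job -CommandLine "app.exe %TEMP%\inputs"
      # Clean up local files when the node is released from the job.
      Add-HpcTask -Job $job -Type NodeRelease -CommandLine "rmdir /s /q %TEMP%\inputs"
      Submit-HpcJob -Job $job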

  • Scheduling at scale and better task throughput at scale. This release includes support for larger clusters, more jobs, and larger jobs, with improved scheduling and task throughput at scale.

  • Update user credentials once for all queued jobs. This release includes improvements to the way that user credentials are handled. You can now update your user credentials once for all your jobs that are in the queue.

SOA scheduling and runtime

The following features are new for service-oriented architecture (SOA) scheduling and runtime:

  • SOA service versioning support. SOA service versioning allows multiple versions of the same Windows Communication Foundation (WCF) service to be installed independently on your HPC cluster. SOA clients can now query the runtime for the available versions of a given service, and then specify which version of the service should be used when creating a SOA session. Alternatively, a SOA client can create a session without specifying a service version, and the latest version of the service is used for the session.

  • Custom binding support. By default, the SOA broker in Windows HPC Server 2008 R2 exposes client-facing endpoints that use standard WCF NetTcpBinding and BasicHttpBinding bindings. In Windows HPC Server 2008 R2, a SOA broker can also be configured to expose endpoints using a custom binding. This enables clients to interact with your HPC cluster by using a variety of protocols.

  • Windows Web Services API. The Windows Web Services application programming interface (WWSAPI) is a new framework that enables you to develop native SOA services. Windows HPC Server 2008 R2 supports the ability to build fully native WWSAPI services that run across your HPC cluster.

  • Ability to cancel running SOA requests. In this release, a new Cancel() interface gives you the ability to cancel service requests without canceling the current session, saving calculation resources.

  • Cleanup interface for SOA services. In this release, a new OnExiting() interface is available for services. Your code can now register for this event and clean up resources before the calculation is canceled (for example, release a remote file, a database connection, or a COM object).

  • Exclusion of compute nodes that have failed SOA tasks. In this release, SOA sessions exclude compute nodes that repeatedly fail SOA service tasks or requests. This new functionality is based on the new node exclusion listing feature for job scheduling.

  • Improved broker node failover. In this release, when a broker node is configured in a failover cluster, a SOA session can continue to run if the broker node fails, because the session is migrated to the other broker node in the failover cluster.

  • SOA Service Loading Test. In HPC Cluster Manager, from Services in Configuration, and from Diagnostics, you can run a new diagnostic test that attempts to load a SOA service and verify that it can be initialized and started. This helps in detecting common configuration and environment issues that can cause errors (for example, incorrect firewall configuration, network issues, msvcrt errors, or problems with the service registration file).

  • Enhanced tracing. This release offers a new interface for service code to write a user-level trace. HPC Cluster Manager includes a new user interface to configure service tracing. Also, there is a new user interface and new PowerShell cmdlets to collect and remove traces. Trace output has been modified so that it is easier to review in Service Trace Viewer.

  • Support for multiple clients in SOA sessions. In HPC Pack 2008 R2, improvements to the session API tag each batch of computation with a GUID, so that multiple client applications can share the same SOA session, with each client identifying its own GUID when it sends computation requests and retrieves results.

  • Fire and recollect programming model. This release supports offline SOA applications, including client disconnect (SOA batch) and client resilience.

  • Single-job SOA sessions. In this release, it is no longer necessary to have two jobs running for each SOA session (one job for the SOA session broker, and one for the service). This one-to-one mapping between jobs and sessions can make monitoring and reporting simpler.

  • Built-in flow control. The new SOA session API and the new broker Web service interface in this release include built-in flow control. With this feature, you do not need to implement your own throttling behavior, as was necessary in previous releases.

  • Better Java interoperability. This release provides improved interoperability with Java client applications.

HPC Services for Microsoft Excel 2010

The following features are new for the Microsoft HPC Pack 2008 R2 Services for Microsoft Excel 2010:

  • Expanded macros. In this release, macros can be created to partition iterative calculations into a fork-join pattern. Built-in macros have been added to make implementation simpler and more efficient. Built-in macros include: HPC_GetVersion (for compatibility with future versions), HPC_Initialize, and HPC_Finalize.

  • Performance and scale enhancements. In this release, 300 million calculations running on 5,000 cores can be completed in approximately 12 days, or 288 hours. That is the equivalent of approximately 1 million calculations per hour, or about 17,000 calculations per minute.

  • Status windows. In this release, you can choose to display a status window while user-defined functions (UDFs) are being calculated on your HPC cluster. The status window shows the number of requests and responses in progress, as well as any errors that are returned.

  • Diagnostic tests for Microsoft Excel 2010. Two new diagnostic tests for Microsoft Excel 2010 are available in this release. By running these diagnostic tests, you can determine if Microsoft Excel 2010 is installed and properly licensed on your HPC cluster, as well as verify that the UDF container service is loaded and ready on the nodes. For more information, see Windows HPC Server 2008 R2: HPC Services for Excel.

  • Support for UDFs. This release includes support for UDFs in Microsoft Excel 2010. For more information, see Use the Excel Cluster Connector to offload UDFs.

See Also

What's New in Windows HPC Server 2008 R2 Service Pack 1

Windows HPC Server 2008 R2