When distributing jobs using HPC Pack 2019, is there a criterion for selecting nodes in HPC Pack?

junmin park 81 Reputation points
2023-04-28T05:04:38.0033333+00:00

Hi team.

We were using 2 Head Nodes configured for HA and 5 Compute Nodes.

Recently we added 5 more Compute Nodes.

The curious thing is that tasks are assigned to the newly added Compute Nodes first.

For Example

We use a job template with the unit type set to Socket and Min/Max set to 10.

When there were only 5 Compute Nodes in the beginning, all 5 were used.

After adding 5 more Compute Nodes, only 5 out of the 10 are used when a single job runs.

The strange thing is that, when no jobs are running, I would expect the job to go to CN #1 through CN #5 first,

but it always goes to CN #6 through CN #10.

Only if CN #6 through CN #10 are already running jobs is work distributed to the remaining nodes.

The 5 added servers have the same CPU, memory, and storage as the existing 5 servers.

Is there a criterion for selecting nodes when distributing jobs in HPC Pack?

If such criteria are stored in an MS-SQL table, where can I check them?

Thanks.


Accepted answer
  1. Prrudram-MSFT 26,506 Reputation points
    2023-04-28T08:53:18.66+00:00

    Hello @junmin park

    It seems you are using HPC Pack to distribute jobs across your compute nodes. When you add new compute nodes to the cluster, HPC Pack automatically starts using them when distributing jobs.

    Regarding the order in which the jobs are assigned to the compute nodes, HPC Pack uses a load balancing algorithm to distribute the jobs across the available compute nodes. This algorithm considers the current load on each node, as well as the resources required by the job.
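To make the idea concrete, here is a minimal sketch of a load-based node ranking. This is NOT HPC Pack's actual scheduler code (that is internal to the product); all names here (`Node`, `pick_nodes`) are hypothetical. The point it illustrates: when every node is equally idle and identical, a 10-socket job on 2-socket nodes fills exactly 5 of the 10 nodes, and *which* 5 get picked is decided purely by the tie-breaking order (here, node name; in a real scheduler it could be an internal node ID or the order nodes joined the cluster), which is consistent with the behavior described in the question.

```python
# Illustrative sketch only -- not HPC Pack's real algorithm.
# Rank candidate nodes by current load, then greedily allocate
# sockets until the job's request is satisfied.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    total_sockets: int
    busy_sockets: int  # sockets already allocated to running jobs

    @property
    def free_sockets(self) -> int:
        return self.total_sockets - self.busy_sockets


def pick_nodes(nodes: list[Node], sockets_needed: int) -> list[str]:
    """Greedily take sockets from the least-loaded nodes first.

    The second sort key (node name) is an arbitrary tie-breaker:
    with identical idle nodes, it alone decides which nodes run the job.
    """
    ranked = sorted(nodes, key=lambda n: (n.busy_sockets, n.name))
    chosen, remaining = [], sockets_needed
    for node in ranked:
        if remaining <= 0:
            break
        if node.free_sockets > 0:
            chosen.append(node.name)
            remaining -= node.free_sockets
    return chosen


# Ten idle, identical 2-socket nodes: a Min/Max=10 socket job fits on
# exactly 5 nodes, so the other 5 stay idle -- as in the question.
cluster = [Node(f"CN{i:02d}", total_sockets=2, busy_sockets=0) for i in range(1, 11)]
print(pick_nodes(cluster, sockets_needed=10))  # → ['CN01', 'CN02', 'CN03', 'CN04', 'CN05']
```

Separately, if I recall correctly, HPC Pack also exposes an explicit node-ordering knob (the "Node ordering" setting in the job/job template properties, and the `/orderby` option of `job submit`), so it is worth checking whether your job template sets an ordering such as "more memory" or "more cores" that happens to favor the new nodes.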

    If you want to check the criteria used by HPC Pack to distribute jobs, you can look at the job scheduler logs. These logs contain detailed information about how HPC Pack schedules jobs, including the criteria used to select compute nodes. You can find the logs in the HPC Pack Management Console under the "Monitoring" tab.

    I hope this helps! Let me know if you have any further questions.

    Please accept answer and upvote if the above information is helpful for the benefit of the community.

