The following sample commands show how to submit a variety of jobs, and explain how the HPC Job Scheduler Service runs each of them.
Sample commands for simple and parametric jobs

Command: job submit myapp.exe
Result: Runs myapp.exe on a single processor. The unredirected output is available in the Output property of the task.
Result: Runs myapp.exe on a single processor by using a head node file share as the working directory. The input and output are redirected to and from files in that directory.
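A command along these lines would produce this result; the share path and file names are placeholders, and the /workdir, /stdin, and /stdout options are assumed from the job submit syntax:

    job submit /workdir:\\headnode\MyShare /stdin:input.dat /stdout:output.dat myapp.exe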
Result: Runs between four and eight simultaneous instances of the myapp.exe command. myapp.exe is executed 100 times, on the files input1.dat, input2.dat, input3.dat, and so on through input100.dat.
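A sketch of such a sweep, assuming that the /parametric option accepts an instance range, that an asterisk (*) in a file name is replaced by the instance number, and that /numcores accepts a minimum-maximum range (on HPC Server 2008 the corresponding option is /numprocessors):

    job submit /numcores:4-8 /parametric:1-100 /stdin:input*.dat myapp.exe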
Result: Runs myapp.exe on six processors, with the standard output directed to the file myOutput.txt in the user’s home directory on the head node (as defined in the %USERPROFILE% environment variable on that node).
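For example, assuming the /numcores option (or /numprocessors on HPC Server 2008) together with the /stdout redirection flag:

    job submit /numcores:6 /stdout:%USERPROFILE%\myOutput.txt myapp.exe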
Command: job submit /numnodes:2 mpiexec myapp.exe
Result: Runs one myapp.exe process on two compute nodes. The standard output is stored in the HPC Job Scheduler Service database, in the Output field for the task.
Result: Runs 128 myapp.exe processes, with the Message Passing Interface (MPI) data traffic routed to a specific network in the cluster (157.59.x.x/255.255.0.0 in this example) by setting an environment variable (MPICH_NETMASK) on the MPI processes.
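A command of roughly this shape would produce that result; the /numcores value is an assumption, and mpiexec's -env option passes the environment variable to each MPI process (157.59.0.0 stands in for the 157.59.x.x network in the example):

    job submit /numcores:128 mpiexec -env MPICH_NETMASK 157.59.0.0/255.255.0.0 myapp.exe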
The Windows operating system optimizes the use of multicore systems by dynamically shifting work to underutilized cores. However, this optimization can be detrimental to the performance of some HPC applications. The -affinity flag prevents the operating system from moving MPI processes between cores; each process runs exclusively on the core on which it started.
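For example, pinning the processes of a two-node MPI job (the node count here is only illustrative):

    job submit /numnodes:2 mpiexec -affinity myapp.exe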