All user processes must be run as jobs through the cluster's job queuing system, Slurm.
Slurm provides an easy and transparent way to streamline calculations on the cluster and make cluster use more efficient. Slurm offers several user-friendly features:
sinfo reports the state of partitions and nodes managed by Slurm. Example output:
$ sinfo
PARTITION        AVAIL  TIMELIMIT  NODES  STATE  NODELIST
all*             up     infinite       9  down*  rb2u[3,5-12]
all*             up     infinite       5  idle   dn[1-2],rb2u[1-2,4]
xeon-6136-256G   up     infinite       2  idle   dn[1-2]
xeon-e5-2620-32G up     infinite       9  down*  rb2u[3,5-12]
xeon-e5-2620-32G up     infinite       3  idle   rb2u[1-2,4]
In the above, the partition "all" contains all the nodes. There are also partitions for the differing specifications of nodes. Nodes are listed both in "all" and in the partition for their hardware class.
The above example shows that nodes dn1 and dn2 are idle – up and with no jobs running. Nodes rb2u3 and rb2u5 through rb2u12 are all down. If a node is allocated to a job, its status will be "alloc". If a node is set to finish its current jobs and accept no more in preparation for downtime, its status will be "drain".
sview is a graphical user interface to get state information for nodes (and jobs).
squeue reports the state of jobs or job steps. By default, it reports the running jobs in priority order and then the pending jobs in priority order.
$ squeue
JOBID  PARTITION  NAME  USER  ST  TIME   NODES  NODELIST(REASON)
65646  batch      chem  mike  R   24:19      2  adev[7-8]
65647  batch      bio   joan  R    0:09      1  adev14
65648  batch      math  phil  PD   0:00      6  (Resources)
Each calculation is given a JOBID. This can be used to cancel the job if necessary. The PARTITION field references the node class spec partitions described in the "sinfo" documentation above. The NAME field gives the name assigned to the job. The NODELIST field shows which nodes each calculation is running on, and the NODES field shows the number of nodes in use for that job.
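By default squeue lists every job in the queue, but it can be filtered. The following sketch uses the standard -u (user) and -t (state) options; replace "mike" with your own username.

```shell
# Show only your own jobs
squeue -u mike

# Show only pending jobs, together with the reason they are waiting
squeue -t PENDING
```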
You can start a calculation/job directly from the command prompt by using srun. This command submits jobs to the Slurm job submission system and can also be used to start the same command on multiple nodes. srun has a wide variety of options to specify resource requirements, including: minimum and maximum node count, processor count, specific nodes to use or not use, and specific node characteristics such as memory and disk space.
sbatch is used to submit a job script for later execution. The script will typically contain one or more srun commands to launch parallel tasks.
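A minimal job script might look like the sketch below. The job name, partition, node count, and time limit are illustrative values only — adjust them for your calculation.

```shell
#!/bin/bash
#SBATCH --job-name=chem        # job name shown by squeue
#SBATCH --partition=all        # partition to submit to
#SBATCH --nodes=2              # number of nodes to allocate
#SBATCH --time=01:00:00        # wall-time limit (HH:MM:SS)

# Launch the program on the allocated nodes
srun /bin/hostname
```

Save the script as, say, job.sh and submit it with `sbatch job.sh`; Slurm prints the JOBID it assigns.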
$ srun -N4 /bin/hostname
rb2u4
dn1
rb2u1
rb2u2
In the example above, we use srun to start the hostname command on four nodes in the cluster. The -N4 option tells Slurm to run the job on four nodes of its choice, and the output is the hostname printed by each node that was used.
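The resource-requirement options mentioned above can be sketched as follows; these are standard srun flags, shown here with illustrative values.

```shell
# Request between 2 and 4 nodes (minimum-maximum)
srun -N 2-4 /bin/hostname

# Request 8 tasks and 4 GB of memory per node
srun -n 8 --mem=4G /bin/hostname

# Run only on specific nodes
srun -w dn[1-2] /bin/hostname

# Exclude particular nodes
srun -x rb2u3 /bin/hostname
```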
With many calculations it is important to redirect data in (<) from an input file and data out (>) to an output file. The program may have command line options as well:
$ program [options] < input.dat > output.dat
One of the nice features of srun is that it preserves this ability to redirect input and output. Just remember that any options directly after srun, such as -N, will be used by srun, while any options or redirections after your program name will be used by the program only.
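Putting the two together might look like the sketch below; "myprogram" and its --verbose option are hypothetical stand-ins for your own program and its options.

```shell
# -N 1 is consumed by srun; --verbose and the redirections
# apply to myprogram itself.
srun -N 1 myprogram --verbose < input.dat > output.dat
```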
scancel is used to cancel a pending or running job or job step. To do this we need the JOBID for the calculation, which can be determined using the squeue command described above. To cancel the job with JOBID 84, just type:
$ scancel 84
If you rerun squeue you will see that the job is gone.
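scancel can also act on several jobs at once using its standard -u (user) and -t (state) filters; replace "mike" with your own username.

```shell
# Cancel all of your own jobs
scancel -u mike

# Cancel only your jobs that are still pending
scancel -u mike -t PENDING
```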