Single Node Jobs


There are three types of single node jobs:

  1. A job requesting exclusive use of a single node.
  2. A multi-threaded (OpenMP, pthreads, or Java threads) job requesting a subset of the CPUs of a single node.
  3. A multi-process job requesting a subset of the CPUs of a single node.
Exclusive Single Node Job:

To request a whole node for a job, use the --exclusive option without requesting any memory (Slurm will assign all available memory on the node to the job). Combined with the --constraint option, a user can request a specific type of node to run a job. This is a sample sbatch script which requests a farm18 node to run a single node job.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --job-name=xxx
#SBATCH --partition=production
#SBATCH --account=xxx
#SBATCH --constraint=farm18
path_to_executable


Multi-threaded Job on a single node:

To run a multi-threaded job, request one task with --ntasks=1 and assign it multiple CPUs with --cpus-per-task. The following is a sample sbatch script:

#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=numcores
#SBATCH --job-name=xxx
#SBATCH --partition=production
#SBATCH --account=xxx
#SBATCH --constraint=farm18

path_to_userexecutable
Note: if the above script specifies --ntasks=numcores instead, Slurm may allocate multiple nodes to satisfy the request, since tasks can be spread across nodes.
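An OpenMP program does not automatically pick up the Slurm allocation; a common pattern (a minimal sketch, not specific to this cluster) is to set OMP_NUM_THREADS in the batch script from the SLURM_CPUS_PER_TASK variable, which Slurm exports inside the job when --cpus-per-task is given:

```shell
# Tie the OpenMP thread count to the CPUs Slurm actually allocated.
# SLURM_CPUS_PER_TASK is set by Slurm inside the job when
# --cpus-per-task is specified; fall back to 1 outside a job.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "running with $OMP_NUM_THREADS OpenMP threads"
```

This keeps the thread count in sync with the request, so changing --cpus-per-task does not require editing the script body.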

Multi-process Job on a single node:

The following job requests a total of 12 CPUs and runs three processes: one uses 8 CPUs, and the other two each use 2 CPUs.


#!/bin/bash
#SBATCH --ntasks=12
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --job-name=xxx
#SBATCH --partition=production
#SBATCH --account=xxx
#SBATCH --constraint=farm18
srun --nodes=1 --ntasks=8 --cpus-per-task=1 --mem-per-cpu=1000M --exclusive large_prog &
srun --nodes=1 --ntasks=2 --cpus-per-task=1 --mem-per-cpu=500M --exclusive small_prog1 &
srun --nodes=1 --ntasks=2 --cpus-per-task=1 --mem-per-cpu=200M --exclusive small_prog2 &
wait

Note: --exclusive enables each srun step to be scheduled independently on its own subset of the allocated CPUs. The final wait is required; without it the batch script would exit immediately and all running steps would be killed.
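The background-and-wait pattern itself is plain shell and can be tried without Slurm; a minimal sketch, using sleep as a stand-in for the three programs above:

```shell
# Each command launched with a trailing "&" runs in the background,
# so all three run concurrently, just like the srun steps above.
sleep 1 &
sleep 1 &
sleep 1 &
# "wait" blocks until every background job has finished; without it,
# the script would exit here and orphan the background commands.
wait
echo "all steps finished"
```

Because the three sleeps run concurrently, the whole script takes about one second rather than three.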