Batch System (Slurm)

Slurm is an open-source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. It is used at many supercomputing sites and data centers around the world. The JLab farm deployed Slurm in early 2019, but it was initially hidden from users behind the Auger system; users can now access Slurm directly from the farm interactive nodes.

Submitting Batch Jobs
You can submit jobs from one of the interactive nodes or from within a running batch script. Batch jobs are submitted with the Slurm sbatch command and a valid project account. You can specify options on the command line or (recommended) put all of them into your batch script file; see the examples below. In your batch script, please specify at least the following, plus any other options useful to your workflow:
  1. account (your project short name), using -A, --account=<account>
  2. partition (similar to a PBS queue), using -p, --partition=<partition_names>
  3. resources needed (number of nodes, type of nodes, etc.), using -C, --constraint=<list>, -N, --nodes=<num_nodes>, or -n, --ntasks=<num_cores>
  4. wall time (specifying this more tightly than the default will improve your throughput), using -t, --time=<time>
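The four items above can be sketched as a minimal batch script. The account, partition, and job names here are placeholders, not real JLab values; substitute your own. Note that `#SBATCH` lines are ordinary shell comments, so the script is also valid bash.

```shell
#!/bin/bash
# Minimal sketch of a Slurm batch script covering the four required items.
# "myproject" and the other values are illustrative placeholders.
#SBATCH --account=myproject        # 1. project short name (-A)
#SBATCH --partition=production     # 2. partition (-p)
#SBATCH --nodes=1                  # 3. resources: one node (-N)
#SBATCH --ntasks=4                 # 3. resources: four cores (-n)
#SBATCH --time=01:00:00            # 4. wall time (-t): request only what you need
#SBATCH --job-name=myjob
#SBATCH --output=myjob-%j.out      # %j expands to the Slurm job ID

# The job payload starts here; Slurm reads only the #SBATCH lines above.
MSG="hello from job ${SLURM_JOB_ID:-<not under Slurm>}"
echo "$MSG"
```

Submit the script from an interactive node with `sbatch myjob.sh`; sbatch parses the `#SBATCH` directives and queues the job.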
Three partitions are currently configured: general, production, and priority. Please use the Scicomp portal Jobs page for the status of active and recently finished jobs, as well as the most current partition information.
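Options can also be given on the sbatch command line, where they override the script's `#SBATCH` lines. The sketch below shows the typical invocations as comments (so it runs anywhere, even without a Slurm installation); the account and partition values are placeholders.

```shell
# Placeholder values for illustration only; substitute your own.
ACCOUNT=myproject
PARTITION=priority          # one of: general, production, priority
# Typical invocations on an interactive node:
#   sbatch -A "$ACCOUNT" -p "$PARTITION" -t 00:30:00 myjob.sh
#   squeue -u "$USER"       # your jobs still queued or running
#   sacct -X --starttime today   # your jobs that finished today
echo "would submit to partition: $PARTITION"
```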

For further details, please consult the official Slurm documentation.