To submit a job to Slurm, one can call "sbatch" with all of the necessary options on the command line, or put the options into a submission script and call "sbatch submission-script". Here are a few sample submission scripts.
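For example, the same small job can be submitted either way; this is only a sketch, the account name myaccount is a placeholder, and --wrap is a standard sbatch option that runs a command string without a separate script:
# everything on the command line
sbatch --partition=priority --account=myaccount --mem-per-cpu=512 --wrap="printenv; date"
# or the same options in a script, as in the samples below
sbatch submission-script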
1) The simplest one-core job, requesting 512 MB of memory per CPU.
Note: if the line "#SBATCH --account=admin" is omitted, the job will run under the user's default account if one exists; otherwise sbatch will fail with "sbatch: error: Batch job submission failed: Invalid account or account/partition combination specified". A way to list the accounts available to you is sketched after the script.
#!/bin/bash
#SBATCH --partition=priority
#SBATCH --account=admin
#SBATCH --mem-per-cpu=512
printenv; date;
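To see which accounts (and partitions) you are allowed to submit under, one generic Slurm query is sacctmgr; the exact fields shown may vary by site:
sacctmgr show associations user=$USER format=User,Account,Partition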
2) A 16-core halld job which requests 4 GB of memory (250 MB x 16), 1 GB (1000 MB) of disk space, and 2 hours of walltime. The --chdir option sets the batch job's working directory to /scratch/slurm/<slurm-job-id>. A way to check the job after submission is sketched after the script.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=250
#SBATCH --job-name=test
#SBATCH --partition=priority
#SBATCH --account=halld
#SBATCH --mail-user=xxx@jlab.org
#SBATCH --time=2:00:00
#SBATCH --gres=disk:1000
path_to_executable
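After submission, the job's state and request can be inspected with standard Slurm commands; the script name halld_job.sh and the job id 12345 below are hypothetical:
sbatch halld_job.sh      # prints "Submitted batch job 12345"
squeue -u $USER          # list your jobs and their states
scontrol show job 12345  # full details of the job's request and allocation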
3) A whole-node clas12 job which requests a farm18 centos77 node and 24 hours of walltime. The error and output files go to the /farm_out/<userName> directory and are named <jobName>-<jobId>-<hostname>.out and .err. If the --output and --error options are omitted, Slurm writes the log files to the directory where sbatch was called. A way to list the node features usable with --constraint is sketched after the script.
Note: use --exclusive to request a whole node; in that case do not request any memory.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --job-name=rec-clas12-1-rgbv16dst6311r20_003
#SBATCH --output=/farm_out/%u/%x-%j-%N.out
#SBATCH --error=/farm_out/%u/%x-%j-%N.err
#SBATCH --partition=production
#SBATCH --account=clas12
#SBATCH --constraint=centos77,farm18
#SBATCH --gres=disk:5120
#SBATCH --time=24:00:00
path_to_executable
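The feature names used with --constraint (centos77 and farm18 above) are defined by the cluster administrators; a generic way to see which features each node advertises is sinfo's %f format field:
sinfo -o "%20N %10c %10m %f"   # node names, CPUs, memory (MB), feature list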
4) Parameter Sweep or Array of Jobs. Slurm allows a submission script to be expanded into an array of jobs using the option --array=<indexes>, where the indexes can be specified as a comma-separated list and/or a range of values with a "-" separator, for example "--array=0-15" or "--array=0,6,16-32". The following example executes the myanalysis program on 200 files in the directory /path/to/data. Once this script is submitted to Slurm, there will be 200 independent jobs, each with the same job id but a different array task id. The squeue command displays these jobs in the form jobid_arrayid. Some common array-management commands are sketched after the script.
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --job-name=jobname
#SBATCH --output=/farm_out/%u/%x-%j-%N.out
#SBATCH --error=/farm_out/%u/%x-%j-%N.err
#SBATCH --partition=production
#SBATCH --account=clas12
#SBATCH --constraint=centos77,farm18
#SBATCH --gres=disk:5120
#SBATCH --mem-per-cpu=1000
#SBATCH --time=24:00:00
#SBATCH --array=0-199
FILES=(/path/to/data/*)
srun myanalysis ${FILES[$SLURM_ARRAY_TASK_ID]}
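Array jobs can be throttled and managed per task with standard Slurm syntax; the job id 12345 below is hypothetical:
#SBATCH --array=0-199%20   # same range, but at most 20 array tasks run at the same time
squeue -j 12345            # pending tasks are shown collapsed, e.g. 12345_[20-199]
scancel 12345_7            # cancel only array task 7
scancel 12345              # cancel the entire array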