Parallel Jobs (MPI)

MPI Libraries:

The farm supports one MPI development and runtime library: OpenMPI. The current version is OpenMPI 3, installed under the default path /usr/lib64/openmpi3. To access it on any ifarm or farm node, first add the site module directory with "module use /apps/modulefiles", then load the MPI module:
  • module load mpi/openmpi3-x86_64
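The steps above can be combined as follows (a minimal sketch using the module names and paths listed above):

```shell
# Make the site modulefiles visible to the module utility
module use /apps/modulefiles

# Load the OpenMPI 3 development and runtime environment
module load mpi/openmpi3-x86_64

# Verify that the MPI compiler wrapper is now on the PATH
which mpicc
```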

Compile MPI applications:
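This section does not name a specific compile command; a typical workflow with the OpenMPI compiler wrappers would look like the sketch below (hello_mpi.c is a hypothetical source file, assumed to exist in the current directory):

```shell
# Compile an MPI C program with the OpenMPI compiler wrapper,
# which adds the MPI include paths and libraries automatically
mpicc -O2 -o hello_mpi hello_mpi.c

# Quick local check with 4 ranks (on an interactive node, not a login node)
mpirun -np 4 ./hello_mpi
```

The same wrappers exist for other languages (mpicxx for C++, mpifort for Fortran).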

Single Node Jobs
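As a sketch of a minimal single-node batch script (the account, time limit, and job name below are placeholders, not site defaults):

```shell
#!/bin/bash
#SBATCH --job-name=single_node_test   # job name shown in queue listings
#SBATCH --account=myproject           # placeholder: substitute your Slurm account
#SBATCH --nodes=1                     # request a single node
#SBATCH --ntasks=1                    # one task on that node
#SBATCH --time=00:10:00               # wall-clock limit (hh:mm:ss)

# Commands below run on the allocated node
hostname
```

Submit with `sbatch script.sh` and monitor with `squeue -u $USER`.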


Account, Partitions and Resources

Slurm Accounts:
 
A Slurm account is used to charge farm CPU time to the correct computing project. Every user of the Slurm system is a member of at least one Slurm account. The account for a job must be specified in the job submission script with "#SBATCH --account=account_name". The list of accounts, and the users in each, can be found on the Slurm Accounts web page.
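In a batch script, the required directive looks like this sketch (account_name is a placeholder for one of your Slurm accounts; the other values are illustrative):

```shell
#!/bin/bash
#SBATCH --account=account_name   # charge this job's CPU time to the named project
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# SLURM_JOB_ACCOUNT is set by Slurm inside the job environment
echo "running under account ${SLURM_JOB_ACCOUNT}"
```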
 

Batch System (Slurm)

Note: Most experimental physics users should use SWIF2 for optimal interaction with the tape system and for access to improved job and workflow management features.

/volatile disk pool

Please refer to the Sciomp Volatile Disk Pool Policy page.

slurmJobs


Name
   slurmJobs - Slurm batch job status query

Syntax  

   slurmJobs [-h] [-u username] [-j job_id] [-s stat] [-a account] [-q queue]

Description

slurmHosts


Name
    slurmHosts - displays hosts and their static and dynamic resources/features.

Syntax

    slurmHosts

Description

Auger-Slurm

How to use this system?
 

Deletion algorithm

