Parallel Jobs (MPI)

MPI Libraries:

OpenMPI is the supported MPI development and runtime library on the farm. The current version is OpenMPI 3, and its default installation path is /usr/lib64/openmpi3. The MPI environment can be loaded with the module utility on any ifarm or farm node after first running module use /apps/modulefiles:
  • module load mpi/openmpi3-x86_64
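
For example, a minimal setup on an ifarm node might look like the following sketch (the which line is only a sanity check that the compiler wrapper is now on the path):

module use /apps/modulefiles
module load mpi/openmpi3-x86_64
which mpicc     # should report a path under /usr/lib64/openmpi3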

Compile MPI applications:

Once the above module is loaded, the various MPI commands are in the path. Use mpicc or mpic++ to compile C or C++ applications, and mpifort to compile Fortran applications. The default inter-node communication fabric of MPI on the farm clusters is InfiniBand. Run mpicc --show or mpifort --show to see in detail how an application is compiled and linked.
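
As an illustration, the minimal C program below (a generic MPI hello-world sketch, not a farm-specific example; the file name hello_mpi.c is a placeholder) can be compiled with the mpicc wrapper described above:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut down the MPI runtime */
    return 0;
}

Compile and link it with:

mpicc -O2 -o hello_mpi hello_mpi.c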
 

Submit MPI jobs:

On the farm, MPI jobs must be submitted and run under the control of Slurm. Passing explicit host names or a machine file to the mpirun command is unnecessary. The following sample sbatch script shows how to submit an MPI job to the farm.
 
openmpi:
#!/bin/bash -l
#SBATCH -A youraccount
#SBATCH -p production (or general)
#SBATCH -N numnodes
#SBATCH --exclusive
#SBATCH -t hour:min:seconds
#SBATCH -J jobname
#SBATCH -C feature  (e.g. farm19, to request nodes with special features)
 
/usr/lib64/openmpi3/bin/mpirun  yourexec arg0 arg1
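
Assuming the script above is saved as mpi_job.sh (a placeholder name), it can be submitted and monitored with the standard Slurm commands:

sbatch mpi_job.sh
squeue -u $USER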
 
Notes:
An alternative way to request nodes is to specify the total number of tasks and the number of tasks per node. For example, to run 64 MPI processes per node (e.g. on farm19 nodes) across a total of 64 nodes, i.e. 4096 tasks in all:
#SBATCH -n 4096
#SBATCH --ntasks-per-node 64
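
Putting these directives together, a complete script using this style of request might look like the following sketch (account, time limit, and executable names are placeholders, as in the sample above):

#!/bin/bash -l
#SBATCH -A youraccount
#SBATCH -p production
#SBATCH -n 4096
#SBATCH --ntasks-per-node 64
#SBATCH --exclusive
#SBATCH -t hours:minutes:seconds
#SBATCH -J jobname
#SBATCH -C farm19

/usr/lib64/openmpi3/bin/mpirun yourexec arg0 arg1

Slurm derives the number of nodes (64 here) from the total task count and the per-node task count, so -N is not needed in this form.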
 
This version of OpenMPI does not support launching MPI jobs with the Slurm srun command. A newer version of OpenMPI that supports srun will be available in the near future.