Output of Batch Job


After a job finishes, auger-pbs will copy the JOB_NAME.JOB_ID.out file to the
user's $HOME/.farm_out directory. This file contains any stdout from the
user's script (not captured by the user) and the output from Auger's preamble and postamble scripts.
At the bottom of this file there is a job resource usage summary (see
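A minimal shell sketch of where to look for this output; the job name "myjob" and ID "12345" are hypothetical placeholders:

```shell
# Batch output lands in $HOME/.farm_out as JOB_NAME.JOB_ID.out (per the text above).
outdir="$HOME/.farm_out"
outfile="$outdir/myjob.12345.out"   # hypothetical JOB_NAME and JOB_ID

# List output files, newest first (prints nothing if the directory does not exist yet):
ls -t "$outdir" 2>/dev/null

# The job resource usage summary is at the bottom of the file:
tail -n 30 "$outfile" 2>/dev/null || echo "no output file yet: $outfile"
```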


(information on MPI libraries available & recommended)

MPI Considerations in Multi-Core and Heterogeneous environments


Current systems comprise multi-core nodes and may contain accelerators. One consequence is that users may wish to run in so-called hybrid threaded MPI mode, which typically results in fewer MPI processes per node than there are cores in the node. Examples of this are as follows:

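As a hedged illustration of hybrid mode, here is a minimal PBS-style script sketch. The node and core counts, the program name ./my_hybrid_app, and the Open MPI --map-by option are assumptions for illustration, not Auger specifics; actual Auger submission goes through jsub.

```shell
#PBS -l nodes=2:ppn=16   # hypothetical: request 2 nodes with 16 cores each

# Hybrid threaded MPI: fewer ranks per node than cores per node.
# 2 ranks per node x 8 OpenMP threads per rank = 16 cores used per node.
export OMP_NUM_THREADS=8
mpirun -np 4 --map-by ppr:2:node ./my_hybrid_app   # --map-by ppr is Open MPI syntax
```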



  • jvolatile - query the volatile disk project information
  • jcache-old - no longer available
  • jcache - manage files on the read-only cache disk


Auger Commands

  • jobstat - A summary of job status.
  • jsub - Submit jobs to the batch farm.
  • jkill - Delete queued jobs or kill running jobs.
  • farmhosts - Query the status of the batch farm nodes.

Batch System (Auger) - will be decommissioned on March 1st

The batch system provides a large computing resource to the JLab community. It is a high-throughput system, not primarily an interactive system, although there are interactive nodes. It is tuned to get as much work done per day as possible, which sometimes means compromising turnaround time for a single user to achieve the highest overall throughput. The batch queuing system is configured to achieve some balance among all the competing demands on the system, and is re-tuned on major changes in configuration or in science programs (e.g.

Physics Software Community Support

Physics maintains and supports the scientific data software, including ROOT, CERNlib, GEANT4, CLHEP, EVIO, CCDB, and GEMC. See full documentation at https://data.jlab.org/drupal/

/work disk areas

Moved to ServiceNow.