Output of Batch Job

Stdout/Stderr

After a job finishes, auger-pbs copies the JOB_NAME.JOB_ID.out file to the
user's $HOME/.farm_out directory. This file contains any stdout from the
user's script (not captured by the user) plus the output of Auger's preamble and postamble scripts.
At the bottom of this file there is a job resource usage summary (see
below).
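The JOB_NAME.JOB_ID.out naming convention makes it easy to locate a job's output and pick the name apart from the shell. A minimal sketch, using a hypothetical job named myjob with id 12345 (the path follows the convention above; no real job is assumed):

```shell
# Hypothetical finished job "myjob" with id 12345; per the convention
# above, its combined output file is:
out="$HOME/.farm_out/myjob.12345.out"

# Split the JOB_NAME.JOB_ID.out name back into its parts:
base="${out##*/}"                 # myjob.12345.out
job_name="${base%%.*}"            # myjob
job_id="${base#*.}"               # 12345.out
job_id="${job_id%.out}"           # 12345
echo "$job_name $job_id"          # prints: myjob 12345

# The resource usage summary sits at the end of the file:
# tail -n 20 "$out"
```

The tail command is left commented out since the file only exists after a job has actually run.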

MPI

(information on MPI libraries available & recommended)

MPI Considerations in Multi-Core and Heterogeneous environments

Multi-Core and Heterogeneous Environments

Current systems consist of multi-core nodes and often contain accelerators. One consequence is that users may want to run in so-called hybrid threaded MPI mode: fewer MPI processes per node than there are cores, with each process running multiple threads to fill the remaining cores.
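As a concrete sketch of that layout (a job-script fragment, assuming an OpenMP-threaded application launched with Open MPI's mpirun; ./hybrid_app and the core counts are hypothetical illustrations, not site defaults):

```shell
# Hypothetical two-socket, 32-core node: run 4 MPI ranks x 8 OpenMP
# threads instead of 32 single-threaded ranks.
export OMP_NUM_THREADS=8
# Open MPI mapping: 2 ranks per socket, 8 processing elements per rank
mpirun -np 4 --map-by ppr:2:socket:pe=8 ./hybrid_app
```

The right ranks-per-node / threads-per-rank split depends on the application's scaling behavior and the node's memory layout.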

jvolatile

Name

jvolatile - query the volatile disk project information

Auger Commands

  • jobstat - Summarize the status of batch jobs.
  • jsub - Submit jobs to the batch farm.
  • jkill - Delete queued jobs or kill running jobs.
  • farmhosts - Query the status of the batch farm nodes.

Batch System (Auger) - will be decommissioned on March 1st

The batch system provides a large computing resource to the JLab community.  It is a high-throughput system, not primarily an interactive one, although there are interactive nodes.  It is tuned to get as much work done per day as possible, which sometimes means compromising turnaround time for a single user to achieve the highest overall throughput.  The batch queuing system is configured to achieve some level of balance among all the competing demands upon the system, and is re-tuned on major changes in configuration or in science programs.

Physics Software Community Support

Physics maintains and supports the scientific data software, including ROOT, CERNlib, GEANT4, CLHEP, EVIO, CCDB, and GEMC. See additional documentation at https://data.jlab.org/.

/work disk areas

Moved to ServiceNow.

Interactive login machines

For Alma 9 access, run "ssh ifarm.jlab.org" to reach the interactive nodes ifarm2401 and ifarm2402, each of which provides:

  • AMD EPYC 9554 (Zen 4 "Genoa")
  • 256 threads (2 sockets × 64 cores × 2 threads/core)
  • 3.1 GHz base / 3.75 GHz max
  • 1.5 TB memory
  • 28 TB striped NVMe /scratch
  • HDR (200 Gb/s) IB
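Once logged in, standard Linux tools can confirm these resources (a sketch; the figures in the comments are the ifarm2401/ifarm2402 specs above, and output will differ on other machines):

```shell
# Run after "ssh ifarm.jlab.org":
lscpu | grep -E 'Model name|^CPU\(s\):'   # AMD EPYC 9554, 256 CPUs
free -h | awk 'NR==2 {print $2}'          # total memory (~1.5 TB on ifarm)
df -h /scratch 2>/dev/null || true        # ~28 TB striped NVMe /scratch
```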
