Getting Started


Steps for getting started with the Physics Farm system at Jefferson Lab are listed below:

  • BASICS on using the Farm resources
  • Batch system
  • Auger-Slurm
  • Login to SciComp GPUs
  • Create custom kernel for Jupyter Hub (see the sketch after this list)
  • Start using JLab Jupyter Hub
  • Network certificate
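As a quick illustration of the custom-kernel step, the sketch below uses the standard ipykernel mechanism for registering a Python environment as a Jupyter kernel; the environment and kernel names are placeholders, and the JLab Jupyter Hub pages listed above cover the site-specific details.

    # Inside the environment you want to expose to Jupyter Hub (names are placeholders)
    python -m venv ~/my-analysis-env
    source ~/my-analysis-env/bin/activate
    pip install ipykernel
    # Register the environment as a user-level Jupyter kernel
    python -m ipykernel install --user --name my-analysis-env --display-name "My Analysis Env"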

Auger Configuration Files

To submit a job to the farm you need to create a configuration script that describes your job(s). There are two formats for the configuration file: a flat text (TXT) format, which has been supported for years, and an XML-based format, which supports a more detailed job description.
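As an illustration only, a flat-format file is a set of keyword/value lines along the lines of the sketch below; the project, track, and script names are placeholders, and the exact set of supported keywords should be taken from the Auger documentation.

    PROJECT: myproject
    TRACK: analysis
    JOBNAME: example_job
    COMMAND: /home/myuser/run_analysis.sh
    MEMORY: 2 GB

The XML format expresses the same information as elements and, as noted above, allows additional description to be attached to the request.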

Auger Batch System

The Auger batch system provides a large computing resource to the JLab community. It is a high-throughput system rather than an interactive one, although interactive nodes are available, and it is tuned to get as much work done per day as possible.
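For orientation, jobs are typically submitted from an interactive farm node with the jsub command, as sketched below; the file names are placeholders, and the exact options (in particular how XML files are indicated) should be checked against the SciComp documentation.

    # Submit a job described by a flat-format configuration file
    jsub my_job.jsub

    # Submit a job described by an XML configuration file
    jsub -xml my_job.xml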

SLURM Batch Scheduler (for Computing Coordinators)

The management of SLURM accounts has been delegated to computing coordinators.
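As a rough illustration of the kind of account management this delegates, the sacctmgr commands below are standard SLURM; the user and account names are placeholders, and whether coordinators run such commands directly or request changes through SciComp is site-specific.

    # List the users currently associated with an experiment's account
    sacctmgr show associations where account=myexperiment format=Account,User,Share

    # Add a user to the account
    sacctmgr add user name=jdoe account=myexperiment

    # Adjust a user's fair-share weight within the account
    sacctmgr modify user where name=jdoe account=myexperiment set fairshare=5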

Scientific Computing Resources

Summary of resources at JLab:

  • High Performance Computing (HPC) for LQCD, ~20,000 cores, ~180 GPUs
  • Batch Computing for Experimental Physics (the "farm"), ~25,000 CPUs
  • Multiple Disk Systems (online storage), ~7 petabytes
  • The Tape Library for offline storage, ~30 petabytes
  • Interactive nodes, wide-area gateway nodes, and infrastructure support nodes

Details are found in the following subsections.
