Wide Area Networking

Jefferson Lab has a 10 Gbps wide area network connection to a MAN (metropolitan area network), with a 10 Gbps uplink to ESnet in Washington, D.C. and a redundant 10 Gbps connection to ESnet in Atlanta. JLab can reasonably use 5 Gbps of this, and Scientific Computing can reasonably use 4 Gbps. Thus each of CLAS, GlueX, A+C+misc, and LQCD can use 1 Gbps on average, although each may on occasion find it can sustain 5-6 Gbps.
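As a rough illustration of what these shares mean in practice, the sketch below converts a transfer size into wall-clock time. The 1 Gbps and 5 Gbps rates come from the figures above; the 10 TB dataset size is a hypothetical example, not a JLab number:

```python
# Rough transfer-time estimate for a WAN bandwidth share.
# Rates (1 Gbps average, 5 Gbps burst) are from the text above;
# the 10 TB dataset size is a made-up example.

def transfer_hours(size_tb: float, rate_gbps: float) -> float:
    """Hours to move size_tb terabytes at rate_gbps gigabits per second."""
    bits = size_tb * 1e12 * 8           # terabytes -> bits
    seconds = bits / (rate_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600.0

print(f"{transfer_hours(10, 1.0):.1f} h")  # 10 TB at the 1 Gbps average share
print(f"{transfer_hours(10, 5.0):.1f} h")  # the same dataset during a 5 Gbps burst
```

At the 1 Gbps average share, a 10 TB dataset takes roughly a day; a sustained 5 Gbps burst brings that under five hours.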

Interactive Nodes

All interactive user systems are available from offsite through the JLab user login gateways: login.jlab.org.
JLab resources are being upgraded in 2019 to use the Slurm (SchedMD.com) workload manager. The JLab Slurm testbed environment runs CentOS 7 and is available via the following systems:
  • islurm1201
  • hpci12k01: 2012 dual 8-core Sandy Bridge, 128 GB memory, 1 K20m GPU, CUDA 10, CentOS 7; front end to the JLab Slurm12k cluster
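Since the testbed runs Slurm, work is submitted as batch scripts of #SBATCH directives. A minimal sketch follows, generated in Python so the directives are easy to adjust; the partition name "testbed" and the walltime are hypothetical (check `sinfo` on the testbed nodes for the real partition names):

```python
# Sketch of a minimal Slurm batch script for the testbed nodes above.
# The partition name "testbed" and the 10-minute walltime are
# hypothetical; run `sinfo` on the cluster for real partition names.

directives = {
    "--job-name": "demo",
    "--nodes": "1",
    "--ntasks": "1",
    "--time": "00:10:00",      # hypothetical walltime
    "--partition": "testbed",  # hypothetical partition name
}

script = "#!/bin/bash\n"
script += "".join(f"#SBATCH {key}={val}\n" for key, val in directives.items())
script += "\nsrun hostname\n"  # the job's actual payload

print(script)
```

Saved to a file, such a script would be submitted with `sbatch` and monitored with `squeue`.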

Tape Library (offline storage)

IBM TS3500 Tape Library

The JLab Mass Storage System (MSS) is an IBM TS3500 tape library with LTO drives, installed in 2008 to replace JLab's original StorageTek silo with Redwood technology. The TS3500 is a modular system, with an expandable number of frames for tape slots and an expandable number of tape drives. The lab's JASMine software provides the user interface to the MSS.

Our current configuration consists of:

Experimental Physics File System Layout

Experimental Physics users see a file system layout with many parts:

/home: a file system accessible from all CUE nodes, holding the user's normal home directory on central file servers.

/group: a file system accessible from all CUE nodes, providing shared space for a group such as an experiment, held on central file servers.

HPC / LQCD File System Layout

LQCD / HPC users see a file system layout with 5 parts:

/home: Every user receives a home directory when their account is created. Note that, for performance and fault tolerance, this is a different home directory from the general lab computing home directory. With a default user quota of 2 GB, /home is not a large file system; it is backed up daily and is designed to store non-data files such as scripts and executables. The home directory is mounted on interactive nodes and compute nodes. This disk space is managed by individual users.
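Because the 2 GB quota is small and users manage the space themselves, it is worth checking usage before the directory fills. A minimal sketch (the helper name is ours; the 2 GB figure comes from the quota described above):

```python
import os

QUOTA_BYTES = 2 * 1024**3  # the 2 GB default quota described above

def dir_usage_bytes(root: str) -> int:
    """Total size of regular files under root (symlinks skipped)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total

used = dir_usage_bytes(os.path.expanduser("~"))
print(f"{used / 1024**2:.1f} MiB of {QUOTA_BYTES / 1024**2:.0f} MiB quota used")
```

The same check is available more cheaply from the shell with `du -sh ~`; the Python version is just a portable illustration.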

Disk Servers (online storage)

Scientific Computing currently has two (physical) types of file servers:

Experimental Physics Computing

The batch farm contains ~300 CentOS 7.7 nodes with 8, 16, 24, 32, or 36 cores each. Each core runs two hardware threads, providing two job slots per core, for a total of ~24000 job slots.
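The per-node slot counts follow directly from the rule above: two hardware threads per core means a node's slot count is twice its core count. A trivial sketch over the core sizes listed in the text:

```python
# Job slots per node = cores x 2 hardware threads (per the text above).

SLOTS_PER_CORE = 2  # two hardware threads, hence two job slots, per core

for cores in (8, 16, 24, 32, 36):
    print(f"{cores}-core node: {cores * SLOTS_PER_CORE} job slots")
```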

HPC / LQCD Computing Systems

The HPC / LQCD computing resources include a Xeon Phi (Knights Landing) + OmniPath cluster, a Xeon + Infiniband cluster, an NVIDIA GeForce RTX 2080 + Infiniband cluster, and an NVIDIA Kepler