All interactive user systems are available from offsite through the JLab user login gateways: login.jlab.org.
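For example, an offsite session typically hops through the gateway first and then onto an internal machine; the username and the target host in this sketch are placeholders, so substitute your own account and the node you actually need:

```bash
# Step 1: from your offsite machine, log in to the gateway
ssh your-username@login.jlab.org

# Step 2: from the gateway (now on the JLab internal network),
# log in to the interactive node you need
ssh qcdi1402
```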
JLab resources are being upgraded in 2019 to use the Slurm workload manager (SchedMD.com). The JLab Slurm testbed environment runs CentOS 7 and is available via the following systems (a minimal job-submission sketch follows the list):
- islurm1201
- hpci12k01 2012 dual 8-core Sandy Bridge, 128 GB memory, 1 K20m GPU, CUDA 10, CentOS 7; front end to JLab Slurm12k cluster
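To confirm the testbed scheduler is working, a minimal batch script along these lines should suffice. The partition and account directives are deliberately omitted here because the testbed queue names are not listed above; check `sinfo` on the front end and add them if your site requires them:

```bash
#!/bin/bash
#SBATCH --job-name=slurm-test        # arbitrary job name
#SBATCH --nodes=1                    # a single node is enough for a smoke test
#SBATCH --ntasks=1
#SBATCH --time=00:05:00              # short wall-clock limit
#SBATCH --output=slurm-test-%j.out   # %j expands to the Slurm job ID

hostname                             # prints the compute node that ran the job
```

Submit with `sbatch slurm-test.sh` and watch the queue with `squeue -u $USER`.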
The LQCD/HPC systems for USQCD are accessed through the Jefferson Lab interactive gateway, login.jlab.org. From that node (or from any node on the JLab internal network) you can log in to one of these interactive machines:
- qcdi1402 2014 dual quad-core Haswell, 64 GB memory, CentOS 7, front end for KNL 16p cluster
- qcdi1401 2014 dual quad-core Haswell, 64 GB memory, CentOS 7, front end for KNL 16p cluster
- qcd12kmi 2012 dual 8-core Sandy Bridge, 128 GB memory, one K20m GPU for Kepler (K20) GPU software development; this node has the NVIDIA development environment installed, CentOS 6, front end to the 12k clusters
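Once logged in to qcd12kmi, the standard NVIDIA utilities can be used to confirm the GPU and toolkit are visible; note that `nvcc` is only found if the CUDA toolkit is on your PATH (the exact toolkit location or environment setup is site dependent, so adjust as needed):

```bash
nvidia-smi       # reports the K20m GPU, driver version, and current utilization
nvcc --version   # reports the installed CUDA compiler/toolkit version
```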
These nodes may be accessed through the following shorter aliases:
- qcdi = both machines (useful if you are not using accelerators)
- qcdkmi = Kepler interactive, points to the K20m GPU node (qcd12kmi)
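From the gateway or any internal node, the aliases can be used exactly like hostnames; this assumes qcdi resolves to either of the two Haswell front ends, as described above:

```bash
ssh qcdi      # lands on one of the Haswell front ends (no accelerator work)
ssh qcdkmi    # lands on the Kepler (K20m) GPU development node
```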
The Experimental Nuclear Physics data analysis systems are also accessed from offsite through the Jefferson Lab interactive gateway, login.jlab.org. From there you can log in to one of these interactive machines to start using "the farm":
- ifarm1402 dual 12-core 2.5 GHz Xeon Haswell, 32 GB memory, dual 1 TB striped disks, /scratch, DDR IB, CentOS 7
- ifarm1401 dual 12-core 2.5 GHz Xeon Haswell, 32 GB memory, dual 1 TB striped disks, /scratch, DDR IB, CentOS 7
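Offsite users can collapse the two-hop login into a single command with ssh's jump-host option (available in OpenSSH 7.3 and newer); the username here is a placeholder:

```bash
ssh -J your-username@login.jlab.org your-username@ifarm1402
```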
The interactive nodes may be accessed through the shorter aliases