LQCD User's Guide

This guide is meant to cover everything related to using the LQCD Computing Facilities at Jefferson Lab, from batch systems to file storage and management.  More detailed information on the hardware available at Jefferson Lab is contained in the companion book, Scientific Computing Resources.

Create custom kernel for Jupyter Hub

This is how you can set up your JLab JupyterHub notebook to run inside a Python virtual environment. This is useful for installing Python packages via pip from your own account without needing admin privileges (which you don't have). The following commands need to be run from a terminal on a CUE computer. I recommend doing this via a terminal launched from JupyterHub, since the Python version available there should have all the necessary system libraries installed.
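A minimal sketch of the sequence, assuming Python 3 is on the terminal's path; the environment name and path are placeholders you can change:

# create and activate a virtual environment in your home directory
python3 -m venv ~/venvs/myenv
source ~/venvs/myenv/bin/activate

# install ipykernel (plus any other packages you need) into the environment
pip install --upgrade pip
pip install ipykernel

# register the environment as a kernel under your account
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"

After this, the new kernel should appear in the JupyterHub launcher the next time you open a notebook.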

Start using JLab Jupyter Hub

  • To start using JupyterHub you need a JLab CUE account with Farm access, and you must be added to Slurm (the SciComp batch system).
  • Go to https://jupyterhub.jlab.org/ and log in with your CUE username and password.
  • When you log in for the first time it will show a QR code. Download one of the following applications: Google Authenticator, Microsoft Authenticator, or FreeOTP, and scan the QR code to set up two-factor authentication for future logins.

Login to SciComp GPUs

The following describes how to use one of the ML SciComp machines, which has four Titan RTX GPU cards installed. Steps:

conda activate tf-gpu
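A quick way to confirm the environment sees the GPUs, assuming the tf-gpu environment provides TensorFlow 2 (if it does not, nvidia-smi alone still lists the cards):

nvidia-smi                       # list the installed GPU cards and driver status
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"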

Other Intel KNL Resources


Slurm User Commands
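The commands below are the standard Slurm client tools, not JLab-specific; run them from an ifarm machine. The job script name and job ID are placeholders:

sbatch myjob.sh        # submit a batch submission script
squeue -u $USER        # list your pending and running jobs
scancel <jobid>        # cancel a job
sinfo                  # show partitions and node states
sacct -j <jobid>       # accounting information for a completed job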


Interactive Jobs
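A minimal sketch of requesting an interactive session with Slurm; the task count and time limit are placeholders, and a partition may need to be specified per the SciComp documentation:

# request one task for one hour and open an interactive shell on the allocated node
srun --ntasks=1 --time=01:00:00 --pty bash
# ... work interactively ...
exit                   # end the session and release the allocation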


GPU Jobs



GPUs are available on the batch farm for both interactive and batch processing using Slurm commands. GPU use from Auger is not supported. The Slurm commands (sbatch, salloc, etc.) are available on the ifarm machines.
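A sketch of a GPU batch script, assuming GPUs are requested via --gres; the partition name and resource limits are placeholders, so check the SciComp pages for the exact values:

#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu        # placeholder partition name
#SBATCH --gres=gpu:1           # request one GPU card
#SBATCH --time=01:00:00
#SBATCH --output=gpu-test-%j.out

nvidia-smi                     # confirm which GPU the job was given

Submit it with sbatch like any other batch script.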


The following GPU resources are available:


Sample Scripts

To submit a job to Slurm, one can call "sbatch" with all the necessary options, or put all the options into a submission script and then call "sbatch submission-script". Here are a few samples of submission scripts.

1) The simplest one-core job:
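A minimal sketch; the job name, memory request, and time limit are placeholders:

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --ntasks=1             # a single core
#SBATCH --mem-per-cpu=512M
#SBATCH --time=00:10:00
#SBATCH --output=test-%j.out

hostname                       # replace with the actual command to run

Submit it with "sbatch submission-script" and check its status with "squeue -u $USER".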

Parallel Jobs (MPI)

MPI Libraries:

One MPI development and runtime library is supported on the farm: OpenMPI, currently version 3, installed under the default path /usr/lib64/openmpi3. One can access it through the module utility on any ifarm or farm node after specifying module use /apps/modulefiles:
  • module load mpi/openmpi3-x86_64
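The full sequence on an ifarm or farm node, with a quick check that the compiler wrappers come from the OpenMPI 3 installation:

module use /apps/modulefiles
module load mpi/openmpi3-x86_64
which mpicc                    # should point under /usr/lib64/openmpi3
mpicc --version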

Compile MPI applications:
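A sketch of compiling and test-running a small MPI program with the OpenMPI wrappers, assuming the module above is loaded; the source file name is a placeholder:

mpicc -O2 -o hello_mpi hello_mpi.c     # C (use mpic++ for C++, mpifort for Fortran)
mpirun -np 4 ./hello_mpi               # quick local test; for production runs, launch under sbatch/srun on the farm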
