This guide is meant to cover everything related to using the LQCD Computing Facilities at Jefferson Lab, from batch systems to file storage and management. More detailed information on the hardware available at Jefferson Lab is contained in the companion book, Scientific Computing Resources.
This is how you can set up your JLab JupyterHub notebook to run inside a Python virtual environment. This is useful for installing Python packages via pip from your own account, without needing admin privileges. The following commands must be run from a terminal on a CUE computer. I recommend doing this from a terminal launched via JupyterHub, since the Python version available there should have all necessary system libraries installed.
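A minimal sketch of the steps above. The directory name and kernel name here are illustrative choices, not fixed by the facility; pick your own.

```shell
# Create a virtual environment in your home area (path is an example).
VENV_DIR="$HOME/venvs/jlab"
python3 -m venv "$VENV_DIR"

# Activate it and install packages with pip -- no admin privileges needed.
source "$VENV_DIR/bin/activate"
pip install --upgrade pip
pip install ipykernel

# Register the environment as a Jupyter kernel so it appears in JupyterHub
# (the kernel name "jlab-venv" is an arbitrary example).
python -m ipykernel install --user --name jlab-venv --display-name "Python (jlab-venv)"
```

After the kernel is registered, it shows up in the JupyterHub launcher alongside the system kernels, and any package you pip-install into the environment is importable from notebooks using that kernel.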
When you log in for the first time, a QR code is displayed. Download any one of the following applications: Google Authenticator, Microsoft Authenticator, or FreeOTP, and scan that QR code to set up future authentication.
GPUs are available on the batch farm for both interactive
and batch processing using slurm commands. GPU use from Auger is not supported.
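For example, a GPU can be requested either interactively or in batch with the standard Slurm GPU option. The partition name and time limit below are assumptions for illustration; check the facility's partition list (e.g. with sinfo) for the actual names.

```shell
# Interactive: request one GPU for an hour (partition name is an assumption).
salloc --partition=gpu --gres=gpu:1 --time=01:00:00

# Once the allocation is granted, run commands on the allocated node, e.g.:
srun nvidia-smi

# Batch: the same request expressed as sbatch options.
sbatch --partition=gpu --gres=gpu:1 --time=01:00:00 my_gpu_job.sh
```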
The slurm commands (sbatch, salloc, etc) are available on the ifarm machines.
To submit a job to slurm, one can call "sbatch" with all necessary options, or put all options into a submission script and then call "sbatch submission-script". Here are a few sample submission scripts.
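As a starting point, here is a minimal sketch of such a submission script. The partition, time limit, and job name are placeholder values, not facility defaults; replace them with settings appropriate to your project.

```shell
#!/bin/bash
#SBATCH --job-name=example        # placeholder job name
#SBATCH --partition=production    # assumed partition; check sinfo for real names
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
#SBATCH --output=example_%j.out   # %j expands to the Slurm job ID

# The commands below run on the allocated farm node.
echo "Running on $(hostname)"
```

Saved as, say, submission-script, it is submitted with "sbatch submission-script"; options given on the sbatch command line override the #SBATCH directives in the file.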
An MPI development and runtime library is supported on the farm: openmpi, currently at version 3, installed under the default path /usr/lib64/openmpi3. One can access it through the module utility on any ifarm or farm node after running "module use /apps/modulefiles".
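A typical session might look like the following. The exact module name and the hello.c source file are assumptions for illustration; use "module avail" to see the names actually provided on the farm.

```shell
# Make the facility's module files visible, then find the MPI module.
module use /apps/modulefiles
module avail mpi

# Load an openmpi 3 module (the name here is an assumption; confirm with module avail).
module load mpi/openmpi3-x86_64

# Compile and run an MPI program (hello.c is a hypothetical example source file).
mpicc -o hello hello.c
mpirun -np 4 ./hello
```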