File system layout

/home: Every user has a home directory. Note that this is different from the general lab computing home directory. /home is not a large file system; the default quota is 2 GB, and it is backed up daily. It is intended to store non-data files, such as scripts and executables. The home directory is mounted on both the interactive nodes and the compute nodes. This disk space is managed by individual users.
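As a quick sanity check against the 2 GB quota, you can look at how much space your home directory is using from an interactive node. This is only a sketch; the exact quota-reporting tool available on the system may differ.

    # Show the total size of your home directory (compare against the 2 GB quota).
    du -sh $HOME

    # If standard Linux quota reporting is enabled, this shows your usage and limit.
    quota -s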

/work: This is an area designed for project or group use. It is organized by project and managed by the individual users in each project. This area is NOT backed up. /work/project-name is intended to store software distributions and small data sets that do not need to be backed up: software distributions are assumed to be backed up in external source control systems, and data sets are assumed to be easily regenerated or obtained from elsewhere. Detailed information on /work is given in a later chapter.

/volatile: This is a large global scratch space that holds files for some moderately long period of time. This area is NOT backed up. It may be used to hold the output of one job that will later be consumed by a subsequent job, or as an area to pack and unpack tarballs. In some cases users work with tens of thousands or hundreds of thousands of small files in one directory, and this is the best place for that type of data. (If the files need to persist on disk for a long time, /work is a good alternative location.) /volatile is implemented on top of Lustre, a high-performance, high-capacity distributed file system, and is therefore the highest-performance (and largest) file system at Jefferson Lab.
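For example, unpacking a large tarball is best done under /volatile rather than under /home or /work. The sketch below assumes an illustrative per-project directory layout; the actual naming convention under /volatile may differ.

    # Create a working area under /volatile (the path layout is illustrative).
    mkdir -p /volatile/myproject/$USER/run01
    cd /volatile/myproject/$USER/run01

    # Unpack a large tarball here rather than in /home or /work.
    tar -xzf /work/myproject/dist/mydata.tar.gz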

/cache: This is designed to be a cache front end to the tape library. It is most frequently used to store data files (input and output) for batch jobs. /cache is also implemented on top of the Lustre file system and is mounted on both the interactive nodes and the compute nodes. /cache is semi-managed: files are automatically migrated to tape after some period of time and are eventually removed from disk automatically. Check the Cache Manager Policy page for the current backup and deletion policy.
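A common pattern is to place large batch output under /cache so that it is eventually migrated to tape automatically. The path below is an illustrative assumption; the actual /cache directory layout depends on your experiment or project.

    # Copy a large result file into the project's /cache area; the cache manager
    # will migrate it to tape and later remove the disk copy automatically.
    cp run01_output.root /cache/myproject/rawdata/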

/scratch: This is a transient local storage area available on each compute and interactive node, located on that node's local disk. The /scratch space on a compute node is cleaned up after each job finishes. PLEASE NOTE: Each job node has its own /scratch area; one node cannot see the scratch area of another. When you refer to /scratch in your PBS script, you are in fact referring only to the /scratch space of the node that executes the batch script (the head node in a multi-node job). This directory is not the same as /scratch on the interactive nodes. Since /scratch on the compute nodes is automatically cleaned up by the job epilogue, make sure your job script includes the file copy commands needed to save your data from /scratch to a more permanent location, as shown in the sketch below. Looking for your files in /scratch on an interactive node while or after your job runs will not work.
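The sketch below shows this pattern in a PBS job script: work in the node-local /scratch area for speed, then copy anything worth keeping to a shared file system (such as /volatile or /cache) before the script exits. The resource requests, paths, and program names are illustrative assumptions, not site defaults.

    #!/bin/bash
    #PBS -N example_job
    #PBS -l nodes=1
    #PBS -l walltime=01:00:00

    # Work in the node-local /scratch area; it is fast, but the job epilogue
    # wipes it as soon as the job finishes.
    WORKDIR=/scratch/$USER/$PBS_JOBID
    mkdir -p $WORKDIR
    cd $WORKDIR

    # Stage input from a shared area and run the program (names are illustrative).
    cp /cache/myproject/input/run01.dat .
    ./my_analysis run01.dat > run01.out

    # Copy results to a shared file system BEFORE the job ends; anything left
    # in /scratch is deleted and was never visible from the interactive nodes.
    cp run01.out /volatile/myproject/$USER/results/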