Experimental Physics users see a file system layout with many parts:
/home: is a file system accessible from all CUE nodes. It is the user's normal home directory and is held on the central file servers.
/group: is a file system accessible from all CUE nodes. It provides shared space for a group, such as an experiment, and is held on the central file servers.
/work: is an area designed for project or group use and is held on the Scientific Computing ZFS systems. It is organized by project and managed by the individual users in each project. This area is NOT backed up. /work/project is designed to store software distributions and small data sets that don't need to be backed up: software distributions are assumed to be backed up in external source-control systems, and data sets are assumed to be easily regenerated or obtained from elsewhere.
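For example, a project might keep a rebuildable software tree under /work. The following is a minimal sketch of that usage; the project name "myexpt" and the repository URL are hypothetical placeholders, not actual paths on the system:

    # Check out a software distribution into the project's /work area.
    # "myexpt" and the repository URL are hypothetical placeholders.
    mkdir -p /work/myexpt/software
    cd /work/myexpt/software
    git clone https://github.com/example/analysis-code.git
    # Nothing under /work is backed up; the clone can be re-created
    # from the upstream repository at any time, which is the intended use.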
/volatile: is a large scratch space that holds files for a moderately long period of time. It may be used, for example, to hold the output of one job that will later be consumed by a subsequent job.
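A two-stage workflow might pass intermediate files through /volatile like this. This is only a sketch: "myexpt", the file names, and the generate_events / reconstruct programs are all hypothetical:

    # --- Stage 1 job: leave intermediate output in /volatile ---
    # ("myexpt" and all names below are hypothetical placeholders)
    OUT=/volatile/myexpt/stage1
    mkdir -p "$OUT"
    ./generate_events --run 1234 --output "$OUT/run1234_events.dat"

    # --- Stage 2 job, submitted later: consume the staged file ---
    ./reconstruct --input /volatile/myexpt/stage1/run1234_events.dat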
/cache: is a read-only cache of the tape library. It is most frequently used to store data files for batch jobs. /cache is mounted on the interactive nodes and the compute nodes. This disk area is semi-managed, with automatic file reads from tape for batch job input files and automatic removal. It will soon be upgraded / replaced with a write-through cache nearly identical to the one used for the LQCD / HPC resources (the two systems have different tunable parameters).
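In a batch script this usually amounts to nothing more than reading input directly from a /cache path, since staging from tape is handled automatically. A sketch, with a hypothetical file path and analysis program:

    # Hypothetical input file under the read-only tape cache.
    INPUT=/cache/myexpt/raw/run1234.dat
    # Once staged from tape the file reads like any ordinary file;
    # the cache management, not the job, decides when it is removed.
    if [ -r "$INPUT" ]; then
        ./analyze --input "$INPUT"
    else
        echo "run1234.dat not yet staged from tape" >&2
        exit 1
    fi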
/scratch: is a transient storage area available on each compute and interactive node, held on that node's local disk. The /scratch space on a compute node is cleaned up after each job finishes. PLEASE NOTE: Each job node has its own /scratch area; one node cannot see the scratch area of another. When your PBS script refers to /scratch, it is referring only to the /scratch space of the node executing the batch script. That directory is not the same as /scratch on the interactive nodes, nor on the other compute nodes in a multi-node batch job. Since the /scratch directory on every compute node is automatically cleaned by the job epilogue, make sure your job script saves your data from /scratch to somewhere more permanent before it exits.
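The usual pattern is therefore: stage input onto /scratch, work there, and copy results somewhere permanent before the script exits. A minimal PBS-style sketch follows; $PBS_JOBID is set by PBS, and all other paths and program names are hypothetical:

    #!/bin/bash
    #PBS -N scratch_example
    #PBS -l nodes=1

    # /scratch here is the local disk of whichever node runs this script.
    # $PBS_JOBID is set by PBS; all other names are hypothetical.
    WORKDIR=/scratch/$USER/$PBS_JOBID
    mkdir -p "$WORKDIR"
    cd "$WORKDIR"

    # Stage input in, work locally, then copy results out before the
    # job epilogue wipes this node's /scratch.
    cp /cache/myexpt/raw/run1234.dat .
    ./analyze --input run1234.dat --output results.root
    cp results.root /volatile/myexpt/results/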