Write-through Cache

Migrated to ServiceNow. Text below may be out of date.


The write-through cache system is a front-end disk cache for the tape library system. It enables a user with appropriate permission to deposit new files into the cache. New files are migrated to the tape library after a short period of time, within which they can still be modified or deleted.

Note: this cache file system serves neither as a backup area for users' home directories nor as a large disk area for code development. The cache system is intended for large experimental configuration, calibration, and raw data as well as simulation data. Users are especially discouraged from putting extremely small or rapidly changing files into the system.

This write-through cache file system is mounted on all farm nodes under /cache. Each experimental hall or large project has a disk area that can be accessed as /cache/<project>. Each area has a configurable quota and reservation size. In the write-through cache system a directory is writable by a user according to ordinary Unix permissions (it is in fact a Lustre file system). A custom process (cacheManager) manages the disk space, ensuring that newly created files are migrated to the tape library in a timely fashion and presenting the tape library to users as a file system.
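
For example, depositing data requires nothing beyond ordinary Unix commands. In the sketch below the project name 'myexp' and all paths are purely illustrative:

    # Create a directory and copy a file in; /cache behaves like a
    # normal (Lustre) file system.
    mkdir -p /cache/myexp/run2024/calib
    cp calib_run1001.dat /cache/myexp/run2024/calib/

    # Until migration to tape happens, the file can still be changed
    # or removed:
    rm /cache/myexp/run2024/calib/calib_run1001.dat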

New files created by users under /cache/<project> are backed up to the tape library under /mss/<project> within a short period of time. The mapping between a tape volume set and a stub file directory is the same as in the read-only cache system, including experimental raw data. Users can fetch any file under /mss/<project> to the write-through cache disk using the jcache command. The cacheManager automatically deletes files based on a least-recently-used (LRU) algorithm when disk space is needed. Before any file is deleted, the cacheManager makes sure that the file on disk has a copy in the tape library. If a user wants to replace the copy in the tape system with the new one on disk, he/she must use jcache tapeRemove to delete the old copy in the tape library first; otherwise the new copy on disk will not be saved to the tape library.
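
A typical fetch-and-replace sequence using the jcache subcommands named above. The paths are illustrative, and the exact argument forms shown are an assumption; they may differ in detail on the real system:

    # Stage a file from tape into the write-through cache.
    jcache get /mss/myexp/run2024/calib/calib_run1001.dat

    # To replace the tape copy: remove it from the tape library first,
    # then modify the file under /cache; the cacheManager will migrate
    # the new version as usual.
    jcache tapeRemove /mss/myexp/run2024/calib/calib_run1001.dat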

Permission and Ownership Settings:
The tape library software (JasMine) maintains permission and ownership information for all /mss stub directories and files. The permission and ownership settings on all /cache directories and files reflect the settings inside JasMine. More specifically,

  • If a user creates a new directory /cache/project/a/b/c that does not yet exist inside JasMine, JasMine will adopt the permission and ownership settings of the user-created directory.
  • If a user uses the chmod or chgrp command to change the permission or ownership of an existing directory /cache/project/a/b/c on the disk file system, the related settings inside JasMine for that directory will not change. Storing and retrieving files to and from this directory using JasMine may then result in errors because of the difference in permission and ownership settings between the disk file system and JasMine. We urge users to use the jcache chmod/chgrp commands to change permissions on a directory or a file, which keeps the permission settings consistent between the cache system and the tape library (see the example after this list). Moreover, only the owner of a directory or a superuser can change the permission and ownership settings of a directory or a file.
  • If a user wants to create a new top-level directory that does not have a matching volume set, the user has to submit a request (ccpr) asking the administrators to create the directory and a corresponding volume set.
  • Raw data directories and files are owned by a special user 'halldata'. No one can modify those files and no one can delete those files from the tape library.
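
A hedged illustration of the chmod/chgrp point above, assuming jcache chmod and jcache chgrp follow the usual Unix argument order (their exact syntax is not spelled out here); the group name and paths are illustrative:

    # Plain chmod changes only the disk copy and leaves JasMine out of
    # sync, which can later break stores and retrieves:
    chmod 750 /cache/myexp/run2024/calib        # discouraged

    # jcache chmod/chgrp keep the cache and JasMine consistent:
    jcache chmod 750 /cache/myexp/run2024/calib
    jcache chgrp myexp /cache/myexp/run2024/calib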

Backup policy:

  • A newly created file will be backed up to the tape library if its size is in the window "1 MB < size < 1000 GB" and the file is older than the backup threshold (12 days). Users can back up any non-zero-size file immediately by calling 'jcache put file-path' (see the example below).
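
A minimal sketch of forcing an immediate backup with 'jcache put'; the path is illustrative:

    f=/cache/myexp/run2024/calib/calib_run1001.dat

    # 'jcache put' accepts any non-zero-size file, even one outside the
    # automatic 1 MB .. 1000 GB window or younger than the 12-day threshold.
    if [ -s "$f" ]; then
        jcache put "$f"
    fi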

Deletion policy:

  • Any file larger than 100 TB will be deleted from the /cache file system, and an email will be sent to <user>@jlab.org.
  • When disk space is needed, files pulled from tape by a farm job that are larger than 3 MB and not pinned by any user will be deleted first.
  • If disk space is still needed, the least recently used files that also satisfy the criterion "pin count = 0 AND backed up" will be deleted (see the example after this list).
  • Files that have not been accessed for 2 years will be deleted.
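
Because every file removed by these passes is guaranteed to have a tape copy, a deleted file can simply be staged back when it is needed again (path illustrative):

    # The /mss stub survives the sweep, so the file can be re-fetched
    # from tape at any time.
    jcache get /mss/myexp/run2024/calib/calib_run1001.dat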

Pin policy:

  • Each project has a pin quota, which is the same as the project's reservation size.
  • There are three types of pin: 1) pinned by a regular user when "jcache get" or "jcache pin" is called (see the example after this list); 2) pinned by Auger under the user 'farm' for a farm job (the file is unpinned when the job completes); 3) pinned by the cacheManager under the user 'manager' when a duplicated file is found.
  • If the total amount pinned by all users (including 'farm' and 'manager') under a project exceeds the project's pin quota, a pin command from a regular user will fail and a get request from a regular user will be held. However, pins and gets by 'farm' (from a farm job) and by 'manager' always succeed regardless of the pin quota status.
  • When a project's pinned size exceeds the pin quota, the pin closest to expiration will be removed (not necessarily the oldest pin).
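
A minimal example of user pinning with the two commands named in the pin types above; only the bare forms are shown, since any pin duration or other options are not described here, and the paths are illustrative:

    # 'jcache get' stages a file from tape and pins it for the caller.
    jcache get /mss/myexp/run2024/raw/run1001.evio

    # 'jcache pin' pins a file that is already on the cache disk, so the
    # LRU sweep will skip it while the pin is in effect.
    jcache pin /cache/myexp/run2024/raw/run1001.evio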