HPC / LQCD Computing Systems

The HPC / LQCD computing resources include a Xeon Phi (Knights Landing) + Omni-Path cluster, a Xeon + Infiniband cluster, an NVIDIA GeForce RTX 2080 + Omni-Path cluster, and an NVIDIA Kepler GPU + Infiniband cluster.

Xeon Phi (Knights Landing) + Omni-Path Cluster

  • 18p (2018 Phi) -- 180 nodes, 68 cores, 16 GB high bandwidth memory, 92 GB main memory, Omni-Path fabric (100 Gb/s), 200 TB SSD
  • 16p (2016 Phi, formerly known as SciPhi-XVI) -- 264 nodes, 64 cores, 16 GB high bandwidth memory, 192 GB main memory, Omni-Path fabric (100 Gb/s), 1 TB disk

Xeon + Infiniband Cluster

  • 12s (2012 Sandy Bridge) -- 276 nodes, 16 cores, 32 GB memory, QDR Infiniband (40 Gb/s)

GPU Clusters
  • 19g (2019 GeForce RTX 2080) -- 32 nodes, eight RTX 2080 GPUs, 24 Intel(R) Xeon(R) Gold 5118 cores, 196 GB memory, Omni-Path fabric (100 Gb/s)
  • 12k (2012 Kepler) -- 45 nodes, four K20m GPUs, 16 x86 cores, 128 GB memory, FDR Infiniband (56 Gb/s)
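
For a rough sense of aggregate scale, the short Python sketch below totals the per-node figures quoted above (nodes x cores per node, nodes x memory per node). The inputs are copied directly from this page; the resulting totals are back-of-the-envelope illustrations, not official capacity figures.

    # Back-of-the-envelope cluster totals from the per-node figures listed above.
    # name: (nodes, cores per node, main memory per node in GB)
    clusters = {
        "18p (KNL)":          (180, 68, 92),
        "16p (KNL)":          (264, 64, 192),
        "12s (Sandy Bridge)": (276, 16, 32),
        "19g (RTX 2080)":     (32, 24, 196),
        "12k (Kepler)":       (45, 16, 128),
    }

    for name, (nodes, cores, mem_gb) in clusters.items():
        total_cores = nodes * cores            # aggregate CPU cores in the cluster
        total_mem_tb = nodes * mem_gb / 1024   # aggregate main memory in TB
        print(f"{name:<20} {total_cores:>7} cores  {total_mem_tb:6.1f} TB RAM")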

View additional details at https://www.jlab.org/hpc/?id=clusters.

All clusters have multiple QDR IB uplinks into the main disk-server Infiniband fabric and access all of the filesystems over Infiniband.

Performance ratings for all clusters except the newest (16p and later) are posted at http://www.usqcd.org/meetings/allHands2015/performance.html.