
Node Types and Subclusters

SciClone and Chesapeake are clusters of many individual computers ("nodes"), so before you can run a job on either system you must decide which particular nodes your job will run on, often one or more groups of similar computers called subclusters. The tables below are intended to assist you with this decision; once you have chosen, you can learn how to submit jobs using the Torque or Slurm batch system.

Please contact the HPC group for advice if you are unsure of the most appropriate type of node for your particular application.
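If you are unsure what a given front-end offers, the schedulers themselves can report it. As a quick sketch (commands only; exact partition and queue names are site-specific and are covered on the batch-system pages):

    # On a Slurm-based front-end (e.g. kuro, gust, femto, astral, james):
    sinfo              # list partitions and the state of their nodes
    squeue -u $USER    # show your own queued and running jobs

    # On a Torque-based front-end (e.g. bora):
    qstat -q           # list the available queues
    qstat -u $USER     # show your own jobs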

Main-campus/SciClone cluster
  • kuro - MPI parallel jobs requiring at least 64 cores (Slurm)
  • astral - shared-memory GPU cluster w/ 8x Nvidia A30 GPUs, 24 GB (Slurm)
  • gulf - 2 high-memory nodes & 4 GPU nodes w/ 2x Nvidia A40 GPUs, 48 GB (Slurm)
  • bora - main MPI/parallel cluster (Torque)
  • hima - main shared-memory cluster with some GPUs (Torque)
  • gust - main cluster for large-memory / shared-memory jobs (Slurm)
  • femto - currently reserved for Physics and VIMS use (Slurm)
  • vortex-alpha - currently exclusive to AidData (Torque)
VIMS campus/Chesapeake cluster
  • james - MPI/parallel cluster (Slurm)
  • potomac - serial / shared-memory / small parallel jobs (Slurm)
  • pamunkey - shared memory only / exclusive to bio/bioinformatics calculations (Slurm)
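For example, a minimal Slurm batch script for an MPI run on kuro might look like the sketch below; the job name, module, and executable are placeholders, and the exact partition and module names should be taken from the Slurm documentation pages rather than from this sketch.

    #!/bin/bash
    #SBATCH --job-name=my_mpi_job      # placeholder job name
    #SBATCH --nodes=2                  # two 64-core kuro nodes
    #SBATCH --ntasks-per-node=64       # kuro is intended for jobs of at least 64 cores
    #SBATCH --time=02:00:00            # walltime limit (hh:mm:ss)

    # Load the compiler/MPI environment your code was built with (placeholder name):
    # module load my_mpi_stack

    # Launch the MPI executable (placeholder) on all allocated cores:
    srun ./my_mpi_program

The script is then submitted from the front-end with sbatch, and its progress can be followed with squeue.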
Main-Campus ("SciClone") Cluster

Each subcluster table below lists, for every node set: name, cores/threads per node, memory (GB), local scratch space (/local/scr), network, operating system, and year deployed.

SciClone Kuro cluster (3008 AMD Zen 4 cores). Batch system: Slurm

Name                 | Cores/Threads | Memory (GB) | /local/scr | Network        | OS        | Deployed
kuro.sciclone.wm.edu | 32            | 384         | 1.8 TB     | 10 GbE, HDR IB | Rocky 9.2 | 2024
ku01-ku47            | 64            | 384         | 980 GB     | 1 GbE, HDR IB  | Rocky 9.2 | 2024

SciClone Gulf cluster (16-core high-memory nodes w/ 512 GB + 32-core GPU nodes w/ 2x Nvidia A40 GPUs each). Batch system: Slurm

Name                                     | Cores/Threads | Memory (GB) | /local/scr | Network        | OS        | Deployed
gu01-gu02                                | 16            | 512         | 1.8 TB     | 10 GbE, HDR IB | Rocky 9.3 | 2024
gu03-gu06 (each w/ 2x Nvidia A40, 48 GB) | 32            | 128         | 1.8 TB     | 10 GbE, HDR IB | Rocky 9.3 | 2024

SciClone Astral cluster (32 CPU cores + 64 CPU cores w/ 8x Nvidia A30 GPUs). Batch system: Slurm

Name                        | Cores/Threads | Memory (GB) | /local/scr | Network        | OS        | Deployed
astral.sciclone.wm.edu      | 32            | 256         | 63 GB      | 10 GbE, HDR IB | Rocky 9.2 | 2024
as01 (8x Nvidia A30, 24 GB) | 64            | 512         | 14 TB      | 10 GbE, HDR IB | Rocky 9.2 | 2024
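The gulf and astral nodes above are the GPU resources; under Slurm, GPUs are normally requested with the generic-resource syntax. A hedged sketch, assuming the devices are exposed as a "gpu" gres (the actual partition and gres names for these nodes are documented on the Slurm pages):

    #!/bin/bash
    #SBATCH --job-name=gpu_test        # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=8        # CPU cores to accompany the GPUs
    #SBATCH --gres=gpu:2               # request two GPUs on the node (assumes a gres named "gpu")
    #SBATCH --time=01:00:00

    nvidia-smi                         # confirm which GPUs were assigned
    # ./my_gpu_program                 # placeholder application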

SciClone Bora and Hima subclusters (1324 Xeon "Broadwell" cores / 2648 threads). Batch system: Torque

Name                                   | Cores/Threads | Memory (GB) | /local/scr | Network        | OS         | Deployed
bora.sciclone.wm.edu (front-end/login) | 20            | 64          | 10 GB      | 10 GbE, FDR IB | CentOS 7.3 | 2017
bo01-bo55                              | 20            | 128         | 524 GB     | 1 GbE, FDR IB  | CentOS 7.3 | 2017
hi01-hi07                              | 32            | 256         | 3.7 TB [2] | 1 GbE, QDR IB  | CentOS 7.3 | 2017
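Bora and hima are still scheduled with Torque, so their job scripts use PBS directives instead of #SBATCH lines. A minimal sketch for one 20-core bora node (queue names and node properties are site-specific; see the Torque pages):

    #!/bin/bash
    #PBS -N my_mpi_job                 # placeholder job name
    #PBS -l nodes=1:ppn=20             # one node, 20 processors per node
    #PBS -l walltime=02:00:00          # walltime limit (hh:mm:ss)
    #PBS -j oe                         # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR                  # Torque starts in $HOME; return to the submit directory

    # Launch the MPI executable (placeholder); the exact launcher depends on the loaded MPI module:
    mpiexec -n 20 ./my_mpi_program

Submission is with qsub rather than sbatch.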

SciClone Gust subcluster (256 EPYC "Zen 2" cores). Batch system: Slurm

Name                                   | Cores/Threads | Memory (GB) | /local/scr | Network        | OS        | Deployed
gust.sciclone.wm.edu (front-end/login) | 32            | 32          | 64 GB      | GbE, EDR IB    | Rocky 9.3 | 2020
gt01-gt02                              | 128           | 512         | 670 GB     | 10 GbE, EDR IB | Rocky 9.3 | 2020
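Since gust is aimed at large-memory and shared-memory work, the relevant Slurm options are the per-node memory and CPU counts. A sketch for a threaded (e.g. OpenMP) code on one gt node; the program name and sizes are placeholders:

    #!/bin/bash
    #SBATCH --job-name=bigmem_job      # placeholder job name
    #SBATCH --nodes=1
    #SBATCH --cpus-per-task=32         # threads for the shared-memory code
    #SBATCH --mem=400G                 # memory per node; gt nodes have 512 GB total
    #SBATCH --time=12:00:00

    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # match thread count to the allocation
    ./my_threaded_program                         # placeholder executable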

SciClone Femto subcluster (960 Xeon "Skylake" cores). Batch system: Slurm

Name                                    | Cores/Threads | Memory (GB) | /local/scr | Network       | OS         | Deployed
femto.sciclone.wm.edu (front-end/login) | 32            | 96          | 10 GB      | 1 GbE, EDR IB | CentOS 7.6 | 2019
fm01-fm30                               | 32            | 96          | 2 TB [3]   | 1 GbE, EDR IB | CentOS 7.6 | 2019
VIMS-Campus ("Chesapeake") Cluster

Chesapeake: Potomac subcluster (360 "Seoul" compute cores). Batch system: Slurm

Name                                      | Cores/Threads | Memory (GB) | /local/scr | Network        | OS         | Deployed
chesapeake.hpc.vims.edu (front-end/login) | 12            | 32          | 242 GB     | 10 GbE, QDR IB | CentOS 7.9 | 2014
pt01-pt30                                 | 12            | 32          | 242 GB     | 1 GbE, QDR IB  | CentOS 7.9 | 2014

Chesapeake: James and Pamunkey subclusters (540 "Skylake" Xeon and 128 "Abu Dhabi" Opteron compute cores). Batch system: Slurm

Name                                 | Cores/Threads | Memory (GB) | /local/scr | Network        | OS         | Deployed
james.hpc.vims.edu (front-end/login) | 8c/16t        | 32          | 10 GB      | 10 GbE, EDR IB | CentOS 7.5 | 2018
jm01-jm27                            | 20            | 64          | 1.8 TB     | 1 GbE, EDR IB  | CentOS 7.5 | 2018
pm01-pm02                            | 64            | 256         | 1.3 TB [2] | 10 GbE         | CentOS 7.3 | 2016
choptank (/ches/data10)              | 12c/24t       | 64          | 800 GB     | 1 GbE, QDR IB  | RHEL 7     | 2016
rappahannock (/ches/scr10)           |               | 32          | 176 GB     | 10 GbE, QDR IB |            | 2014
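The Chesapeake subclusters are likewise scheduled with Slurm, and for short tests an interactive session is often more convenient than a batch script. A sketch using srun on james (partition names are omitted here; see the Slurm pages for the correct ones):

    # Request an interactive shell on one 20-core james node for 30 minutes:
    srun --nodes=1 --ntasks-per-node=20 --time=00:30:00 --pty /bin/bash

    # The shell then runs on a compute node, where the application
    # (placeholder name) can be started directly:
    # ./my_program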

  1. va01-va10 are on a separate InfiniBand switch and have full bandwidth to each other, but are 5:1 oversubscribed to the main FDR switch.
  2. Usually a node's local scratch filesystem is a partition on a single disk, but the local scratch filesystems on Pamunkey and Hima nodes are faster six- and eight-drive arrays (~300 and ~400 MB/s, respectively).
  3. The femto nodes are equipped with a 2 TB solid-state disk (SSD) for fast reads and writes.