
RC/HPC available resources


Main-campus HPC (SciClone)

The main-campus HPC resources (collectively known as "sciclone") consist of a number of different sub-clusters, each with its own set of hardware resources. Broadly, there are two types of HPC resources: those designed for multi-node parallel (MPI) calculations, and those best suited to serial, shared-memory, or GPU applications that run within one or, at most, a few nodes. All clusters have a maximum job walltime of 72 hrs, except kuro, which has a maximum of 48 hrs.

Multi-node parallel / MPI clusters
| name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| kuro.sciclone.wm.edu (kuro front-end) | 2x AMD EPYC 9124 | 32 | 32 | 384 | 10GbE/HDR IB | 2024 | The kuro cluster is currently reserved from 3pm Friday to 8pm Sunday |
| ku01-ku47 | 2x AMD EPYC 9334 | 64 | 3008 | 384 | 1GbE/HDR IB | 2024 | |
| femto.sciclone.wm.edu (femto front-end) | 2x Intel Xeon Gold 6130 | 32 | 32 | 96 | 1GbE/EDR IB | 2019 | |
| fm01-fm30 | 2x Intel Xeon Gold 6130 | 32 | 960 | 96 | 1GbE/EDR IB | 2019 | |
| bora.sciclone.wm.edu (bora front-end) | 2x Intel Xeon E5-2640 | 20 | 20 | 64 | 10GbE/FDR IB | 2017 | |
| bo01-bo55 | 2x Intel Xeon E5-2640 | 20 | 1100 | 128 | 10GbE/FDR IB | 2017 | |
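For orientation, the sketch below shows the style of program these multi-node clusters are intended for: a minimal MPI "hello world" in C. It is a generic illustration rather than anything SciClone-specific; the compiler wrapper name (mpicc) is the usual convention and may differ depending on which MPI module you load.

```c
/*
 * Minimal MPI "hello world" (generic illustration, not a
 * SciClone-specific code). Each rank prints its id; in a real
 * multi-node job the ranks are spread across nodes and
 * communicate over the InfiniBand fabric listed above.
 *
 * Build (assumed toolchain):  mpicc hello_mpi.c -o hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks in the job */

    printf("rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```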
Serial / Shared memory clusters / GPU resources
| name | GPUs | processor | cores/node | total # cores | mem (GB) | network | deployed | notes |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bora.sciclone.wm.edu (hima front-end) | --- | 2x Intel Xeon E5-2640 | 20 | 20 | 64 | 10GbE/FDR IB | 2017 | The hima cluster uses bora for compiling and job control |
| hi01-hi07 | 2x NVIDIA P100 (16GB), 1x NVIDIA V100 (16GB) | 2x Intel Xeon E5-2683 | 32 | 224 | 256 | 1GbE/FDR IB | 2017 | |
| astral.sciclone.wm.edu (astral front-end) | --- | 2x Intel Xeon Gold 6336Y | 48 | 48 | 256 | 1GbE/EDR IB | 2019 | |
| as01 | 8x NVIDIA A30 (24GB) | 2x Intel Xeon Platinum 8362 | 64 | 64 | 512 | 1GbE/EDR IB | 2019 | |
| gust.sciclone.wm.edu (gust front-end) | --- | 2x AMD EPYC 7302 | 32 | 32 | 32 | 10GbE/EDR IB | 2020 | |
| gt01-gt02 | --- | 2x AMD EPYC 7702 | 128 | 256 | 512 | 1GbE/EDR IB | 2020 | |
| gulf.sciclone.wm.edu (gulf front-end) | --- | AMD EPYC 7313P | 32 | 32 | 128 | 10GbE/HDR IB | 2024 | |
| gu01-gu02 | --- | AMD EPYC 7313P | 16 | 32 | 512 | 1GbE/HDR IB | 2024 | |
| gu03-gu06 | 2x NVIDIA A40 (48GB) per node | 2x AMD EPYC 7313P | 32 | 128 | 128 | 10GbE/HDR IB | 2024 | |
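In contrast to the MPI example above, these serial/shared-memory clusters are best suited to programs whose parallelism stays within one node. Below is a minimal OpenMP sketch in C of that model; it is a generic illustration, and the compile flag shown (gcc -fopenmp) is an assumption about your toolchain.

```c
/*
 * Minimal OpenMP sketch (generic illustration): parallelism that
 * stays within the shared memory of a single node, the model the
 * serial/shared-memory clusters above are suited to.
 *
 * Build (assumed toolchain):  gcc -fopenmp partial_sum.c -o partial_sum
 */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 100000000;   /* size chosen only for illustration */
    double sum = 0.0;
    long i;

    /* Threads share this node's memory; no network traffic is involved. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= n; i++)
        sum += 1.0 / (double)i;

    printf("harmonic sum over %ld terms with up to %d threads: %f\n",
           n, omp_get_max_threads(), sum);
    return 0;
}
```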


Main-campus Kubernetes cluster

Coming...


VIMS-campus cluster (Chesapeake)

The james cluster has a maximum job walltime of 72 hrs; potomac and pamunkey allow 120 hrs.
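Because the scheduler terminates jobs that reach these walltime limits, long computations should save state periodically and resume in a follow-up job. The sketch below shows one generic checkpointing pattern in C; the 72 hr limit, the safety margin, and the checkpoint file name are illustrative assumptions, not site-mandated values.

```c
/*
 * Generic checkpointing sketch (illustrative pattern, not a
 * site-provided API): save state and exit cleanly before the
 * scheduler kills the job at the walltime limit, so a follow-up
 * job can resume where this one stopped.
 */
#include <stdio.h>
#include <time.h>

#define WALLTIME_LIMIT_S (72.0 * 3600.0)  /* assumed 72 hr limit         */
#define SAFETY_MARGIN_S  (10.0 * 60.0)    /* save ~10 min before the end */

int main(void)
{
    time_t start = time(NULL);
    long step = 0;

    for (;;) {
        /* ... one unit of real work per iteration ... */
        step++;

        if (difftime(time(NULL), start) > WALLTIME_LIMIT_S - SAFETY_MARGIN_S) {
            FILE *f = fopen("checkpoint.dat", "w");  /* assumed file name */
            if (f) {
                fprintf(f, "%ld\n", step);  /* minimal state needed to resume */
                fclose(f);
            }
            break;  /* exit cleanly; resubmit the job to continue */
        }
    }
    return 0;
}
```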

Multi-node parallel / MPI cluster
| name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| james.hpc.vims.edu (james front-end) | Intel Xeon Silver 4112 | 8 | 8 | 32 | 10GbE/EDR IB | 2018 | Restrictions |
| jm01-jm27 | Intel Xeon Silver 4114 | 20 | 540 | 64 | 1GbE/EDR IB | 2018 | |
Serial / Shared memory clusters
| name | processor | cores/node | total # cores | mem (GB) | network | deployed | notes |
| --- | --- | --- | --- | --- | --- | --- | --- |
| chesapeake.hpc.vims.edu (potomac front-end) | AMD Opteron 4238 | 8 | 8 | 32 | 10GbE/EDR IB | 2014 | |
| pt01-pt30 | AMD Opteron 4334 | 20 | 540 | 64 | 1GbE/EDR IB | 2014 | |
| james.hpc.vims.edu (pamunkey front-end) | Intel Xeon Silver 4112 | 8 | 8 | 32 | 10GbE/EDR IB | 2018 | The pamunkey cluster uses james for compiling and job control; the pamunkey nodes do not have InfiniBand |
| pm01-pm02 | AMD Opteron 6380 | 64 | 128 | 256 | 10GbE | 2016 | |

Please send an email to hpc-help@wm.edu for advice if you are unsure which type of node is most appropriate for your particular application.