Gust
GUST USERS:
There is a bug in the version of Slurm running on Gust that prevents conda environment information from propagating correctly to the nodes when running an interactive or batch job. To work around this, please be sure to explicitly load your Anaconda module within your batch script or interactive session. Please send questions to hpc-help@wm.edu .
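For example, a minimal batch script that loads Anaconda before using conda might look like the following (the module name anaconda3 and the script name are assumptions; run "module avail" to see exactly what is installed):
#!/bin/tcsh
#SBATCH --job-name=conda-job
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -t 30:00
# Explicitly load the Anaconda module so conda is available on the compute node.
# The module name below is an assumption; check "module avail" for the exact name.
module load anaconda3
srun python my_script.py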
The Gust subcluster of SciClone contains 2 nodes, each containing 128 AMD EPYC Rome cores. The front-end is gust.sciclone.wm.edu and its startup module file is named .cshrc.gust.
Hardware
| | Front-end (gust / gt00) | Parallel nodes (gt01-gt02) |
|---|---|---|
| Model | HP DL385 Gen10 | HP DL385 Gen10 |
| Processor(s) | 2 × 16-core AMD EPYC 7703 (32 cores total) | 2 × 64-core AMD EPYC 7702 (128 cores/node, 256 cores total) |
| Clock speed | 3.0 GHz | 2.0 GHz |
| Memory | 32 GB | 512 GB |
| Network interfaces (application) | EDR IB (gt00-ib) | EDR IB (gt??-ib) |
| Network interfaces (system) | 1 GbE (gt00) | 1 GbE (gt??) |
| OS | CentOS 7.9 | CentOS 7.9 |
Slurm
Example batch scripts
1. Single core job:
#!/bin/tcsh
#SBATCH --job-name=serial
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -t 30:00
srun ./a.out
2. 32-core shared-memory parallel job:
#!/bin/tcsh
#SBATCH --job-name=parallel
#SBATCH --nodes=1 --ntasks-per-node=32
#SBATCH -t 30:00
srun ./a.out
3. 2-node hybrid parallel job: 32 cores on each node, using OpenMP within each node and MPI between nodes:
#!/bin/tcsh
#SBATCH --job-name=hybrid
#SBATCH --nodes=2 --ntasks=2 --cpus-per-task=32
#SBATCH --constraint=gust
#SBATCH -t 30:00
setenv OMP_NUM_THREADS 32
ckload 0.05 # report if any node usage is greater than 0.05
srun ./a.out
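To run any of these examples, save the script to a file and submit it with sbatch; an interactive session can be requested with salloc. The file name and resource values below are illustrative:
sbatch hybrid.sh                # submit a batch script
squeue -u $USER                 # check the state of your jobs
salloc -N 1 -n 8 -t 1:00:00     # request an interactive session (1 node, 8 cores, 1 hour)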
User Environment
To login, use SSH from any host on the William & Mary or VIMS networks and connect to gust.sciclone.wm.edu
with your HPC username (usually the same as your WMuserid) and W&M password.
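For example (replace username with your HPC username):
ssh username@gust.sciclone.wm.edu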
Your home directory on Gust is the same as everywhere else on SciClone, and all of the usual filesystems (/sciclone/homeXX, /sciclone/dataXX, /sciclone/scrXX, /local/scr, etc.) are available throughout the cluster.
SciClone uses Environment Modules (a.k.a. Modules) to automatically configure the user's shell environment across multiple computing platforms, as well as to organize the dozens of different software packages which are available on the system. We support tcsh as the primary shell environment for user accounts and applications.
The file which controls startup modules for Gust is .cshrc.gust. The most recent version of this file can be found in /usr/local/etc/templates on any of the front-end servers (including gust.sciclone.wm.edu).
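Typical Modules usage looks like the following (the package name intel is just an example; run module avail to see what is actually installed):
module avail            # list all available packages
module list             # show currently loaded modules
module load intel       # load a package
module unload intel     # remove it again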
Preferred filesystems
All of the nodes are equipped with a 700 GB HDD. Every user has a directory on this filesystem in /local/scr/$USER. This should be the preferred filesystem if your code can use it effectively.
The preferred global file system for all work on Gust is the parallel scratch file system available at /sciclone/pscr/$USER on the front-end and compute nodes. /sciclone/scr10/$USER is a good alternative (NFS, but connected to the same InfiniBand switch).
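A common pattern is to run out of local scratch from within a batch script and copy results back to a global filesystem afterwards. The sketch below assumes a single-core job and uses illustrative paths:
#!/bin/tcsh
#SBATCH --job-name=localscr
#SBATCH --nodes=1 --ntasks-per-node=1
#SBATCH -t 30:00
# Work in node-local scratch, then copy results back (paths are illustrative).
set workdir = /local/scr/$USER/$SLURM_JOB_ID
mkdir -p $workdir
cd $workdir
srun $HOME/a.out
cp -r $workdir /sciclone/pscr/$USER/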
Compiler flags
Gust has the Intel Parallel Studio XE compiler suite as well as version 11.2.0 of the GNU compiler suite. Here are suggested compiler flags which should result in fairly well-optimized code on Gust's AMD EPYC (Rome) architecture:
| Compiler | Language | Suggested flags |
|---|---|---|
| Intel | C | icc -O3 -xCORE-AVX2 -fma |
| Intel | C++ | icpc -std=c++11 -O3 -xCORE-AVX2 -fma |
| Intel | Fortran | ifort -O3 -xCORE-AVX2 -fma |
| GNU | C | gcc -O3 -mavx2 -mfma |
| GNU | C++ | g++ -std=c++11 -O3 -mavx2 -mfma |
| GNU | Fortran | gfortran -O3 -mavx2 -mfma |
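For example, to compile a simple program with the flags above (file names are illustrative):
gcc -O3 -mavx2 -mfma -o mycode mycode.c
ifort -O3 -xCORE-AVX2 -fma -o mysim mysim.f90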