Gaussian 16
IMPORTANT - Users who want to do multi-node parallelism with Gaussian 16 need to make the following modifications or these jobs will fail. Please send email to hpc-help@wm.edu if you need assistance.
Our old system used an older, insecure protocol to launch the remote processes that multi-node parallelism requires. We now use ssh for this purpose. However, two changes must be made to your environment:
- You must add a file named 'config' to your /sciclone/home/$USER/.ssh folder. This file must be merged with your existing config file or added with '600' permissions (chmod u+rw, go-rwx) to your .ssh folder; see the example commands after this list. Download the file here: config file
- You must also add an ssh key for password-less ssh into the nodes if you don't already have one:
- log into bora.sciclone.wm.edu
- run: ssh-keygen
- hit return to accept the default name of the file in which to save the key
- hit return to keep a blank passphrase
- hit return to confirm
- Finally, if you have no /sciclone/home/$USER/.ssh/authorized_keys file, simply copy /sciclone/home/$USER/.ssh/id_rsa.pub to /sciclone/home/$USER/.ssh/authorized_keys (must have permissions 644 (u+rw, go+r)). If you already have an /sciclone/home/$USER/.ssh/authorized_keys file, append the contents of /sciclone/home/$USER/.ssh/id_rsa.pub to it.
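For reference, here is a sketch of the corresponding shell commands, assuming the default key name id_rsa; adjust the paths if your setup differs:

# set the required permissions on the ssh config file
chmod 600 /sciclone/home/$USER/.ssh/config

# if you have no authorized_keys file yet, copy the public key into place
cp /sciclone/home/$USER/.ssh/id_rsa.pub /sciclone/home/$USER/.ssh/authorized_keys
chmod 644 /sciclone/home/$USER/.ssh/authorized_keys

# if authorized_keys already exists, append the public key instead
cat /sciclone/home/$USER/.ssh/id_rsa.pub >> /sciclone/home/$USER/.ssh/authorized_keys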
Gaussian 16 is a state-of-the-art software suite that performs ab initio electronic structure calculations within a Gaussian basis. A summary of Gaussian 16 features and the Gaussian 16 release notes are available on the Gaussian website. W&M has purchased both the serial and fully-parallel versions of Gaussian, as well as GaussView, for use on the W&M HPC cluster.
Currently, Gaussian 16 and GaussView are installed only on main-campus HPC clusters. Please email hpc-help@wm.edu if you need it to be installed somewhere else.
The Gaussian 16 site license specifically states that users must have their primary affiliation with the institution named in the license (W&M). Therefore, external collaborators will not have access to Gaussian or GaussView.
Preparing to use Gaussian/GaussView on the HPC cluster
Users need to load the gaussian/g16 module to use Gaussian 16 and/or GaussView. This can be done by putting the following line in your SLURM batch script or in your start-up script (.cshrc) on the bora sub-cluster:
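module load gaussian/g16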
Running Gaussian 16
There are a few ways to run Gaussian 16 on the cluster: serial (only one computing core), shared-memory parallel (using multiple cores on one node), distributed-memory parallel (using cores on multiple nodes), or hybrid shared/distributed memory (multiple cores on multiple nodes, where Gaussian's parallel execution environment, Linda, is used for communication between nodes and shared memory is used within each node).
Serial
Here is a SLURM batch script for serial Gaussian 16 jobs.
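A minimal sketch, assuming a one-hour walltime and an input file named water.com (the job name, walltime, and input file are placeholders to adapt to your own calculation):

#!/bin/tcsh
#SBATCH --job-name=g16_serial
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH -t 1:00:00

# load Gaussian 16 (add if not loaded automatically)
module load gaussian/g16

# run Gaussian 16 serially on the input file
g16 water.com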
Shared-memory parallel
This is an example of shared-memory parallelism. It is the same as the serial script except that 1) multiple cores are specified (--ntasks-per-node 20) and 2) g16 is given the extra option '-p=<N>', where <N> is the number of cores to use:
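A minimal sketch, assuming 20 cores on one node and a placeholder input file water.com (adapt the resource requests to your job):

#!/bin/tcsh
#SBATCH --job-name=g16_smp
#SBATCH -N 1
#SBATCH --ntasks-per-node=20
#SBATCH -t 1:00:00

module load gaussian/g16 # <---- add if not loaded automatically

# -p=20 tells Gaussian 16 to use 20 shared-memory cores
g16 -p=20 water.com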
Distributed-memory parallel
Here is an example script for running a distributed-memory parallel Gaussian 16 job. The main differences from the serial and shared-memory scripts are that 1) two nodes are requested, each using 20 cores, 2) the GAUSS_WDEF environment variable is used, and 3) the getlinda script is run with an argument of '1' to indicate that distributed-memory parallelism is to be used for communication between all cores:
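A minimal sketch; the exact way GAUSS_WDEF is set from the site-provided getlinda script is an assumption here, as are the input file name (water.com) and the resource requests:

#!/bin/tcsh
#SBATCH --job-name=g16_linda
#SBATCH -N 2
#SBATCH --ntasks-per-node=20
#SBATCH -t 1:00:00

module load gaussian/g16 # <---- add if not loaded automatically

# build the Linda worker list from the SLURM allocation; the argument '1'
# indicates one Linda worker per core (fully distributed-memory)
setenv GAUSS_WDEF "`getlinda 1`"

g16 water.com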
Hybrid shared/distributed memory parallel
This final approach combines the last two examples. Here, one distributed-memory process runs on each node (--ntasks-per-node 1) and is responsible for communication between nodes, while shared memory is used for communication within each node (--cpus-per-task 20). This script sets the GAUSS_WDEF environment variable and calls the getlinda script with an argument of '0' to indicate that only one distributed-memory process should be launched per node:
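Again a minimal sketch under the same assumptions as above (the getlinda usage, input file name, and resource requests are placeholders):

#!/bin/tcsh
#SBATCH --job-name=g16_hybrid
#SBATCH -N 2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=20
#SBATCH -t 1:00:00

module load gaussian/g16 # <---- add if not loaded automatically

# argument '0': launch one Linda worker per node; shared memory is used
# for communication within each node
setenv GAUSS_WDEF "`getlinda 0`"

# -p=20 tells Gaussian 16 to use 20 shared-memory cores within each node
g16 -p=20 water.com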
Gaussian suggests that the hybrid method may be faster than full distributed-memory mode; however, users should test this on a sample calculation.
Running GaussView
GaussView is included in the W&M site license. GaussView can be run from an interactive job on any main-campus HPC cluster; however, since it only runs on one core, it is best run on one of the serial/shared-memory clusters (hima, gust, astral or gulf). Users should not run GaussView on a front-end/login server without prior permission from HPC staff. Furthermore, GaussView can be installed on any W&M-owned computer, so feel free to contact hpc-help@wm.edu to request a copy of the software.
Since GaussView is a graphical program, you will need to login to the cluster with X11 forwarded from your local computer to the cluster.
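For example, from a Linux or macOS terminal you might connect with X11 forwarding enabled (the username is a placeholder):

ssh -Y your_username@bora.sciclone.wm.edu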
Once logged into the cluster, get an interactive job (with X11 forwarded):
salloc -N1 -n1 -t 30 --x11
This will put you on a compute node with 1 core available for work. Next, launch the GaussView program:
gview.sh
Then you should see the GaussView display appear.
Please send email to hpc-help@wm.edu if there are any questions about running Gaussian 16 or GaussView.