Compilers and MPI software
Compilers
All main-campus and VIMS clusters have Intel and GNU compiler toolchains installed. As of our recent refresh to Rocky Linux 9, here are the available compilers:
Name | module | suggested optimization flags | release date | executable names (C, C++, Fortran) | information |
---|---|---|---|---|---|
Intel oneAPI DPC++/C++/Fortran Compiler | intel/compiler-2024.0 | -O3 -xHost | 12/13/2023 | icx, icpx, ifx (ifort is available but will warn you to switch to ifx) | https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler-documentation.html - C/C++/DPC++ docs for current and previous versions. https://www.intel.com/content/www/us/en/developer/tools/oneapi/fortran-compiler-documentation.html - Fortran docs for current and previous versions. NOTE: the -ip flag is not supported for ifx |
Intel oneAPI DPC++/C++/Fortran Compiler | intel/compiler-2022.2.1 | -O3 -xHost | 10/20/2022 | icx, icpx, ifort (icc and icpc are available but will warn you to switch to icx, icpx) | |
GNU Compiler suite (C/C++/Fortran) | N/A - default gcc on all refreshed systems | -O3 -march=native | 06/25/2023 | gcc, g++, gfortran | https://gcc.gnu.org/onlinedocs/11.4.0/ - GNU compiler documentation |
Older versions of the compilers can be installed as needed; however, we strongly encourage users to work with the latest versions of the Intel oneAPI and/or GNU compilers listed above. For more information about a given compiler tool, read its man page, e.g.:
>> man ifx
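For example, a minimal sketch of building a serial program with the suggested flags (hello.c is a hypothetical source file; the module name comes from the table above):

>> module load intel/compiler-2024.0
>> icx -O3 -xHost -o hello hello.c

The GNU equivalent, using the default system gcc, would be:

>> gcc -O3 -march=native -o hello hello.c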
The documentation links in the table above cover both current and previous releases; use them to understand what has changed between older versions of the compilers and the current versions.
Intel and GNU compiler flags with descriptions
Intel flag | GNU flag | description | notes |
---|---|---|---|
-O0, -O1, -O2, -O3, -Ofast | -O0, -O1, -O2, -O3, -Ofast | level of optimization to try (from none to most aggressive) | -On can result in other flags being used; please read the man page to see which other flags are added for each level of optimization |
-fast | N/A | a combination of -Ofast, -ipo, -static (for static linking), and -xHost | very aggressive optimization; be sure you are getting correct results |
-qopenmp | -fopenmp | enable OpenMP | |
-fp-model <level> | -ffast-math | tunes floating-point accuracy | for Intel, <level> can be strict, precise, or fast; for GNU, -ffast-math takes no arguments |
-ipo | N/A | enable interprocedural optimization | |
-xHost | -march=native | tune compilation for the local processor | |
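To illustrate the correspondence, here is a sketch of building the same (hypothetical) OpenMP program, omp_prog.c, with each toolchain using the flags from the table above:

>> icx -qopenmp -O3 -xHost -o omp_prog omp_prog.c
>> gcc -fopenmp -O3 -march=native -o omp_prog omp_prog.c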
MPI software
The HPC clusters at main campus and VIMS have both Intel MPI and OpenMPI installed. We typically compile other parallel software using one or both of these flavors.
name | module | release date | wrapper scripts | notes |
---|---|---|---|---|
Intel MPI toolkit | intel/mpi-2021.11 | 06/05/2023 | mpiicx, mpiicpx, mpiifx, mpicc, mpicxx, mpif90 | mpiicx, mpiicpx, mpiifx use the Intel compilers with Intel MPI; mpicc, mpicxx, mpif90 use the GNU compilers with Intel MPI |
OpenMPI-4.1.6 compiled with Intel-2024.0 | openmpi-ib/intel-2024.0/4.1.6 | 9/30/2023 | mpicc, mpicxx, mpif90 | |
OpenMPI-4.1.6 compiled with GNU 11.4.1 | openmpi-ib/gcc-11.4.1/4.1.6 | 9/30/2023 | mpicc, mpicxx, mpif90 | |
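As a sketch, compiling a (hypothetical) MPI program, mpi_prog.c, with each stack using the modules and wrapper scripts from the table above:

>> module load intel/mpi-2021.11
>> mpiicx -O3 -xHost -o mpi_prog mpi_prog.c

or, with OpenMPI built against the GNU toolchain:

>> module load openmpi-ib/gcc-11.4.1/4.1.6
>> mpicc -O3 -march=native -o mpi_prog mpi_prog.c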
Running MPI jobs in the cluster
All clusters run the SLURM batch system. We encourage using srun (see the RC SLURM users guide) rather than mpirun or mpiexec to launch MPI jobs.
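As a minimal sketch, a batch script (call it mpi_job.sh; the script name, node/task counts, and module choice here are placeholder assumptions, and site-specific options such as partitions are covered in the RC SLURM users guide) might look like:

#!/bin/bash
#SBATCH --job-name=mpi_prog
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32

module load intel/mpi-2021.11
srun ./mpi_prog

Submit it with:

>> sbatch mpi_job.sh

srun picks up the node and task counts from the SLURM allocation, so no process-count argument is needed.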