Potomac Subcluster
Hardware
The potomac subcluster consists of a single homogeneous node type, designated c18. Potomac is accessed via the server node chesapeake.hpc.vims.edu, which handles logins, compilations, job submission, and job scheduling.
Front-end server node
Model: Dell PowerEdge R515
Processor: 2 x Opteron 4238, six-core, 3.3 GHz, 6 MB L2 / 8 MB L3 cache (Valencia)
Memory: 32 GB, 1600 MHz
External interface: chesapeake.hpc.vims.edu, 10 Gb/s
Cluster interface: cp00.hpc.vims.edu, 10 Gb/s
InfiniBand interface: cp00-ib, 40 Gb/s (QDR)
OS: Red Hat Enterprise Linux 6.2 (RHEL 6.2)
Default compilers: PGI 14.3, GCC 4.7.3
Job scheduler: TORQUE 2.3.7
MPI library: MVAPICH2 1.9
Shell environment: tcsh with Modules 3.2.10
c18 compute nodes (30)
Model: Dell PowerEdge R415
Processor: 2 x Opteron 4334, six-core, 3.1 GHz, 8 MB L3 cache (Seoul)
Main memory: 32 GB, 1600 MHz
Ethernet interface: 1 Gb/s
InfiniBand interface: 40 Gb/s (QDR)
Local scratch filesystem: 242 GB
OS: Red Hat Enterprise Linux 6.2 (RHEL 6.2)
In the potomac subcluster, pt01-pt30 are designated as type c18 compute nodes; the corresponding TORQUE node properties can be used to request these nodes in job submissions, as in the sketch below.
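For example, assuming "c18" is itself one of the node property strings and that all 12 cores per node can be requested (both assumptions based on the hardware description above, not a verified configuration):

    pbsnodes pt01                                      # show the actual properties assigned to a compute node
    qsub -I -l nodes=1:c18:ppn=12,walltime=0:30:00     # request one c18 node interactively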
User Environment
Compilation and job submission for the potomac subcluster are performed on the chesapeake server node. To log in, use SSH from any host on the William & Mary or VIMS networks (including the VIMS VPN) and connect to chesapeake.hpc.vims.edu with your HPC username and password (common across SciClone, Storm, and Chesapeake). For convenience, you can also use ches.hpc.vims.edu and cpk.hpc.vims.edu as shorthand for chesapeake.hpc.vims.edu. Note that chesapeake.hpc.vims.edu should not be confused with chesapeake.vims.edu, which is a completely different system with a different purpose.
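For example (replace the username with your own HPC account name):

    ssh your_hpc_username@chesapeake.hpc.vims.edu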
Your home directory is the same on the chesapeake server node and on all of the potomac compute nodes. There are also two scratch filesystems available throughout the system: /ches/scr00, a 40 GB, medium-performance scratch disk, and /ches/scr10, a 60 TB, high-performance Dell HPC NSS fileserver.
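A common pattern is to run jobs from a per-user directory on one of the scratch filesystems. Whether such a directory already exists for your account is not specified here, so treat this as a sketch only:

    mkdir -p /ches/scr10/$USER     # personal work area on the high-performance scratch filesystem
    cd /ches/scr10/$USER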
Chesapeake uses Environment Modules (a.k.a Modules) to automatically configure the user's shell environment across multiple computing platforms, as well as to organize the dozens of different software packages which are available on the system. We support tcsh as the primary shell environment for user accounts and applications.
New accounts are provisioned with the following set of environment configuration files:
.login - recommended settings for login shells
.cshrc - personal environment settings, customize to meet your needs
.cshrc.rhel6-opteron - personal settings for RHEL 6 / Opteron
The most recent versions of these files can be found in /usr/local/etc/templates on the chesapeake server node.
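For example, to see what is provided or to compare a template against your own copy (this assumes the template filenames match the dotfile names above; adjust if they differ):

    ls -a /usr/local/etc/templates                   # list the provided template files
    diff ~/.cshrc /usr/local/etc/templates/.cshrc    # compare your copy against the current template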
System-wide environment settings are initialized in:
These files are automatically invoked at the beginning of your personal .cshrc and .login files, respectively.
A default set of environment modules is loaded at the end of the platform-specific .cshrc.* files, and these should be enough to get you started. In the case of the potomac subcluster, .cshrc.rhel6-opteron is the relevant file. If you want to dig a little deeper, you can run "module avail" or "module whatis" to see a complete list of available modules. Most of these should be self-explanatory, but the isa module deserves a little more discussion.
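For example, the following commands are useful for exploring and checking your module environment:

    module avail     # list every module installed on the system
    module whatis    # one-line description of each available module
    module list      # modules currently loaded in your shell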
Because Chesapeake (like many clusters) contains a mix of different hardware with varying capabilities, we need to build several different versions of most software packages, and then provide some way for the user to specify which version he or she wants to use. The primary distinction is based on the "Instruction Set Architecture" (ISA) of the particular platform, which is simply the set of instructions that the CPU is capable of executing, along with the desired addressing mode (32-bit or 64-bit).
The choice of ISA nomenclature is problematic, in part because the code names and marketing designations used by chip vendors are very complex, and also because there is little commonality in terminology across different compiler suites. Consequently, we have established our own local conventions. For the RHEL 6/Opteron platform we presently support two ISAs:
seoul - Opteron Bulldozer version 2 (Piledriver), 64-bit; matches the potomac compute nodes. This is the default.
valencia - Opteron Bulldozer version 1 (Bulldozer), 64-bit; matches the ISA of the front-end node, chesapeake.
When a user's shell initializes, an isa module is loaded which establishes a default environment based on the ISA (seoul in this case). Many of the modules which are loaded subsequent to the isa module are keyed to environment settings which are established by the isa module.
In general, there should be no problem with using "seoul" for both chesapeake and the potomac compute nodes (see NOTE in the next section). Both "seoul" and "valencia" are 64-bit ISAs; we do not support a 32-bit environment under RHEL 6.
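If you ever do need the valencia environment (for example, when building something intended to run only on the front end), the isa module loaded by .cshrc.rhel6-opteron can be swapped. The module names below are assumptions about the local naming scheme, so confirm the exact names first:

    module avail isa             # list the isa modules actually installed
    module unload isa/seoul      # hypothetical names; use the ones reported above
    module load isa/valencia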
Compilers
There are two compiler suites available in Chesapeake's RHEL 6/Opteron environment: the Portland Group Compiler Suite 14.3 (PGI) and the GNU Compiler Collection (GCC).
In some cases libraries and applications are supported only for a particular compiler suite; in other cases they may be supported across multiple compiler suites. For a complete list of available compilers, consult our Software pages or use the "module avail" command. For compiler-related information on specific software packages, follow the appropriate links from the master Software page.
In most cases code generated by the PGI compilers will outperform that generated by the open-source GNU compilers, sometimes by a wide margin. There are exceptions, however, so we strongly encourage you to experiment with different compiler suites in order to determine which will yield the best performance for a given task. When a GNU compiler is required, we recommend GCC 4.7.3 for use with the potomac subcluster.
You can switch between alternative compilers by modifying the appropriate "module load" command in your .cshrc.rhel6-opteron file. The default configuration loads pgi/14.3. Because of conflicts with command names, environment variables, libraries, etc., attempts to load multiple compiler modules into your environment simultaneously may result in an error.
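For example, to try the GNU suite instead of PGI, you would change the compiler line in ~/.cshrc.rhel6-opteron roughly as follows (the exact gcc module name is an assumption; check "module avail" for the version string actually installed):

    # in ~/.cshrc.rhel6-opteron
    #module load pgi/14.3        # default compiler, commented out
    module load gcc/4.7.3        # hypothetical module name; verify with "module avail"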
For details about compiler installation paths, environment variables, etc., use the "module show" command for the compiler of interest.
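For example, for the default PGI compiler (substitute whichever compiler module you actually use):

    module show pgi/14.3     # installation paths and environment variables set by the module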
For proper operation and best performance, it is important to choose compiler options that match the target architecture and enable the most profitable code optimizations. The options listed below are suggested as starting points. Note that for some codes, these optimizations may be too aggressive and may need to be scaled back. Consult the appropriate compiler manuals for full details.
PGI 14.3:
GCC 4.7.3:
NOTE: The chesapeake front-end node contains the Bulldozer version of the Opteron processor, while the potomac compute nodes contain the Piledriver version. If you specify -tp piledriver (for PGI) or -march=bdver2 (for GNU) in order to optimize performance on the potomac nodes, the resulting executable may not run on the chesapeake front end (you will receive an illegal instruction error). Because the Piledriver ISA is a superset of the Bulldozer ISA, specifying -tp bulldozer (PGI) or -march=bdver1 (GNU) will create code that runs on both chesapeake and the potomac subcluster.
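As a concrete but purely illustrative sketch of this trade-off (the program name and the -fast/-O3 optimization levels are additions for the example, not the site's recommended option lists):

    # Tuned for the potomac (Piledriver) compute nodes; may not run on the chesapeake front end:
    pgcc -fast -tp piledriver -o myprog myprog.c    # PGI
    gcc -O3 -march=bdver2 -o myprog myprog.c        # GCC

    # Compatible with both chesapeake (Bulldozer) and the potomac nodes:
    pgcc -fast -tp bulldozer -o myprog myprog.c     # PGI
    gcc -O3 -march=bdver1 -o myprog myprog.c        # GCC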
MPI
Currently (August 2014), MVAPICH2 version 1.9 is available in the Chesapeake RHEL 6/Opteron environment, with builds for both the GNU and PGI compilers.
Just as on SciClone and Storm, MVAPICH2 programs must be initiated with the mvp2run command, which replaces and incorporates MVAPICH2's standard mpirun_rsh command. mvp2run provides an interface between mpirun_rsh and the TORQUE job scheduler, and has a number of other features for managing processor affinity, controlling the mapping of processes onto processors, checking for extraneous loads on assigned nodes, and enabling or disabling certain features in MVAPICH2.
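As a minimal sketch only (the "c18" node property, ppn=12, the walltime, and the program name are illustrative assumptions), a TORQUE batch script for an MVAPICH2 job might look like this:

    #!/bin/tcsh
    #PBS -N mpi_example
    #PBS -l nodes=2:c18:ppn=12       # node property and ppn assumed from the hardware description
    #PBS -l walltime=01:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR                # run from the directory the job was submitted from
    mvp2run ./my_mpi_program         # mvp2run replaces mpirun_rsh on this system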
Additional Software
Many additional software packages are available in the RHEL / Opteron environment. For a complete list, see our Software pages. If there are components that you need, please submit a request and we will try to help as time and technical considerations permit.