Networks
Every node is connected to at least one Ethernet network, which is used primarily for system management, and almost every node also has either an InfiniBand or Omni-Path interface, which provides high-bandwidth, low-latency communication for large parallel computations and I/O. To distinguish between interfaces on the same node, the interfaces are given different hostnames. For example, the login node for SciClone's Vortex subcluster has several hostnames, including:

- vortex.sciclone.wm.edu, for Ethernet traffic from outside the cluster,
- vx00, for Ethernet traffic within the cluster, and
- vx00-i8, for InfiniBand traffic within the cluster.

Its compute nodes, which do not have an external/internal interface distinction, have names like vx05, referring to their connection to the cluster Ethernet network, and vx05-i8, referring to their connection to the InfiniBand network. See the file /etc/hosts on any of the server nodes for more information.
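As a quick illustration of the naming scheme, the short Python sketch below tries to resolve the Vortex hostnames mentioned above. This is purely illustrative: the internal names only resolve from machines that share the cluster's /etc/hosts or DNS view, so the output will differ depending on where you run it.

```python
import socket

# Each of these names refers to a different network interface on the
# same physical node (the Vortex examples from the text above).
hostnames = [
    "vortex.sciclone.wm.edu",  # external Ethernet name of the login node
    "vx00",                    # internal Ethernet name of the login node
    "vx00-i8",                 # InfiniBand name of the login node
    "vx05",                    # cluster Ethernet name of a compute node
    "vx05-i8",                 # InfiniBand name of the same compute node
]

for name in hostnames:
    try:
        print(f"{name:24s} -> {socket.gethostbyname(name)}")
    except socket.gaierror as err:
        # The internal names only resolve from inside the cluster.
        print(f"{name:24s} -> unresolved ({err})")
```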
On Chesapeake, a distinction is made between the "internal" cluster Ethernet network and the "external" server-node Ethernet network. Whereas on SciClone vx00 is just an alias for vortex, on Chesapeake a reference to choptank instead of ct00 from a compute node will result in traffic being routed along a longer and slower path from one subnet to another.
Therefore, even on SciClone, references (including logins and file transfer operations) initiated from outside the cluster should use the "external" hostnames, while references from within the cluster should use the "internal" hostnames.
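This selection rule is easy to encode in a script. The sketch below is only an illustration of the rule, using the SciClone names from the examples above; the helper choose_target and the domain-suffix test are assumptions for demonstration, not part of the cluster configuration.

```python
import socket

# SciClone's Vortex front end, taken from the examples above.
EXTERNAL_NAME = "vortex.sciclone.wm.edu"   # use from outside the cluster
INTERNAL_NAME = "vx00"                     # use from inside the cluster

def choose_target() -> str:
    """Pick the hostname to use for a login or file transfer.

    Assumption for illustration only: hosts inside the cluster report a
    fully qualified name ending in .sciclone.wm.edu.
    """
    inside = socket.getfqdn().endswith(".sciclone.wm.edu")
    return INTERNAL_NAME if inside else EXTERNAL_NAME

if __name__ == "__main__":
    # Example of a transfer command you might build with the chosen name.
    print(f"scp myfile {choose_target()}:~/")
```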
That said, both Chesapeake's and SciClone's "internal" Ethernet networks do use Internet-routable/public IP addresses (128.239.56.x through 128.239.59.x, and 139.70.208.x) in order to accommodate special use cases, such as bulk data transfers and bandwidth-intensive applications like visualization, with prior authorization. Contact the HPC group if you need direct access to compute nodes.
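For scripting around these address ranges, the sketch below (not an official tool, just an illustration) uses Python's ipaddress module to test whether an address falls inside the two public blocks quoted above; the /22 and /24 CIDR forms are the direct equivalents of the dotted ranges.

```python
import ipaddress

# CIDR equivalents of the ranges quoted above:
#   128.239.56.x - 128.239.59.x -> 128.239.56.0/22
#   139.70.208.x                -> 139.70.208.0/24
CLUSTER_NETS = [
    ipaddress.ip_network("128.239.56.0/22"),
    ipaddress.ip_network("139.70.208.0/24"),
]

def on_cluster_network(addr: str) -> bool:
    """Return True if addr falls inside one of the public cluster ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in CLUSTER_NETS)

if __name__ == "__main__":
    for example in ("128.239.57.10", "139.70.208.5", "192.168.1.1"):
        print(f"{example:16s} on cluster network: {on_cluster_network(example)}")
```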
Generally, network connections are full-speed within a subcluster, but may be oversubscribed to other subclusters and servers. The principal switches in SciClone's Ethernet network are:
| Name | Model | Subclusters | Uplink |
|---|---|---|---|
| jsc03 | Juniper EX4600 | N/A | 10 Gb/s to campus |
| jsc02 | Foundry BigIron RX-16 | Monsoon | 10 Gb/s to jsc03 |
| jsg05 | Foundry FWSX448 | Vortex | 10 Gb/s to jsc02 |
| jsg07 | | Cyclops | 4 Gb/s to jsc02 |
| jsg09 | | Bora | 1 Gb/s to jsc02 |
| jsg08, mlte | (3) Dell S3048-ON | Meltemi | 10 Gb/s to jsc02 |
| jsg06 | (5) Dell PowerConnect 6248 | Ice, Wind | 10 Gb/s to jsc02 |
| jst01 | (2) Dell S4128F-ON | Geo | 20 Gb/s to jsc02 |
| jsg10 | Juniper EX3400-48T | Femto | 1 Gb/s to jsg06 |
| jsg11 | | Hurricane, Whirlwind overflow, Hima | 10 Gb/s to jsc03 |
| jsg12 | | Whirlwind | 10 Gb/s to jsc03 |
In addition, all of the compute nodes except those in Meltemi (which is connected via Omni-Path) and most of the servers are linked at speeds ranging from 40 to 100 Gb/s by an InfiniBand network composed of the following switches:
| Name | Speed | Subclusters | Filesystems | Uplink |
|---|---|---|---|---|
| ib04 | FDR (56 Gb/s) | Vortex | aiddata10, pscr | |
| ib07 | EDR (100 Gb/s) | Femto | | 112 Gb/s to ib04 |
| | FDR (56 Gb/s) | Vortex-α | | 112 Gb/s to ib04 |
| ib05 | QDR (40 Gb/s) | Ice, Wind | data10 | 40 Gb/s to ib04 |
| ib03 | | Hurricane, Whirlwind | scr20 | 80 Gb/s to ib04 |
| ib06 | | Hima | | 40 Gb/s to ib03 |
SciClone shares the main campus's 10 Gb/s route to the VIMS campus, where Chesapeake is interconnected by an Ethernet network running at speeds from 100 Mb/s to 10 Gb/s. Chesapeake's InfiniBand network consists of two switches: a QDR (40 Gb/s) switch hosting the Potomac subcluster, and an EDR (100 Gb/s) switch hosting the James subcluster.