
Networks

Every node is connected to at least one Ethernet network, which is used primarily for system management, and almost every node has an InfiniBand interface, which provides high-bandwidth, low-latency communication for large parallel computations and I/O. To distinguish between interfaces on the same node, each interface is given its own hostname. For example, the login node for SciClone's bora subcluster has several hostnames, including

  • bora.sciclone.wm.edu, for Ethernet traffic from outside the cluster,
  • bo00, for Ethernet traffic within the cluster, and
  • b00-ib, for InfiniBand traffic within the cluster.

Its compute nodes, which do not have an external/internal interface distinction, have names like bo05, referring to their connection to the cluster Ethernet network, and bo05-ib, referring to their connection to the InfiniBand network. See the file /etc/hosts on any of the server nodes for more information.
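
As an illustration, the following Python sketch resolves each of the bora login node's hostnames and prints the address it maps to. It is meant to be run from a node inside the cluster, where the internal names (bo00, b00-ib) are defined in /etc/hosts; from an outside machine only the external name will resolve.

    #!/usr/bin/env python3
    """Show which IP address each hostname of the bora login node maps to."""

    import socket

    names = [
        "bora.sciclone.wm.edu",  # external Ethernet name
        "bo00",                  # internal Ethernet name
        "b00-ib",                # InfiniBand name
    ]

    for name in names:
        try:
            print(f"{name:22s} -> {socket.gethostbyname(name)}")
        except socket.gaierror:
            print(f"{name:22s} -> not resolvable from this host")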

On Chesapeake, a distinction is made between the "internal" cluster Ethernet network and the "external" server-node Ethernet network. Whereas bo00 is simply an alias for bora on SciClone, on Chesapeake a reference to choptank instead of ct00 from a compute node will route traffic along a longer, slower path from one subnet to another.

Therefore, even on SciClone, references (including logins and file transfer operations) initiated from outside the cluster should use the "external" hostnames, while references from within the cluster should use the "internal" hostnames.
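
One way to honor this rule in scripts is to choose the hostname based on where the script is running. The sketch below is only an illustration: the domain test is a hypothetical heuristic, and the scp call assumes OpenSSH is installed and that the file name and remote path are replaced with your own.

    #!/usr/bin/env python3
    """Pick the external or internal hostname for a transfer, depending on
    whether this script is running inside the cluster."""

    import socket
    import subprocess

    EXTERNAL = "bora.sciclone.wm.edu"   # use from outside the cluster
    INTERNAL = "bo00"                   # use from other cluster nodes

    def inside_cluster() -> bool:
        # Hypothetical heuristic: treat any host whose fully qualified
        # name ends in sciclone.wm.edu as being inside the cluster.
        return socket.getfqdn().endswith("sciclone.wm.edu")

    target = INTERNAL if inside_cluster() else EXTERNAL

    # Copy a local file to the chosen hostname (placeholders for the
    # file name and remote path).
    subprocess.run(["scp", "results.tar.gz", f"{target}:~/"], check=True)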

That said, both Chesapeake's and SciClone's "internal" Ethernet networks use Internet-routable (public) IP addresses so that, with prior authorization, they can accommodate special use cases such as bulk data transfers and bandwidth-intensive applications like visualization. Contact the HPC group if you will need direct access to compute nodes.

Generally, network connections run at full speed within a subcluster, but links to other subclusters and to the servers may be oversubscribed. The principal switches in SciClone's Ethernet network are:

Name         Model                        Subclusters                            Uplink
jsc03        Juniper EX4600               N/A                                    10 Gb/s to campus
jsc02        Foundry BigIron RX-16        Monsoon                                10 Gb/s to jsc03
jsg05        Foundry FWSX448              Vortex                                 10 Gb/s to jsc02
jsg07                                     Cyclops                                4 Gb/s to jsc02
jsg09                                     Bora                                   1 Gb/s to jsc02
jsg08, mlte  (3) Dell S3048-ON            Meltemi                                10 Gb/s to jsc02
jsg06        (5) Dell PowerConnect 6248   Ice, Wind                              10 Gb/s to jsc02
jst01        (2) Dell S4128F-ON           Geo                                    20 Gb/s to jsc02
jsg10        Juniper EX3400-48T           Femto                                  1 Gb/s to jsg06
jsg11                                     Hurricane, Whirlwind overflow, Hima    10 Gb/s to jsc03
jsg12                                     Whirlwind                              10 Gb/s to jsc03
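
The uplinks above describe switch-to-switch links; to see what speed your own node's Ethernet interfaces have actually negotiated, you can read the standard Linux sysfs entries, as in this small sketch (values are reported in Mb/s; interfaces without a link report nothing useful):

    #!/usr/bin/env python3
    """List each network interface on this node and the link speed it has
    negotiated, as reported by Linux sysfs (values are in Mb/s)."""

    from pathlib import Path

    for iface in sorted(Path("/sys/class/net").iterdir()):
        try:
            speed = (iface / "speed").read_text().strip()
            print(f"{iface.name:12s} {speed} Mb/s")
        except OSError:
            # Down or virtual interfaces (e.g. lo) do not report a speed.
            print(f"{iface.name:12s} no link speed reported")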

In addition, all of the compute nodes and most of the servers are connected at speeds ranging from 56 to 200 Gb/s by an InfiniBand network composed of the following switches:

Speed            Subclusters            Filesystems
FDR (56 Gb/s)    bora, hima             pscr
EDR (100 Gb/s)   astral, femto, gust    scr10, scr20, data10, aiddata10, schism10, gluex10
HDR (200 Gb/s)   kuro, gulf             scr-lst
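
To confirm which of these speeds a particular node actually has, the InfiniBand port rate is exposed through the standard Linux sysfs tree; the sketch below simply prints it for every HCA port on the node.

    #!/usr/bin/env python3
    """Print the rate of every InfiniBand port on this node, using the
    standard sysfs layout /sys/class/infiniband/<hca>/ports/<n>/rate."""

    from pathlib import Path

    ib_root = Path("/sys/class/infiniband")
    if not ib_root.exists():
        print("No InfiniBand devices found on this node.")
    else:
        for hca in sorted(ib_root.iterdir()):
            for port in sorted((hca / "ports").iterdir()):
                rate = (port / "rate").read_text().strip()
                # e.g. "100 Gb/sec (4X EDR)" on an EDR-connected node
                print(f"{hca.name} port {port.name}: {rate}")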

SciClone shares the main campus's 10 Gb/s route to the VIMS campus, where Chesapeake is interconnected by an Ethernet network running at speeds from 100 Mb/s to 10 Gb/s. Chesapeake's InfiniBand network consists of two switches: a QDR (40 Gb/s) switch hosting the Potomac subcluster and an EDR (100 Gb/s) switch hosting the James subcluster.