For trivially parallel problems, a special interconnect is not needed, and enormous computing power can be achieved simply with computers spread across the Internet. For example, the Folding@home project, using the personal computers of volunteers worldwide, reached a performance of 2.43 exaflops on April 12, 2020.
However, many scientific computing problems are more tightly coupled, and a high-speed interconnect is essential for solving them.
Two main characteristics of an interconnect are latency and bandwidth:
Latency is the minimum time it takes to transfer anything at all (for example, a single byte). Some latency is inevitable, but the smaller it is, the better.
Bandwidth is the rate at which large amounts of data are transferred. For bandwidth, larger is better.
Following our analogy of distributed office workers communicating by phone, latency would be the time it takes for a caller to dial the number and the callee to pick up. Bandwidth would be the speed at which the caller can speak comprehensibly.
As an example of the latency and bandwidth figures in a modern supercomputer, the interconnect in CSC’s Mahti supercomputer has a latency of about 0.5 microseconds ($ 0.5\times10^{-6} $ s, or half a millionth of a second), and the maximum bandwidth between two nodes is 200 Gb/s. The corresponding figures for a very high-speed fiber-optic internet connection at home might be a latency of five milliseconds (10,000 times more) and a bandwidth of 1 Gb/s (200 times less).
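A common first-order way to combine these two numbers is to model the time to transfer a message as the latency plus the message size divided by the bandwidth. The Python sketch below applies this simple model to the figures quoted above; it is only an illustrative approximation, since real transfer times also depend on protocol and software overheads.

```python
def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """First-order model: time = latency + message size / bandwidth."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# Figures quoted above (peak values, for illustration only)
mahti = {"latency_s": 0.5e-6, "bandwidth_bps": 200e9}   # 0.5 us, 200 Gb/s
home  = {"latency_s": 5e-3,   "bandwidth_bps": 1e9}     # 5 ms, 1 Gb/s

for size in (8, 1_000, 1_000_000):  # 8 B, 1 kB, 1 MB messages
    t_super = transfer_time(size, **mahti)
    t_home = transfer_time(size, **home)
    print(f"{size:>9} B: supercomputer {t_super*1e6:8.2f} us, "
          f"home fiber {t_home*1e6:10.2f} us")
```

The model makes the roles of the two numbers clear: for small messages the latency dominates the transfer time, while for large messages the bandwidth does.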
To put the latency of a supercomputer interconnect into perspective, in 0.5 microseconds light travels 150 meters. Therefore, if two computers were more than 150 meters apart, the laws of physics would make a latency shorter than 0.5 microseconds impossible. Thus, for problems where low latency is important, it is clearly not efficient to use a computer network distributed all around the world.
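The 150-meter figure follows directly from the speed of light: $ 3\times10^{8}\ \text{m/s} \times 0.5\times10^{-6}\ \text{s} = 150\ \text{m} $.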
The network topology describes how the connections between the nodes are arranged.
Conceptually, the simplest topology is a fully connected network, in which there is a direct connection between every pair of nodes. However, even though a fully connected network would provide the best performance, it is too complex and costly for anything but very small networks. For example, with Mahti having 1,400 nodes, a fully connected network would require almost 1,000,000 connections.
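As a quick check: a fully connected network of $ n $ nodes needs $ n(n-1)/2 $ direct links, so with $ n = 1400 $ this gives $ 1400 \times 1399 / 2 = 979{,}300 $ links.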
Network topologies try to strike a balance between the number of connections, and thus the price, and the performance obtainable from the topology. Different parallel problems have different communication characteristics. While a CPU core in a node might only need to communicate with a few fixed cores in another node, the cores and nodes it communicates with can also change dynamically. In some cases, the same core might need to communicate with all the other cores used by the application.
It is very rare that a single application uses the whole supercomputer. Instead, the batch job system typically reserves different nodes for different runs. Furthermore, for some runs, the nodes can be physically close to each other, while for other runs, they are physically distant. Thus, many parameters need to be considered when choosing an interconnect topology, and the topologies themselves can be conceptually quite complex as well.
For example, the network topology in Mahti is a dragonfly topology, in which the nodes are divided into six dragonfly groups with 234 nodes in each. Within each dragonfly group the nodes are connected in a so-called fat-tree topology, and the dragonfly groups are then fully connected to each other.
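To get a feel for why this grouping reduces the wiring, the short sketch below simply counts pairs for the figures quoted above (6 groups of 234 nodes). It deliberately ignores the switches and the fat-tree wiring inside each group, so the numbers only illustrate the scaling and are not actual cable counts in Mahti.

```python
from math import comb

groups, nodes_per_group = 6, 234
nodes = groups * nodes_per_group   # 1,404 nodes in total

# Direct links needed to connect every pair of nodes to each other
full_links = comb(nodes, 2)        # 984,906

# Group-to-group routes if only the six dragonfly groups are fully connected
group_links = comb(groups, 2)      # 15

print(f"Fully connected nodes : {full_links:,} links")
print(f"Fully connected groups: {group_links:,} group-to-group routes")
```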
Includes material from the "Supercomputing" online course (https://www.futurelearn.com/courses/supercomputing/)
by the Edinburgh Parallel Computing Centre (EPCC), licensed under Creative Commons BY-SA