The history of machine computing began with mechanical computers that used gears, levers, curved plates, and similar components to perform calculations. Mechanical computers were typically single-purpose machines and could not be programmed in the modern sense. In addition to standard desk calculators, typical examples were the devices used for aiming bombs, guns, and torpedoes during World War II. However, programmable motor-driven mechanical machines, such as Konrad Zuse's Z1, were also developed.
The mechanical computers were later replaced by electro-mechanical machines that used switches and relays to implement logic. These machines paved the way for modern-day computers and eventually supercomputers.
ENIAC (Electronic Numerical Integrator and Computer), which became operational in 1945, was the first fully electronic programmable general-purpose computer and can thus be considered the first supercomputer (figure above). Originally built for calculating artillery shell and missile trajectories, it was located at the University of Pennsylvania and also contributed to the development of US thermonuclear weapons.
ENIAC relied on vacuum tubes, and in terms of performance (about 400 flop/s, or floating-point operations per second) it was roughly one thousand times more capable than the preceding electro-mechanical computers. In addition, ENIAC was a decimal (as opposed to modern binary) computer, and it was programmed by changing the system's configuration: turning dials and plugging cables into receptacle sockets.
ENIAC and its vacuum-tube successors were followed by transistorized systems in the latter half of the 1950s. One of them, IBM's 7090 mainframe (first installed in 1959), was specifically designed for solving large-scale scientific and engineering problems.
The term supercomputing was first used in 1964, when Control Data Corporation introduced the CDC 6600, designed by Seymour Cray, whose name went on to become synonymous with supercomputers. An improved version, the CDC 7600, was introduced in 1969. Cray subsequently left Control Data Corporation, founded a company of his own, and debuted the iconic Cray-1 supercomputer in 1976.
Parallel computers were investigated as early as the 1960s, but until the late 1980s most supercomputers employed only a single processor, or at most a few. Furthermore, the processors were often designed specifically for supercomputers. For example, the Cray Y-MP, introduced in 1989, had eight special vector processors.
During the 1990s, supercomputers increasingly came to be based on commodity processors (such as Intel x86) and massively parallel processing. An important landmark was the first Beowulf cluster in 1994, which was built from commodity-grade components (Intel DX4 processors and an Ethernet network) and ran the Linux operating system. Nowadays, most supercomputers are based on commodity processors, but the network connecting the processors is typically designed specifically for high-performance computing.
In the past, supercomputer performance steadily followed Moore's law. In its popular formulation, Moore's law (named after Gordon Moore, co-founder of Intel) states that the density of transistors in an integrated circuit doubles roughly every two years, implying that performance also doubles. For a long time, the clock frequency of processors, in other words, the speed at which they operate, also increased constantly.
However, since the power consumed by a processor (and thus the heat it generates) increases rapidly with clock frequency, clock frequencies have stagnated since around 2005 (figure below). Since then, performance has been increased by adding multiple cores to a single CPU (multicore CPUs are now common in all devices from smartphones to supercomputers) and, in the case of supercomputers, by adding more and more multicore CPUs to the same system. As a result, the performance of supercomputers has kept doubling roughly every two years.
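This doubling compounds quickly. As a minimal illustration (a simple model in Python, assuming an idealized doubling period of exactly two years), the cumulative performance multiplier after a given number of years is:

```python
def performance_factor(years, doubling_period=2.0):
    """Performance multiplier after `years`, assuming performance doubles
    every `doubling_period` years (an illustrative model, not a measurement)."""
    return 2 ** (years / doubling_period)

# Doubling every two years gives a 32-fold increase over a decade
print(performance_factor(10))  # 32.0
```

Over thirty years the same model yields a factor of about 32,000, which is why comparisons like "a modern laptop versus a decades-old supercomputer" come out so dramatic.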
Nowadays, a supercomputer is often defined simply as a computer with much higher computing power than a typical desktop computer. This definition naturally keeps changing as the performance of desktop computers also increases: a modern laptop is a thousand times more powerful than the biggest supercomputer in the Nordic countries thirty years ago.
In addition to Cray (now a product line of HPE), notable supercomputer vendors include IBM, SGI/Silicon Graphics, Hewlett-Packard, Atos, Dell, Intel, Fujitsu, Lenovo, and Sun.
The history of CSC (Center for Scientific Computing) can be traced back to 1971, when a special office was founded to operate a Univac 1108 system. Since then, the organization tasked with operating Finnish high-performance computing resources for scientists has changed its name, structure, and ownership several times. In the 1990s, the name CSC was introduced, and the Ministry of Education and Culture became the company's sole owner. Today, universities and polytechnics also own a minor share of CSC, the IT Center for Science Ltd.
The following list contains information on the major systems operated by CSC and its predecessors. The list may contain unfamiliar terms, units, and abbreviations, but don't worry: these will be explained in the following sections.
The graphs below depict the performance of selected CSC supercomputers from 1993 to 2020. The top graph shows the computing capacity of each system, with the unit being standardized processors. Note that the top graph has a logarithmic scale, meaning that the growth is exponential, not linear.
The bottom graph depicts the ranking of these systems on the TOP500 list. Each line shows how a system's ranking drops over the years. As of 2020, the latest CSC system was still ranked in the top 50.
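Why exponential growth looks linear on a logarithmic axis can be sketched in a few lines of Python (with made-up performance figures, not CSC data): a quantity that doubles at a fixed interval rises by a constant step in log space, i.e. it traces a straight line.

```python
import math

# Hypothetical performance figures that double every two years
perf = [100 * 2 ** (year / 2) for year in range(0, 11, 2)]

# On a logarithmic (base-2) axis the values rise in constant steps
logs = [math.log2(p) for p in perf]
steps = [round(b - a, 6) for a, b in zip(logs, logs[1:])]
print(steps)  # every step is 1.0, i.e. a straight line on a log scale
```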