The bigger they are, the harder they compute

Supercomputing superpower
Photograph by Levi Brown
The first supercomputer went live in 1965. Since then, the computing power of these mega machines has grown exponentially.

Engineers measured early computing devices in kilo-girls, a unit roughly equal to the calculating ability of a thousand women. By the time the first supercomputer arrived in 1965, we needed a larger unit. Thus, FLOPS, or floating-point operations (a type of calculation) per second.

In 1946, ENIAC, the first (nonsuper) computer, processed about 500 FLOPS. Today's supers crunch petaFLOPS, or 1,000 trillion operations per second. Shrinking transistor size lets more electronics fit in the same space, but processing so much data requires a complex design, intricate cooling systems, and openings for humans to access hardware. That's why supercomputers stay supersize.
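
To put that scale in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the figures cited in this article (ENIAC's roughly 500 FLOPS, the petaFLOPS mark of 1,000 trillion operations per second, and TaihuLight's 93 petaFLOPS from the call-outs below); the variable names are illustrative, not from any real benchmark.

    # Back-of-the-envelope FLOPS comparison, using the article's figures.
    ENIAC_FLOPS = 500                  # ENIAC, 1946: ~500 operations per second
    PETAFLOPS = 10**15                 # 1 petaFLOPS = 1,000 trillion ops per second
    TAIHULIGHT_FLOPS = 93 * PETAFLOPS  # Sunway TaihuLight's record, per the call-outs

    # How many times faster is a 1-petaFLOPS machine than ENIAC?
    print(PETAFLOPS / ENIAC_FLOPS)         # 2e12: two trillion times faster

    # Time ENIAC would need to match one second of petascale work:
    seconds = PETAFLOPS / ENIAC_FLOPS      # 2e12 seconds...
    print(seconds / (3600 * 24 * 365))     # ...roughly 63,000 years

    print(TAIHULIGHT_FLOPS / ENIAC_FLOPS)  # ~1.9e14 for TaihuLight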

Infographic: Supercomputers over time (Sara Chodosh)

A few special call-outs:

1. CDC 6600

Rapidly sifted through 3 million of CERN’s experimental research images per year

2. ASCI Red

Modeled the capabilities of U.S. nuclear weapons, avoiding underground testing

3. IBM Sequoia

Used more than 1 million cores to help Stanford engineers study jet engines

4. Sunway TaihuLight

Reached a record 93 petaFLOPS by trading memory speed for high energy efficiency

This article was originally published in the May/June 2017 issue of Popular Science, under the title “The Bigger They Are, the Harder They Compute.”