The bigger they are, the harder they compute
Engineers measured early computing devices in kilo-girls, a unit roughly equal to the calculating ability of a thousand women. By the time the first supercomputer arrived in 1965, we needed a larger unit. Thus FLOPS: floating-point operations (a type of calculation) per second.
In 1946, ENIAC, the first (nonsuper) computer, processed about 500 FLOPS. Today’s supers crunch petaFLOPS, or 1,000 trillion operations per second. Shrinking transistor size lets more electronics fit in the same space, but processing so much data requires a complex design, intricate cooling systems, and openings for humans to access the hardware. That’s why supercomputers stay supersize.
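To put those units in perspective, a quick back-of-the-envelope calculation, using only the figures quoted above, shows the size of the jump from ENIAC to a petaFLOPS machine:

```python
# Rough scale comparison using the article's figures:
# ENIAC (~500 FLOPS) vs. a 1-petaFLOPS supercomputer.
ENIAC_FLOPS = 500               # ~500 floating-point operations per second
PETAFLOPS = 1_000 * 10**12      # 1 petaFLOPS = 1,000 trillion ops per second

speedup = PETAFLOPS / ENIAC_FLOPS
print(f"A petaFLOPS machine is roughly {speedup:.0e} times faster than ENIAC")
```

That works out to a factor of about two trillion, which is why the unit had to change.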
This article was originally published in the May/June 2017 issue of Popular Science, under the title “The Bigger They Are, the Harder They Compute.”