In 2018, a new supercomputer called Summit was installed at Oak Ridge National Laboratory, in Tennessee. Its theoretical peak capacity was nearly 200 petaflops, that is, 200 thousand trillion floating-point operations per second. At the time, it was the most powerful supercomputer in the world, beating out the previous record holder, China's Sunway TaihuLight, by a comfortable margin, according to the well-known Top500 ranking of supercomputers. (Summit is currently No. 2, a Japanese supercomputer called Fugaku having since overtaken it.)
In just four short years, though, demand for supercomputing services at Oak Ridge has outstripped even this colossal machine. "Summit is four to five times oversubscribed," says Justin Whitt, who directs ORNL's Leadership Computing Facility. "That limits the number of research projects that can use it."
The obvious remedy is to get a faster supercomputer. And that's exactly what Oak Ridge is doing. The new supercomputer being assembled there is called Frontier. When complete, it will have a peak theoretical capacity in excess of 1.5 exaflops.
The remarkable thing about Frontier is not that it will be more than seven times as powerful as Summit, stunning as that figure is. The remarkable thing is that it will use only twice the power. That's still a lot of energy: Frontier is expected to draw 29 megawatts, enough to power a city the size of Cupertino, Calif. But it's a manageable amount, both in terms of what the grid there can supply and what the electricity bill will be.
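The arithmetic behind that claim is worth spelling out. A quick back-of-the-envelope sketch, using only the figures cited in this article (7.5x the performance for 2x the power; Summit's draw is inferred here from the "twice the power" statement, not from a published spec):

```python
# Efficiency comparison from the article's own numbers:
# Frontier at 1.5 exaflops and 29 MW, Summit at 200 petaflops
# drawing roughly half Frontier's power ("only twice the power").
frontier_flops = 1.5e18            # 1.5 exaflops, peak theoretical
frontier_watts = 29e6              # 29 megawatts
summit_flops = 200e15              # 200 petaflops, peak theoretical
summit_watts = frontier_watts / 2  # implied by "twice the power"

frontier_eff = frontier_flops / frontier_watts  # flops per watt
summit_eff = summit_flops / summit_watts

print(f"Frontier: {frontier_eff / 1e9:.1f} gigaflops per watt")
print(f"Summit:   {summit_eff / 1e9:.1f} gigaflops per watt")
print(f"Efficiency gain: {frontier_eff / summit_eff:.2f}x")
```

In other words, 7.5 times the performance for twice the power works out to a machine that is 3.75 times as energy efficient per operation, which is the real engineering story here.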
"The efficiency comes from putting more computer hardware in smaller and smaller spaces," says Whitt. "Each of those [computer] cabinets weighs as much as a full-sized pickup." That's because they're filled with what ORNL's spec sheet describes as "high-density compute blades powered by HPC- and AI-optimized AMD EPYC processors and Radeon Instinct GPU accelerators purpose-built for the needs of exascale computing."
Building a supercomputer of this capacity is hard enough. But doing so during a pandemic has been especially challenging. "Supply-chain issues have been broad," says Whitt, including shortages of many things that aren't specific to building a high-performance supercomputer. "It might just be sheet metal or screws."
Supply issues are indeed the reason Frontier will become operational in 2022 ahead of another planned supercomputer, Aurora, which is to be installed at Argonne National Laboratory, in Illinois. Aurora was to come first, but its construction has been delayed because Intel is having difficulty supplying the processors and GPUs needed for this machine.
At the time of this writing, technicians at Oak Ridge were assembling and testing parts of Frontier in hopes that the giant machine would come together before the end of 2021, with the intention of making it fully operational and available to users in 2022. Will we then be able to call it the world's first exascale supercomputer?
That depends on your definition. "[Japan's Fugaku supercomputer] actually achieved 2 exaflops with a different benchmark," says Jack Dongarra of the University of Tennessee, one of the experts behind the Top500 list. Those rankings, he explains, are based on a benchmark that involves 64-bit floating-point calculations, the kind used to solve three-dimensional partial differential equations as required for many physical simulations. "That's the bottom line of what supercomputers are being used for," says Dongarra. But he also points out that supercomputers are increasingly used to train deep neural networks, where 16-bit precision can suffice.
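The gap between those two number formats is easy to demonstrate. The sketch below uses Python's standard `struct` module, whose `'e'` format code rounds a value to IEEE 754 half precision (16-bit), to show how a quantity that 64-bit arithmetic keeps is rounded away entirely at 16 bits; this is an illustration of the precision trade-off, not of the Top500 benchmark itself:

```python
import struct

def to_fp16(x: float) -> float:
    """Round a 64-bit Python float to IEEE 754 half precision and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

# A 64-bit float carries roughly 15-16 significant decimal digits,
# so adding a tiny increment to 1.0 produces a distinct value:
survives_fp64 = (1.0 + 1e-4) != 1.0

# Half precision carries only about 3 significant decimal digits;
# the same increment falls below the spacing between adjacent
# 16-bit values near 1.0 and is rounded away:
lost_fp16 = to_fp16(1.0 + 1e-4) == 1.0

print(survives_fp64, lost_fp16)
```

That loss of resolution is harmless when training a neural network but ruinous when integrating a physical simulation over many time steps, which is why the choice of benchmark precision matters so much for the "first exascale" question.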
And then there's Folding@Home, a distributed-computing project intended to simulate protein folding. "I would call that a specialized computer," says Dongarra, one that can do its job because the calculations involved are "embarrassingly parallel." That is, separate computers can perform the required calculations independently, or at least largely so, with what little communication between them is needed being conveyed over the Internet. In March of 2020, the Folding@Home project proudly announced on Twitter, "We've crossed the exaflop barrier!"
But if you stick to the usual benchmark, the one used for the Top500 ranking, no supercomputer yet qualifies as an exascale machine. Frontier may be the first. Or rather, it's on track to be the first known exascale supercomputer, says Dongarra. He explains that before the June 2021 Top500 ranking came out, a rumor emerged that China had at least one, if not two, supercomputers already operating at exascale.
Why would Chinese engineers build such a machine without telling anybody about it? At the time, Dongarra says, he thought that perhaps they were waiting for the 100-year anniversary of the founding of the Chinese Communist Party. But that date came and went in July. He now speculates that Chinese officials may be worried that making its existence public would exacerbate geopolitical rivalries and cause the United States to restrict the export of certain technologies to China.
Perhaps that explains it. But it will be increasingly difficult for Chinese researchers not to let this cat, if it truly exists, out of the bag. For the moment, anyway, with only rumors to go on, this exascale rival to Frontier is a Schrödinger's cat, both here and not here at the same time.
This article appears in the January 2022 print issue as "The Exascale Era Is Upon Us."