William & Mary is part of an elite group preparing for the dawn of exascale computing
Andreas Stathopoulos is part of a collaboration that aspires to simulate the building blocks of matter on some of the biggest computers ever made.
To be precise, this effort will require an armada of computers, a constellation of processors large enough to tackle some of the biggest questions that humans can ask.
Stathopoulos is a professor in William & Mary’s Department of Computer Science and William & Mary’s lead in a massive effort known as the ECP, the Exascale Computing Project. The ECP is part of the National Strategic Computing Initiative, which was launched by the Obama Administration and is carried out through the U.S. Department of Energy. The initiative has immediate scientific and national-security applications, and Stathopoulos says the ECP holds implications for a number of commercial applications as well.
Stathopoulos and his collaborator Kostas Orginos, a professor in William & Mary’s Department of Physics, are part of an ECP collaboration working on a project titled “Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics.” Their group includes scientists from Fermilab, Jefferson Lab, Columbia University, Boston University, Stony Brook and other research institutions.
“It’s a big consortium. The problem is called lattice QCD, which is lattice quantum chromodynamics,” Stathopoulos said. Lattice QCD is a computational approach to the theory of the strong force, which describes how quarks and gluons interact to hold the nucleus of the atom together. He and Orginos have been collaborating on lattice QCD for a decade, and Stathopoulos describes the nature of their collaboration.
“I’m not a physicist,” he said. “I work in the Department of Computer Science and my specialty is scientific computing. That means I develop mathematics, code, and algorithms to solve problems in physics, engineering — and science in general.”
The first round of funding for the ECP projects totaled $39.8 million, and Stathopoulos and Orginos’ lattice QCD group received $10 million from that pot.
“There were 15 projects that were funded, almost exclusively from top-tier research institutions. We are proud that William & Mary is part of such a group,” he said. “There were some big data and statistical applications, but more than half were material science, theoretical physics, and some chemistry applications as well.”
Experiments conducted at many DOE facilities and at the Large Hadron Collider at CERN rely on the predictions of Orginos and other theoretical physicists. Stathopoulos explained that the calculations required to model QCD greatly exceed the capabilities of even the most powerful computers available today.
“This model comes up with equations that are not solvable, at least not on paper. And to solve them on the computer, you need the resources of exascale,” he explained.
“I’ll tell you what exascale is: it means 10¹⁸ flops. That’s 10¹⁸ floating point operations per second,” he said, referring to the standard measure of computer processing speed. Stathopoulos jabbed a thumb over his shoulder at his own souped-up desktop Macintosh.
“Let me give you a perspective on 10¹⁸,” he said. “Right now, this computer here, which is a pretty beefy machine, cannot reach more than 10¹⁰ flops per second. There is a power of eight missing. We need a hundred million such computers to reach exascale!”
When you look at the gap between today’s state-of-the-art computers and exascale, there are still a whole lot of flops missing. The current fastest supercomputer is a petascale machine, with a computing speed measured in petaflops, or 10¹⁵ floating point operations per second. (Watson, the IBM Jeopardy champion, is a mere teraflop-range computer, around 10¹² floating point operations per second.)
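To make those prefixes concrete, here is a back-of-the-envelope sketch in Python using the figures quoted above (the 10-gigaflop desktop number is Stathopoulos’ own ballpark for his machine, not a benchmark):

```python
# Rough comparison of the flops scales mentioned in the article.
DESKTOP_FLOPS = 1e10   # a "pretty beefy" desktop, per Stathopoulos' estimate
TERAFLOP      = 1e12   # Watson-class machine
PETAFLOP      = 1e15   # today's fastest supercomputers
EXAFLOP       = 1e18   # the exascale target

print(f"Desktops needed to match one exascale machine: {EXAFLOP / DESKTOP_FLOPS:,.0f}")
print(f"Petascale machines needed to match it:         {EXAFLOP / PETAFLOP:,.0f}")
# Prints 100,000,000 and 1,000 -- the "power of eight" gap he describes.
```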
Stathopoulos says that for the foreseeable future, there’s only one way to achieve, or even approach, exascale: parallel computing. “Instead of faster computers, we’ll give you more computers,” he said.
Parallel-computing concepts have been around since the 1960s, but Stathopoulos says they were long limited to high-end computing. It wasn’t until the last decade that parallel computing caught the public’s eye, through the popularity of multi-core personal computers.
“We see this on our desks,” he said. “How many cores does your laptop have? Each computer, each core, is not that much different from the one you had eight years ago. But there’s more power! Where does the power come from? From parallelism, among other things.”
Parallelism is a silicon-based version of the adage “many hands make light work.” An operation is divided so that different parts of it will run on a large number of processors at once. Multi-core computers are essentially multi-computer computers, but reaching exascale performance requires a constellation of computers.
“How many? We’re talking about close to half a million or a million computers,” Stathopoulos said. “Hooked together, in one very, very fast network.”
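As a toy illustration of the “many hands” idea, here is a minimal sketch using Python’s standard multiprocessing module on a single multi-core machine. Real exascale codes coordinate hundreds of thousands of nodes over a fast network rather than a handful of cores on one chip, but the principle of splitting one job into independent pieces is the same.

```python
# Toy parallelism: divide one job into independent chunks and
# farm them out to every core on this machine at once.
from multiprocessing import Pool, cpu_count

def partial_sum(bounds):
    """One independent piece of work: sum the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, cpu_count()
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))   # chunks run in parallel
    print(total == n * (n - 1) // 2)                 # True: many hands, same answer
```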
He said building such a network would present a number of engineering and construction challenges — for one thing, a million-computer exascale machine would require its own power plant.
But Stathopoulos said the technology to build the individual machines for such an array is well established. In fact, three such exascale facilities are planned: one at Argonne National Lab in Chicago, another at Oak Ridge National Lab in Tennessee and a third at Lawrence Livermore National Lab in California — all Department of Energy facilities.
“So we can build an exascale machine. But I won’t run Windows on it. I don’t want to run Word and PowerPoint,” Stathopoulos said. “I want to solve big problems.”
Stathopoulos and other computer scientists have been working for years on algorithms for large problems, but it’s not just a matter of waiting for an exascale computer to be built.
“Not all algorithms are amenable to these architectures that we’re building. So we need special algorithms,” he said.
First of all, sequential processing erodes much of the advantage of a parallel computing architecture, Stathopoulos said. When instructions must run in a fixed order, cores have to stop and wait while they synchronize and communicate.
“This is a waste of time, because you’re not solving a problem, you’re coordinating. It’s like a committee,” he said. “To beat that, we need lots of algorithms that don’t have too many sequential components.”
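One textbook way to quantify that coordination penalty, though Stathopoulos doesn’t name it here, is Amdahl’s law: if a fraction s of an algorithm must run sequentially, no number of cores can speed it up by more than 1/s. A quick sketch:

```python
# Amdahl's law: the speedup on p cores when a fraction s of the
# work is inherently sequential (synchronization, coordination).
def speedup(s, p):
    return 1.0 / (s + (1.0 - s) / p)

for s in (0.01, 0.001, 0.0001):            # sequential fraction of the work
    for p in (1_000, 100_000, 1_000_000):  # number of cores
        print(f"sequential fraction {s:<7} on {p:>9,} cores: {speedup(s, p):10,.0f}x")
# Even if only 0.1% of the work is sequential, a million cores top out near 1,000x.
```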
And in some cases, Stathopoulos said, the parallel architecture of exascale computing is going to require worse algorithms.
“Yes, worse!” he confirmed, going on to explain that many scientific algorithms are iterative, making successive improvements, one step after another, until the desired result is achieved.
“And let’s say my beautiful algorithm takes a thousand steps — it’s a great algorithm. But it’s very sequential. I have to do one step after the other after the other,” he said. “There’s no way I can parallelize this algorithm.”
So Stathopoulos will take his beautiful, elegant, carefully crafted thousand-step algorithm, and in the name of exascale parallelism, hit it with the coding equivalent of the ugly stick.
“And so I will give you a dumb algorithm, one that will take a hundred thousand steps, but they’re all independent,” he said. “Now, you can push it out on a hundred thousand cores, and in one step, you’re all done! Finding the middle ground between these extremes is the focus of our research.”
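The arithmetic behind that trade-off, using the round numbers from the quote, looks something like this:

```python
# The trade-off Stathopoulos describes, in round numbers.
elegant_steps = 1_000    # "beautiful" algorithm: strictly one step after another
dumb_steps    = 100_000  # "dumb" algorithm: every step independent of the others

for cores in (1, 1_000, 100_000):
    elegant_time = elegant_steps               # sequential: extra cores sit idle
    dumb_time    = -(-dumb_steps // cores)     # ceil(steps / cores)
    print(f"{cores:>7,} cores -> elegant: {elegant_time:>6,} steps, "
          f"dumb: {dumb_time:>7,} steps")
# On one core the elegant algorithm wins 1,000 steps to 100,000;
# on 100,000 cores the dumb one finishes in a single parallel step.
```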
The range of applications receiving support in the initial round reflects the breadth of potential uses for exascale computing. For instance, Stathopoulos said astronomers would be able to use exascale power for big-data simulations, such as modeling events related to the Big Bang or collisions between galaxies.
“This takes a lot of computation, because every galaxy has billions of stars,” he said. “Imagine trying to simulate billions of stars here, and billions of stars over there and all the gravitational forces.”
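Why so much computation? In the simplest, direct-sum approach, every star pulls on every other star, so the work grows with the square of the number of bodies. Here is a toy sketch of that scaling; production astrophysics codes use far cleverer approximations, but the underlying problem is the same.

```python
# Direct-sum gravity: O(N^2) pairwise forces, which is why billions
# of stars quickly overwhelm any single computer.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def pairwise_forces(pos, mass):
    """Net gravitational force on each body from every other body."""
    forces = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i == j:
                continue
            r = pos[j] - pos[i]   # vector from body i to body j
            forces[i] += G * mass[i] * mass[j] * r / np.linalg.norm(r) ** 3
    return forces  # n * (n - 1) force evaluations: quadratic in n

rng = np.random.default_rng(0)
n = 500                                  # try 5_000 and watch the runtime balloon
pos = rng.uniform(-1e17, 1e17, (n, 3))   # positions in meters (made-up toy values)
mass = rng.uniform(1e29, 1e31, n)        # roughly stellar masses, in kilograms
print(pairwise_forces(pos, mass).shape)  # (500, 3)
```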
Another area that will benefit from exascale computing is materials science, the field behind a range of advances in consumer products such as smartphones, Stathopoulos said.
“All these things have been created because they have first been simulated on computers,” he noted. “Big computers.”
Stathopoulos recalled that he helped to produce the first quantum dot calculations when he was working with materials scientists in the 1990s.
“And we needed a Cray computer to run for several hours to get the first distribution of charges,” he said. “Now quantum dot televisions are in Best Buy.”
The trickling down of technology from giant, federally funded laboratories is a benefit of modern life, he noted.
“Eventually, these things make it down to our desktops,” he said. “It’s like Formula 1. We don’t need Formula 1 for our daily life, but eventually all the technology there is going to be on your Hyundai or Honda.”