The Tianhe-2 has more than 3 million processor cores, and it's the world's most powerful supercomputer. It can perform more than 30 quadrillion calculations per second, easily dwarfing the runner-up, an Oak Ridge National Laboratory machine known as Titan. The Oak Ridge system can do 17.59 quadrillion calculations per second, according to its most recently published benchmarks.

Image: The Invisible Boy (1957)
On Monday, the people who keep track of the world’s biggest supercomputers will release the latest rankings, called the Top500 List, and the smart money is betting Tianhe-2 will be on top.
The United States, long the dominant power in supercomputing, won't have a comparable system until around 2016, when the U.S. Department of Energy is expected to build a Tianhe-2-class supercomputer called Trinity. Tianhe-2 will probably beat out all U.S. systems for a few years, and that's more than a loss of bragging rights for the U.S. It raises questions about whether the country is investing enough in research and development to keep its supercomputing lead.
“The most important thing about this system is that it not only has a top performance, it also has a substantial investment in technology,” says Jack Dongarra, a computer science professor at the University of Tennessee.
The Tianhe-2 is, in fact, remarkably Chinese. It runs a special version of Linux called Kylin, developed by the National University of Defense Technology. It also has its own homegrown networking gear, and it even uses Chinese processors to power the supercomputer's management tools. The only American components are the Intel microprocessors that handle the system's mathematical calculations.
To be sure, those Intel chips are critical components, but Dongarra believes that on future supercomputers, they eventually will be replaced by Chinese chips — though he’s not sure when that will happen. “They’re developing components here that will go into a system that will ultimately be all Chinese,” he says.
It’s a remarkable success story for a country that didn’t have a single system on the 500 top-ranked supercomputers in 2001. It’s also a warning sign that the United States is losing its lead, as Europe, Japan, and China ramp up their supercomputing efforts.
“A decade ago, if it were a race, we had laps on the field,” says Daniel Reed, vice president of research and economic development at the University of Iowa. “Now the delta is a few lengths and closing.”
That matters a lot. Supercomputers are the test bed for many of the computing advances that we now see everywhere, from the multicore processors in Apple's iPhone to the futuristic networking technologies in Google's data centers.
“Cloud data centers and [high performance computing] systems are twins separated at birth,” Reed says. He should know. Four years ago, Microsoft hired him to help figure out how to build next-generation systems for its data centers.
It wasn’t supposed to get this close. Five years ago, the U.S. was on track to build a supercomputer on par with the Tianhe-2. The plan is still to someday build these “exascale systems” — machines roughly 30 times as powerful as Tianhe-2 — but the recession intervened around 2010 and the funding never materialized, says Horst Simon, deputy laboratory director at Lawrence Berkeley National Laboratory. “At the same time that the Chinese have made this big step forward, the American investment is stagnating,” he says.
Building these so-called exascale systems will take a coordinated effort. Many of the components are under development: chipmakers such as Nvidia, Intel, and AMD are working on new microprocessors that will be power-efficient enough to make these systems work. But the country also needs basic research to develop the networking and software tools that will power them.
That isn’t happening fast enough, says Dongarra. “The country’s paralyzed in terms of spending money,” he says. “Right now, we can’t get our act together in terms of the exascale plan.”
China’s Tianhe-2, the world’s top supercomputer. Photo: Jack Dongarra