Bert Debusschere speaks with Olivier Le Maître (French visiting professor at Duke University), Paul Mycek (postdoc at Duke University, not shown in photo), and Omar Knio (professor at Duke University, not shown in photo) about their collaboration on fault tolerance in exascale computing.
Computing power today is greater than ever before. Or is it? For many applications, yes, but when it comes to sophisticated, detailed modeling of the Earth’s climate, the analogy of tracking the national debt with an abacus may be only a slight exaggeration. “To accurately predict the Earth’s climate over the next 200–300 years, one needs to simulate the atmosphere, the oceans, and the Earth’s land, and one would need to do it all at the same time,” said Bert Debusschere (8351). Current supercomputers, as powerful as they are, would take several years to deliver accurate predictions—and only if they could be dedicated to that purpose alone.
“Each component of climate modeling and simulation—atmosphere, oceans, and land—is, by itself, challenging the most powerful computing resources known today,” Bert said. “The problem is, to make sound predictions about the future, we need the computers to run simulation programs not just once, but hundreds of times for slightly different conditions, for each model or component. Predictive power requires that kind of computational muscle and capacity.” Bert’s project is a three-year collaboration with Duke University that began last summer, following a year-long pilot study funded by Advanced Scientific Computing Research.
Bert’s team presented this new approach at the Society for Industrial and Applied Mathematics (SIAM) Conference on Parallel Processing for Scientific Computing in Portland, Oregon, in February 2014. The conference also featured presentations from more than a dozen other Sandia researchers working to push the frontiers of computing to the exascale.
Read the full article in CRF News.