Algorithms and data structures for massively parallel generic adaptive finite element codes
KAUST Grant Number: KUS-C1-016-04
Abstract
Today's largest supercomputers have 100,000s of processor cores and offer the potential to solve partial differential equations discretized by billions of unknowns. However, the complexity of scaling to such large machines and problem sizes has so far prevented the emergence of generic software libraries that support such computations, although these would lower the threshold of entry and enable many more applications to benefit from large-scale computing. We are concerned with providing this functionality for mesh-adaptive finite element computations. We assume the existence of an "oracle" that implements the generation and modification of an adaptive mesh distributed across many processors, and that responds to queries about its structure. Based on querying the oracle, we develop scalable algorithms and data structures for generic finite element methods. Specifically, we consider the parallel distribution of mesh data, global enumeration of degrees of freedom, constraints, and postprocessing. Our algorithms remove the bottlenecks that typically limit large-scale adaptive finite element analyses. We demonstrate scalability of complete finite element workflows on up to 16,384 processors. An implementation of the proposed algorithms, based on the open source software p4est as mesh oracle, is provided under an open source license through the widely used deal.II finite element software library. © 2011 ACM 0098-3500/2011/12-ART10 $10.00.
Citation
Bangerth W, Burstedde C, Heister T, Kronbichler M (2011) Algorithms and data structures for massively parallel generic adaptive finite element codes. TOMS 38: 1–28. Available: http://dx.doi.org/10.1145/2049673.2049678.
Sponsors
W. Bangerth was partially supported by Award No. KUS-C1-016-04 made by King Abdullah University of Science and Technology (KAUST), by a grant from the NSF-funded Computational Infrastructure in Geodynamics initiative, and by an Alfred P. Sloan Research Fellowship. C. Burstedde was partially supported by NSF grants OPP-0941678, OCI-0749334, DMS-0724746, AFOSR grant FA9550-09-1-0608, and DOE grants DE-SC0002710 and DE-FC02-06ER25782. T. Heister was partially supported by the German Research Foundation (DFG) through Research Training Group GK 1023. M. Kronbichler was supported by the Graduate School in Mathematics and Computation (FMB). Most of the work was performed while T. Heister and M. Kronbichler were visitors at Texas A&M University.
Some computations for this article were performed on the "Ranger" cluster at the Texas Advanced Computing Center (TACC), and the "Brazos" and "Hurr" clusters at the Institute for Applied Mathematics and Computational Science (IAMCS) at Texas A&M University. Ranger was funded by NSF award OCI-0622780, and the authors used an allocation obtained under NSF award TG-MCA04N026. Part of Brazos was supported by NSF award DMS-0922866. Hurr is supported by Award No. KUS-C1-016-04 made by King Abdullah University of Science and Technology (KAUST).