Abstract: As processor clock rates become more dynamic and workloads become more adaptive, the vulnerability to global synchronization that already complicates programming for performance in today's petascale environment will be exacerbated. Algebraic multigrid (AMG), the solver of choice in many large-scale PDE-based simulations, scales well in the weak sense, with fixed problem size per node, on tightly coupled systems when loads are well balanced and core performance is reliable. However, its strong scaling to many cores within a node is challenging. Reducing synchronization and increasing concurrency are vital adaptations of AMG to hybrid architectures. Recent communication-reducing improvements to classical additive AMG by Vassilevski and Yang improve concurrency and increase communication-computation overlap, while retaining convergence properties close to those of standard multiplicative AMG, but they remain bulk-synchronous. We extend the additive AMG of Vassilevski and Yang to asynchronous task-based parallelism using a hybrid approach: OmpSs (from the Barcelona Supercomputing Center) within a node and MPI for internode communication. We implement a tiling approach that decomposes the grid hierarchy into parallel units within task containers. We compare against the MPI-only BoomerAMG and the Auxiliary-space Maxwell Solver (AMS) in the hypre library on the 3D Laplacian operator and an electromagnetic diffusion problem, respectively. In time to solution for a full solve, the MPI+OmpSs hybrid improves over the all-MPI approach in strong scaling at full core count (32 threads on a single Haswell node of a Cray XC40) and maintains this per-node advantage as both weak-scale to thousands of cores, with MPI between nodes.
Citation: AlOnazi A, Markomanolis GS, Keyes D (2017) Asynchronous Task-Based Parallelization of Algebraic Multigrid. In: Proceedings of the Platform for Advanced Scientific Computing Conference (PASC '17). Available: http://dx.doi.org/10.1145/3093172.3093230.
Sponsors: We thank Hatem Ltaief, Stefano Zampini, and Lisandro Dalcin of the Extreme Computing Research Center at KAUST for their help. We also thank Ulrike Yang of Lawrence Livermore National Laboratory for her useful comments. For performance tests on the Shaheen II Cray XC40 supercomputer, we gratefully acknowledge the KAUST Supercomputing Laboratory.
Conference/Event name: Platform for Advanced Scientific Computing Conference, PASC 2017