Extreme Scale Multi-Physics Simulations of the Tsunamigenic 2004 Sumatra Megathrust Earthquake

Abstract
We present a high-resolution simulation of the 2004 Sumatra-Andaman earthquake, including non-linear frictional failure on a megathrust-splay fault system. Our method exploits unstructured meshes capturing the complicated geometries in subduction zones that are crucial to understanding large earthquakes and tsunami generation. These are, to date, the largest and longest dynamic rupture simulations; they enable analysis of dynamic source effects on seafloor displacements. To tackle the extreme size of this scenario, an end-to-end optimization of the simulation code SeisSol was necessary. We implemented a new cache-aware wave propagation scheme and optimized the dynamic rupture kernels using code generation. We established a novel clustered local-time-stepping scheme for dynamic rupture. In total, we achieved a speed-up of 13.6 compared to the previous implementation. For the Sumatra scenario with 221 million elements, this reduced the time-to-solution to 13.9 hours on 86,016 Haswell cores. Furthermore, we used asynchronous output to overlap I/O and compute time.
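The clustered local-time-stepping scheme mentioned in the abstract can be illustrated with a toy model: each element has its own stable time step, elements are binned into clusters whose step is the global minimum step times a power of two (rate-2 clustering), and each cluster advances in lockstep at its own rate. The sketch below is purely illustrative; the function names and the update bookkeeping are assumptions for this example, not SeisSol's actual implementation.

```python
# Toy sketch of rate-2 clustered local time stepping (LTS).
# Elements are binned into clusters with time step dt_min * 2^k,
# so coarse clusters need far fewer updates than a global scheme.
# Illustrative only -- not SeisSol's API or data structures.
import math

def assign_clusters(dt_elements, dt_min):
    """Bin each element's stable time step into a power-of-two cluster index."""
    return [int(math.floor(math.log2(dt / dt_min))) for dt in dt_elements]

def run_lts(dt_elements, t_end):
    """Advance all clusters to t_end; return the update count per cluster."""
    dt_min = min(dt_elements)
    clusters = assign_clusters(dt_elements, dt_min)
    n = max(clusters) + 1
    step = [dt_min * 2 ** k for k in range(n)]  # cluster k steps with dt_min * 2^k
    count = [0] * n

    # Compute cluster time as count * step (multiply, don't accumulate,
    # to avoid floating-point drift), and always advance the cluster
    # that is furthest behind.
    def time(k):
        return count[k] * step[k]

    while min(time(k) for k in range(n)) < t_end:
        k = min(range(n), key=time)
        count[k] += 1
    return count

# Elements with stable steps 0.1, 0.1, 0.21, 0.45 fall into clusters 0, 0, 1, 2
# (steps 0.1, 0.2, 0.4). Advancing to t = 1.0 costs 10, 5, and 3 updates:
print(run_lts([0.1, 0.1, 0.21, 0.45], 1.0))  # [10, 5, 3]
```

With global time stepping, every element would take the minimum step and need 10 updates; here the two coarser clusters need only 5 and 3, which is the source of the scheme's savings on meshes where element sizes vary by orders of magnitude.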

Citation
Uphoff, C., Rettenberger, S., Bader, M., Madden, E. H., Ulrich, T., Wollherr, S., & Gabriel, A.-A. (2017). Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. doi:10.1145/3126908.3126948

Acknowledgements
The work presented in this paper was supported by the Volkswagen Foundation (project ASCETE - Advanced Simulation of Coupled Earthquake-Tsunami Events, grant no. 88479), by the German Research Foundation (DFG) (project no. KA 2281/4-1, AOBJ 584936/TG-92), by the Bavarian Competence Network for Technical and Scientific High Performance Computing (KONWIHR) (project GeoPF - Geophysics for PetaFlop Computing), and by Intel as part of the Intel Parallel Computing Center ExScaMIC-KNL. Computing resources were provided by the Leibniz Supercomputing Centre (LRZ, project no. pr45f and h019z, on SuperMUC), by P. Martin Mai, King Abdullah University of Science and Technology (KAUST, on Shaheen-II) and by the National Energy Research Scientific Computing Center (NERSC, on Cori). We especially thank Nicolay Hammer (LRZ), as well as Richard Gerber and Jack Deslippe (NERSC) for their highly valuable support.

Publisher
ACM

Conference/Event Name
International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2017

DOI
10.1145/3126908.3126948

Additional Links
https://dl.acm.org/doi/10.1145/3126908.3126948

Permanent link to this record