Show simple item record

dc.contributor.author: Alturkestani, Tariq Lutfallah Mohammed
dc.contributor.author: Ltaief, Hatem
dc.contributor.author: Keyes, David E.
dc.date.accessioned: 2020-02-25T13:14:03Z
dc.date.available: 2020-02-25T13:14:03Z
dc.date.issued: 2020
dc.date.submitted: 2020-02-21
dc.identifier.uri: http://hdl.handle.net/10754/661689
dc.description.abstract: Reverse Time Migration (RTM) is an important scientific application for oil and gas exploration. The 3D RTM simulation generates terabytes of intermediate data that do not fit in main memory. In particular, RTM has two successive computational phases, i.e., the forward modeling and the backward propagation, which require writing and then reading back the state of the computed solution grid at specific time steps of the time integration. Advances in memory architecture have made it feasible and affordable to integrate hierarchical storage media on large-scale systems, from the traditional Parallel File System (PFS) through intermediate fast disk technologies (e.g., node-local and remote-shared Burst Buffer) up to the CPU's main memory. To address the trend toward heterogeneous HPC system deployments, we introduce an extension of our Multilayer Buffer System (MLBS) framework to further maximize RTM I/O bandwidth in the presence of GPU hardware accelerators. The main idea is to leverage the GPU's High Bandwidth Memory (HBM) as an additional storage media layer. The ultimate objective of MLBS is to hide the application's I/O overhead by enabling a buffering mechanism that operates across all the hierarchical storage media layers. MLBS is therefore able to sustain the I/O bandwidth at each storage media layer. By asynchronously performing expensive I/O operations and creating opportunities to overlap data motion with computation, MLBS may transform the originally I/O-bound behavior of the RTM application into a compute-bound regime. In fact, the prefetching strategy of MLBS lets the RTM application operate as if it had access to a larger memory capacity on the GPU, while transparently performing the necessary housekeeping across the storage layers. We demonstrate the effectiveness of MLBS on the Summit supercomputer using 2048 compute nodes equipped with a total of 12288 GPUs, achieving up to a 1.4X performance speedup compared to the reference PFS-based RTM implementation for a large 3D solution grid.
dc.description.sponsorship: For computer time, this research used the resources of the Supercomputing Laboratory at King Abdullah University of Science & Technology (KAUST) in Thuwal, Saudi Arabia and the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This research was funded by Aramco.
dc.language.iso: en
dc.publisher: Submitted to Springer
dc.rights: This preprint has been submitted to the 26th International European Conference on Parallel and Distributed Computing
dc.title: Maximizing I/O Bandwidth for Reverse Time Migration on Heterogeneous Large-Scale Systems
dc.type: Preprint
dc.contributor.department: Applied Mathematics and Computational Science Program
dc.contributor.department: Computer Science
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Extreme Computing Research Center
dc.contributor.department: Office of the President
dc.identifier.journal: Submitted to 26th International European Conference on Parallel and Distributed Computing
dc.eprint.version: Pre-print
dc.contributor.affiliation: King Abdullah University of Science and Technology (KAUST)
pubs.publication-status: Submitted
kaust.person: Alturkestani, Tariq Lutfallah Mohammed
kaust.person: Ltaief, Hatem
kaust.person: Keyes, David E.
display.summary: This record has been merged with an existing record at: http://hdl.handle.net/10754/665194
kaust.acknowledged.supportUnit: Supercomputing Laboratory
dc.date.posted: 2020-02-25
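
The abstract describes double-buffered, asynchronous staging of wavefield snapshots through the GPU's HBM so that data motion overlaps with computation. The CUDA C++ sketch below is only an illustration of that general idea under assumed names and sizes (the propagate kernel, the d_snap/h_snap staging buffers, the grid size, and the two-buffer scheme are all hypothetical); it is not the authors' MLBS implementation, which additionally manages the burst-buffer and PFS layers.

// Hypothetical sketch: offload one RTM snapshot per forward time step from
// GPU HBM to pinned host memory, overlapping the copy with the next step.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define CUDA_CHECK(call)                                                    \
  do {                                                                      \
    cudaError_t err_ = (call);                                              \
    if (err_ != cudaSuccess) {                                              \
      std::fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err_));   \
      std::exit(1);                                                         \
    }                                                                       \
  } while (0)

// Placeholder kernel standing in for one forward-modeling time step.
__global__ void propagate(float* u, size_t n) {
  size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
  if (i < n) u[i] = 0.999f * u[i];  // dummy wavefield update
}

int main() {
  const size_t n     = 1 << 24;            // grid points (illustrative size)
  const size_t bytes = n * sizeof(float);
  const int    steps = 16;                 // snapshot every step, for brevity

  float* d_u;                              // wavefield living in GPU HBM
  float* d_snap[2];                        // double-buffered device staging copies
  float* h_snap[2];                        // pinned host buffers (next layer down)
  CUDA_CHECK(cudaMalloc(&d_u, bytes));
  CUDA_CHECK(cudaMemset(d_u, 0, bytes));
  for (int b = 0; b < 2; ++b) {
    CUDA_CHECK(cudaMalloc(&d_snap[b], bytes));
    CUDA_CHECK(cudaMallocHost(&h_snap[b], bytes));
  }

  cudaStream_t compute, offload;
  cudaEvent_t  snap_ready[2], snap_drained[2];
  CUDA_CHECK(cudaStreamCreate(&compute));
  CUDA_CHECK(cudaStreamCreate(&offload));
  for (int b = 0; b < 2; ++b) {
    CUDA_CHECK(cudaEventCreate(&snap_ready[b]));
    CUDA_CHECK(cudaEventCreate(&snap_drained[b]));
  }

  for (int t = 0; t < steps; ++t) {
    const int b = t & 1;  // alternate staging buffers
    // Do not overwrite d_snap[b] before its previous offload has drained.
    CUDA_CHECK(cudaStreamWaitEvent(compute, snap_drained[b], 0));

    propagate<<<(unsigned)((n + 255) / 256), 256, 0, compute>>>(d_u, n);
    // Fast device-to-device stage so the live wavefield can keep advancing.
    CUDA_CHECK(cudaMemcpyAsync(d_snap[b], d_u, bytes,
                               cudaMemcpyDeviceToDevice, compute));
    CUDA_CHECK(cudaEventRecord(snap_ready[b], compute));

    // Drain the snapshot to pinned host memory on a separate stream so the
    // copy overlaps with the next time step's kernel. A background thread
    // would push h_snap[b] further down (burst buffer, then PFS) -- omitted.
    CUDA_CHECK(cudaStreamWaitEvent(offload, snap_ready[b], 0));
    CUDA_CHECK(cudaMemcpyAsync(h_snap[b], d_snap[b], bytes,
                               cudaMemcpyDeviceToHost, offload));
    CUDA_CHECK(cudaEventRecord(snap_drained[b], offload));
  }

  CUDA_CHECK(cudaDeviceSynchronize());
  std::printf("forward pass finished; %d snapshots staged\n", steps);
  return 0;
}

During the backward propagation phase the flow would run in reverse, with prefetches bringing snapshots back up the hierarchy ahead of the time steps that consume them, which is the prefetching behavior the abstract attributes to MLBS.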


Files in this item

Name: paper.pdf
Size: 1.133 MB
Format: PDF
Description: Preprint
