Porting an Explicit Time-Domain Volume Integral Equation Solver onto Multiple GPUs Using MPI and OpenACC
dc.contributor.author | Feki, Saber | |
dc.contributor.author | Al-Jarro, Ahmed | |
dc.contributor.author | Bagci, Hakan | |
dc.date.accessioned | 2021-07-07T08:52:26Z | |
dc.date.available | 2021-07-07T08:52:26Z | |
dc.date.issued | 2018 | |
dc.identifier.issn | 1943-5711 | |
dc.identifier.issn | 1054-4887 | |
dc.identifier.uri | http://hdl.handle.net/10754/670062 | |
dc.description.abstract | A scalable parallelization algorithm to port an explicit marching-on-in-time (MOT)-based time-domain volume integral equation (TDVIE) solver onto multiple GPUs is described. The algorithm makes use of MPI and OpenACC for efficient implementation. The MPI processes are responsible for synchronizing and communicating the distributed compute kernels of the MOT-TDVIE solver between the GPUs, where one MPI task is assigned to one GPU. The OpenACC compiler directives are responsible for the data transfer and kernel offloading from the CPU to the GPU and their execution on the GPU. Speedups and parallel efficiencies achieved against MPI/OpenMP execution of the code on multiple CPUs are presented. Index Terms - Explicit marching-on-in-time scheme, GPU, MPI, OpenACC, time-domain volume integral equation. | |
dc.relation.url | https://web.a.ebscohost.com/abstract?site=ehost&scope=site&jrnl=10544887&AN=129213316&h=JsJYyIajUivoDzVe6BObEQowNIcGKDhor8s91pC%2besl1NQeUY87zMfZUBgi%2fQTWVkg8J2uaRAL2ok4ePagFiaQ%3d%3d&crl=c&resultLocal=ErrCrlNoResults&resultNs=Ehost&crlhashurl=login.aspx%3fdirect%3dtrue%26profile%3dehost%26scope%3dsite%26authtype%3dcrawler%26jrnl%3d10544887%26AN%3d129213316 | |
dc.rights | Archived with thanks to APPLIED COMPUTATIONAL ELECTROMAGNETICS SOCIETY JOURNAL | |
dc.subject | Explicit marching-on-in-time scheme | |
dc.subject | GPU | |
dc.subject | MPI | |
dc.subject | OpenACC | |
dc.subject | time-domain volume integral equation | |
dc.title | Porting an Explicit Time-Domain Volume Integral Equation Solver onto Multiple GPUs Using MPI and OpenACC | |
dc.type | Article | |
dc.contributor.department | Computational Electromagnetics Laboratory | |
dc.contributor.department | Computational Scientists | |
dc.contributor.department | Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division | |
dc.contributor.department | Electrical and Computer Engineering Program | |
dc.identifier.journal | APPLIED COMPUTATIONAL ELECTROMAGNETICS SOCIETY JOURNAL | |
dc.identifier.wosut | WOS:000428995000011 | |
dc.eprint.version | Post-print | |
dc.identifier.volume | 33 | |
dc.identifier.issue | 2 | |
dc.identifier.pages | 164-167 | |
kaust.person | Feki, Saber | |
kaust.person | Bagci, Hakan |
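
The abstract describes a hybrid MPI/OpenACC scheme in which each MPI rank drives one GPU, OpenACC directives handle data movement and kernel offloading, and MPI keeps the distributed kernels synchronized between GPUs. The following minimal C sketch illustrates that general pattern only; the stencil-style kernel body, the array names (f, fn), the problem sizes, and the host-staged halo exchange are illustrative assumptions and do not reproduce the paper's actual MOT-TDVIE update or communication scheme.

#include <mpi.h>
#include <openacc.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* One MPI task per GPU: bind this rank to one device on its node. */
    int ngpus = acc_get_num_devices(acc_device_nvidia);
    if (ngpus > 0)
        acc_set_device_num(rank % ngpus, acc_device_nvidia);

    const int n = 100000;                       /* interior samples per rank (illustrative) */
    const int n_steps = 500;                    /* number of time steps (illustrative)      */
    double *f  = malloc((n + 2) * sizeof *f);   /* f[0] and f[n+1] are ghost samples        */
    double *fn = malloc((n + 2) * sizeof *fn);
    for (int i = 0; i < n + 2; ++i) { f[i] = 1.0; fn[i] = 0.0; }

    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Keep the unknowns resident on the GPU for the whole time march;
       only the data MPI needs is staged back through the host.        */
    #pragma acc data copy(f[0:n+2]) create(fn[0:n+2])
    {
        for (int step = 0; step < n_steps; ++step) {

            /* Offloaded compute kernel: a placeholder for the explicit
               field update on this rank's share of the spatial samples. */
            #pragma acc parallel loop present(f[0:n+2], fn[0:n+2])
            for (int i = 1; i <= n; ++i)
                fn[i] = f[i] + 0.1 * (f[i - 1] - 2.0 * f[i] + f[i + 1]);

            /* Stage the boundary samples to the host, exchange them with
               the neighbouring ranks (i.e., the neighbouring GPUs), and
               push the received ghost values back to the device.         */
            #pragma acc update self(fn[1:1], fn[n:1])
            MPI_Sendrecv(&fn[1],     1, MPI_DOUBLE, left,  0,
                         &fn[n + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&fn[n],     1, MPI_DOUBLE, right, 1,
                         &fn[0],     1, MPI_DOUBLE, left,  1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            #pragma acc update device(fn[0:1], fn[n+1:1])

            /* Advance to the next time step entirely on the device. */
            #pragma acc parallel loop present(f[0:n+2], fn[0:n+2])
            for (int i = 0; i < n + 2; ++i)
                f[i] = fn[i];
        }
    }

    if (rank == 0)
        printf("finished %d steps on %d rank(s)\n", n_steps, nranks);
    free(f);
    free(fn);
    MPI_Finalize();
    return 0;
}

Compiled with an OpenACC-capable compiler (e.g., nvc -acc) and launched with one MPI rank per GPU, this reproduces the paper's overall execution model: directives manage device data and kernel launches, while MPI handles inter-GPU synchronization and communication.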
This item appears in the following Collection(s):
- Articles
- Electrical and Computer Engineering Program (https://cemse.kaust.edu.sa/ece)
- Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division (https://cemse.kaust.edu.sa/)