Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems

Handle URI:
http://hdl.handle.net/10754/599161
Title:
Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems
Authors:
Wu, Xingfu; Taylor, Valerie
Abstract:
In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and we analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling, large-scale hybrid MPI/OpenMP scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of the hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE.
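The framework described in the abstract combines a compute term, a memory bandwidth contention term derived from the sustained STREAM bandwidth shared among the cores of a node, and a parameterized (latency/bandwidth) communication term. The sketch below is a minimal Python illustration of that general structure; the additive form, the function names, and every number in it are assumptions made for illustration, not the authors' exact model.

# Illustrative sketch of a contention-based runtime model of the kind the
# abstract describes; the formula and all numbers are assumptions, not the
# authors' exact formulation.

def predict_runtime(compute_time_s, bytes_moved, stream_bw_gbs, cores_sharing_memory,
                    msg_count, latency_s, msg_bytes, link_bw_gbs):
    """Predict per-step runtime as compute + memory-contention + communication time.

    Memory contention is approximated by dividing the measured sustained STREAM
    bandwidth among the cores that share a memory subsystem; communication uses
    a simple parameterized latency/bandwidth model.
    """
    effective_bw = stream_bw_gbs * 1e9 / cores_sharing_memory       # bytes/s per core
    mem_time = bytes_moved / effective_bw                           # contention time
    comm_time = msg_count * (latency_s + msg_bytes / (link_bw_gbs * 1e9))
    return compute_time_s + mem_time + comm_time


def prediction_error(predicted_s, measured_s):
    """Relative prediction error in percent (the paper reports < 7.77%)."""
    return abs(predicted_s - measured_s) / measured_s * 100.0


if __name__ == "__main__":
    # Hypothetical numbers for one weak-scaling step on a multicore node.
    pred = predict_runtime(compute_time_s=1.2, bytes_moved=4e9, stream_bw_gbs=10.0,
                           cores_sharing_memory=4, msg_count=50,
                           latency_s=5e-6, msg_bytes=64e3, link_bw_gbs=1.0)
    print(f"predicted {pred:.3f} s, error vs. 3.0 s measured: "
          f"{prediction_error(pred, 3.0):.2f}%")

Such a model is evaluated per core (or per process/thread) under weak scaling, and the predicted time is compared against measured runs to obtain the error rates quoted in the abstract.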
Citation:
Wu X, Taylor V (2011) Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems. 2011 14th IEEE International Conference on Computational Science and Engineering. Available: http://dx.doi.org/10.1109/CSE.2011.42.
Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Journal:
2011 14th IEEE International Conference on Computational Science and Engineering
KAUST Grant Number:
KUS-I1-010-01
Issue Date:
Aug-2011
DOI:
10.1109/CSE.2011.42
Type:
Conference Paper
Sponsors:
This work is supported by NSF grant CNS-0911023 and the Award No. KUS-I1-010-01 made by King Abdullah University of Science and Technology (KAUST). The authors would like to acknowledge the Argonne Leadership Computing Facility for the use of BlueGene/P under the DOE INCITE project “Performance Evaluation and Analysis Consortium End Station”, the SDSC for the use of DataStar P655 under TeraGrid project TG-ASC040031, and TAMU Supercomputing Facilities for the use of Hydra. We would also like to thank Stephane Ethier from Princeton Plasma Physics Laboratory and Shirley Moore from the University of Tennessee for providing the GTC code.
Appears in Collections:
Publications Acknowledging KAUST Support

Full metadata record

DC Field | Value | Language
dc.contributor.author | Wu, Xingfu | en
dc.contributor.author | Taylor, Valerie | en
dc.date.accessioned | 2016-02-25T13:54:02Z | en
dc.date.available | 2016-02-25T13:54:02Z | en
dc.date.issued | 2011-08 | en
dc.identifier.citation | Wu X, Taylor V (2011) Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems. 2011 14th IEEE International Conference on Computational Science and Engineering. Available: http://dx.doi.org/10.1109/CSE.2011.42. | en
dc.identifier.doi | 10.1109/CSE.2011.42 | en
dc.identifier.uri | http://hdl.handle.net/10754/599161 | en
dc.description.abstract | In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore clusters: IBM POWER4, POWER5+ and Blue Gene/P, and we analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore clusters, because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling, large-scale hybrid MPI/OpenMP scientific application, the Gyrokinetic Toroidal Code (GTC) for magnetic fusion, to validate our performance model of the hybrid application on these multicore clusters. The validation results for our performance modeling method show less than a 7.77% error rate in predicting the performance of the hybrid MPI/OpenMP GTC on up to 512 cores on these multicore clusters. © 2011 IEEE. | en
dc.description.sponsorship | This work is supported by NSF grant CNS-0911023 and the Award No. KUS-I1-010-01 made by King Abdullah University of Science and Technology (KAUST). The authors would like to acknowledge the Argonne Leadership Computing Facility for the use of BlueGene/P under the DOE INCITE project “Performance Evaluation and Analysis Consortium End Station”, the SDSC for the use of DataStar P655 under TeraGrid project TG-ASC040031, and TAMU Supercomputing Facilities for the use of Hydra. We would also like to thank Stephane Ethier from Princeton Plasma Physics Laboratory and Shirley Moore from the University of Tennessee for providing the GTC code. | en
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en
dc.subject | hybrid MPI/OpenMP | en
dc.subject | memory bandwidth contention | en
dc.subject | multicore clusters | en
dc.subject | Performance modeling | en
dc.title | Performance Modeling of Hybrid MPI/OpenMP Scientific Applications on Large-scale Multicore Cluster Systems | en
dc.type | Conference Paper | en
dc.identifier.journal | 2011 14th IEEE International Conference on Computational Science and Engineering | en
dc.contributor.institution | Texas A&M University, College Station, United States | en
kaust.grant.number | KUS-I1-010-01 | en