Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

Handle URI:
http://hdl.handle.net/10754/347275
Title:
Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units
Authors:
Boukaram, W.; Ltaief, H.; Litvinenko, Alexander (ORCID: 0000-0001-5427-3598); Abdelfattah, A.; Keyes, David E. (ORCID: 0000-0002-4052-7224)
Abstract:
Large dense matrices arise from the discretization of many physical phenomena in the computational sciences. In statistics, very large dense covariance matrices are used to describe random fields and processes; they can model, for instance, the distribution of dust particles in the atmosphere, the concentration of mineral resources in the Earth's crust, or an uncertain permeability coefficient in reservoir modeling. As the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive, both in computational complexity and in physical memory requirements. Fortunately, these matrices can often be approximated by a class of data-sparse matrices called hierarchical matrices (H-matrices), in which various sub-blocks of the matrix are approximated by low-rank matrices. These matrices can be stored in memory that grows linearly with the problem size, and arithmetic operations on them, such as matrix-vector multiplication, can be completed in almost linear time. The H-matrix technique was originally developed for approximating stiffness matrices arising from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU is the focus of this work, and we present work done on the matrix-vector operation on the GPU using the KSPARSE library.
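The key mechanism behind the near-linear costs mentioned in the abstract is that a low-rank sub-block never needs to be formed densely: a rank-k factorization can be applied to a vector factor by factor. The following is a minimal sketch of that idea only, not the paper's KSPARSE implementation; NumPy, the random data, and all variable names here are illustrative assumptions.

```python
import numpy as np

# A dense n x n block that admits a rank-k factorization A ~ U @ Vt
# can be applied to a vector in O(n*k) work as y = U @ (Vt @ x),
# instead of O(n^2) for the dense product. H-matrix matvec applies
# this blockwise over the low-rank sub-blocks.
rng = np.random.default_rng(0)
n, k = 1024, 8                      # block size and (small) rank

U = rng.standard_normal((n, k))     # tall factor, n*k entries
Vt = rng.standard_normal((k, n))    # wide factor, k*n entries
A = U @ Vt                          # dense form, n*n entries (for comparison only)

x = rng.standard_normal(n)

y_dense = A @ x                     # O(n^2) flops
y_lowrank = U @ (Vt @ x)            # O(n*k) flops, same result up to round-off

assert np.allclose(y_dense, y_lowrank)
```

Storing only `U` and `Vt` takes 2nk entries instead of n^2, which is the source of the linear memory growth claimed for H-matrices when the block ranks stay small.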
KAUST Department:
Extreme Computing Research Center; Center for Uncertainty Quantification in Computational Science and Engineering (SRI-UQ)
Publisher:
Extended abstract to the International Computational Science and Engineering Conference (ICSEC15)
Issue Date:
25-Mar-2015
Type:
Conference Paper
Description:
This is an extended abstract submitted to the International Computational Science and Engineering Conference (ICSEC15)
Sponsors:
SRI Uncertainty Quantification Center at KAUST, Extreme Computing Research Center at KAUST
Appears in Collections:
Conference Papers; Extreme Computing Research Center

Full metadata record

DC Field: Value (Language)
dc.contributor.author: Boukaram, W. (en)
dc.contributor.author: Ltaief, H. (en)
dc.contributor.author: Litvinenko, Alexander (en)
dc.contributor.author: Abdelfattah, A. (en)
dc.contributor.author: Keyes, David E. (en)
dc.date.accessioned: 2015-03-29T06:01:10Z (en)
dc.date.available: 2015-03-29T06:01:10Z (en)
dc.date.issued: 2015-03-25 (en)
dc.identifier.uri: http://hdl.handle.net/10754/347275 (en)
dc.description: This is an extended abstract submitted to the International Computational Science and Engineering Conference (ICSEC15) (en)
dc.description.abstract: Large dense matrices arise from the discretization of many physical phenomena in the computational sciences. In statistics, very large dense covariance matrices are used to describe random fields and processes; they can model, for instance, the distribution of dust particles in the atmosphere, the concentration of mineral resources in the Earth's crust, or an uncertain permeability coefficient in reservoir modeling. As the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive, both in computational complexity and in physical memory requirements. Fortunately, these matrices can often be approximated by a class of data-sparse matrices called hierarchical matrices (H-matrices), in which various sub-blocks of the matrix are approximated by low-rank matrices. These matrices can be stored in memory that grows linearly with the problem size, and arithmetic operations on them, such as matrix-vector multiplication, can be completed in almost linear time. The H-matrix technique was originally developed for approximating stiffness matrices arising from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU is the focus of this work, and we present work done on the matrix-vector operation on the GPU using the KSPARSE library. (en)
dc.description.sponsorship: SRI Uncertainty Quantification Center at KAUST, Extreme Computing Research Center at KAUST (en)
dc.language.iso: en (en)
dc.publisher: Extended abstract to the International Computational Science and Engineering Conference (ICSEC15) (en)
dc.subject: parallel hierarchical matrices (en)
dc.subject: CUDA GPU (en)
dc.subject: large covariance matrix (en)
dc.subject: KSPARSE (en)
dc.title: Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units (en)
dc.type: Conference Paper (en)
dc.contributor.department: Extreme Computing Research Center (en)
dc.contributor.department: Center for Uncertainty Quantification in Computational Science and Engineering (SRI-UQ) (en)
dc.eprint.version: Preprint (en)
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.