Exploiting Data Sparsity for Large-Scale Matrix Computations

Handle URI:
http://hdl.handle.net/10754/627403
Title:
Exploiting Data Sparsity for Large-Scale Matrix Computations
Authors:
Akbudak, Kadir (0000-0002-1057-1590); Ltaief, Hatem (0000-0002-6897-1095); Mikhalev, Aleksandr (0000-0002-9274-7237); Charara, Ali (0000-0002-9509-7794); Keyes, David Elliot (0000-0002-4052-7224)
Abstract:
Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense, data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance and memory-footprint comparisons on matrix dimensions up to eleven million show gains of more than an order of magnitude in both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
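The compression idea in the abstract can be illustrated in a few lines. The sketch below is a minimal, hypothetical example of tile low-rank compression via a truncated SVD in Python/NumPy; it is not HiCMA's actual API (HiCMA is a C library, and the function name compress_tile, the accuracy convention, and the kernel used to generate the test tile are assumptions for illustration). It compresses a numerically low-rank off-diagonal tile into factors U and V so that the relative Frobenius-norm error stays below a specified accuracy.

import numpy as np

def compress_tile(tile, accuracy):
    """Compress a dense tile into low-rank factors U, V with
    ||tile - U @ V||_F <= accuracy * ||tile||_F (truncated SVD).
    Hypothetical helper for illustration; not HiCMA's API."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    # tail[k] = Frobenius norm of the singular values dropped if rank k is kept.
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
    # Smallest rank whose truncation error meets the accuracy threshold.
    k = max(1, int(np.sum(tail > accuracy * np.linalg.norm(s))))
    return U[:, :k] * s[:k], Vt[:k, :]

if __name__ == "__main__":
    nb = 256
    # A smooth covariance kernel evaluated on two well-separated point
    # sets gives a dense but data-sparse (numerically low-rank) tile.
    x = np.linspace(0.0, 1.0, nb)
    y = np.linspace(5.0, 6.0, nb)
    tile = np.exp(-np.abs(x[:, None] - y[None, :]))
    U, V = compress_tile(tile, accuracy=1e-8)
    err = np.linalg.norm(tile - U @ V) / np.linalg.norm(tile)
    print(f"{nb}x{nb} tile compressed to rank {U.shape[1]}, rel. error {err:.2e}")

Storing each off-diagonal tile in this U·V form instead of as a dense nb-by-nb block is where the order-of-magnitude memory savings cited in the abstract come from; diagonal tiles remain dense.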
KAUST Department:
Extreme Computing Research Center
Issue Date:
24-Feb-2018
Type:
Technical Report
Appears in Collections:
Technical Reports
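The abstract also mentions decomposing the factorization into interdependent tasks for a dynamic runtime. Below is a minimal dense tiled Cholesky sketch in Python/NumPy showing the task pattern (POTRF, TRSM, SYRK, GEMM) and the tile dependencies a runtime such as StarPU would track; this is a sketch under stated assumptions, not HiCMA's implementation, which submits these kernels as asynchronous StarPU tasks and replaces off-diagonal operands with their low-rank factors.

import numpy as np

def tiled_cholesky(A, nb):
    """Right-looking tiled Cholesky, in place on the lower triangle.
    Each labeled operation is one task; a task becomes ready as soon as
    the tiles it reads have been produced, which is the dependency graph
    a dynamic runtime (e.g., StarPU) can schedule out of order."""
    assert A.shape[0] % nb == 0
    p = A.shape[0] // nb

    def t(i, j):  # view of tile (i, j)
        return A[i * nb:(i + 1) * nb, j * nb:(j + 1) * nb]

    for k in range(p):
        # POTRF: factor the dense diagonal tile.
        t(k, k)[:] = np.linalg.cholesky(t(k, k))
        for i in range(k + 1, p):
            # TRSM: A[i,k] <- A[i,k] L[k,k]^{-T}
            # (general solve used as a stand-in for a triangular solve).
            t(i, k)[:] = np.linalg.solve(t(k, k), t(i, k).T).T
        for j in range(k + 1, p):
            # SYRK: symmetric rank-nb update of the diagonal tile.
            t(j, j)[:] -= t(j, k) @ t(j, k).T
            for i in range(j + 1, p):
                # GEMM: update an off-diagonal tile of the trailing matrix.
                t(i, j)[:] -= t(i, k) @ t(j, k).T
    return A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, nb = 512, 128
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)          # symmetric positive definite
    L = np.tril(tiled_cholesky(A.copy(), nb))
    print(np.linalg.norm(L @ L.T - A) / np.linalg.norm(A))

In the tile low-rank variant, the TRSM, SYRK, and GEMM operands below the diagonal are the compressed U·V factors, so the same task graph runs on much smaller data.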

Full metadata record

dc.contributor.author: Akbudak, Kadir (en)
dc.contributor.author: Ltaief, Hatem (en)
dc.contributor.author: Mikhalev, Aleksandr (en)
dc.contributor.author: Charara, Ali (en)
dc.contributor.author: Keyes, David Elliot (en)
dc.date.accessioned: 2018-04-04T08:51:09Z
dc.date.available: 2018-04-04T08:51:09Z
dc.date.issued: 2018-02-24
dc.identifier.uri: http://hdl.handle.net/10754/627403
dc.subject: Dense Linear Algebra (en)
dc.subject: data compression (en)
dc.title: Exploiting Data Sparsity for Large-Scale Matrix Computations (en)
dc.type: Technical Report (en)
dc.contributor.department: Extreme Computing Research Center (en)