KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

Handle URI:
http://hdl.handle.net/10754/621727
Title:
KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators
Authors:
Abdelfattah, Ahmad ( 0000-0001-5054-4784 ) ; Keyes, David E. ( 0000-0002-4052-7224 ) ; Ltaief, Hatem ( 0000-0002-6897-1095 )
Abstract:
KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures while avoiding code rewriting and remaining compliant with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, which is especially important for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to multi-GPU environments, which required the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and delivers smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes, achieving asymptotic speedups of up to 50% and 60% over the best competitor on single-GPU and multi-GPU systems, respectively. The performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS), starting from version 6.0, for wider dissemination. © 2016 ACM.
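The double-buffering idea described in the abstract can be sketched conceptually in plain Python (an illustration only; KBLAS implements this on the GPU in CUDA, where the prefetch of the next block genuinely overlaps with computation on the current one, and the function and parameter names below are assumptions, not the KBLAS API):

```python
import numpy as np

def matvec_double_buffered(A, x, block=4):
    """Blocked y = A @ x using two alternating buffers for column blocks.

    On a GPU, loading the next block into the idle buffer would overlap
    with the compute on the current buffer; here the alternation is only
    simulated to show the control structure.
    """
    m, n = A.shape
    y = np.zeros(m)
    # Prefetch the first column block into buffer 0.
    buffers = [A[:, 0:min(block, n)].copy(), None]
    cur = 0
    for start in range(0, n, block):
        end = min(start + block, n)
        # "Prefetch" the next block into the idle buffer
        # (on a GPU this load overlaps with the compute below).
        nxt = end
        if nxt < n:
            buffers[1 - cur] = A[:, nxt:min(nxt + block, n)].copy()
        # Compute the partial product with the current buffer.
        y += buffers[cur] @ x[start:end]
        # Swap buffers for the next iteration.
        cur = 1 - cur
    return y
```

The result is identical to a direct `A @ x`; the point is that the load of block k+1 and the multiply with block k are independent operations that a GPU kernel can issue concurrently, hiding memory latency behind arithmetic.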
KAUST Department:
Extreme Computing Research Center
Citation:
Abdelfattah A, Keyes D, Ltaief H (2016) KBLAS. TOMS 42: 1–31. Available: http://dx.doi.org/10.1145/2818311.
Publisher:
Association for Computing Machinery (ACM)
Issue Date:
11-May-2016
DOI:
10.1145/2818311
Type:
Article
ISSN:
0098-3500
Appears in Collections:
Articles; Extreme Computing Research Center

Full metadata record

DC Field: Value (Language)
dc.contributor.author: Abdelfattah, Ahmad (en)
dc.contributor.author: Keyes, David E. (en)
dc.contributor.author: Ltaief, Hatem (en)
dc.date.accessioned: 2016-11-03T13:23:40Z
dc.date.available: 2016-11-03T13:23:40Z
dc.date.issued: 2016-05-11 (en)
dc.identifier.citation: Abdelfattah A, Keyes D, Ltaief H (2016) KBLAS. TOMS 42: 1–31. Available: http://dx.doi.org/10.1145/2818311. (en)
dc.identifier.issn: 0098-3500 (en)
dc.identifier.doi: 10.1145/2818311 (en)
dc.identifier.uri: http://hdl.handle.net/10754/621727
dc.description.abstract: KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since the performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS runs efficiently on various GPU architectures while avoiding code rewriting and remaining compliant with the standard BLAS API. Another optimization technique ensures coalesced memory access when dealing with submatrices, which is especially important for high-level dense linear algebra algorithms. All KBLAS kernels have been extended to multi-GPU environments, which required the introduction of new APIs. For general matrices, KBLAS is very competitive with existing state-of-the-art kernels and delivers smoother performance across a wide range of matrix dimensions. For symmetric and Hermitian matrices, KBLAS outperforms existing state-of-the-art implementations on all matrix sizes, achieving asymptotic speedups of up to 50% and 60% over the best competitor on single-GPU and multi-GPU systems, respectively. The performance results also validate our performance model. A subset of KBLAS high-performance kernels has been integrated into NVIDIA's standard BLAS implementation (cuBLAS), starting from version 6.0, for wider dissemination. © 2016 ACM. (en)
dc.publisher: Association for Computing Machinery (ACM) (en)
dc.subject: Basic linear algebra subroutines (en)
dc.subject: CUDA optimizations (en)
dc.subject: GPU accelerators (en)
dc.subject: Memory-bound kernels (en)
dc.title: KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators (en)
dc.type: Article (en)
dc.contributor.department: Extreme Computing Research Center (en)
kaust.author: Abdelfattah, Ahmad (en)
kaust.author: Keyes, David E. (en)
kaust.author: Ltaief, Hatem (en)
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.