Show simple item record

dc.contributor.author	Dongarra, Jack
dc.contributor.author	Faverge, Mathieu
dc.contributor.author	Ltaief, Hatem
dc.contributor.author	Luszczek, Piotr R.
dc.date.accessioned	2015-08-24T09:25:12Z
dc.date.available	2015-08-24T09:25:12Z
dc.date.issued	2011
dc.identifier.isbn	9781450311458
dc.identifier.doi	10.1145/2132876.2132885
dc.identifier.uri	http://hdl.handle.net/10754/575750
dc.description.abstract	The goal of this paper is to present an efficient implementation of explicit matrix inversion of general square matrices on multicore computer architectures. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system whose solution yields the inverse of the original matrix, and 4) applying backward column pivoting to the inverted matrix. Using a tile data layout, which represents the matrix in system memory with an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph, whose nodes represent tasks and whose edges represent the data dependencies between them, is generated on the fly to capture the program data flow. Previous implementations of matrix inversion, available in state-of-the-art numerical libraries, suffer from unnecessary synchronization points; these are absent from our implementation, which fully exploits the parallelism of the underlying hardware. Our algorithmic approach allows us to remove these bottlenecks and to execute the tasks with loose synchronization. A runtime environment system called QUARK is used to dynamically schedule our numerical kernels on the available processing units. Our LU-based matrix inversion implementation significantly outperforms state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x), and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and a total of 48 cores, for a matrix of size 24000. A power consumption analysis shows that our high-performance implementation is also energy efficient and consumes substantially less power than its competitors. © 2011 ACM.
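Note: the four steps listed in the abstract correspond to the classical LU-based inversion also exposed by LAPACK's getrf/getri routines. The following minimal sequential sketch in C (using the standard LAPACKE interface) is an illustration of those numerical steps only; it is not the tile-based, QUARK-scheduled implementation described in the paper, and the helper names invert_inplace and the 2x2 test matrix are assumptions for the example.

/* Minimal sequential sketch of LU-based matrix inversion via LAPACKE.
 * Illustration only: the paper decomposes these same steps into tile
 * tasks scheduled dynamically by the QUARK runtime. */
#include <stdio.h>
#include <stdlib.h>
#include <lapacke.h>

int invert_inplace(double *a, lapack_int n)
{
    lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
    if (!ipiv) return -1;

    /* Step 1: LU factorization with partial pivoting, P*A = L*U. */
    lapack_int info = LAPACKE_dgetrf(LAPACK_COL_MAJOR, n, n, a, n, ipiv);

    /* Steps 2-4: invert the triangular U factor, solve for the inverse,
     * and apply backward column pivoting; dgetri performs all three
     * starting from the LU factors and the pivot vector. */
    if (info == 0)
        info = LAPACKE_dgetri(LAPACK_COL_MAJOR, n, a, n, ipiv);

    free(ipiv);
    return (int)info;
}

int main(void)
{
    /* 2x2 example in column-major order: A = [4 3; 6 3], det(A) = -6. */
    double a[4] = {4.0, 6.0, 3.0, 3.0};
    if (invert_inplace(a, 2) != 0) return 1;
    printf("A^-1 =\n%6.3f %6.3f\n%6.3f %6.3f\n", a[0], a[2], a[1], a[3]);
    return 0;
}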
dc.publisher	Association for Computing Machinery (ACM)
dc.subject	LU factorization
dc.subject	multicore parallel performance
dc.subject	runtime DAG scheduling
dc.title	High performance matrix inversion based on LU factorization for multicore architectures
dc.type	Conference Paper
dc.contributor.department	Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department	Extreme Computing Research Center
dc.contributor.department	KAUST Supercomputing Laboratory (KSL)
dc.identifier.journal	Proceedings of the 2011 ACM international workshop on Many task computing on grids and supercomputers - MTAGS '11
dc.conference.date	November 14th, 2011
dc.conference.name	Proceedings of the 2011 ACM international workshop on Many task computing on grids and supercomputers
dc.conference.location	Seattle, Washington
dc.contributor.institution	University of Tennessee, 1122 Volunteer Blvd, Knoxville, TN, United States
dc.contributor.institution	Computer Science and Mathematics Division, Oak Ridge National Laboratory, United States
dc.contributor.institution	School of Mathematics, School of Computer Science, University of Manchester, United Kingdom
kaust.person	Ltaief, Hatem

