Recent Submissions

  • Efficient Sparse Collective Communication and its application to Accelerate Distributed Deep Learning

    Fei, Jiawei; Ho, Chen-Yu; Sahu, Atal Narayan; Canini, Marco; Sapio, Amedeo (2020-09-30) [Technical Report]
    Efficient collective communication is crucial to parallel-computing applications such as distributed training of large-scale recommendation systems and natural language processing models. Existing collective communication libraries focus on optimizing operations for dense inputs, resulting in transmissions of many zeros when inputs are sparse. This counters current trends that see increasing data sparsity in large models. We propose OmniReduce, an efficient streaming aggregation system that exploits sparsity to maximize effective bandwidth use by sending only non-zero data blocks. We demonstrate that this idea is beneficial and accelerates distributed training by up to 8.2×. Even at 100 Gbps, OmniReduce delivers 1.2–2.6× better performance for network-bottlenecked DNNs.
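    A minimal sketch of the block-sparse aggregation idea, assuming a flat gradient vector and a fixed block size; the helper names below are illustrative and not the OmniReduce API.

      import numpy as np

      def nonzero_blocks(grad, block_size=256):
          # Split a flat gradient into fixed-size blocks and keep only the non-zero ones.
          pad = (-grad.size) % block_size
          blocks = np.pad(grad, (0, pad)).reshape(-1, block_size)
          keep = np.flatnonzero(np.any(blocks != 0.0, axis=1))
          return {int(i): blocks[i] for i in keep}        # block index -> dense block

      def aggregate(per_worker_blocks, num_blocks, block_size=256):
          # Sum the sparse block streams of all workers into one dense gradient.
          out = np.zeros((num_blocks, block_size))
          for blocks in per_worker_blocks:
              for i, b in blocks.items():
                  out[i] += b
          return out.ravel()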
  • Scaling Distributed Machine Learning with In-Network Aggregation

    Sapio, Amedeo; Canini, Marco; Ho, Chen-Yu; Nelson, Jacob; Kalnis, Panos; Kim, Changhoon; Krishnamurthy, Arvind; Moshref, Masoud; Ports, Dan R. K.; Richtarik, Peter (2020-09-30) [Technical Report]
    Training machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide an efficient solution that speeds up training by up to 5.5× for a number of real-world benchmark models.
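    A toy emulation of the aggregation step, assuming workers stream fixed-point chunks that an aggregator (the programmable switch in SwitchML) sums slot by slot; the scaling factor and chunking below are illustrative assumptions, not the paper's protocol.

      import numpy as np

      SCALE = 2 ** 16                          # float -> fixed-point scaling (assumed)

      def aggregate_updates(worker_updates, chunk=512):
          # Sum fixed-point chunks from all workers, then rescale to the mean update.
          n = worker_updates[0].size
          out = np.zeros(n)
          for start in range(0, n, chunk):
              stop = min(start + chunk, n)
              acc = np.zeros(stop - start, dtype=np.int64)
              for u in worker_updates:         # the switch adds integer slots in place
                  acc += np.round(u[start:stop] * SCALE).astype(np.int64)
              out[start:stop] = acc / (SCALE * len(worker_updates))
          return out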
  • Performance Analysis of Satellite Communication Systems with Randomly Located Ground Users

    Na, Dong-Hyoun; Park, Ki-Hong; Alouini, Mohamed-Slim; Ko, Young-Chai (2020) [Technical Report]
    Satellite communication (SatCom) is one of the emerging technologies that compensate for the digital divide between urban and remote areas. An outage of the SatCom link connecting users to the network is especially critical in infrastructure-deficient remote areas. In this paper, we analyze the outage probability (OP) and symbol error rate (SER) over SatCom downlink channels when the users are randomly located in a single beam area. The downlink suffers from propagation loss and shadowed-Rician fading that depend on the user location, which is assumed to follow a Poisson point process. For a mathematically tractable, informative, and insightful interpretation, we derive and verify asymptotic OP and SER expressions for the user link under several channel conditions in the high-power regime.
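    A minimal Monte Carlo sketch of the outage probability for a user dropped uniformly in a circular beam footprint (one realization of the random user location), assuming a simple path-loss exponent and exponential fading power as a stand-in for the shadowed-Rician model of the paper.

      import numpy as np

      def outage_prob(snr_tx_db, radius_km=50.0, alt_km=600.0, alpha=2.0,
                      snr_th_db=0.0, trials=200_000, seed=0):
          rng = np.random.default_rng(seed)
          r = radius_km * np.sqrt(rng.random(trials))      # uniform over the beam disk
          d = np.sqrt(alt_km ** 2 + r ** 2)                # slant range to the satellite
          h = rng.exponential(1.0, trials)                 # fading power (stand-in model)
          snr = 10 ** (snr_tx_db / 10) * h * (alt_km / d) ** alpha
          return float(np.mean(snr < 10 ** (snr_th_db / 10)))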
  • Compressed Communication for Distributed Deep Learning: Survey and Quantitative Evaluation

    Xu, Hang; Ho, Chen-Yu; Abdelmoniem, Ahmed M.; Dutta, Aritra; Bergou, El Houcine; Karatsenidis, Konstantinos; Canini, Marco; Kalnis, Panos (2020) [Technical Report]
    Powerful computer clusters are used nowadays to train complex deep neural networks (DNN) on large datasets. Distributed training workloads increasingly become communication bound. For this reason, many lossy compression techniques have been proposed to reduce the volume of transferred data. Unfortunately, it is difficult to reason about the behavior of compression methods, because existing work relies on inconsistent evaluation testbeds and largely ignores the performance impact of practical system configurations. In this paper, we present a comprehensive survey of the most influential compressed communication methods for DNN training, together with an intuitive classification (i.e., quantization, sparsification, hybrid and low-rank). We also propose a unified framework and API that allows for consistent and easy implementation of compressed communication on popular machine learning toolkits. We instantiate our API on TensorFlow and PyTorch, and implement 16 such methods. Finally, we present a thorough quantitative evaluation with a variety of DNNs (convolutional and recurrent), datasets and system configurations. We show that the DNN architecture affects the relative performance among methods. Interestingly, depending on the underlying communication library and computational cost of compression/decompression, we demonstrate that some methods may be impractical.
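    A sketch of what such a unified compressor interface can look like, with top-k sparsification as one instance; the class and method names are illustrative assumptions, not the API proposed in the paper.

      import numpy as np

      class Compressor:
          def compress(self, tensor):    raise NotImplementedError
          def decompress(self, payload): raise NotImplementedError

      class TopKCompressor(Compressor):
          # Keep only the k largest-magnitude entries (sparsification).
          def __init__(self, ratio=0.01):
              self.ratio = ratio

          def compress(self, tensor):
              flat = tensor.ravel()
              k = max(1, int(self.ratio * flat.size))
              idx = np.argpartition(np.abs(flat), -k)[-k:]
              return idx, flat[idx], tensor.shape

          def decompress(self, payload):
              idx, vals, shape = payload
              out = np.zeros(int(np.prod(shape)))
              out[idx] = vals
              return out.reshape(shape)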
  • Unified Finite Series Approximation of FSO Performance over Strong Turbulence Combined with Various Pointing Error Conditions

    Jung, Kug-Jin; Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai (IEEE Transactions on Communications, IEEE, 2020) [Article]
    In this paper, we investigate both the bit error rate (BER) and outage performance of free-space optical (FSO) links over strong turbulence combined with various pointing error conditions. Considering atmospheric turbulence and pointing errors as main factors that deteriorate the quality of an optical link, we obtain a unified finite series approximation of the composite probability density function, which embraces generalized pointing error models. This approximation leads to new unified formulas for the BER and outage capacity of an FSO link, which account for the two possible detection mechanisms of intensity modulation/direct detection and heterodyne detection. Selected simulation results confirm that the newly derived approximations can give precise predictions of both the average BER and the outage capacity of FSO communication that are generally applicable to all environments.
  • Exploiting Data Sparsity for Large-Scale Matrix Computations

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E. (2018-02-24) [Technical Report]
    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, namely the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance and memory-footprint comparisons on matrix dimensions up to eleven million show gains of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
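    A minimal sketch of the tile low-rank compression step, assuming each off-diagonal tile is replaced by a truncated SVD factorization that meets a fixed accuracy threshold; the dense SVD below is illustrative, not HiCMA's optimized compression kernel.

      import numpy as np

      def compress_tile(tile, tol=1e-8):
          # Truncated SVD of one off-diagonal tile: tile ~= U @ V.T with rank k.
          U, s, Vt = np.linalg.svd(tile, full_matrices=False)
          k = max(1, int(np.sum(s > tol * s[0])))   # numerical rank at tolerance tol
          return U[:, :k] * s[:k], Vt[:k, :].T      # (nb x k) and (nb x k) factors

      def decompress_tile(U, V):
          return U @ V.T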
  • Batched Tile Low-Rank GEMM on GPUs

    Charara, Ali; Keyes, David E.; Ltaief, Hatem (2018-02) [Technical Report]
    Dense General Matrix-Matrix (GEMM) multiplication is a core operation of the Basic Linear Algebra Subroutines (BLAS) library, and therefore, often resides at the bottom of the traditional software stack for most scientific applications. In fact, chip manufacturers give special attention to the GEMM kernel implementation, since this is exactly where most high-performance software libraries extract the hardware performance. With the emergence of big data applications involving large data-sparse, hierarchically low-rank matrices, the off-diagonal tiles can be compressed to reduce the algorithmic complexity and the memory footprint. The resulting tile low-rank (TLR) data format is composed of small data structures, which retain the most significant information for each tile. However, to operate on low-rank tiles, a new GEMM operation and its corresponding API have to be designed on GPUs so that they can exploit the data sparsity structure of the matrix while leveraging the underlying TLR compression format. The main idea consists in aggregating all operations onto a single kernel launch to compensate for their low arithmetic intensities and to mitigate the data transfer overhead on GPUs. The new TLR GEMM kernel outperforms the cuBLAS dense batched GEMM by more than an order of magnitude and creates new opportunities for advanced TLR algorithms.
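    A sketch of the per-tile low-rank GEMM trick: with A = Ua Va^T and B = Ub Vb^T, the product A B = Ua (Va^T Ub) Vb^T costs O(nb k^2) instead of O(nb^3). Batching is a plain loop in this sketch rather than a single GPU kernel launch, recompression of the result is omitted, and the function names are illustrative, not the GPU API of the paper.

      import numpy as np

      def tlr_gemm(Ua, Va, Ub, Vb):
          # Multiply two low-rank tiles and keep the result in factored form.
          core = Va.T @ Ub                  # small (ka x kb) coupling matrix
          return Ua @ core, Vb              # result C = (Ua @ core) @ Vb.T

      def batched_tlr_gemm(tilesA, tilesB):
          # Aggregated into one kernel launch in the paper; a plain loop here.
          return [tlr_gemm(Ua, Va, Ub, Vb)
                  for (Ua, Va), (Ub, Vb) in zip(tilesA, tilesB)]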
  • Performance Impact of Rank-Reordering on Advanced Polar Decomposition Algorithms

    Esposito, Aniello; Keyes, David E.; Ltaief, Hatem; Sukkari, Dalal (2018) [Technical Report]
    We demonstrate the importance of both MPI rank reordering and the choice of processor grid topology in the context of advanced dense linear algebra (DLA) applications for distributed-memory systems. In particular, we focus on the advanced polar decomposition (PD) algorithm, based on the QR-based Dynamically Weighted Halley method (QDWH). The QDWH algorithm may be used as the first computational step toward solving symmetric eigenvalue problems and the singular value decomposition. Sukkari et al. (ACM TOMS, 2017) have shown that QDWH may benefit from rectangular instead of square processor grid topologies, which directly impact the performance of the underlying ScaLAPACK algorithms. In this work, we experiment with an extensive combination of grid topologies and rank reorderings for different matrix sizes and numbers of nodes, and use QDWH as a proxy for advanced compute-bound linear algebra operations, since it is rich in dense linear solvers and factorizations. A performance improvement of up to 54% can be observed for QDWH on 800 nodes of a Cray XC system, thanks to an optimal combination, especially in strong scaling mode of operation, for which communication overheads may become dominant. We perform a thorough application profiling to analyze the impact of reordering and grid topologies on the various linear algebra components of the QDWH algorithm. It turns out that point-to-point communications may be considerably reduced thanks to a judicious choice of grid topology, while the rank reordering is properly set using features of the cray-mpich library.
  • Ubiquitous Asynchronous Computations for Solving the Acoustic Wave Propagation Equation

    Akbudak, Kadir; Ltaief, Hatem; Etienne, Vincent; Abdelkhalak, Rached; Tonellot, Thierry; Keyes, David E. (2018) [Technical Report]
    This paper designs and implements a ubiquitous asynchronous computational scheme for solving the acoustic wave propagation equation with Absorbing Boundary Conditions (ABCs) in the context of seismic imaging applications. While the Convolutional Perfectly Matched Layer (CPML) is typically used as the ABC in the oil and gas industry, its formulation further stresses memory accesses and decreases the arithmetic intensity at the physical domain boundaries. The challenges with CPML are twofold: (1) the strong, inherent data dependencies imposed on the explicit time stepping scheme render asynchronous time integration cumbersome and (2) the idle time is further exacerbated by the load imbalance introduced among processing units. In fact, the CPML formulation of the ABCs requires expensive synchronization points, which may hinder the parallel performance of the overall asynchronous time integration. In particular, when deployed in conjunction with the Multicore-optimized Wavefront Diamond (MWD) tiling approach for the inner domain points, it results in a major performance slowdown. To relax CPML’s synchrony and mitigate the resulting load imbalance, we embed CPML’s calculation into MWD’s inner loop and carry out the time integration with fine-grained computations in an asynchronous, holistic way. This comes at the price of storing transient results to alleviate dependencies from critical data hazards, while maintaining the numerical accuracy of the original scheme. Performance results on various x86 architectures demonstrate the superiority of MWD with CPML against the standard spatial blocking. To our knowledge, this is the first practical study which highlights the consolidation of CPML ABCs with asynchronous temporal blocking stencil computations.
  • Borehole Tool for the Comprehensive Characterization of Hydrate-bearing Sediments

    Dai, Sheng; Santamarina, Carlos (Office of Scientific and Technical Information (OSTI), 2017-12-30) [Technical Report]
    Reservoir characterization and simulation require reliable parameters to anticipate hydrate deposit responses and production rates. The acquisition of the required fundamental properties currently relies on wireline logging, pressure core testing, and/or laboratory observations of synthesized specimens, which are challenged by testing capabilities and innate sampling disturbances. The project reviews hydrate-bearing sediments, their properties, and the inherent sampling effects, albeit lessened by developments in pressure core technology, in order to develop robust correlations with index parameters. The resulting information is incorporated into a tool for optimal field characterization and parameter selection with uncertainty analyses. Ultimately, the project develops a borehole tool for the comprehensive in situ characterization of hydrate-bearing sediments, with a design that recognizes past developments and characterization experience and benefits from the inspiration of nature and sensor miniaturization.
  • HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    Litvinenko, Alexander (2017-09-26) [Technical Report]
    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance and smoothness parameter of a Mat\'ern covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is the expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore, covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format with computational cost $\mathcal{O}(k^2n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ is the number of cores and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true parameter values are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
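    A dense sketch of the Gaussian log-likelihood with a Matérn (ν = 3/2) covariance, the objective that HLIBCov maximizes; the dense Cholesky below is what the hierarchical-matrix arithmetic replaces, and the parameterization is an illustrative assumption.

      import numpy as np

      def matern32(X, ell, sigma2):
          # Matern covariance with smoothness nu = 3/2 (closed form, no Bessel call).
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          r = np.sqrt(3.0) * d / ell
          return sigma2 * (1.0 + r) * np.exp(-r)

      def log_likelihood(theta, X, z):
          ell, sigma2, tau2 = theta                      # length, variance, nugget
          C = matern32(X, ell, sigma2) + tau2 * np.eye(z.size)
          L = np.linalg.cholesky(C)                      # H-Cholesky in HLIBCov
          alpha = np.linalg.solve(L, z)
          return (-0.5 * alpha @ alpha - np.sum(np.log(np.diag(L)))
                  - 0.5 * z.size * np.log(2.0 * np.pi))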
  • Low-SNR Capacity of MIMO Optical Intensity Channels

    Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim (2017-09-18) [Technical Report]
    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
  • Partial inversion of elliptic operator to speed up computation of likelihood in Bayesian inference

    Litvinenko, Alexander (2017-08-09) [Technical Report]
    In this paper, we speed up the solution of inverse problems in Bayesian settings. By computing the likelihood, the most expensive part of the Bayesian formula, one compares the available measurement data with the simulated data. To get simulated data, repeated solution of the forward problem is required, which can be a great challenge. Often, the available measurement is a functional $F(u)$ of the solution $u$ or a small part of $u$. Typical examples of $F(u)$ are the solution at a point, the solution on a coarser grid or in a small subdomain, and the mean value over a subdomain. It is a waste of computational resources to first evaluate the whole solution and then compute only a part of it. In this work, we compute the functional $F(u)$ directly, without computing the full inverse operator and without computing the whole solution $u$. The main ingredients of the developed approach are the hierarchical domain decomposition technique, the finite element method and Schur complements. To speed up computations and to reduce the storage cost, we approximate the forward operator and the Schur complement in the hierarchical matrix format. Applying the hierarchical matrix technique, we reduce the computing cost to $\mathcal{O}(k^2n \log^2 n)$, where $k\ll n$ and $n$ is the number of degrees of freedom. Up to the $\mathcal{H}$-matrix accuracy, the computation of the functional $F(u)$ is exact. To reduce the computational resources further, we can approximate $F(u)$ on, for instance, multiple coarse meshes. The proposed method is well suited for solving multiscale problems. A disadvantage of this method is the assumption that one has to have access to the discretisation and to the procedure of assembling the Galerkin matrix.
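    A dense sketch of evaluating only the observed entries of the solution via a Schur complement, without forming the full solution $u$; the paper replaces the dense solves below by hierarchical-matrix arithmetic on a domain-decomposition hierarchy, and the function name is illustrative.

      import numpy as np

      def observed_solution(A, f, obs):
          # Return u[obs] for A u = f by eliminating all non-observed unknowns.
          rest = np.setdiff1d(np.arange(A.shape[0]), obs)
          A11, A12 = A[np.ix_(rest, rest)], A[np.ix_(rest, obs)]
          A21, A22 = A[np.ix_(obs, rest)], A[np.ix_(obs, obs)]
          W = np.linalg.solve(A11, np.column_stack([A12, f[rest]]))
          S = A22 - A21 @ W[:, :-1]              # Schur complement on the observed set
          return np.linalg.solve(S, f[obs] - A21 @ W[:, -1])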
  • Application of Bayesian Networks for Estimation of Individual Psychological Characteristics

    Litvinenko, Alexander; Litvinenko, Natalya (2017-07-19) [Technical Report]
    In this paper we apply Bayesian networks to develop more accurate final overall estimations of the psychological characteristics of an individual, based on psychological test results. Psychological tests which identify how much an individual possesses a certain factor are very popular and quite common in the modern world. We call this value, for a given factor, the final overall estimation. Examples of factors could be stress resistance, the readiness to take a risk, the ability to concentrate on certain complicated work and many others. An accurate, qualitative and comprehensive assessment of human potential is one of the most important challenges in any company or collective. The most common way of studying the psychological characteristics of each single person is testing. Psychologists and sociologists are constantly working on improving the quality of their tests. Despite the serious work done by psychologists, the questions in tests often do not produce enough feedback due to the use of relatively poor estimation systems. The overall estimation is usually based on personal experiences and the subjective perception of a psychologist or a group of psychologists about the investigated psychological personality factors.
  • On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    Sapenov, Yerzhan; Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim (2017-07-06) [Technical Report]
    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst-case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends them to arbitrary turbulence statistics. This shows the redundancy of the time dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughputs.
  • On the Fast and Precise Evaluation of the Outage Probability of Diversity Receivers Over Generalized Fading Channels

    Ben Issaid, Chaouki; Alouini, Mohamed-Slim; Tempone, Raul (2017-01) [Technical Report]
  • Appendices for: Improper Signaling in Two-Path Relay Channels

    Gaafar, Mohamed; Amin, Osama; Schaefer, Rafael F.; Alouini, Mohamed-Slim (2016-12-01) [Technical Report]
    This document contains the appendices for the work in “Improper Signaling in Two-Path Relay Channels,” which is submitted to 2017 IEEE International Conference on Communications (ICC) Workshop on Full-Duplex Communications for Future Wireless Networks, Paris, France.
  • Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    Sukkari, Dalal; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E. (2016-10-25) [Technical Report]
    This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
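    A simplified sketch of the Halley-type iteration underlying QDWH (fixed weights, no dynamic parameter selection or QR-based inner step), computing the polar factor U of A = UH; the task-based implementation in the paper tiles and overlaps these operations rather than running them as dense calls.

      import numpy as np

      def polar_halley(A, tol=1e-12, max_iter=50):
          # Fixed-weight Halley iteration converging to the orthogonal polar factor.
          X = A / np.linalg.norm(A, 2)         # scale so all singular values are <= 1
          I = np.eye(A.shape[1])
          for _ in range(max_iter):
              G = X.T @ X
              X_new = X @ (3.0 * I + G) @ np.linalg.inv(I + 3.0 * G)
              if np.linalg.norm(X_new - X, 'fro') <= tol * np.linalg.norm(X_new, 'fro'):
                  X = X_new
                  break
              X = X_new
          H = X.T @ A                          # symmetric positive semidefinite factor
          return X, 0.5 * (H + H.T)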
  • Efficient Outage Probability Evaluation of Diversity Receivers Over Generalized Gamma Channels

    Ben Issaid, Chaouki; Alouini, Mohamed-Slim; Tempone, Raul (2016-10) [Technical Report]
    In this paper, we are interested in determining the cumulative distribution function of the sum of generalized Gamma random variables in the setting of rare event simulations. To this end, we present an efficient importance sampling estimator. The main result of this work is the bounded relative error property of the proposed estimator. This result is used to accurately estimate the outage probability of multibranch maximum ratio combining and equal gain combining diversity receivers over generalized Gamma fading channels. Selected numerical simulations are discussed to show the robustness of our estimator compared to naive Monte Carlo.
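    A sketch of an importance-sampling estimator of the left-tail probability P(sum of branch gains < threshold) for generalized Gamma variates; the scale-shrinking proposal below is a simple illustrative choice, not the estimator analyzed in the paper.

      import numpy as np
      from scipy.stats import gengamma

      def outage_is(a, c, scale, branches, x_th, shrink=0.3, samples=100_000, seed=0):
          # Sample from a proposal with a smaller scale so the rare event {sum < x_th}
          # happens often, then reweight by the likelihood ratio.
          rng = np.random.default_rng(seed)
          x = gengamma.rvs(a, c, scale=shrink * scale,
                           size=(samples, branches), random_state=rng)
          logw = (gengamma.logpdf(x, a, c, scale=scale)
                  - gengamma.logpdf(x, a, c, scale=shrink * scale)).sum(axis=1)
          return float(np.mean(np.exp(logw) * (x.sum(axis=1) < x_th)))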
  • A High Performance QDWH-SVD Solver using Hardware Accelerators

    Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E. (2016-08-15) [Technical Report]
    This paper describes a new high performance implementation of the QR-based Dynamically Weighted Halley Singular Value Decomposition (QDWH-SVD) solver on a multicore architecture enhanced with multiple GPUs. The standard QDWH-SVD algorithm was introduced by Nakatsukasa and Higham (SIAM SISC, 2013) and combines three successive computational stages: (1) the polar decomposition calculation of the original matrix using the QDWH algorithm, (2) the symmetric eigendecomposition of the resulting polar factor to obtain the singular values and the right singular vectors and (3) the matrix-matrix multiplication to get the associated left singular vectors. A comprehensive test suite highlights the numerical robustness of the QDWH-SVD solver. Although it performs up to two times more flops when computing all singular vectors compared to the standard SVD solver algorithm, our new high performance implementation on a single GPU results in up to 3.8x improvements for asymptotic matrix sizes, compared to the equivalent routines from existing state-of-the-art open-source and commercial libraries. However, when only singular values are needed, QDWH-SVD is penalized by performing up to 14 times more flops. The singular-value-only implementation of QDWH-SVD on a single GPU can still run up to 18% faster than the best existing equivalent routines. Integrating mixed precision techniques in the solver can additionally provide up to 40% improvement at the price of losing a few digits of accuracy, compared to the full double precision floating point arithmetic. We further leverage the single GPU QDWH-SVD implementation by introducing the first multi-GPU SVD solver to study the scalability of the QDWH-SVD framework.
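    A sketch of the three-stage QDWH-SVD structure described above, with scipy.linalg.polar used as a stand-in for the QDWH polar decomposition of stage (1).

      import numpy as np
      from scipy.linalg import polar

      def qdwh_svd(A):
          Up, H = polar(A)             # (1) polar decomposition A = Up @ H (QDWH in the paper)
          w, V = np.linalg.eigh(H)     # (2) symmetric eigendecomposition of the polar factor
          order = np.argsort(w)[::-1]  # singular values in descending order
          S, V = w[order], V[:, order]
          U = Up @ V                   # (3) left singular vectors via matrix multiplication
          return U, S, V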
