Recent Submissions

  • Complex systems engineering theory is a scientific theory

    Feron, Eric (2022-12-26) [Technical Report]
    Complex systems engineering and associated challenges become increasingly important for the well-being and safety of our society. Motivated by this push towards ever more complex systems of all sizes, spectacular failures, and decades of questioning in a variety of contexts and endeavors, this report presents a theory of complex systems engineering, that is, a scientific theory in which an engineered system can be seen as a validated scientific hypothesis arising from a convergent mix of mathematical and validated experimental constructs. In its simplest form, a complex engineered system is a manufactured, validated scientific hypothesis arising from a mathematical theorem similar to those found in theoretical physics. This observation provides suggestions for improving system design, especially system architecture, by leveraging advanced mathematical and/or scientific concepts. In return, mathematicians and computer scientists can benefit from this bridge to engineering by bringing to bear many of their automated and manual theorem-proving techniques to help with the design of complex systems. Clear classifications of what is "hard" and what is "easy" in mathematical proofs can instantaneously map onto similar appreciations for system design and its reliance on engineers’ creativity. Last, understanding system design from the mathematical-scientific viewpoint can help the system engineer think more maturely about organizing the multitude of tasks required by systems engineering. Following these conclusions, a limited set of experiments is presented to try to invalidate the proposed systems engineering theory by confronting it with existing educational programs in systems engineering in the United States of America. Concurrent with these invalidation efforts, this report argues that there is a significant lack of education in basic mathematics and/or engineering science in many systems engineering programs. Such weaknesses challenge the current and future industrial efficiency of all corporate or government institutions engaged in the pursuit of excellence in complex engineered systems.
  • Complex systems engineering theory is a scientific theory

    Feron, Eric (2022-12-05) [Technical Report]
    The proper design of complex engineering systems is what allows corporations and nations to distinguish themselves in a global competition for technical excellence and economic well-being. After quickly reviewing the central elements of systems engineering, we map all of them onto concepts of mathematics such as theorems and proofs, and onto scientific theories. This mapping allows the protagonists of complex systems engineering and design to carry existing techniques from one field over to the others; it provides a surprising number of suggestions for improving system design, especially system architecture, by leveraging advanced mathematical and/or scientific concepts in a productive way. In return, mathematicians and computer scientists can benefit from this bridge by bringing to bear many of their automated theorem provers to help with the design of complex systems. Clear classifications of what is "hard" and what is "easy" in mathematical proofs can instantaneously map onto similar appreciations for system design and its reliance on engineers’ creativity. Last, understanding system design from the mathematical-scientific viewpoint can help the system engineer think more maturely about organizing the multitude of tasks required by systems engineering.
  • Nonstandard Finite Element Methods

    Boffi, Daniele; Carstensen, Carsten; Ern, Alexandre; Hu, Jun (Oberwolfach Reports, European Mathematical Society - EMS - Publishing House GmbH, 2022-03-14) [Meeting Report]
    Finite element methodologies dominate the computational approaches for the solution of partial differential equations, and nonstandard finite element schemes most urgently require mathematical insight in their design. The hybrid workshop vividly presented and discussed innovative nonconforming and polyhedral methods, discrete complex-based finite element methods for tensor problems, fast solvers and adaptivity, as well as applications to challenging ill-posed and nonlinear problems.
  • Accelerating Geostatistical Modeling and Prediction With Mixed-Precision Computations: A High-Productivity Approach with PaRSEC

    Abdulah, Sameh; Cao, Qinglei; Pei, Yu; Bosilca, George; Dongarra, Jack; Genton, Marc G.; Keyes, David E.; Ltaief, Hatem; Sun, Ying (2021-05-06) [Technical Report]
    Geostatistical modeling, one of the prime motivating applications for exascale computing, is a technique for predicting desired quantities from geographically distributed data, based on statistical models and optimization of parameters. Spatial data is assumed to possess properties of stationarity or non-stationarity via a kernel fitted to a covariance matrix. A primary workhorse of stationary spatial statistics is Gaussian maximum log-likelihood estimation (MLE), whose central data structure is a dense, symmetric positive definite covariance matrix with dimension equal to the number of correlated observations. Two essential operations in MLE are the application of the inverse and evaluation of the determinant of the covariance matrix. These can be rendered through the Cholesky decomposition and triangular solution. In this contribution, we reduce the precision of weakly correlated locations to single or half precision based on distance. We thus exploit mathematical structure to migrate MLE to a three-precision approximation that takes advantage of contemporary architectures offering BLAS3-like operations in a single instruction that are extremely fast for reduced precision. We illustrate application-expected accuracy worthy of double precision from a majority half-precision computation, in a context where uniform single precision is by itself insufficient. In tackling the complexity and imbalance caused by the mixing of three precisions, we deploy the PaRSEC runtime system. PaRSEC delivers on-demand casting of precisions while orchestrating tasks and data movement in a multi-GPU distributed-memory environment within a tile-based Cholesky factorization. Application-expected accuracy is maintained while achieving up to 1.59X by mixing FP64/FP32 operations on 1536 nodes of HAWK or 4096 nodes of Shaheen-II, and up to 2.64X by mixing FP64/FP32/FP16 operations on 128 nodes of Summit, relative to FP64-only operations. This translates into up to 4.5, 4.7, and 9.1 (mixed) PFlop/s sustained performance, respectively, demonstrating a synergistic combination of exascale architecture, dynamic runtime software, and algorithmic adaptation applied to challenging environmental problems.
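    As a rough illustration of the distance-based precision selection described above (a sketch only, not the PaRSEC implementation; the tile size and distance thresholds below are assumptions), one can assign FP64, FP32, or FP16 storage to each tile of the covariance matrix according to the separation between the blocks of locations it couples:

      import numpy as np

      def tile_precisions(locations, tile_size, near=0.1, far=0.5):
          """Pick a storage precision for every tile of a covariance matrix,
          based on the distance between the centroids of the two location
          blocks the tile couples (thresholds are purely illustrative)."""
          n_tiles = -(-len(locations) // tile_size)  # ceiling division
          centroids = [locations[i * tile_size:(i + 1) * tile_size].mean(axis=0)
                       for i in range(n_tiles)]
          prec = np.empty((n_tiles, n_tiles), dtype=object)
          for i in range(n_tiles):
              for j in range(n_tiles):
                  d = np.linalg.norm(centroids[i] - centroids[j])
                  if i == j or d < near:
                      prec[i, j] = np.float64   # strongly correlated: keep FP64
                  elif d < far:
                      prec[i, j] = np.float32   # moderately correlated: FP32
                  else:
                      prec[i, j] = np.float16   # weakly correlated: FP16
          return prec

      # Example: 1000 random 2-D locations split into 100x100 tiles
      locs = np.random.rand(1000, 2)
      print(tile_precisions(locs, tile_size=100))
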
  • Ariadne: A common-sense thread for enabling provable safety in air mobility systems with unreliable components

    Sanni, Olatunde; Mote, Mark; Delahaye, Daniel; Gariel, Maxime; Khamvilai, Thanakorn; Feron, Eric; Saber, Safa (2021-01-04) [Technical Report]
    Commercial air travel is by far the safest transportation modality available to humanity today. It has achieved this enviable status by deploying thousands of professionals, including pilots, dispatchers, and air traffic controllers to operate very reliable air vehicles, bringing them and their passengers safely from origin to destination while managing dangerous weather, other traffic, and system failures for decades. Air transportation has been undergoing undeniable and continuous progress and modernization since its inception. Thanks to advances in navigation capabilities, such as satellite-based navigation systems, aircraft can fly increasingly complex trajectories, including final approaches. The same aircraft are envisioned to fly in formation relatively soon. More daring moves include the recent introduction of "Free Flight" operations. Despite all these impressive improvements, they remain largely incremental in nature and they hit a "wall of complexity" that makes it somewhat difficult to incorporate more automation, such as the elusive, and perhaps infeasible, goal of achieving fully automated air traffic control, and to design and insert autonomous vehicles, small and large, in cities and at high altitudes. We introduce Ariadne, a thread to accelerate the productivity gains achieved by air traffic services providers around the globe. Ariadne is an engineered version of the common-sense practice of always keeping a "Plan B", and possibly "plans C, D, E, and F", against unexpected events when any decision is made by pilots, air traffic controllers, dispatchers, and any other safety-critical actor of the air transportation system. The name "Ariadne" was chosen to honor the mythical character Ariadne, daughter of Minos the king of Crete, who conceived the "Plan B" mechanism that would allow her lover to exit Daedalus’ Labyrinth after killing the Minotaur. Ariadne and its informal definition as "Plan B engineering" offer surprising opportunities and properties, including not only provable operations safety with unproven components, but also a thread that can inherently be scaled up and quickly adapted to new air traffic scenarios, including the transition to free flight and the accommodation of unmanned aviation. It also supports existing operations and therefore does not conflict with current air traffic control practices. Modern computational capabilities and powerful AI algorithms make its implementation increasingly feasible to address more aspects of air traffic management.
  • Unified Finite Series Approximation of FSO Performance over Strong Turbulence Combined with Various Pointing Error Conditions

    Jung, Kug-Jin; Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai (IEEE Transactions on Communications, Institute of Electrical and Electronics Engineers (IEEE), 2020-07-10) [Article]
    In this paper, we investigate both the bit error rate (BER) and outage performance of free-space optical (FSO) links over strong turbulence combined with various pointing error conditions. Considering atmospheric turbulence and pointing errors as main factors that deteriorate the quality of an optical link, we obtain a unified finite series approximation of the composite probability density function, which embraces generalized pointing error models. This approximation leads to new unified formulas for the BER and outage capacity of an FSO link, which account for the two possible detection mechanisms of intensity modulation/direct detection and heterodyne detection. Selected simulation results confirm that the newly derived approximations can give precise predictions of both the average BER and the outage capacity of FSO communication that are generally applicable to all environments.
  • Compressed Communication for Distributed Deep Learning: Survey and Quantitative Evaluation

    Xu, Hang; Ho, Chen-Yu; Abdelmoniem, Ahmed M.; Dutta, Aritra; Bergou, El Houcine; Karatsenidis, Konstantinos; Canini, Marco; Kalnis, Panos (2020) [Technical Report]
    Powerful computer clusters are used nowadays to train complex deep neural networks (DNN) on large datasets. Distributed training workloads increasingly become communication bound. For this reason, many lossy compression techniques have been proposed to reduce the volume of transferred data. Unfortunately, it is difficult to argue about the behavior of compression methods, because existing work relies on inconsistent evaluation testbeds and largely ignores the performance impact of practical system configurations. In this paper, we present a comprehensive survey of the most influential compressed communication methods for DNN training, together with an intuitive classification (i.e., quantization, sparsification, hybrid and low-rank). We also propose a unified framework and API that allows for consistent and easy implementation of compressed communication on popular machine learning toolkits. We instantiate our API on TensorFlow and PyTorch, and implement 16 such methods. Finally, we present a thorough quantitative evaluation with a variety of DNNs (convolutional and recurrent), datasets and system configurations. We show that the DNN architecture affects the relative performance among methods. Interestingly, depending on the underlying communication library and computational cost of compression/decompression, we demonstrate that some methods may be impractical.
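    As a minimal sketch of one of the surveyed families (top-k sparsification; this is generic NumPy code, not the paper's framework or API, and the function names are hypothetical), a gradient tensor can be compressed to its k largest-magnitude entries before communication and expanded back afterwards:

      import numpy as np

      def topk_compress(grad, k):
          """Keep only the k largest-magnitude entries (values plus indices)."""
          flat = grad.ravel()
          idx = np.argpartition(np.abs(flat), -k)[-k:]
          return flat[idx], idx, grad.shape

      def topk_decompress(values, idx, shape):
          """Rebuild a dense tensor, zero everywhere except the kept entries."""
          flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
          flat[idx] = values
          return flat.reshape(shape)

      g = np.random.randn(4, 5)
      vals, idx, shape = topk_compress(g, k=5)
      g_hat = topk_decompress(vals, idx, shape)  # sparse approximation of g
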
  • PETSc Users Manual: Revision 3.10

    Balay, S.; Abhyankar, S.; Adams, M.; Brown, J.; Brune, P.; Buschelman, K.; Dalcin, Lisandro; Dener, A.; Eijkhout, V.; Gropp, W.; Karpeyev, D.; Kaushik, D.; Knepley, M.; May, D.; McInnes, L. Curfman; Mills, R.; Munson, T.; Rupp, K.; Sanan, P.; Smith, B.; Zampini, Stefano; Zhang, H.; Zhang, H. (Office of Scientific and Technical Information (OSTI), 2018-09-01) [Technical Report]
    This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
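    For readers new to the library, a minimal serial petsc4py sketch (assuming the petsc4py bindings are installed; the manual itself documents the full C and Fortran interfaces) assembles a small sparse system and solves it with a command-line-configurable Krylov solver:

      import sys
      import petsc4py
      petsc4py.init(sys.argv)            # make PETSc aware of command-line options
      from petsc4py import PETSc

      n = 10
      A = PETSc.Mat().createAIJ([n, n], nnz=3)   # sparse (AIJ) tridiagonal matrix
      for i in range(n):
          A[i, i] = 2.0
          if i > 0:
              A[i, i - 1] = -1.0
          if i < n - 1:
              A[i, i + 1] = -1.0
      A.assemble()

      b = A.createVecLeft()              # right-hand side
      b.set(1.0)
      x = A.createVecRight()             # solution vector

      ksp = PETSc.KSP().create()
      ksp.setOperators(A)
      ksp.setFromOptions()               # e.g. run with -ksp_type cg -pc_type jacobi
      ksp.solve(b, x)
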
  • Exploiting Data Sparsity for Large-Scale Matrix Computations

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E. (2018-02-24) [Technical Report]
    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user productivity. Performance and memory-footprint comparisons on matrix dimensions up to eleven million show gains of more than an order of magnitude in both metrics on thousands of cores, against state-of-the-art open-source and vendor-optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
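    A rough sketch of the tile low-rank compression idea (the kernel, tile size, and accuracy threshold below are illustrative; HiCMA's own compression kernels differ in detail): each data-sparse off-diagonal tile is replaced by the factors of a truncated SVD that meets a prescribed accuracy.

      import numpy as np

      def compress_tile(tile, tol):
          """Return factors U, V with tile ~= U @ V.T to relative accuracy tol."""
          U, s, Vt = np.linalg.svd(tile, full_matrices=False)
          k = max(int(np.sum(s > tol * s[0])), 1)   # numerical rank at tolerance tol
          return U[:, :k] * s[:k], Vt[:k, :].T

      # Example: a smooth 256x256 off-diagonal kernel block compresses to a low rank
      x = np.linspace(0.0, 1.0, 256)
      tile = 1.0 / (3.0 + np.abs(x[:, None] - x[None, :]))   # smooth kernel block
      U, V = compress_tile(tile, tol=1e-8)
      print(U.shape[1], np.linalg.norm(tile - U @ V.T) / np.linalg.norm(tile))
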
  • Batched Tile Low-Rank GEMM on GPUs

    Charara, Ali; Keyes, David E.; Ltaief, Hatem (2018-02) [Technical Report]
    Dense general matrix-matrix multiplication (GEMM) is a core operation of the Basic Linear Algebra Subroutines (BLAS) library, and therefore often resides at the bottom of the traditional software stack for most scientific applications. In fact, chip manufacturers pay special attention to the GEMM kernel implementation, since this is exactly where most high-performance software libraries extract the hardware performance. With the emergence of big data applications involving large data-sparse, hierarchically low-rank matrices, the off-diagonal tiles can be compressed to reduce the algorithmic complexity and the memory footprint. The resulting tile low-rank (TLR) data format is composed of small data structures, which retain the most significant information for each tile. However, to operate on low-rank tiles, a new GEMM operation and its corresponding API have to be designed on GPUs so that they can exploit the data sparsity structure of the matrix while leveraging the underlying TLR compression format. The main idea consists in aggregating all operations onto a single kernel launch to compensate for their low arithmetic intensities and to mitigate the data transfer overhead on GPUs. The new TLR GEMM kernel outperforms the cuBLAS dense batched GEMM by more than an order of magnitude and creates new opportunities for advanced TLR algorithms.
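    To make the low-rank GEMM concrete, here is a NumPy sketch of the underlying algebra only (not the batched GPU kernel or its API): with tiles stored as A = Au Av^T and B = Bu Bv^T, the product is formed through a small coupling matrix and then re-compressed, without ever building dense tiles.

      import numpy as np

      def lr_gemm(Au, Av, Bu, Bv, tol=1e-8):
          """C = A @ B with A = Au @ Av.T and B = Bu @ Bv.T, returned as
          low-rank factors (Cu, Cv) after re-compression via a small SVD."""
          core = Av.T @ Bu                       # small (ka x kb) coupling matrix
          Cu, Cv = Au @ core, Bv                 # C = Cu @ Cv.T before re-compression
          Q1, R1 = np.linalg.qr(Cu)
          Q2, R2 = np.linalg.qr(Cv)
          U, s, Vt = np.linalg.svd(R1 @ R2.T, full_matrices=False)
          k = max(int(np.sum(s > tol * s[0])), 1)
          return Q1 @ (U[:, :k] * s[:k]), Q2 @ Vt[:k, :].T

      # Tiny example with rank-3 random tiles
      m, p, n, k = 64, 64, 64, 3
      Au, Av = np.random.randn(m, k), np.random.randn(p, k)
      Bu, Bv = np.random.randn(p, k), np.random.randn(n, k)
      Cu, Cv = lr_gemm(Au, Av, Bu, Bv)
      print(np.allclose(Cu @ Cv.T, (Au @ Av.T) @ (Bu @ Bv.T)))
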
  • Ubiquitous Asynchronous Computations for Solving the Acoustic Wave Propagation Equation

    Akbudak, Kadir; Ltaief, Hatem; Etienne, Vincent; Abdelkhalak, Rached; Tonellot, Thierry; Keyes, David E. (2018) [Technical Report]
    This paper designs and implements a ubiquitous asynchronous computational scheme for solving the acoustic wave propagation equation with Absorbing Boundary Conditions (ABCs) in the context of seismic imaging applications. While the Convolutional Perfectly Matched Layer (CPML) is typically used as ABCs in the oil and gas industry, its formulation further stresses memory accesses and decreases the arithmetic intensity at the physical domain boundaries. The challenges with CPML are twofold: (1) the strong, inherent data dependencies imposed on the explicit time stepping scheme render asynchronous time integration cumbersome and (2) the idle time is further exacerbated by the load imbalance introduced among processing units. In fact, the CPML formulation of the ABCs requires expensive synchronization points, which may hinder the parallel performance of the overall asynchronous time integration. In particular, when deployed in conjunction with the Multicore-optimized Wavefront Diamond (MWD) tiling approach for the inner domain points, it results in a major performance slowdown. To relax CPML’s synchrony and mitigate the resulting load imbalance, we embed CPML’s calculation into MWD’s inner loop and carry out the time integration with fine-grained computations in an asynchronous, holistic way. This comes at the price of storing transient results to alleviate dependencies from critical data hazards, while maintaining the numerical accuracy of the original scheme. Performance results on various x86 architectures demonstrate the superiority of MWD with CPML against the standard spatial blocking. To our knowledge, this is the first practical study which highlights the consolidation of CPML ABCs with asynchronous temporal blocking stencil computations.
  • Performance Impact of Rank-Reordering on Advanced Polar Decomposition Algorithms

    Esposito, Aniello; Keyes, David E.; Ltaief, Hatem; Sukkari, Dalal (2018) [Technical Report]
    We demonstrate the importance of both MPI rank reordering and the choice of processor grid topology in the context of advanced dense linear algebra (DLA) applications for distributed-memory systems. In particular, we focus on the advanced polar decomposition (PD) algorithm, based on the QR-based Dynamically Weighted Halley method (QDWH). The QDWH algorithm may be used as the first computational step toward solving symmetric eigenvalue problems and the singular value decomposition. Sukkari et al. (ACM TOMS, 2017) have shown that QDWH may benefit from rectangular instead of square processor grid topologies, which directly impact the performance of the underlying ScaLAPACK algorithms. In this work, we experiment with an extensive combination of grid topologies and rank reorderings for different matrix sizes and numbers of nodes, and use QDWH as a proxy for advanced compute-bound linear algebra operations, since it is rich in dense linear solvers and factorizations. A performance improvement of up to 54% can be observed for QDWH on 800 nodes of a Cray XC system, thanks to an optimal combination, especially in the strong scaling mode of operation, for which communication overheads may become dominant. We perform a thorough application profiling to analyze the impact of reordering and grid topologies on the various linear algebra components of the QDWH algorithm. It turns out that point-to-point communications may be considerably reduced thanks to a judicious choice of grid topology and a properly set rank reordering using features from the cray-mpich library.
  • Borehole Tool for the Comprehensive Characterization of Hydrate-bearing Sediments

    Dai, Sheng; Santamarina, Carlos (Office of Scientific and Technical Information (OSTI), 2017-12-30) [Technical Report]
    Reservoir characterization and simulation require reliable parameters to anticipate hydrate deposit responses and production rates. The acquisition of the required fundamental properties currently relies on wireline logging, pressure core testing, and/or laboratory observations of synthesized specimens, which are challenged by testing capabilities and innate sampling disturbances. The project reviews hydrate-bearing sediment properties and inherent sampling effects, albeit lessened by developments in pressure core technology, in order to develop robust correlations with index parameters. The resulting information is incorporated into a tool for optimal field characterization and parameter selection with uncertainty analyses. Ultimately, the project develops a borehole tool for the comprehensive characterization of hydrate-bearing sediments in situ, with a design that recognizes past developments and characterization experience and benefits from the inspiration of nature and sensor miniaturization.
  • HLIBCov: Parallel Hierarchical Matrix Approximation of Large Covariance Matrices and Likelihoods with Applications in Parameter Identification

    Litvinenko, Alexander (2017-09-26) [Technical Report]
    The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community. We describe the HLIBCov package, which is an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M\times 2M$ can be computed on a modern multi-core desktop in a few minutes. Further, HLIBCov is used for estimating unknown parameters such as the covariance length, variance, and smoothness parameter of a Mat\'ern covariance function by maximizing the joint Gaussian log-likelihood function. The computational bottleneck here is expensive linear algebra arithmetic due to large and dense covariance matrices. Therefore covariance matrices are approximated in the hierarchical ($\H$-) matrix format with computational cost $\mathcal{O}(k^2n \log^2 n/p)$ and storage $\mathcal{O}(kn \log n)$, where the rank $k$ is a small integer (typically $k<25$), $p$ the number of cores and $n$ the number of locations on a fairly general mesh. We demonstrate a synthetic example, where the true values of the parameters are known. For reproducibility we provide the C++ code, the documentation, and the synthetic data.
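    For reference, one common parametrization of the Mat\'ern covariance and of the joint Gaussian log-likelihood being maximized (the notation below is an assumption; see the report for the exact parametrization used) reads

      $$C_{\theta}(r) = \sigma^2 \frac{2^{1-\nu}}{\Gamma(\nu)} \left(\frac{r}{\ell}\right)^{\nu} K_{\nu}\!\left(\frac{r}{\ell}\right), \qquad \theta = (\sigma^2, \ell, \nu),$$
      $$\mathcal{L}(\theta) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log\det \mathbf{C}(\theta) - \frac{1}{2}\,\mathbf{z}^{\top}\mathbf{C}(\theta)^{-1}\mathbf{z},$$

    where $r$ is the distance between two locations, $K_{\nu}$ is the modified Bessel function of the second kind, $\mathbf{C}(\theta)$ is the $n\times n$ covariance matrix, and $\mathbf{z}$ is the vector of observations; the approximate $\H$-Cholesky factorization supplies both $\log\det \mathbf{C}(\theta)$ and the quadratic form.
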
  • Low-SNR Capacity of MIMO Optical Intensity Channels

    Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim (2017-09-18) [Technical Report]
    The capacity of the multiple-input multiple-output (MIMO) optical intensity channel is studied, under both average and peak intensity constraints. We focus on low SNR, which can be modeled as the scenario where both constraints proportionally vanish, or where the peak constraint is held constant while the average constraint vanishes. A capacity upper bound is derived, and is shown to be tight at low SNR under both scenarios. The capacity achieving input distribution at low SNR is shown to be a maximally-correlated vector-binary input distribution. Consequently, the low-SNR capacity of the channel is characterized. As a byproduct, it is shown that for a channel with peak intensity constraints only, or with peak intensity constraints and individual (per aperture) average intensity constraints, a simple scheme composed of coded on-off keying, spatial repetition, and maximum-ratio combining is optimal at low SNR.
  • PETSc Users Manual Revision 3.8

    Balay, S.; Abhyankar, S.; Adams, M.; Brown, J.; Brune, P.; Buschelman, K.; Dalcin, Lisandro; Eijkhout, V.; Gropp, W.; Kaushik, D.; Knepley, M.; May, D.; McInnes, L. Curfman; Munson, T.; Rupp, K.; Sanan, P.; Smith, B.; Zampini, Stefano; Zhang, H.; Zhang, H. (Office of Scientific and Technical Information (OSTI), 2017-09-01) [Technical Report]
    This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication.
  • Partial inversion of elliptic operator to speed up computation of likelihood in Bayesian inference

    Litvinenko, Alexander (2017-08-09) [Technical Report]
    In this paper, we speed up the solution of inverse problems in Bayesian settings. When computing the likelihood, the most expensive part of the Bayesian formula, one compares the available measurement data with the simulated data. To get simulated data, repeated solution of the forward problem is required, which can be a great challenge. Often, the available measurement is a functional $F(u)$ of the solution $u$ or a small part of $u$. Typical examples of $F(u)$ are the solution at a point, the solution on a coarser grid or in a small subdomain, and the mean value over a subdomain. It is a waste of computational resources to evaluate, first, the whole solution and then compute a part of it. In this work, we compute the functional $F(u)$ directly, without computing the full inverse operator and without computing the whole solution $u$. The main ingredients of the developed approach are the hierarchical domain decomposition technique, the finite element method, and Schur complements. To speed up computations and to reduce the storage cost, we approximate the forward operator and the Schur complement in the hierarchical matrix format. Applying the hierarchical matrix technique, we reduce the computing cost to $\mathcal{O}(k^2n \log^2 n)$, where $k\ll n$ and $n$ is the number of degrees of freedom. Up to the $\H$-matrix accuracy, the computation of the functional $F(u)$ is exact. To reduce the computational resources further, we can approximate $F(u)$ on, for instance, multiple coarse meshes. The proposed method is well suited for solving multiscale problems. A disadvantage of this method is the assumption that one has access to the discretisation and to the procedure of assembling the Galerkin matrix.
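    The Schur complement reduction at the heart of this approach has the standard form (the block labels below are assumptions for illustration): partitioning the Galerkin system into interior unknowns $u_1$ and the unknowns $u_2$ that enter $F(u)$ gives

      $$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \qquad S := A_{22} - A_{21}A_{11}^{-1}A_{12}, \qquad S\,u_2 = f_2 - A_{21}A_{11}^{-1}f_1,$$

    so $F(u)$ can be evaluated from the much smaller Schur-complement system, with $S$ and the required sub-blocks approximated in the $\H$-matrix format.
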
  • Efficient Simulation of the Outage Probability of Multihop Systems

    Ben Issaid, Chaouki; Alouini, Mohamed-Slim; Tempone, Raul (2017-08) [Technical Report]
    In this paper, we present an efficient importance sampling estimator for evaluating the outage probability of multihop systems with channel-state-information-assisted amplify-and-forward relaying. The proposed estimator is endowed with the bounded relative error property. Simulation results show a significant reduction in the number of simulation runs compared to naive Monte Carlo.
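    As a generic illustration of the estimator class (a Gaussian toy problem rather than the paper's amplify-and-forward outage model; the shifted proposal density is an assumption), importance sampling moves the sampling distribution toward the rare event and reweights each sample by the likelihood ratio:

      import numpy as np
      from scipy.stats import norm

      def naive_mc(threshold, n):
          """Naive Monte Carlo estimate of the tail P(X < threshold), X ~ N(0, 1)."""
          x = np.random.randn(n)
          return np.mean(x < threshold)

      def importance_sampling(threshold, n):
          """Sample from the shifted proposal N(threshold, 1) and reweight."""
          y = threshold + np.random.randn(n)
          w = norm.pdf(y) / norm.pdf(y, loc=threshold)   # likelihood ratio
          return np.mean(w * (y < threshold))

      gamma = -5.0                               # rare event: true value ~ 2.9e-7
      print(naive_mc(gamma, 10**5))              # almost always 0.0 at this sample size
      print(importance_sampling(gamma, 10**5))   # close to the true value
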
  • Application of Bayesian Networks for Estimation of Individual Psychological Characteristics

    Litvinenko, Alexander; Litvinenko, Natalya (2017-07-19) [Technical Report]
    In this paper we apply Bayesian networks to develop more accurate final overall estimations of the psychological characteristics of an individual, based on psychological test results. Psychological tests that identify how much an individual possesses a certain factor are very popular and quite common in the modern world. We call this value, for a given factor, the final overall estimation. Examples of factors could be stress resistance, the readiness to take a risk, the ability to concentrate on certain complicated work, and many others. An accurate, qualitative, and comprehensive assessment of human potential is one of the most important challenges in any company or collective. The most common way of studying the psychological characteristics of a single person is testing. Psychologists and sociologists are constantly working on improving the quality of their tests. Despite the serious work done by psychologists, the questions in tests often do not produce enough feedback due to the use of relatively poor estimation systems. The overall estimation is usually based on personal experiences and the subjective perception of a psychologist or a group of psychologists about the investigated psychological personality factors.
  • On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    Sapenov, Yerzhan; Chaaban, Anas; Rezki, Zouheir; Alouini, Mohamed-Slim (2017-07-06) [Technical Report]
    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct-current-offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst-case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends them to arbitrary turbulence statistics. This shows the redundancy of the time dimension of the DC-STBC in this system. The result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughputs.
