Applied Mathematics and Computational Science Program
For more information visit: https://cemse.kaust.edu.sa/amcs
Recent Submissions

Waveguiding via Transformation Optics (IEEE, 2022-08-09) [Conference Paper] We demonstrate that it is possible to surpass current limitations of nanophotonics and plasmonics by designing an artificial material that emulates a user-defined spatial refractive index distribution. The effective optical property of the material is engineered through the deformation of a reflective substrate via a transformation optics approach. We present one possible application of this technique: a subwavelength optical waveguide coupler device.

Multilevel Importance Sampling for McKean-Vlasov Stochastic Differential Equation (arXiv, 2022-08-05) [Preprint] This work combines multilevel Monte Carlo methods with importance sampling (IS) to estimate rare-event quantities that can be expressed as the expectation of a Lipschitz observable of the solution to the McKean-Vlasov stochastic differential equation. We first extend the double loop Monte Carlo (DLMC) estimator, introduced in this context in our previous work [Ben Rached et al. 2022], to the multilevel setting. We formulate a novel multilevel DLMC (MLDLMC) estimator and perform a comprehensive work-error analysis yielding new and improved complexity results. Crucially, we also devise an antithetic sampler to estimate level differences that guarantees reduced work complexity for the MLDLMC estimator compared with the single-level DLMC estimator. To tackle rare events, we apply the same single-level IS scheme, obtained via stochastic optimal control in [Ben Rached et al. 2022], over all levels of the MLDLMC estimator. Combining IS and MLDLMC not only reduces computational complexity by one order but also drastically reduces the associated constant, ensuring feasible estimates for rare-event quantities. We illustrate the effectiveness of the proposed MLDLMC estimator on the Kuramoto model from statistical physics with Lipschitz observables, confirming reduced complexity from O(TOL_r^{-4}) for the single-level DLMC estimator to O(TOL_r^{-3}), while providing feasible estimation for rare-event quantities up to the prescribed relative error tolerance TOL_r.
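The multilevel idea underlying this estimator can be illustrated on a much simpler problem. The sketch below is not the authors' MLDLMC estimator: it estimates E[X_T] for a plain geometric Brownian motion rather than a McKean-Vlasov equation, couples coarse and fine levels by pairwise-summing the fine Brownian increments, and uses an ad hoc sample allocation; all names and parameters are illustrative assumptions.

```python
import numpy as np

def euler_gbm(x0, mu, sigma, T, dW):
    # Euler-Maruyama for dX = mu*X dt + sigma*X dW, vectorized over paths;
    # dW has shape (n_paths, n_steps)
    n_steps = dW.shape[1]
    dt = T / n_steps
    x = np.full(dW.shape[0], x0, dtype=float)
    for k in range(n_steps):
        x = x + mu * x * dt + sigma * x * dW[:, k]
    return x

def mlmc_estimate(x0, mu, sigma, T, L, m0, rng):
    # telescoping-sum MLMC estimate of E[X_T]: level l uses 2**l time steps,
    # and the coarse path on each level is driven by pairwise sums of the
    # fine increments so the level difference has small variance
    est = 0.0
    for level in range(L + 1):
        n_fine = 2 ** level
        m = m0 * 4 ** (L - level)  # ad hoc allocation: more samples when cheap
        dW = rng.normal(0.0, np.sqrt(T / n_fine), size=(m, n_fine))
        fine = euler_gbm(x0, mu, sigma, T, dW)
        if level == 0:
            est += fine.mean()
        else:
            coarse = euler_gbm(x0, mu, sigma, T, dW[:, 0::2] + dW[:, 1::2])
            est += (fine - coarse).mean()
    return est
```

The telescoping sum reproduces the fine-level expectation while spending most samples on the cheap coarse levels, which is the source of the complexity gain that the antithetic sampler in the paper further improves.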

A Fractional Image Inpainting Model Using a Variant of the Mumford-Shah Model (arXiv, 2022-08-05) [Preprint] In this paper, we propose a fourth-order PDE model for image inpainting based on a variant of the well-known Mumford-Shah (MS) image segmentation model. Convexity splitting is used to discretise in time, and we replace the Laplacian by its fractional counterpart in the time-discretised scheme. A Fourier spectral method is used for space discretisation. Consistency, stability, and convergence analyses of the time-discretised model are carried out. The model is tested on standard test images, and the results are compared with those of existing models from the literature.
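The fractional Laplacian is straightforward to apply in a Fourier spectral setting because it acts diagonally on Fourier modes with symbol |ξ|^(2s). The following minimal sketch assumes a one-dimensional periodic domain and a plain implicit Euler step for the fractional heat equation, not the paper's fourth-order convexity-splitting scheme; it only shows the spectral mechanics.

```python
import numpy as np

def fractional_heat_step(u, s, dt, length):
    # one implicit Euler step of u_t = -(-Laplacian)^s u on a 1D periodic
    # grid; the fractional Laplacian acts diagonally in Fourier space with
    # symbol |xi|^(2s)
    n = u.size
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    symbol = np.abs(xi) ** (2.0 * s)
    u_hat = np.fft.fft(u) / (1.0 + dt * symbol)
    return np.real(np.fft.ifft(u_hat))
```

For a single sine mode the symbol is a scalar, so the step reduces to a simple damping factor, which makes the scheme easy to verify.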

High-Performance Spatial Data Compression for Scientific Applications (Springer International Publishing, 2022-08-01) [Book Chapter] We implement an efficient data compression algorithm that reduces the memory footprint of spatial datasets generated during scientific simulations. Regularly storing these datasets is typically needed for checkpoint/restart or for post-processing purposes. Our lossy compression approach, code-named HLRcompress (https://gitlab.mis.mpg.de/rok/HLRcompress), combines a hierarchical low-rank approximation technique with binary compression. This novel hybrid method is agnostic to the particular domain of application. We study the impact of HLRcompress on accuracy using synthetic datasets to demonstrate the software capabilities, including robustness and versatility. We assess different algebraic compression methods and report performance results on various parallel architectures. We then integrate it into the workflow of a direct numerical simulation solver for turbulent combustion on distributed-memory systems. We compress the generated snapshots during time integration using accuracy thresholds for each individual chemical species, without degrading the practical accuracy of the overall pressure and temperature. We eventually compare against state-of-the-art compression software. Our implementation achieves on average greater than 100-fold compression of the original size of the datasets.
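The algebraic ingredient of such compressors can be illustrated with a flat, per-block truncated SVD. HLRcompress itself uses a hierarchical low-rank scheme combined with binary compression, so the sketch below shows only the basic accuracy-threshold idea, with illustrative function names.

```python
import numpy as np

def compress_block(block, tol):
    # truncated SVD keeping singular values above tol relative to the largest;
    # a smooth data block typically has rapidly decaying singular values
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    rank = max(1, int(np.sum(s > tol * s[0])))
    return u[:, :rank] * s[:rank], vt[:rank]  # store two thin factors

def decompress_block(us, vt):
    # reconstruct the (lossy) block from its low-rank factors
    return us @ vt
```

Storing the two thin factors of an m-by-n block at rank r requires r(m+n) entries instead of mn, which is where the memory savings come from for smooth fields.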

Maximum Principle Preserving Space and Time Flux Limiting for Diagonally Implicit Runge–Kutta Discretizations of Scalar Convection-Diffusion Equations (Journal of Scientific Computing, Springer Science and Business Media LLC, 2022-08-01) [Article] We provide a framework for high-order discretizations of nonlinear scalar convection-diffusion equations that satisfy a discrete maximum principle. The resulting schemes can have arbitrarily high order accuracy in time and space, and can be stable and maximum-principle-preserving (MPP) with no step size restriction. The schemes are based on a two-tiered limiting strategy, starting with a high-order limiter-based method that may have small oscillations or maximum-principle violations, followed by an additional limiting step that removes these violations while preserving high order accuracy. The desirable properties of the resulting schemes are demonstrated through several numerical examples.
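The second limiting tier can be illustrated by the simplest possible bound-enforcement step: blending a high-order candidate toward a bound-preserving low-order value with a per-cell factor. This sketch ignores conservation and the accuracy-preserving machinery of the paper's limiter; it only shows how a blending factor removes maximum-principle violations, and all names are illustrative.

```python
import numpy as np

def mpp_blend(u_low, u_high, u_min, u_max):
    # per-cell blending u = u_low + theta*(u_high - u_low), with theta in
    # [0, 1] chosen so the result respects the prescribed bounds; u_low is
    # assumed to be bound-preserving already
    diff = u_high - u_low
    theta = np.ones_like(u_low)
    up = diff > 0
    dn = diff < 0
    theta[up] = np.minimum(1.0, (u_max - u_low[up]) / diff[up])
    theta[dn] = np.minimum(1.0, (u_min - u_low[dn]) / diff[dn])
    return u_low + theta * diff
```

Where the high-order value already lies within the bounds, theta stays at 1 and the high-order accuracy is untouched; limiting only activates at violating cells.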

Approximating Hessian matrices using Bayesian inference: a new approach for quasi-Newton methods in stochastic optimization (arXiv, 2022-07-31) [Preprint] Using quasi-Newton methods in stochastic optimization is not a trivial task. In deterministic optimization, these methods are often a common choice due to their excellent performance regardless of the problem's condition number. However, standard quasi-Newton methods fail to extract curvature information from noisy gradients in stochastic optimization. Moreover, preconditioning noisy gradient observations tends to amplify the noise by a factor given by the largest eigenvalue of the preconditioning matrix. We propose a Bayesian approach to obtain a Hessian matrix approximation for stochastic optimization that minimizes the residual of the secant equations while keeping the smallest eigenvalue above a specified limit. Thus, the proposed approach helps stochastic gradient descent converge to local minima without amplifying gradient noise. The prior distribution is modeled as the exponential of the negative squared distance to the previous Hessian approximation, in the Frobenius sense, with logarithmic barriers imposing constraints on the extreme eigenvalues. The likelihood distribution is derived from the secant equations, i.e., the probability of obtaining the observed gradient evaluations for a given candidate Hessian approximation. We propose maximizing the log-posterior using the Newton-CG method. Numerical results on a stochastic quadratic function and an ℓ2-regularized logistic regression problem are presented. In all the cases tested, our approach improves the convergence of stochastic gradient descent, compensating for the overhead of solving the log-posterior maximization. In particular, preconditioning the stochastic gradient with the inverse of our Hessian approximation becomes more advantageous the larger the condition number of the problem is.
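The structure of such an objective can be sketched directly: a Frobenius-distance prior, a secant-equation misfit term, and logarithmic barriers on the eigenvalues. The weights, the symmetrization step, and the barrier placement below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def neg_log_posterior(B, B_prev, S, Y, lam_min, w_prior=1.0, w_barrier=1e-3):
    # prior: squared Frobenius distance to the previous Hessian approximation
    prior = w_prior * np.sum((B - B_prev) ** 2)
    # likelihood: residuals of the secant equations B @ s_i ~ y_i, where the
    # columns of S are steps and the columns of Y are gradient differences
    residue = np.sum((B @ S - Y) ** 2)
    # log-barrier keeping all eigenvalues of the symmetrized B above lam_min
    eigs = np.linalg.eigvalsh(0.5 * (B + B.T))
    if np.any(eigs <= lam_min):
        return np.inf  # outside the feasible region
    barrier = -w_barrier * np.sum(np.log(eigs - lam_min))
    return prior + residue + barrier
```

Minimizing this quantity (equivalently, maximizing the log-posterior) trades off staying close to the previous approximation, fitting the observed secant pairs, and keeping the spectrum safely above the prescribed floor.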

Designing Asymptotic Geodesic Hybrid Gridshells (Computer-Aided Design, Elsevier BV, 2022-07-30) [Article] Fabrication and assembly of freeform shells can be simplified significantly by controlling the curvature of structural elements during the design phase. This approach has produced fundamental insights into bending-active construction, using the elastic properties of elements to form efficient load-bearing structures. This paper focuses on gridshells built from straight and flat slats. The slats are combined in two orientations, tangential and normal to the design surface, to create robust and versatile triangulated grids. For this purpose, we generate hybrid webs of asymptotic and geodesic paths on freeform surfaces. The research combines geometric computing with architectural building practice. We present a computational workflow for the design and interactive modification of hybrid asymptotic geodesic webs. At its core are discrete models that are based on concepts of differential geometry and allow us to compute constrained structures within an optimization framework. The resulting webs are tested for architectural applications. We derive a strategy for the elastic erection process, in which geodesic lamellas are used as a guide and bracing of the spatial structure. Two architectural scenarios, a timber roof and a steel facade, are presented. Their feasibility for construction is verified through prototypical joints and physical models. The research introduces a new class of networks and related surfaces and offers insights into the practical challenges of freeform construction from elastic slats.

On a Dynamic Variant of the Iteratively Regularized Gauss-Newton Method with Sequential Data (arXiv, 2022-07-27) [Preprint] For numerous parameter and state estimation problems, assimilating new data as they become available can help produce accurate and fast inference of unknown quantities. While most existing algorithms for solving these kinds of ill-posed inverse problems can only be used with a single instance of the observed data, in this work we propose a new framework that enables existing algorithms to invert multiple instances of data in a sequential fashion. Specifically, we work with the well-known iteratively regularized Gauss-Newton method (IRGNM), a variational methodology for solving nonlinear inverse problems. We develop a convergence analysis for the proposed dynamic IRGNM algorithm in the presence of Gaussian white noise. We combine this algorithm with the classical IRGNM to deliver a practical (hybrid) algorithm that can invert data sequentially while producing fast estimates. Our work includes a proof of well-definedness of the proposed iterative scheme, as well as various error bounds that rely on standard assumptions for nonlinear inverse problems. We use several numerical experiments to verify our theoretical findings and to highlight the benefits of incorporating sequential data. The numerical experiments cover various parameter identification problems, including a Darcy flow elliptic PDE example and electrical impedance tomography.
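A single-instance IRGNM iteration, which the dynamic variant builds on, can be sketched for a generic finite-dimensional forward map F with Jacobian J: at each step one solves a Tikhonov-regularized linearization with the regularization anchored at the initial guess. The geometric regularization schedule and the toy problem in the test are illustrative assumptions.

```python
import numpy as np

def irgnm(F, J, y, x0, alphas):
    # iteratively regularized Gauss-Newton: at step k, solve
    #   (J^T J + a_k I) h = J^T (y - F(x)) + a_k (x0 - x)
    # and update x <- x + h, with a_k a decreasing regularization sequence
    x = x0.copy()
    for a in alphas:
        Jk = J(x)
        r = y - F(x)
        A = Jk.T @ Jk + a * np.eye(x.size)
        b = Jk.T @ r + a * (x0 - x)
        x = x + np.linalg.solve(A, b)
    return x
```

On noise-free data with a geometrically decaying regularization sequence, the iterates converge to the exact solution of the nonlinear least-squares problem.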

DES-Amyloidoses “Amyloidoses through the looking-glass”: A knowledgebase developed for exploring and linking information related to human amyloid-related diseases (PLOS ONE, Public Library of Science (PLoS), 2022-07-25) [Article] More than 30 types of amyloids are linked to close to 50 diseases in humans, the most prominent being Alzheimer’s disease (AD). AD is a brain-related local amyloidosis, while other amyloidoses, such as AA amyloidosis, tend to be more systemic. Therefore, we need to know more about the biological entities influencing these amyloidosis processes. However, there is currently no support system developed specifically to handle this extraordinarily complex and demanding task. To acquire a systematic view of amyloidosis and how it may be relevant to the brain and other organs, we needed a means to explore "amyloid network systems" that may underlie the processes leading to an amyloid-related disease. In this regard, we developed the DES-Amyloidoses knowledgebase (KB) to provide fast and relevant information regarding the biological networks related to amyloid proteins/peptides and amyloid-related diseases. This KB contains information obtained through text and data mining of the available scientific literature and other public repositories. The information compiled into the DES-Amyloidoses system, based on 19 topic-specific dictionaries, resulted in 796,409 associations between terms from these dictionaries. Users can explore this information through various options, including enriched concepts, enriched pairs, and semantic similarity. We show the usefulness of the KB using an example focused on inflammasome-amyloid associations. To our knowledge, this is the only KB dedicated to human amyloid-related diseases derived primarily through literature text mining and complemented by data mining, providing a novel way of exploring information relevant to amyloidoses.

Inverse-Designed Metaphotonics for Hypersensitive Detection (ACS Nanoscience Au, American Chemical Society (ACS), 2022-07-25) [Article] Controlling the flow of broadband electromagnetic energy at the nanoscale remains a critical challenge in optoelectronics. Surface plasmon polaritons (or plasmons) provide subwavelength localization of light but are affected by significant losses. Dielectrics, on the contrary, lack a sufficiently robust response in the visible range to trap photons as metallic structures do. Overcoming these limitations appears elusive. Here we demonstrate that addressing this problem is possible with a novel approach based on suitably deformed reflective metaphotonic structures. The complex geometrical shape engineered in these reflectors emulates non-dispersive index responses, which can be inverse-designed following arbitrary form factors. We discuss the realization of essential components such as resonators with an ultra-high refractive index of n = 100 in diverse profiles. These structures support the localization of light in the form of bound states in the continuum (BIC), fully localized in air, in a platform in which all refractive index regions are physically accessible. We discuss the application of our approach to sensing, designing a class of sensors where the analyte directly contacts areas of ultra-high refractive index. Leveraging this feature, we report an optical sensor with a sensitivity two times higher than that of the closest competitor with a similar micrometer footprint. Inversely designed reflective metaphotonics offers a flexible technology for controlling broadband light, supporting optoelectronic integration with large bandwidths in circuitry with miniaturized footprints.

Double Loop Monte Carlo Estimator with Importance Sampling for McKean-Vlasov Stochastic Differential Equation (arXiv, 2022-07-20) [Preprint] This paper investigates Monte Carlo methods to estimate probabilities of rare events associated with solutions to the d-dimensional McKean-Vlasov stochastic differential equation. The equation is usually approximated using a stochastic interacting P-particle system, a set of P coupled d-dimensional stochastic differential equations (SDEs). Importance sampling (IS) is a common technique to reduce the high relative variance of Monte Carlo estimators of rare-event probabilities. In the SDE context, an optimal measure change is derived using stochastic optimal control theory to minimize estimator variance, which, when applied to stochastic particle systems, yields a (P×d)-dimensional partial differential control equation that is cumbersome to solve. The work in [15] circumvented this problem by a decoupling approach, producing a d-dimensional control PDE. Based on the decoupling approach, we develop a computationally efficient double loop Monte Carlo (DLMC) estimator. We offer a systematic approach to our DLMC estimator by providing a comprehensive error and work analysis and formulating optimal computational complexity. Subsequently, we propose an adaptive DLMC method combined with IS to estimate rare-event probabilities, significantly reducing the relative variance and computational runtimes required to achieve a given relative tolerance compared with standard Monte Carlo estimators without IS. The proposed estimator has O(TOL^{-4}) computational complexity with a significantly reduced constant. Numerical experiments, performed on the Kuramoto model from statistical physics, show substantial computational gains achieved by our estimator.
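The inner sampler of such estimators is an Euler-Maruyama discretization of the interacting particle system. The sketch below simulates a plain Kuramoto-type P-particle system without the importance-sampling change of measure or the decoupling step; the drift form, parameters, and function name are illustrative assumptions.

```python
import numpy as np

def kuramoto_particles(P, T, n_steps, sigma, rng, theta0=None):
    # Euler-Maruyama for the interacting P-particle approximation of a
    # Kuramoto-type McKean-Vlasov SDE:
    #   d theta_i = (1/P) sum_j sin(theta_j - theta_i) dt + sigma dW_i
    dt = T / n_steps
    theta = rng.uniform(0.0, 2 * np.pi, P) if theta0 is None else theta0.copy()
    for _ in range(n_steps):
        # mean-field drift: each particle is pulled toward the others
        drift = np.mean(np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=P)
    return theta
```

With the noise switched off, the sine coupling contracts the phases toward consensus, which is the synchronization behavior the Kuramoto model is known for.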

Small-noise approximation for Bayesian optimal experimental design with nuisance uncertainty (Computer Methods in Applied Mechanics and Engineering, Elsevier BV, 2022-07-15) [Article] Calculating the expected information gain in optimal Bayesian experimental design typically relies on nested Monte Carlo sampling. When the model also contains nuisance parameters, which contribute to the overall uncertainty of the system but are of no interest in the Bayesian design framework, a second inner loop is introduced. We propose and derive a small-noise approximation for this additional inner loop. The computational cost of our method can be further reduced by applying a Laplace approximation to the remaining inner loop. Thus, we present two methods: the small-noise double-loop Monte Carlo method and the small-noise Monte Carlo Laplace method. Moreover, we demonstrate that the total complexity of these two approaches remains comparable to the case without nuisance uncertainty. To assess the efficiency of these methods, we present three examples, the last of which includes the partial differential equation model for an electrical impedance tomography experiment on composite laminate materials.
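The baseline that such approximations accelerate is the nested (double-loop) Monte Carlo estimator of expected information gain. A minimal sketch without nuisance parameters, using a log-sum-exp for the inner evidence estimate, might look as follows; the model interface and all function names are assumptions.

```python
import numpy as np

def nested_mc_eig(simulate, loglik, prior_sample, n_outer, n_inner, rng):
    # double-loop Monte Carlo estimator of expected information gain:
    #   EIG ~ (1/N) sum_n [ log p(y_n|th_n) - log (1/M) sum_m p(y_n|th'_m) ]
    # where th_n, th'_m are prior draws and y_n ~ p(y|th_n)
    total = 0.0
    for _ in range(n_outer):
        th = prior_sample(rng)
        y = simulate(th, rng)
        inner = np.array([loglik(y, prior_sample(rng))
                          for _ in range(n_inner)])
        # log-sum-exp for a numerically stable evidence estimate
        m = inner.max()
        log_evidence = m + np.log(np.mean(np.exp(inner - m)))
        total += loglik(y, th) - log_evidence
    return total / n_outer
```

The inner loop makes the cost multiplicative in the two sample sizes, which is exactly the expense that a second inner loop over nuisance parameters would compound and that the small-noise approximation removes.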

On the matching of eigensolutions to parametric partial differential equations (arXiv, 2022-07-14) [Preprint] In this paper, a novel numerical approximation of parametric eigenvalue problems is presented. We motivate our study with the analysis of a POD reduced order model for a simple one-dimensional example. In particular, we introduce a new algorithm capable of tracking the matching of eigenvalues as the parameters vary.
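One simple way to track eigenvalue branches across parameter values is to pair eigenvectors of consecutive parameter samples by maximal overlap; sorted eigenvalues alone swap branches at crossings. The greedy matching below is only a sketch (a full assignment solver would be more robust near degeneracies) and is not the algorithm proposed in the paper.

```python
import numpy as np

def match_eigenpairs(A_list):
    # track eigenvalue branches of a family of symmetric matrices by
    # matching eigenvectors of consecutive parameter values via maximal
    # overlap |<v_i, v_j'>|; np.linalg.eigh alone returns sorted eigenvalues,
    # which mixes branches whenever two of them cross
    w, V = np.linalg.eigh(A_list[0])
    branches = [w]
    for A in A_list[1:]:
        w_new, V_new = np.linalg.eigh(A)
        overlap = np.abs(V.T @ V_new)
        order = np.argmax(overlap, axis=1)  # greedy match per old branch
        branches.append(w_new[order])
        V = V_new[:, order]
    return np.array(branches)
```

On a family with a crossing, such as diag(t, 1-t), the matched branches stay smooth through the crossing instead of kinking at it.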

Fast, Robust, Iterative Riemann Solvers for the Shallow Water and Euler Equations (2022-07-12) [Thesis]
Advisor: Ketcheson, David I.
Committee members: Tzavaras, Athanasios; Truscott, T. T. Riemann problems are of prime importance in computational fluid dynamics simulations using finite element or finite volume discretizations. In some applications, billions of Riemann problems might need to be solved in a single simulation, so it is important to have reliable and computationally efficient algorithms for doing so. Given the nonlinearity of the flux function in most systems considered in practice, obtaining an explicit exact solution of the Riemann problem is often not possible, and iterative solvers are required. However, because of issues found with existing iterative solvers, such as lack of convergence and high computational cost, their use is often avoided and approximate solvers are preferred. In this thesis, motivated by recent advances in computer hardware and algorithms, we revisit the possibility of using iterative solvers to compute the exact solution of Riemann problems. In particular, we focus on the development, implementation, and performance comparison of iterative Riemann solvers for the shallow water and Euler equations. In a one-dimensional homogeneous framework for these systems, we consider several initial guesses and iterative methods for the computation of the Riemann solution. We find that efficient and reliable iterative solvers can be obtained by using recent estimates on the Riemann solution to modify and combine well-known methods. Finally, we consider the application of these solvers in finite volume simulations using the wave propagation algorithms implemented in Clawpack.
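For the shallow water equations, the exact Riemann solution reduces to a scalar root-finding problem for the middle-state depth. The sketch below uses a plain Newton iteration with a finite-difference derivative and a naive initial guess, whereas the thesis studies carefully chosen initial guesses and iteration strategies; names and tolerances here are illustrative.

```python
import numpy as np

def wave_function(h, hk, g=9.81):
    # depth function for one wave family: rarefaction branch if h <= hk,
    # shock (Rankine-Hugoniot) branch otherwise
    if h <= hk:
        return 2.0 * (np.sqrt(g * h) - np.sqrt(g * hk))
    return (h - hk) * np.sqrt(0.5 * g * (h + hk) / (h * hk))

def star_depth(hl, ul, hr, ur, g=9.81, tol=1e-12, max_iter=50):
    # Newton iteration (finite-difference derivative) for the middle-state
    # depth h* of the shallow water Riemann problem: find the root of
    #   phi(h) = f(h, hl) + f(h, hr) + ur - ul
    phi = lambda h: wave_function(h, hl, g) + wave_function(h, hr, g) + ur - ul
    h = 0.5 * (hl + hr)  # naive initial guess
    for _ in range(max_iter):
        dh = 1e-7 * max(h, 1.0)
        dphi = (phi(h + dh) - phi(h - dh)) / (2.0 * dh)
        step = phi(h) / dphi
        h = max(h - step, 1e-12)  # keep the depth positive
        if abs(step) < tol * h:
            break
    return h
```

For the symmetric two-rarefaction case hl = hr = 1, ul = -ur = -1, the root can be checked in closed form: h* = (sqrt(g) - 1/2)^2 / g.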
Parallel space-time likelihood optimization for air pollution prediction on large-scale systems (ACM, 2022-07-12) [Conference Paper] Gaussian geostatistical space-time modeling is an effective tool for performing statistical inference on field data evolving in space and time. It generalizes purely spatial modeling at the cost of greater complexity of operations and storage, pushing geostatistical modeling even further toward high-performance computing. It makes inferences for missing data by leveraging space-time measurements of one or more fields. We propose a high-performance implementation of a widely applied space-time model for large-scale systems using a two-level parallelization technique. At the inner level, we rely on state-of-the-art dense linear algebra libraries and parallel runtime systems to perform the complex matrix operations required to evaluate the maximum likelihood estimation (MLE). At the outer level, we parallelize the optimization process using a distributed implementation of the particle swarm optimization (PSO) algorithm. At this level, parallelization is accomplished using MPI sub-communicators, such that the nodes in each sub-communicator perform a single MLE iteration at a time. To evaluate the effectiveness of the proposed methodology, we assess the accuracy of the newly implemented space-time model on a set of large-scale synthetic space-time datasets. Moreover, we use the proposed implementation to model two air pollution datasets from the Middle East and US regions with 550 spatial locations × 730 time slots and 945 spatial locations × 500 time slots, respectively. The evaluation shows that the proposed approach achieves high prediction accuracy on both the synthetic datasets and the real particulate matter (PM) datasets in the context of the air pollution problem. We achieve up to 757.16 TFLOP/s (75% of the peak performance) using 1024 nodes and 490K geospatial locations on the Shaheen-II Cray XC40 system.
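The outer-level optimizer can be illustrated by a minimal serial PSO; the paper distributes the swarm across MPI sub-communicators, which is not shown here. The parameter values (inertia w, acceleration coefficients c1 and c2) are common defaults from the PSO literature, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=100, rng=None,
                 w=0.7, c1=1.5, c2=1.5):
    # minimal serial particle swarm optimization over a box; each particle
    # is attracted to its personal best and the swarm's global best
    if rng is None:
        rng = np.random.default_rng()
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)  # keep particles inside the box
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

In the distributed setting of the paper, each objective evaluation f(p) is itself a full MLE likelihood evaluation carried out in parallel by one sub-communicator, so the swarm update above sits at the very top of the two-level hierarchy.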

Mean-Field Games on Networks and Wardrop Equilibria (2022-07-06) [Dissertation]
Advisor: Gomes, Diogo A.
Committee members: Markowich, Peter A.; Santamarina, Carlos; Di Fazio, Giuseppe This thesis consists of three main parts. In the first part, we discuss first-order stationary mean-field games (MFGs) on networks. We derive the mathematical formulation of first-order MFGs on networks, with particular emphasis on the conditions at the vertices, both for the Hamilton-Jacobi equation and for the transport equation. Then, we review the current method, which, for the stationary case, allows us to convert the MFG into a system of algebraic equations and inequalities. Finally, we discuss the travel cost and its properties in more detail. In the second part, we discuss the Wardrop equilibrium model on networks with flow-dependent costs and its connection with stationary MFGs. First, we build the Wardrop model on networks. Second, we show how to convert the MFG model into a Wardrop model. Next, we recover the MFG solution from the Wardrop solution. Finally, we study the calibration of MFGs with Wardrop travel cost problems. In the third part, we explain the algorithm for numerically solving the algebraic system associated with the MFG, and then we present some examples and numerical results.
First-order mean-field games on networks and Wardrop equilibrium (arXiv, 2022-07-05) [Preprint] Here, we examine the Wardrop equilibrium model on networks with flow-dependent costs and its connection with stationary mean-field games (MFG). In the first part of this paper, we present the Wardrop and the first-order MFG models on networks. Then, we show how to reformulate the MFG problem as a Wardrop problem and prove that the MFG solution is the Wardrop equilibrium for the corresponding Wardrop problem. Moreover, we prove that the solution of the MFG problem can be recovered using the solution to the associated Wardrop problem. Finally, we study the cost properties and the calibration of MFGs with Wardrop travel cost problems. We describe a novel approach to the calibration of MFGs. Further, we show that even simple travel costs can give rise to non-monotone MFGs.

Rough analysis of computation trees (Discrete Applied Mathematics, Elsevier BV, 2022-07-02) [Article] This paper deals with computation trees over an arbitrary structure consisting of a set along with collections of functions and predicates defined on it. It is devoted to the comparative analysis of three parameters of problems with n input variables over this structure: the complexity of a problem description, the minimum complexity of a computation tree solving this problem deterministically, and the minimum complexity of a computation tree solving this problem nondeterministically. A rough classification of relationships among these parameters is considered, and all seven possible types of these relations are enumerated. Changes in the relation types as the number n of input variables grows are also studied.

Multilevel Ensemble Kalman–Bucy Filters (SIAM/ASA Journal on Uncertainty Quantification, Society for Industrial & Applied Mathematics (SIAM), 2022-06-27) [Article] In this article we consider the linear filtering problem in continuous time. We develop and apply multilevel Monte Carlo (MLMC) strategies for ensemble Kalman–Bucy filters (EnKBFs). These filters can be viewed as approximations of conditional McKean–Vlasov-type diffusion processes. They are also interpreted as the continuous-time analogue of the ensemble Kalman filter, which has proven successful due to its broad applicability and favorable computational cost. We prove that an ideal version of our multilevel EnKBF can achieve a mean square error (MSE) of \(\mathcal{O}(\epsilon^2)\), \(\epsilon > 0\), with a cost of order \(\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)\). In order to prove this result we provide Monte Carlo convergence and approximation bounds associated with time-discretized EnKBFs. This implies a reduction in cost compared to the (single-level) EnKBF, which requires a cost of \(\mathcal{O}(\epsilon^{-3})\) to achieve an MSE of \(\mathcal{O}(\epsilon^2)\). We test our theory on a linear Ornstein–Uhlenbeck process, which we motivate through high-dimensional examples of order \(\sim\mathcal{O}(10^4)\) and \(\mathcal{O}(10^5)\), where we also numerically test an alternative deterministic counterpart of the EnKBF.

Architectural Structures from Quad Meshes with Planar Parameter Lines (Elsevier BV, 2022-06-23) [Preprint] We address the computational design of architectural structures based on a frame of intersecting beams aligned with the parameter lines of a quad mesh. While previous work mainly put a planarity constraint on the faces of the mesh, we focus on the planarity of long-range supporting beams that follow selected polylines in the underlying mesh. In addition, we impose further constraints, including planarity of faces, right node angles, and static equilibrium, and discuss how these may be combined. Some of the studied meshes are discrete counterparts of certain well-known surfaces in classical geometry, knowledge of which is helpful for initializing the proposed optimization algorithms.