Conference on Advances in Uncertainty Quantification Methods, Algorithms and Applications (UQAW 2014)
Recent Submissions

Optimal Design and Model Validation for Combustion Experiments in a Shock Tube (2014-01-06)
We develop a Bayesian framework for the optimal experimental design of the shock tube experiments being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate functions. The control parameters are the initial hydrogen concentration and the temperature. First, we build a polynomial-based surrogate model for the observable related to the reactions in the shock tube. Second, we use a novel MAP-based approach to estimate the expected information gain in the proposed experiments and select the best experimental setups corresponding to the optimal expected information gains. Third, we use synthetic data to carry out a virtual validation of our methodology.
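The expected-information-gain criterion can be illustrated with a deliberately simple stand-in: a brute-force nested Monte Carlo estimator (not the MAP-based estimator of the abstract) for a hypothetical one-parameter linear-Gaussian experiment, where the gain is known in closed form. All model choices below are illustrative.

```python
import math
import random

def eig_nested_mc(design, sigma=0.5, n_outer=1000, n_inner=1000, seed=0):
    """Nested Monte Carlo estimate of the expected information gain (EIG)
    for a toy linear-Gaussian experiment:
        theta ~ N(0, 1),   y = design * theta + N(0, sigma^2).
    EIG = E_{theta,y}[ log p(y | theta) - log p(y) ]; the Gaussian
    normalizing constants cancel between the two terms."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, 1.0)
        y = design * theta + rng.gauss(0.0, sigma)
        log_lik = -0.5 * ((y - design * theta) / sigma) ** 2
        # inner Monte Carlo for the evidence p(y | design)
        acc = 0.0
        for _ in range(n_inner):
            th = rng.gauss(0.0, 1.0)
            acc += math.exp(-0.5 * ((y - design * th) / sigma) ** 2)
        total += log_lik - math.log(acc / n_inner)
    return total / n_outer

# for this toy model the EIG is analytic: 0.5 * log(1 + design^2 / sigma^2)
design = 1.0
analytic = 0.5 * math.log(1.0 + design ** 2 / 0.5 ** 2)
estimate = eig_nested_mc(design)
```

Optimal design then amounts to maximizing this estimate over the control parameters; the nested estimator's O(n_outer * n_inner) cost is exactly what faster approaches such as the MAP-based one aim to avoid.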

Size Estimates in Inverse Problems (2014-01-06)
Detection of inclusions or obstacles inside a body from boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to recover some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work, which can be expressed with the available boundary data.

Analysis and Computation of Acoustic and Elastic Wave Equations in Random Media (2014-01-06)
We propose stochastic collocation methods for solving the second-order acoustic and elastic wave equations in heterogeneous random media, subject to deterministic boundary and initial conditions [1, 4]. We assume that the medium consists of non-overlapping subdomains with smooth interfaces. In each subdomain, the material coefficients are smooth and are given or approximated by a finite number of random variables. One important example is wave propagation in multilayered media with smooth interfaces. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation at the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems [2, 3], the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence is only algebraic. A fast spectral rate of convergence is still possible for some quantities of interest and for the wave solutions with particular types of data. We also show that the semidiscrete solution is analytic with respect to the random variables, with the radius of analyticity proportional to the grid/mesh size h. We therefore obtain an exponential rate of convergence which deteriorates as the product hp gets smaller, with p representing the polynomial degree in the stochastic space. We have shown that analytical results and numerical examples are consistent and that the stochastic collocation method may be a valid alternative to the more traditional Monte Carlo method. Here we focus on the stochastic acoustic wave equation; similar results are obtained for stochastic elastic equations.
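The collocation idea, namely solving the deterministic problem at Gauss points in probability space and combining the results with quadrature weights, can be sketched on a toy oscillator with one uniform random coefficient standing in for the wave solver. The 5-point rule and the model below are illustrative choices, not from the paper.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL5 = [(-0.9061798459386640, 0.2369268850561891),
       (-0.5384693101056831, 0.4786286704993665),
       ( 0.0,                0.5688888888888889),
       ( 0.5384693101056831, 0.4786286704993665),
       ( 0.9061798459386640, 0.2369268850561891)]

def wave_qoi(y, T=1.0):
    """Exact solution at time T of u'' = -(1 + 0.5*y) * u, u(0)=1, u'(0)=0:
    a one-degree-of-freedom oscillator with random stiffness, standing in
    for one deterministic wave solve at the collocation point y."""
    return math.cos(math.sqrt(1.0 + 0.5 * y) * T)

def collocation_mean(qoi, points):
    # Y ~ Uniform(-1, 1), so E[qoi(Y)] = 0.5 * integral of qoi over [-1, 1]
    return 0.5 * sum(w * qoi(y) for y, w in points)

est = collocation_mean(wave_qoi, GL5)

# dense composite-trapezoid reference for the same expectation
n = 20000
h = 2.0 / n
ref = 0.5 * h * (0.5 * wave_qoi(-1.0) + 0.5 * wave_qoi(1.0)
                 + sum(wave_qoi(-1.0 + i * h) for i in range(1, n)))
```

With a smooth map y -> qoi(y), five deterministic solves already match a 20000-point reference closely; the abstract's point is that for the full hyperbolic problem this smoothness, and hence the fast convergence, is not guaranteed.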

Modeling of MAI in UWB System Using MGGD (2014-01-06)
The multivariate generalized Gaussian density (MGGD) is used to approximate the multiple-access interference (MAI) and additive white Gaussian noise in a pulse-based ultra-wideband (UWB) system. The MGGD probability density function (pdf) is shown to be a better approximation of a UWB system than the Gaussian, Laplacian and Gaussian-Laplacian mixture (GLM) densities. The similarity between the simulated and the approximated pdf is measured with the help of a modified Kullback-Leibler distance (KLD). It is also shown that the MGGD has the smallest KLD compared to the Gaussian, Laplacian and GLM densities. Finally, a receiver based on the principle of minimum bit error rate is designed for the MGGD pdf.
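As an illustration of the KLD-based model comparison (not the modified KLD of the abstract, whose exact definition is not given here), the sketch below measures a discrete Kullback-Leibler distance between a histogram of heavy-tailed samples and two variance-matched candidate densities; Laplacian samples stand in for the impulsive MAI.

```python
import math
import random

def sample_laplace(n, b=1.0, seed=1):
    """Inverse-CDF sampling of a zero-mean Laplace(b) variable."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.random() - 0.5
        out.append(-b * math.copysign(1.0, u)
                   * math.log(max(1.0 - 2.0 * abs(u), 1e-300)))
    return out

def kld_hist_vs_pdf(samples, pdf, lo=-6.0, hi=6.0, bins=60):
    """Discrete KL distance between the empirical histogram of `samples`
    and a candidate density `pdf` (a rough stand-in for a modified KLD)."""
    w = (hi - lo) / bins
    counts = [0] * bins
    kept = 0
    for x in samples:
        if lo <= x < hi:
            counts[int((x - lo) / w)] += 1
            kept += 1
    kld = 0.0
    for i, c in enumerate(counts):
        if c:
            p = c / kept                     # empirical bin probability
            q = pdf(lo + (i + 0.5) * w) * w  # candidate bin probability
            kld += p * math.log(p / q)
    return kld

mai = sample_laplace(50000)                  # heavy-tailed stand-in for MAI
var = sum(x * x for x in mai) / len(mai)     # zero-mean sample variance
gauss = lambda x: math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
b = math.sqrt(var / 2.0)                     # Laplace scale, same variance
lap = lambda x: math.exp(-abs(x) / b) / (2 * b)

kl_gauss = kld_hist_vs_pdf(mai, gauss)
kl_lap = kld_hist_vs_pdf(mai, lap)           # the better-matched density
```

The better-matched family yields the smaller KLD, which is exactly the kind of ranking the abstract reports for MGGD against the Gaussian, Laplacian and GLM candidates.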

Collocation methods for uncertainty quantification in PDE models with random data (2014-01-06)
In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches, where the PDE is evaluated at randomly chosen points in the probability space and a polynomial approximation is constructed by the least squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions on the relation between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that such a condition is sharp in the univariate case but seems to be over-constraining in higher dimensions. The regression technique therefore seems attractive in higher dimensions.
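The regression approach of the second part can be sketched in one dimension: evaluate a smooth quantity of interest at random uniform points and fit a Legendre expansion by least squares, with the number of samples scaling quadratically in the basis size as the stability condition suggests. The target function and all sizes below are illustrative stand-ins for a PDE quantity of interest.

```python
import math
import random

def legendre(k, x):
    """Legendre polynomial P_k(x) by the three-term recurrence."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def ls_fit(g, degree, seed=0):
    """Least-squares Legendre fit of g from Uniform(-1,1) samples, with the
    sample count scaling quadratically in the basis size."""
    rng = random.Random(seed)
    nb = degree + 1
    m = 2 * nb * nb                          # quadratic oversampling
    ys = [rng.uniform(-1.0, 1.0) for _ in range(m)]
    V = [[legendre(k, y) for k in range(nb)] for y in ys]
    # normal equations (V^T V) c = V^T g
    A = [[sum(V[i][a] * V[i][j] for i in range(m)) for j in range(nb)]
         for a in range(nb)]
    rhs = [sum(V[i][a] * g(ys[i]) for i in range(m)) for a in range(nb)]
    return solve(A, rhs)

coef = ls_fit(math.exp, 4)                   # smooth stand-in for a PDE QoI
approx = lambda y: sum(c * legendre(k, y) for k, c in enumerate(coef))
err = max(abs(approx(t / 100.0) - math.exp(t / 100.0))
          for t in range(-100, 101))
```

With 50 random samples for a 5-dimensional polynomial space the fit is stable and near-optimal for this smooth target; undersampling well below the quadratic regime is where the stability results warn of breakdown.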

Cooperative HARQ with Poisson Interference and Opportunistic Routing (2014-01-06)
This presentation considers reliable transmission of data from a source to a destination, aided cooperatively by wireless relays selected opportunistically and utilizing hybrid forward error correction/detection and automatic repeat request (hybrid ARQ, or HARQ). Specifically, we present a performance analysis of the cooperative HARQ protocol in a wireless ad hoc multi-hop network employing spatial ALOHA. We model the nodes in such a network by a homogeneous two-dimensional Poisson point process. We study the trade-off between the per-hop rate, spatial density and range of transmissions inherent in the network by optimizing the transport capacity with respect to the network design parameters: the HARQ coding rate and the medium access probability. We obtain an approximate analytic expression for the expected progress of opportunistic routing and optimize the capacity approximation by convex optimization. By way of numerical results, we show that the network design parameters obtained by optimizing the analytic approximation of transport capacity closely follow those of the exact, Monte Carlo based transport capacity optimization. As a result of the analysis, we argue that the optimal HARQ coding rate and medium access probability are independent of the node density in the network.
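The Poisson-interference model underlying such analyses can be probed numerically. The sketch below is a textbook Monte Carlo check of the per-hop success probability for spatial ALOHA with Rayleigh fading and path-loss exponent 4 against the standard closed form; all parameter values are illustrative and the HARQ and routing layers of the abstract are omitted.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def aloha_success_mc(lam=0.1, r=1.0, theta=1.0, alpha=4.0,
                     R=10.0, trials=10000, seed=3):
    """Monte Carlo per-hop success probability for spatial ALOHA with
    Rayleigh fading: interferers form a Poisson point process of intensity
    lam (transmitting nodes), truncated to a disk of radius R around the
    receiver. Conditional on the interference I, the Exp(1) fading of the
    desired link at range r gives P(success | I) = exp(-theta * r^alpha * I)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        I = 0.0
        for _ in range(poisson(rng, lam * math.pi * R * R)):
            d = R * math.sqrt(max(rng.random(), 1e-12))  # uniform in the disk
            I += rng.expovariate(1.0) * d ** (-alpha)
        acc += math.exp(-theta * r ** alpha * I)
    return acc / trials

# closed form for alpha = 4: exp(-pi^2 * lam * r^2 * sqrt(theta) / 2)
analytic = math.exp(-math.pi ** 2 * 0.1 / 2.0)
estimate = aloha_success_mc()
```

Quantities like this success probability are the ingredients of the expected-progress and transport-capacity expressions that the abstract then optimizes over the coding rate and medium access probability.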

Advances in Spectral Methods for UQ in Incompressible Navier-Stokes Equations (2014-01-06)
In this talk, I will present two recent contributions to the development of efficient methodologies for uncertainty propagation in the incompressible Navier-Stokes equations. The first one concerns the reduced basis approximation of stochastic steady solutions, using Proper Generalized Decompositions (PGD). An Arnoldi problem is projected to obtain a low-dimensional Galerkin problem. The construction then amounts to the resolution of a sequence of uncoupled deterministic Navier-Stokes-like problems and simple quadratic stochastic problems, followed by the resolution of a low-dimensional coupled quadratic stochastic problem, with a resulting complexity which has to be contrasted with the dimension of the whole Galerkin problem for classical spectral approaches. An efficient algorithm for the approximation of the stochastic pressure field is also proposed. Computations are presented for uncertain viscosity and forcing term to demonstrate the effectiveness of the reduced method. The second contribution concerns the computation of stochastic periodic solutions to the Navier-Stokes equations. The objective is to circumvent the well-known limitation of spectral methods for long-time integration. We propose to directly determine the stochastic limit cycle through the definition of its stochastic period and an initial condition over the cycle. A modified Newton method is constructed to compute iteratively both the period and the initial condition. Owing to the periodic character of the solution, and by introducing an appropriate time scaling, the solution can be approximated using low-degree polynomial expansions, with large computational savings as a result. The methodology is illustrated for the von Karman flow around a cylinder with stochastic inflow conditions.
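The period-finding idea of the second contribution, Newton iteration on a return condition, can be sketched in a deterministic scalar form: find the return time of the cycle of the harmonic oscillator (whose period, 2*pi, is known) via RK4 shooting and a finite-difference Newton step. This is a toy analogue, not the stochastic modified Newton method of the talk.

```python
import math

def integrate(T, state=(1.0, 0.0), n=400):
    """RK4 integration of the harmonic oscillator x' = y, y' = -x up to time T."""
    f = lambda x, y: (y, -x)
    h = T / n
    x, y = state
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
        k3 = f(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

def newton_period(T0=6.0, tol=1e-10):
    """Scalar Newton iteration for the return time of the cycle through
    (1, 0): solve y(T) = 0 with a finite-difference derivative. Starting
    near 6, the iteration converges to the true period 2*pi."""
    T = T0
    for _ in range(50):
        g = integrate(T)[1]
        dg = (integrate(T + 1e-6)[1] - g) / 1e-6
        step = g / dg
        T -= step
        if abs(step) < tol:
            break
    return T

period = newton_period()
```

In the talk's setting the unknowns are the stochastic period and the initial condition over the cycle rather than a single scalar, but the Newton structure is the same, and rescaling time by the computed period is what keeps the polynomial expansions low-degree.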

Multilevel Hybrid Chernoff Tau-Leap (2014-01-06)
Markovian pure jump processes can model many phenomena, e.g. chemical reactions at the molecular level, protein transcription and translation, the spread of epidemic diseases in small populations, and wireless communication networks, among many others. In this work [6] we present a novel multilevel algorithm for the Chernoff-based hybrid tau-leap algorithm. This variance reduction technique allows us to: (a) control the global exit probability of any simulated trajectory; (b) obtain accurate and computable estimates for the expected value of any smooth observable of the process with minimal computational work.

Hybrid Chernoff Tau-Leap (2014-01-06)
Markovian pure jump processes can model many phenomena, e.g. chemical reactions at the molecular level, protein transcription and translation, the spread of epidemic diseases in small populations, and wireless communication networks, among many others. In this work we present a novel hybrid algorithm for simulating individual trajectories which adaptively switches between the SSA and the Chernoff tau-leap methods. This allows us to: (a) control the global exit probability of any simulated trajectory; (b) obtain accurate and computable estimates for the expected value of any smooth observable of the process with minimal computational work.
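For context, the exact-simulation building block that the hybrid method switches away from is the SSA (Gillespie) algorithm. The sketch below runs plain SSA on the linear death reaction X -> 0 with propensity c*X, whose mean is known analytically; the Chernoff tau-leap steps and the switching logic themselves are not reproduced here.

```python
import math
import random

def ssa_decay(x0, c, T, rng):
    """Gillespie SSA for the pure-death reaction X -> 0 with propensity c*X:
    draw exponential waiting times until the horizon T is passed."""
    x, t = x0, 0.0
    while x > 0:
        t += rng.expovariate(c * x)   # time to the next jump
        if t > T:
            break
        x -= 1
    return x

rng = random.Random(42)
paths = [ssa_decay(100, 1.0, 0.5, rng) for _ in range(5000)]
mean = sum(paths) / len(paths)
exact = 100 * math.exp(-0.5)          # E[X_T] for the linear death process
```

SSA is exact but takes one step per reaction event; tau-leap methods advance many events per step at the cost of possible negative populations, which is precisely the exit-probability issue the Chernoff bound is used to control.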

Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques (2014-01-06)
Kriging algorithms based on FFT, on the separability of certain covariance functions, and on low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines their individual speedup factors. The reduced computational complexity is O(d L log L), where L := max_i n_i, i = 1, ..., d. For separable covariance functions the results are exact, and non-separable covariance functions can be approximated through sums of separable components. The speedup factor is 10^8, for problem sizes of 15·10^12 and 2·10^15 estimation points for Kriging and spatial design, respectively.
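The separability speedup can be seen on the core linear-algebra identity: for a separable covariance C = C1 ⊗ C2, a matrix-vector product costs O(nm(n+m)) instead of O((nm)^2), because the big Kronecker matrix is never formed. A minimal sketch with illustrative sizes and dense small factors (the FFT and low-rank ingredients of the paper are not included):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def kron_matvec(A, B, x):
    """Apply (A kron B) to x in O(n*m*(n+m)) operations instead of
    O((n*m)^2), via the identity (A kron B) vec(X) = A X B^T with
    row-major vec(X)."""
    n, m = len(A), len(B)
    X = [x[i * m:(i + 1) * m] for i in range(n)]
    Bt = [list(col) for col in zip(*B)]
    return [v for row in matmul(matmul(A, X), Bt) for v in row]

# brute-force check against the explicit Kronecker product
rng = random.Random(0)
n, m = 4, 3
A = [[rng.random() for _ in range(n)] for _ in range(n)]
B = [[rng.random() for _ in range(m)] for _ in range(m)]
x = [rng.random() for _ in range(n * m)]
fast = kron_matvec(A, B, x)
slow = [sum(A[i][k] * B[j][l] * x[k * m + l]
            for k in range(n) for l in range(m))
        for i in range(n) for j in range(m)]
```

Replacing each small dense product by an FFT-based circulant product is what brings the per-dimension cost down to O(L log L) and yields the combined complexity quoted above.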

Multivariate Max-Stable Spatial Processes (2014-01-06)
Analysis of spatial extremes is currently based on univariate processes. Max-stable processes allow the spatial dependence of extremes to be modelled and explicitly quantified; they are therefore widely adopted in applications. For a better understanding of extreme events of real processes, such as environmental phenomena, it may be useful to study several spatial variables simultaneously. To this end, we extend some theoretical results and applications of max-stable processes to the multivariate setting, to analyze extreme events of several variables observed across space. In particular, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. Then, we define a Poisson process construction in the multivariate setting and introduce multivariate versions of the Smith Gaussian extreme-value model, the Schlather extremal-Gaussian and extremal-t models, and the Brown-Resnick model. Inferential aspects of those models based on composite likelihoods are developed. We present results of various Monte Carlo simulations and of an application to a dataset of summer daily temperature maxima and minima in Oklahoma, U.S.A., highlighting the utility of working with multivariate models in contrast to the univariate case. Based on joint work with Simone Padoan and Huiyan Sang.
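The defining max-stability property behind these models can be checked empirically for the unit Fréchet margin commonly used in max-stable process constructions: the rescaled maximum of n independent unit Fréchet variables is again unit Fréchet. A small sketch (illustrative only, unrelated to the paper's data):

```python
import math
import random

def unit_frechet(rng):
    """Inverse-transform sample from the unit Frechet law, CDF exp(-1/x)."""
    return -1.0 / math.log(max(rng.random(), 1e-300))

def max_stability_check(n=50, trials=20000, x=1.0, seed=7):
    """Empirical CDF at x of max(X_1, ..., X_n)/n for iid unit Frechet X_i.
    Max-stability makes this again unit Frechet, so the result should be
    close to exp(-1/x) for any n."""
    rng = random.Random(seed)
    hits = sum(max(unit_frechet(rng) for _ in range(n)) / n <= x
               for _ in range(trials))
    return hits / trials

emp = max_stability_check()           # target: exp(-1/1) = exp(-1)
```

The spatial (and here multivariate) models keep this marginal stability while adding a dependence structure across sites and variables, which is where the Smith, Schlather and Brown-Resnick constructions differ.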

Inverse Problems and Uncertainty Quantification (2014-01-06)
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), i.e. the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of a conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. Finally, we compare the linear and quadratic Bayesian updates on the small but taxing example of the chaotic Lorenz-84 model, where we experiment with the influence of different observation or measurement operators on the update.
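A minimal sketch of the linear Bayesian update mentioned at the end: in the scalar Gaussian case, the best linear approximation of the conditional expectation reduces to the familiar Kalman-gain formula. The quadratic update and the functional (polynomial chaos) discretisation of the talk are beyond this sketch.

```python
def linear_bayes_update(prior_mean, prior_var, obs, obs_var, H=1.0):
    """Scalar linear Bayesian update: the conditional expectation
    E[theta | y] approximated by the best linear map of the data, i.e.
    the Kalman gain. For a Gaussian prior and noise this linear map
    happens to be exact."""
    K = prior_var * H / (H * H * prior_var + obs_var)   # Kalman gain
    mean = prior_mean + K * (obs - H * prior_mean)
    var = (1.0 - K * H) * prior_var
    return mean, var

# prior N(0, 1), observation y = 1 through H = 1 with unit noise variance
post_mean, post_var = linear_bayes_update(0.0, 1.0, 1.0, 1.0)
```

The sampling-free character of the talk's method comes from applying such an update map directly to the coefficients of a spectral representation of the prior, rather than to Monte Carlo samples.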

Optimal Design of Shock Tube Experiments for Parameter Inference (2014-01-06)
We develop a Bayesian framework for the optimal experimental design of the shock tube experiments being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain in the proposed experiments, and to select the best experimental setups yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.