Conference on Advances in Uncertainty Quantification Methods, Algorithms and Applications (UQAW 2015)
Recent Submissions

Comparison of quasi-optimal and adaptive sparse grids for groundwater flow problems (2015-01-07) [Poster]

Quasi-optimal sparse-grid approximations for random elliptic PDEs (2015-01-07) [Poster]

Fully-distributed randomized cooperation in wireless sensor networks (2015-01-07) [Poster] Marrying randomized distributed space-time coding (RDSTC) to geographical routing opens new performance horizons. To reach those horizons, however, routing protocols must evolve to operate in a fully distributed fashion. In this letter, we present a technique to construct a fully distributed geographical routing scheme in conjunction with RDSTC. We then demonstrate the performance gains of this novel scheme by comparing it to one of the prominent classical schemes.

On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise (2015-01-07) [Poster] This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise, using a minimum-distance detector. A generic closed-form expression for the conditional and the average probability of error is obtained and simplified in terms of Fox's H function. Further simplifications to well-known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with numerical examples obtained from computer-based simulations [1].
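The simulation setup of this abstract can be illustrated with a plain Monte Carlo estimate of the symbol error rate. This is only a sketch: the Generalized-K fading and the closed-form Fox's H analysis are omitted, and the noise normalization and SNR definition below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpsk_ser_mc(M=4, snr_db=10.0, n=200_000):
    """Monte Carlo symbol error rate of M-PSK with additive Laplacian noise
    and a minimum-distance detector.  (No fading here; the abstract's
    Generalized-K fading would multiply each symbol by a random gain.)"""
    k = rng.integers(0, M, size=n)                 # transmitted symbol indices
    s = np.exp(2j * np.pi * k / M)                 # unit-energy M-PSK symbols
    # Complex Laplacian noise: independent Laplacian real/imag components,
    # scaled so the total noise power matches the target SNR (an assumption).
    noise_var = 10 ** (-snr_db / 10)
    b = np.sqrt(noise_var / 4)                     # Laplace scale (var = 2*b^2 per component)
    w = rng.laplace(0.0, b, size=n) + 1j * rng.laplace(0.0, b, size=n)
    r = s + w
    # Minimum-distance detection for PSK reduces to quantizing the phase.
    k_hat = np.round(np.angle(r) * M / (2 * np.pi)).astype(int) % M
    return np.mean(k_hat != k)

ser = mpsk_ser_mc()
```

Such a simulation is what "computer-based simulations" would validate the closed-form expression against.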

Response Surface in Tensor Train Format for Uncertainty Quantification (2015-01-07) [Poster]

Goal-Oriented Compression of Random Fields (2015-01-07) [Poster]

A Novel Time Domain Method for Characterizing Plasmonic Field Interactions (2015-01-07) [Poster]

Energy-Efficient Power Allocation of Cognitive Radio Systems without CSI at the Transmitter (2015-01-07) [Poster] Two major issues face today's wireless communications evolution. The first is spectrum scarcity: the need for more bandwidth. As a solution, the Cognitive Radio (CR) paradigm, in which secondary (unlicensed) users share the spectrum with licensed users, was introduced. The second is energy consumption and CO2 emission: ICT produces 2% of global CO2 emissions (equivalent to the aviation industry's emissions), with cellular networks producing 0.2%. As a solution, energy-efficient systems should be designed rather than traditional spectrally efficient systems. In this work, we aim to determine the optimal energy-efficient power allocation of CR systems when channel state information at the transmitter (CSIT) is not available.

Time-Lapse Seismic Data Assisted History Matching of the Norne Field (2015-01-07) [Poster]

Multiphase flows in complex geometries: a UQ perspective (2015-01-07) [Presentation] Nowadays computer simulations are widely used in many multiphase flow applications involving interphases, dispersed particles, and complex geometries. Most of these problems are solved with mixed models composed of fundamental physical laws, rigorous mathematical upscaling, and empirical correlations/closures. This means that classical inference techniques or forward parametric studies, for example, become computationally prohibitive and must take into account the physical meaning and constraints of the equations. However, mathematical techniques commonly used in Uncertainty Quantification can come to the aid of the (i) modeling, (ii) simulation, and (iii) validation steps. Two relevant applications for environmental, petroleum, and chemical engineering will be presented to highlight these aspects and the importance of bridging the gaps between engineering applications, computational physics, and mathematical methods. The first example concerns the mathematical modeling of subgrid/subscale information with Probability Density Function (PDF) models in problems involving flow, mixing, and reaction in random environments. After a short overview of the research field, some connections and similarities with Polynomial Chaos techniques will be investigated. In the second example, averaged correlation laws and effective parameters for multiphase flow, together with their statistical fluctuations, will be considered, and efficient computational techniques borrowed from high-dimensional stochastic PDE problems will be applied. In the presence of interfacial flow, where small spatial scales and fast time scales are neglected, the assessment of robustness and predictive capabilities is studied. These illustrative examples are inspired by common problems arising, for example, from the modeling and simulation of turbulent and porous media flows.

Transport maps and dimension reduction for Bayesian computation (2015-01-07) [Presentation] We introduce a new framework for efficient sampling from complex probability distributions, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use continuous transportation to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—an approximation of the Knothe-Rosenblatt rearrangement—using information from previous MCMC states, via the solution of an optimization problem. This optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using a Newton method that requires no gradient information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach, using inexact or truncated maps, produces an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems show order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per target density evaluation and per unit of wall-clock time. We will also discuss adaptive methods for the construction of transport maps in high dimensions, where the use of a non-adapted basis (e.g., a total-order polynomial expansion) can become computationally prohibitive. If only samples of the target distribution, rather than density evaluations, are available, then we can construct high-dimensional transformations by composing sparsely parameterized transport maps with rotations of the parameter space.
If evaluations of the target density and its gradients are available, then one can exploit the structure of the variational problem used for map construction. In both settings, we will show links to recent ideas for dimension reduction in inverse problems.
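The map-preconditioned proposal idea can be sketched in a drastically simplified form. Here the "transport map" is only a linear lower-triangular map (a Cholesky factor of the running sample covariance, i.e. essentially adaptive Metropolis), not the nonlinear Knothe-Rosenblatt map fitted by convex optimization in the talk; the banana-shaped target is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Hypothetical banana-shaped density that a plain isotropic
    # random walk explores poorly.
    return -0.5 * (x[0] ** 2 + (x[1] - x[0] ** 2) ** 2)

def adaptive_rw_mh(n=20_000, d=2, step=0.5):
    """Random-walk Metropolis where a lower-triangular linear map L
    reshapes the isotropic proposal.  One adaptation step is used, kept
    deliberately simple; the real method adapts a nonlinear map and
    handles ergodicity under adaptation carefully."""
    x = np.zeros(d)
    lx = log_target(x)
    chain = np.empty((n, d))
    L = np.eye(d)
    for i in range(n):
        y = x + step * L @ rng.standard_normal(d)   # map-preconditioned proposal
        ly = log_target(y)
        if np.log(rng.random()) < ly - lx:          # symmetric proposal: plain MH ratio
            x, lx = y, ly
        chain[i] = x
        if i == n // 2:                             # single adaptation from past states
            L = np.linalg.cholesky(np.cov(chain[:i].T) + 1e-9 * np.eye(d))
    return chain

chain = adaptive_rw_mh()
```

After adaptation the proposal is stretched along the directions the past samples actually occupy, which is the (much weaker, linear) analogue of transporting a Gaussian proposal through a fitted map.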

Computational error estimates for Monte Carlo finite element approximation with log-normal diffusion coefficients (2015-01-07) [Presentation] The Monte Carlo (and Multilevel Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g., modeling groundwater flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high-frequency finite element approximation error (i.e., the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low-frequency error. This talk will address how the total error can be estimated by the computable error.
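The basic Monte Carlo finite element pipeline the abstract builds on can be sketched in 1D. This is a toy, with assumptions flagged in the comments: the log-coefficient is white noise per cell rather than a Hölder-1/2 random field, and the observable is simply the solution value near the domain midpoint.

```python
import numpy as np

rng = np.random.default_rng(2)

def solve_fem(a):
    """Linear FEM solve of -(a u')' = 1 on (0,1), u(0) = u(1) = 0, with a
    piecewise-constant coefficient a (one value per cell) on a uniform mesh."""
    m = len(a)
    h = 1.0 / m
    A = np.zeros((m - 1, m - 1))       # stiffness matrix on interior nodes
    for i in range(m - 1):
        A[i, i] = (a[i] + a[i + 1]) / h
        if i > 0:
            A[i, i - 1] = -a[i] / h
            A[i - 1, i] = -a[i] / h
    u = np.linalg.solve(A, np.full(m - 1, h))
    return u[(m - 1) // 2]             # observable: u at x = 0.5

def mc_estimate(n_samples=200, m=64, sigma=1.0):
    """Crude Monte Carlo over log-normal coefficient samples.  The white-noise
    log-coefficient used here is rougher in law than the talk's fields; it only
    illustrates the sampling loop and the computable statistical error."""
    vals = np.array([solve_fem(np.exp(sigma * rng.standard_normal(m)))
                     for _ in range(n_samples)])
    return vals.mean(), vals.std() / np.sqrt(n_samples)

est, stat_err = mc_estimate()
```

The point of the talk is precisely that the statistical error computed above is not the whole story: the unresolved high-frequency part of the discretization error must be estimated as well.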

Bayesian Inversion for Large-Scale Antarctic Ice Sheet Flow (2015-01-07) [Presentation] The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain, spatially varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping, incompressible, shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters but also the associated uncertainty. To overcome the prohibitive nature of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of observational data, they typically provide only sparse information on model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent, and demonstrate that the work required to solve the inverse problem, measured in the number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions.

Surrogate models and optimal design of experiments for chemical kinetics applications (2015-01-07) [Presentation] Kinetic models for reactive flow applications comprise hundreds of reactions describing the complex interactions among many chemical species. Detailed knowledge of the reaction parameters is a key component of the design cycle of next-generation combustion devices, which aim at improving conversion efficiency and reducing pollutant emissions. Shock tubes are a laboratory-scale experimental configuration widely used for the study of reaction rate parameters. Important uncertainties exist in the values of the thousands of parameters included in the most advanced kinetic models. This talk discusses the application of uncertainty quantification (UQ) methods to the analysis of shock tube data as well as the design of shock tube experiments. Attention is focused on a spectral framework in which uncertain inputs are parameterized in terms of canonical random variables, and quantities of interest (QoIs) are expressed in terms of a mean-square convergent series of orthogonal polynomials acting on these variables. We outline the implementation of a recent spectral collocation approach for determining the unknown coefficients of the expansion, namely a sparse, adaptive pseudospectral construction that enables us to obtain surrogates for the QoIs accurately and efficiently. We first discuss the utility of the resulting expressions in quantifying the sensitivity of QoIs to uncertain inputs, and in the Bayesian inference of key physical parameters from experimental measurements. We then discuss the application of these techniques to the analysis of shock-tube data and the optimal design of shock-tube experiments for two key reactions in combustion kinetics: the chain-branching reaction H + O2 ↔ OH + O and the reaction of furans with the hydroxyl radical OH.
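The spectral framework described above can be sketched on a two-parameter toy. Everything here is an illustrative assumption: the QoI is a made-up stand-in for e.g. an ignition delay, and a full tensor Gauss-Hermite grid is used only because there are two parameters, whereas the talk's construction is sparse and adaptive.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def qoi(x1, x2):
    # Hypothetical QoI depending on two uncertain rate parameters,
    # each parameterized by a standard normal variable.
    return np.exp(0.3 * x1 + 0.1 * x2 + 0.05 * x1 * x2)

# Pseudospectral projection onto probabilists' Hermite polynomials
# He_i(x1) He_j(x2), total order <= 4, via tensor Gauss-Hermite quadrature.
nodes, weights = hermegauss(8)
weights = weights / weights.sum()          # normalize to the N(0,1) measure
order = 4
coeffs = {}
for i in range(order + 1):
    for j in range(order + 1 - i):
        ei = [0] * i + [1]                 # coefficient vector selecting He_i
        ej = [0] * j + [1]
        num = sum(wa * wb * qoi(a, b) * hermeval(a, ei) * hermeval(b, ej)
                  for a, wa in zip(nodes, weights)
                  for b, wb in zip(nodes, weights))
        coeffs[(i, j)] = num / (factorial(i) * factorial(j))   # <He_i^2> = i!

# First-order (Sobol-type) sensitivities read off the expansion coefficients.
var = sum(c ** 2 * factorial(i) * factorial(j)
          for (i, j), c in coeffs.items() if (i, j) != (0, 0))
s1 = sum(c ** 2 * factorial(i) for (i, j), c in coeffs.items() if i and not j) / var
s2 = sum(c ** 2 * factorial(j) for (i, j), c in coeffs.items() if j and not i) / var
```

Once the coefficients are available, global sensitivities come essentially for free, which is one of the utilities the abstract mentions; the same surrogate can then replace the expensive model inside a Bayesian inference loop.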

Discrete least squares polynomial approximation with random evaluations: application to PDEs with random parameters (2015-01-07) [Presentation] We consider a general problem F(u, y) = 0, where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space- and/or time-varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the outputs of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow us to reduce, and in some cases break, the curse of dimensionality.
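The core construction is easy to sketch in one parameter dimension. The map u, the uniform measure, and the factor-3 oversampling below are illustrative assumptions; in the talk's setting u(y) would come from a PDE solver and the sample-size-versus-dimension scaling is exactly what the stability theory quantifies.

```python
import numpy as np
from numpy.polynomial.legendre import legvander

rng = np.random.default_rng(3)

def u(y):
    # Hypothetical smooth parameter-to-solution map on [-1, 1];
    # a real application would call a PDE solver here.
    return 1.0 / (1.0 + 0.5 * y ** 2)

p = 8                         # polynomial degree of the approximating space
n = 3 * (p + 1)               # oversampling: more samples than basis functions
y = rng.uniform(-1, 1, n)     # random evaluation points, drawn from the parameter measure
V = legvander(y, p)           # Vandermonde matrix in the Legendre basis
c, *_ = np.linalg.lstsq(V, u(y), rcond=None)   # discrete least-squares projection

# Cross-validate the metamodel on fresh random points.
yt = rng.uniform(-1, 1, 1000)
err = np.max(np.abs(legvander(yt, p) @ c - u(yt)))
```

Statistics of the output (means, variances, failure probabilities) are then computed from the cheap polynomial metamodel instead of the full solver.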

Non-Intrusive Solution of Stochastic and Parametric Equations (2015-01-07) [Presentation] Many problems depend on parameters, which may be a finite set of numerical values or mathematically more complicated objects, for example processes or fields. We address the situation where we have an equation which depends on parameters; stochastic equations are a special case of such parametric problems in which the parameters are elements of a probability space. One common way to represent this dependence on parameters is to evaluate the state (or solution) of the system under investigation for different values of the parameters. But often one wants to evaluate the solution quickly for a new set of parameters where it has not been sampled. In this situation it may be advantageous to express the parameter-dependent solution with an approximation which allows rapid evaluation of the solution. Such approximations are also called proxy or surrogate models, response functions, or emulators. All these methods may be seen as functional approximations—representations of the solution by an "easily computable" function of the parameters, as opposed to pure samples. The most obvious methods of approximation are based on interpolation, in this context often labelled collocation. In the frequent situation where one has a "solver" for the equation for a given parameter value, i.e. a software component or a program, it is evident that this solver can be used to independently—if desired, in parallel—solve for all the parameter values, which subsequently may be used either for the interpolation or in the quadrature for the projection. Such methods are therefore uncoupled for each parameter value, and they additionally often carry the label "non-intrusive". Without much argument, all other methods—which produce a coupled system of equations—are almost always labelled "intrusive", meaning that one cannot use the original solver.
We want to show here that this is not necessarily the case. Another approach is to choose some other projection onto the subspace spanned by the approximating functions. Usually this will involve minimising some norm of the difference between the true parametric solution and the approximation. Such methods are sometimes called pseudo-spectral projections or regression solutions. On the other hand, methods which try to ensure that the approximation satisfies the parametric equation as well as possible are often based on a Rayleigh-Ritz or Galerkin type of ansatz, which leads to a coupled system for the unknown coefficients. This is often taken as an indication that the original solver cannot be used, i.e. that these methods are "intrusive". But in many circumstances these methods may just as well be used in a non-intrusive fashion. Some very effective new methods based on low-rank approximations fall into the class of "not obviously non-intrusive" methods; hence it is important to show how they may be computed non-intrusively.
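The claim that a Galerkin (coupled, "intrusive") system can be solved with only the original deterministic solver can be sketched on a toy affine-parametric linear system. The matrices, the truncation level, and the block-Jacobi iteration below are illustrative assumptions; the point is only that each sweep calls nothing but the unmodified solver for A0.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Toy parametric problem (A0 + y*A1) u(y) = b, with y ~ U(-1, 1).
# Hypothetical matrices; "solve with A0" stands in for the existing solver.
A0 = np.diag([2.0, 3.0, 4.0])
A1 = 0.3 * np.array([[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]])
b = np.array([1.0, 1.0, 1.0])

K = 6                                   # truncation: u(y) ~ sum_k u_k P_k(y)
U = np.zeros((K, 3))                    # Legendre coefficient vectors u_k

# Galerkin projection couples the u_k tridiagonally, because
# y P_k = ((k+1) P_{k+1} + k P_{k-1}) / (2k+1).  A block-Jacobi sweep solves
# the coupled system while invoking ONLY the deterministic A0-solver.
for _ in range(50):
    U_new = np.zeros_like(U)
    for j in range(K):
        rhs = b.copy() if j == 0 else np.zeros(3)
        if j > 0:
            rhs -= (j / (2 * j - 1)) * (A1 @ U[j - 1])
        if j < K - 1:
            rhs -= ((j + 1) / (2 * j + 3)) * (A1 @ U[j + 1])
        U_new[j] = np.linalg.solve(A0, rhs)        # the only "solver" call
    U = U_new

# Check the Galerkin surrogate against a direct solve at one parameter value.
y = 0.7
u_direct = np.linalg.solve(A0 + y * A1, b)
u_surrogate = np.array([legval(y, U[:, i]) for i in range(3)])
```

The design point: the coupled Galerkin equations were never assembled as one big matrix; they were solved by repeated calls to the original solver, which is exactly the non-intrusive usage of an "intrusive" method that the abstract argues for.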

Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks (2015-01-07) [Presentation] Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from a kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, which with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^−2); this is the same computational complexity as an exact method, but with a smaller constant. We provide numerical examples to illustrate our results.
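The exact-path building block, the SSA, is short enough to sketch on a toy network. The pure-death system below is an illustrative assumption (it is not one of the talk's examples); the hybrid scheme would switch from this exact simulation to tau-leaping when propensities are large.

```python
import numpy as np

rng = np.random.default_rng(4)

def ssa_decay(x0=100, c=1.0, T=1.0):
    """Exact SSA (Gillespie) path of the pure-death network X -> X - 1 with
    propensity c*X; returns the state X(T)."""
    x, t = x0, 0.0
    while x > 0:
        a = c * x                        # total propensity
        t += rng.exponential(1.0 / a)    # time to the next reaction
        if t > T:
            break                        # no further reaction before T
        x -= 1
    return x

# Plain Monte Carlo estimator of E(g(X(T))) with g(x) = x.
n = 5000
samples = np.array([ssa_decay() for _ in range(n)])
est = samples.mean()
half_width = 1.96 * samples.std() / np.sqrt(n)   # 95% confidence half-width
# For this linear death process, E[X(T)] = x0 * exp(-c*T) exactly,
# which makes the estimator easy to check.
```

The talk's estimators replace this single-level loop with coupled hybrid SSA/tau-leap paths across levels, so that the O(TOL^−2) complexity is reached with a smaller constant than exact simulation alone.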

Multi-Index Monte Carlo (MIMC): When sparsity meets sampling (2015-01-07) [Presentation] This talk focuses on our newest method: Multi-Index Monte Carlo (MIMC). The MIMC method uses a stochastic combination technique to solve the given approximation problem, generalizing the notion of standard MLMC levels into a set of multi-indices that should be properly chosen to exploit the available regularity. Indeed, instead of using first-order differences as in standard MLMC, MIMC uses high-order differences to reduce the variance of the hierarchical differences dramatically. This in turn gives a new, improved complexity result that increases the domain of the problem parameters for which the method achieves the optimal convergence rate, O(TOL^−2). Using optimal index sets that we determine, MIMC achieves a rate of computational complexity that does not depend on the dimensionality of the underlying problem, up to logarithmic factors. We present numerical results for a three-dimensional PDE with random coefficients to substantiate some of the derived computational complexity rates. Finally, using the Lindeberg-Feller theorem, we also show the asymptotic normality of the statistical error in the MIMC estimator, justifying our error estimate, which allows prescribing both the required accuracy and the confidence in the final result.
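The baseline that MIMC generalizes, standard MLMC with first-order differences, can be sketched on a toy hierarchy. The level-dependent "solver" below (a midpoint rule standing in for a PDE solve on a level-ℓ mesh) and the geometric sample allocation are illustrative assumptions; MIMC would replace the scalar level by a multi-index and the first-order differences by high-order mixed differences.

```python
import numpy as np

rng = np.random.default_rng(5)

def P(y, level):
    """Level-`level` approximation of int_0^1 exp(x*y) dx by the midpoint
    rule on 2**level cells (a toy stand-in for a mesh-based solver)."""
    m = 2 ** level
    x = (np.arange(m) + 0.5) / m
    return np.mean(np.exp(x * y))

def mlmc(L=6, N0=20_000):
    """Standard MLMC telescoping estimator with first-order differences
    P_l - P_{l-1}, each difference evaluated on the SAME sample y so that
    its variance shrinks with the level."""
    est = 0.0
    for l in range(L + 1):
        N = max(N0 // 4 ** l, 100)       # crude geometric decay of sample sizes
        ys = rng.standard_normal(N)
        if l == 0:
            diffs = np.array([P(y, 0) for y in ys])
        else:
            diffs = np.array([P(y, l) - P(y, l - 1) for y in ys])
        est += diffs.mean()
    return est

est = mlmc()
# Reference: E_y[int_0^1 exp(x*y) dx] = int_0^1 exp(x^2/2) dx ~ 1.195
# for y ~ N(0,1), since E[exp(x*y)] = exp(x^2/2).
```

The variance of each correction term decays with the level, which is what lets most samples sit on the cheap coarse levels; MIMC pushes the same idea along several discretization directions at once.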

Adaptive Surrogate Modeling for Response Surface Approximations with Application to Bayesian Inference (2015-01-07) [Presentation] The need for surrogate models and adaptive methods can best be appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes (RANS) models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular on the accurate estimation of evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.
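The role of the surrogate in evidence estimation can be sketched in one parameter dimension. Everything here is a hypothetical stand-in: the "full model", the polynomial surrogate, the Gaussian prior/noise, and the quadrature grid; the talk's surrogates are adjoint-driven and adaptively refined, not a simple polynomial fit.

```python
import numpy as np

# Hypothetical scalar forward model (stand-in for an expensive RANS solve)
# observed as d = f(theta) + Gaussian noise.
def full_model(theta):
    return np.sin(theta) + 0.1 * theta ** 2

# Build a cheap surrogate from a handful of full-model runs.
t_train = np.linspace(-2.0, 2.0, 9)
coeff = np.polyfit(t_train, full_model(t_train), deg=6)
surrogate = lambda t: np.polyval(coeff, t)

# Evidence Z = int L(d | theta) pi(theta) dtheta, evaluated on a grid with
# the surrogate in place of the full model (prior N(0,1), noise sigma = 0.1).
d, sigma = 0.5, 0.1
t = np.linspace(-2.0, 2.0, 2001)
dt = t[1] - t[0]
prior = np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)
like = np.exp(-0.5 * ((d - surrogate(t)) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
Z = np.sum(like * prior) * dt            # Riemann-sum evidence estimate
```

Because the evidence integral needs many likelihood (hence model) evaluations, replacing the full model by a surrogate is what makes model selection affordable; the talk's contribution is controlling, via goal-oriented error estimates, how surrogate error propagates into Z.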