## Search

Now showing items 1-10 of 72


Bayesian Inversion for Large Scale Antarctic Ice Sheet Flow

Ghattas, Omar (2015-01-07) [Presentation]

The flow of ice from the interior of polar ice sheets is the primary contributor to projected sea level rise. One of the main difficulties faced in modeling ice sheet flow is the uncertain spatially-varying Robin boundary condition that describes the resistance to sliding at the base of the ice. Satellite observations of the surface ice flow velocity, along with a model of ice as a creeping incompressible shear-thinning fluid, can be used to infer this uncertain basal boundary condition. We cast this ill-posed inverse problem in the framework of Bayesian inference, which allows us to infer not only the basal sliding parameters, but also the associated uncertainty. To overcome the prohibitive nature of Bayesian methods for large-scale inverse problems, we exploit the fact that, despite the large size of observational data, they typically provide only sparse information on model parameters. We show results for Bayesian inversion of the basal sliding parameter field for the full Antarctic continent, and demonstrate that the work required to solve the inverse problem, measured in number of forward (and adjoint) ice sheet model solves, is independent of the parameter and data dimensions.
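The dimension-independence claim rests on the observation that the data-misfit Hessian is effectively low-rank. A minimal linear-Gaussian sketch in Python (a toy stand-in, not the ice-sheet model; the forward operator `G`, dimensions, and noise level are invented for illustration) shows both the closed-form posterior and the bounded number of data-informed directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = G m + noise (hypothetical stand-in for an
# expensive solver): m is the "parameter field", y the observations.
n_param, n_obs = 50, 20
G = rng.standard_normal((n_obs, n_param)) / np.sqrt(n_param)

# Gaussian prior m ~ N(0, C_pr) and noise ~ N(0, sigma2 * I).
C_pr = np.eye(n_param)
sigma2 = 0.1

m_true = rng.standard_normal(n_param)
y = G @ m_true + np.sqrt(sigma2) * rng.standard_normal(n_obs)

# For a linear-Gaussian problem the posterior is Gaussian with
#   C_post = (G^T G / sigma2 + C_pr^{-1})^{-1},  m_post = C_post G^T y / sigma2.
H = G.T @ G / sigma2 + np.linalg.inv(C_pr)   # Hessian of the negative log-posterior
C_post = np.linalg.inv(H)
m_post = C_post @ (G.T @ y / sigma2)

# The data inform only a low-dimensional subspace: the data-misfit Hessian
# has at most n_obs nonzero eigenvalues, however large n_param is.
eigvals = np.linalg.eigvalsh(G.T @ G / sigma2)
informed = int(np.sum(eigvals > 1e-10))
print(informed)   # bounded by n_obs
```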

Computational error estimates for Monte Carlo finite element approximation with log normal diffusion coefficients

Sandberg, Mattias (2015-01-07) [Presentation]

The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log-normally distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log-normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
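To make the Monte Carlo part concrete, here is a minimal sketch (not the talk's finite element estimator): for a constant log-normal coefficient the 1D model problem -(a u')' = 1 on (0,1) with homogeneous Dirichlet conditions has the exact solution u(x) = x(1-x)/(2a), so a plain MC estimate of E[u(1/2)] can be checked against the closed form exp(s²/2)/8. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Constant log-normal coefficient a = exp(Z), Z ~ N(0, s^2).
# Exact solution of -(a u')' = 1, u(0)=u(1)=0 is u(x) = x(1-x)/(2a),
# so u(1/2) = 0.125 / a and E[u(1/2)] = 0.125 * exp(s^2 / 2).
s = 0.5
N = 200_000
Z = rng.normal(0.0, s, size=N)
samples = 0.125 * np.exp(-Z)

est = samples.mean()
se = samples.std(ddof=1) / np.sqrt(N)       # Monte Carlo standard error
exact = 0.125 * np.exp(s**2 / 2)
print(est, exact, se)
```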

Metropolis-Hastings Algorithms in Function Space for Bayesian Inverse Problems

Ernst, Oliver (2015-01-07) [Presentation]

We consider Markov Chain Monte Carlo methods adapted to a Hilbert space setting. Such algorithms occur in Bayesian inverse problems where the solution is a probability measure on a function space according to which one would like to integrate or sample. We focus on Metropolis-Hastings algorithms and, in particular, we introduce and analyze a generalization of the existing pCN proposal. This new proposal allows one to exploit the geometry or anisotropy of the target measure, which in turn might improve the statistical efficiency of the corresponding MCMC method. Numerical experiments for a real-world problem confirm the improvement.
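For reference, the standard pCN proposal (whose generalization the talk introduces; the generalization itself is not reproduced here) can be sketched on a toy finite-dimensional posterior. The prior, data, and step size below are invented; the key property is that the proposal is prior-reversible, so the acceptance ratio involves only the likelihood potential Φ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target: prior u ~ N(0, I) in R^2, data y = u + N(0, I) noise, so
# Phi(u) = 0.5 * ||y - u||^2 and the exact posterior is N(y/2, I/2).
y = np.array([1.0, -0.5])
phi = lambda u: 0.5 * np.sum((y - u) ** 2)   # negative log-likelihood

beta = 0.4                                   # pCN step-size parameter
u = np.zeros(2)
chain = np.empty((20_000, 2))
accepted = 0
for n in range(chain.shape[0]):
    # pCN proposal: v = sqrt(1 - beta^2) u + beta xi, xi ~ prior.
    v = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(2)
    # Accept with probability min(1, exp(Phi(u) - Phi(v))).
    if np.log(rng.random()) < phi(u) - phi(v):
        u = v
        accepted += 1
    chain[n] = u

print(chain[5000:].mean(axis=0))   # close to the posterior mean y/2
```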

Simulation of conditional diffusions via forward-reverse stochastic representations

Bayer, Christian (2015-01-07) [Presentation]

We derive stochastic representations for the finite dimensional distributions of a multidimensional diffusion on a fixed time interval, conditioned on the terminal state. The conditioning can be with respect to a fixed measurement point or, more generally, with respect to some subset. The representations rely on a reverse process connected with the given (forward) diffusion, as introduced by Milstein, Schoenmakers and Spokoiny in the context of density estimation. The corresponding Monte Carlo estimators have essentially root-N accuracy, and hence they do not suffer from the curse of dimensionality. We also present an application in statistics, in the context of the EM algorithm.
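As a point of comparison, the conditioning problem can be attacked by naive rejection: simulate forward paths and keep those whose endpoint lands in the target set. The sketch below (a baseline, not the forward-reverse estimator of the talk) estimates E[W_{1/2} | W_1 ≥ 1] for standard Brownian motion, where the exact value φ(1)/(2(1−Φ(1))) ≈ 0.7626 is available; the inefficiency of discarding most paths is precisely what such stochastic representations avoid:

```python
import numpy as np

rng = np.random.default_rng(8)

# Brownian motion on [0, 1]: W_{1/2} ~ N(0, 1/2) and the remaining increment
# is independent N(0, 1/2). Condition on the terminal set {W_1 >= 1}.
N = 200_000
W_half = rng.normal(0.0, np.sqrt(0.5), N)
W_one = W_half + rng.normal(0.0, np.sqrt(0.5), N)

keep = W_one >= 1.0                 # rejection step: discard most paths
est = W_half[keep].mean()
print(est, keep.mean())             # estimate and the (low) acceptance fraction
```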

Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

Nobile, Fabio (2015-01-07) [Presentation]

We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space- and/or time-varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
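A minimal sketch of the discrete least squares procedure in one parameter dimension (illustrative only; the sample size, polynomial degree, and target map u(y) = exp(y) are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Reconstruct the (assumed smooth) parameter-to-solution map u(y) = exp(y)
# on [-1, 1] from noise-free evaluations at random points, by discrete
# least squares on a Legendre polynomial space.
M, deg = 200, 8                                    # samples, polynomial degree
ypts = rng.uniform(-1.0, 1.0, M)
V = np.polynomial.legendre.legvander(ypts, deg)    # design matrix
coef, *_ = np.linalg.lstsq(V, np.exp(ypts), rcond=None)

# Evaluate the surrogate at a new point and compare with the true map.
ytest = 0.3
approx = np.polynomial.legendre.legval(ytest, coef)
print(abs(approx - np.exp(ytest)))   # small: u is smooth and M >> deg + 1
```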

Transport maps and dimension reduction for Bayesian computation

Marzouk, Youssef (2015-01-07) [Presentation]

We introduce a new framework for efficient sampling from complex probability distributions, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use continuous transportation to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—an approximation of the Knothe-Rosenblatt rearrangement—using information from previous MCMC states, via the solution of an optimization problem. This optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using a Newton method that requires no gradient information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems show order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per target density evaluation and per unit of wallclock time. We will also discuss adaptive methods for the construction of transport maps in high dimensions, where use of a non-adapted basis (e.g., a total order polynomial expansion) can become computationally prohibitive. If only samples of the target distribution, rather than density evaluations, are available, then we can construct high-dimensional transformations by composing sparsely parameterized transport maps with rotations of the parameter space. 
If evaluations of the target density and its gradients are available, then one can exploit the structure of the variational problem used for map construction. In both settings, we will show links to recent ideas for dimension reduction in inverse problems.
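In one dimension the Knothe-Rosenblatt rearrangement reduces to the monotone inverse-CDF map T = F_target⁻¹ ∘ F_ref. A minimal sketch of such a transport map (not the adaptive, optimization-based construction of the talk; the Exp(1) target is chosen only because its quantile function is explicit):

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(4)

# Transport standard normal reference samples to an Exp(1) target via the
# monotone map T = F_target^{-1} o F_ref (Knothe-Rosenblatt in 1D).
def T(x):
    u = 0.5 * (1.0 + erf(x / sqrt(2.0)))   # F_ref: standard normal CDF
    return -log(1.0 - u)                   # F_target^{-1}: Exp(1) quantile

x = rng.standard_normal(50_000)            # reference samples
z = np.array([T(xi) for xi in x])          # transported samples
print(z.mean(), z.std())                   # both close to 1 for Exp(1)
```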

Non-Intrusive Solution of Stochastic and Parametric Equations

Matthies, Hermann (2015-01-07) [Presentation]

Many problems depend on parameters, which may be a finite set of numerical values or mathematically more complicated objects, such as processes or fields. We address the situation where we have an equation which depends on parameters; stochastic equations are a special case of such parametric problems, where the parameters are elements of a probability space. One common way to represent this dependence on parameters is to evaluate the state (or solution) of the system under investigation for different values of the parameters. But often one wants to evaluate the solution quickly for a new set of parameters where it has not been sampled. In this situation it may be advantageous to express the parameter-dependent solution with an approximation which allows for rapid evaluation of the solution. Such approximations are also called proxy or surrogate models, response functions, or emulators. All these methods may be seen as functional approximations: representations of the solution by an “easily computable” function of the parameters, as opposed to pure samples. The most obvious methods of approximation are based on interpolation, in this context often labelled as collocation. In the frequent situation where one has a “solver” for the equation for a given parameter value, i.e. a software component or a program, it is evident that this solver can be used independently, if desired in parallel, for all the parameter values, which subsequently may be used either for the interpolation or in the quadrature for the projection. Such methods are therefore uncoupled for each parameter value, and they additionally often carry the label “non-intrusive”. Without much argument, all other methods, which produce a coupled system of equations, are almost always labelled as “intrusive”, meaning that one cannot use the original solver. We want to show here that this is not necessarily the case.
Another approach is to choose some other projection onto the subspace spanned by the approximating functions. Usually this will involve minimising some norm of the difference between the true parametric solution and the approximation. Such methods are sometimes called pseudo-spectral projections, or regression solutions. On the other hand, methods which try to ensure that the approximation satisfies the parametric equation as well as possible are often based on a Rayleigh-Ritz or Galerkin type of “ansatz”, which leads to a coupled system for the unknown coefficients. This is often taken as an indication that the original solver cannot be used, i.e. that these methods are “intrusive”. But in many circumstances these methods may just as well be used in a non-intrusive fashion. Some very effective new methods based on low-rank approximations fall into the class of “not obviously non-intrusive” methods; hence it is important to show here how these may be computed non-intrusively.
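A minimal sketch of the non-intrusive, uncoupled workflow described above: the `solver` below is a hypothetical black box called once per collocation point (independently, hence parallelizable), after which the surrogate is cheap to evaluate at new parameter values. The solver, parameter range, and degree are invented for illustration.

```python
import numpy as np

# Hypothetical black-box "solver": only point evaluations are used,
# exactly as with a real simulation code called non-intrusively.
def solver(p):
    # e.g. the steady state of u' = -p*u + 1 is u = 1/p
    return 1.0 / p

deg = 10
# Chebyshev-Lobatto points mapped to the parameter range [1, 3].
k = np.arange(deg + 1)
nodes = 2.0 + np.cos(np.pi * k / deg)
values = np.array([solver(p) for p in nodes])   # uncoupled solver calls

# Interpolate the sampled values (collocation) in a Chebyshev basis.
coef = np.polynomial.chebyshev.chebfit(nodes - 2.0, values, deg)
surrogate = lambda p: np.polynomial.chebyshev.chebval(p - 2.0, coef)

p_new = 1.7
print(abs(surrogate(p_new) - solver(p_new)))    # cheap, accurate proxy
```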

Adaptive Surrogate Modeling for Response Surface Approximations with Application to Bayesian Inference

Prudhomme, Serge (2015-01-07) [Presentation]

The need for surrogate models and adaptive methods can be best appreciated if one is interested in parameter estimation using a Bayesian calibration procedure for validation purposes. We extend here our latest work on error decomposition and adaptive refinement for response surfaces to the development of surrogate models that can be substituted for the full models to estimate the parameters of Reynolds-averaged Navier-Stokes models. The error estimates and adaptive schemes are driven here by a quantity of interest and are thus based on the approximation of an adjoint problem. We will focus in particular on the accurate estimation of model evidences to facilitate model selection. The methodology will be illustrated on the Spalart-Allmaras RANS model for turbulence simulation.
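Since model selection hinges on the evidence, here is a minimal sketch of evidence estimation in a toy conjugate setting where the answer is analytic (this is plain prior Monte Carlo, not the adjoint-driven surrogate machinery of the talk; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy conjugate setup: prior theta ~ N(0, 1), likelihood y | theta ~ N(theta, 1).
# The evidence is then analytic, p(y) = N(y; 0, 2), so the Monte Carlo
# estimate over prior samples can be checked.
y = 0.7
theta = rng.standard_normal(500_000)                       # prior samples
lik = np.exp(-0.5 * (y - theta) ** 2) / np.sqrt(2 * np.pi) # likelihood values

evidence_mc = lik.mean()
evidence_exact = np.exp(-0.25 * y**2) / np.sqrt(4 * np.pi)
print(evidence_mc, evidence_exact)
```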

Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks

Moraes, Alvaro (2015-01-07) [Presentation]

Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from the kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that
P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η;
moreover, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as in an exact method, but with a smaller constant. We provide numerical examples to show our results.
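A single-level tau-leap sketch for the simplest possible network, the pure-death reaction X → ∅ with propensity c·X (the hybrid SSA/tau-leap and multilevel machinery of the talk sit on top of a path simulator like this; parameter values are illustrative, and E[X(T)] = x0·e^{-cT} is known exactly for checking):

```python
import numpy as np

rng = np.random.default_rng(6)

# Pure-death network X -> 0 with propensity c * X.
x0, c, T, tau = 100, 1.0, 1.0, 0.01
n_paths = 20_000

X = np.full(n_paths, x0, dtype=np.int64)
for _ in range(int(T / tau)):
    # Tau-leap: number of firings in [t, t + tau), with the propensity
    # frozen at the left endpoint of the interval.
    fired = rng.poisson(c * X * tau)
    X = np.maximum(X - fired, 0)       # clamp to keep the state non-negative

est = X.mean()
print(est, x0 * np.exp(-c * T))        # tau-leap bias is O(tau)
```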

Multi-Index Monte Carlo (MIMC): When sparsity meets sampling

Tempone, Raul (2015-01-07) [Presentation]

This talk focuses on our newest method: Multi-Index Monte Carlo (MIMC). The MIMC method uses a stochastic combination technique to solve the given approximation problem, generalizing the notion of standard MLMC levels into a set of multi-indices that should be properly chosen to exploit the available regularity. Indeed, instead of using first-order differences as in standard MLMC, MIMC uses high-order differences to dramatically reduce the variance of the hierarchical differences. This in turn gives a new, improved complexity result that enlarges the set of problem parameters for which the method achieves the optimal convergence rate, O(TOL^-2). Using optimal index sets that we determine, MIMC achieves a computational complexity whose rate does not depend on the dimensionality of the underlying problem, up to logarithmic factors. We present numerical results for a three-dimensional PDE with random coefficients to substantiate some of the derived computational complexity rates. Finally, using the Lindeberg-Feller theorem, we also show the asymptotic normality of the statistical error in the MIMC estimator, which justifies our error estimate and allows prescribing both the required accuracy and confidence in the final result.
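For contrast with MIMC's high-order mixed differences, a standard MLMC sketch with first-order differences (a single level index, not a multi-index set; the geometric Brownian motion test problem and all parameters are invented, with E[S_T] = S0·e^{μT} known exactly):

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: E[S_T] for dS = mu*S dt + sigma*S dW, Euler scheme with 2^l steps
# at level l; exact answer S0 * exp(mu * T).
S0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0

def euler_diff(level, n_samples):
    """Coupled sample of P_l - P_{l-1}, reusing the same Brownian increments."""
    nf = 2 ** level
    dt_f = T / nf
    dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):                      # fine Euler path
        Sf = Sf * (1 + mu * dt_f + sigma * dW[:, i])
    if level == 0:
        return Sf
    dWc = dW[:, 0::2] + dW[:, 1::2]          # pair up fine increments
    dt_c = 2 * dt_f
    Sc = np.full(n_samples, S0)
    for i in range(nf // 2):                 # coupled coarse Euler path
        Sc = Sc * (1 + mu * dt_c + sigma * dWc[:, i])
    return Sf - Sc

# Telescoping sum E[P_L] = sum_l E[P_l - P_{l-1}]; fewer samples at finer
# levels because the difference variance shrinks with the step size.
est = sum(euler_diff(l, 40_000 // 2**l + 1000).mean() for l in range(6))
print(est, S0 * np.exp(mu * T))
```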
