Conference on Advances in Uncertainty Quantification Methods, Algorithms and Applications (UQAW 2014): Recent submissions
Now showing items 1-20 of 65

Solution of Stochastic Nonlinear PDEs Using Automated Wiener-Hermite Expansion (2014-01-06) [Poster] The solution of stochastic differential equations (SDEs) using the Wiener-Hermite expansion (WHE) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using standard deterministic numerical methods [1]. The main statistics, such as the mean, covariance, and higher-order statistical moments, can be calculated by simple formulae involving only the deterministic Wiener-Hermite coefficients. In the WHE approach, there is no randomness directly involved in the computations: one does not have to rely on pseudo-random number generators, and there is no need to solve the SDEs repeatedly for many realizations. Instead, the deterministic system is solved only once. For previous research efforts see [2, 4].
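The "simple formulae" can be made concrete in a one-dimensional toy case: for u = sum_k c_k He_k(xi) with probabilists' Hermite polynomials, the mean and variance follow directly from the deterministic coefficients. A minimal sketch (the coefficients below are illustrative, not from any of the cited works):

```python
import math

# Illustrative 1-D Wiener-Hermite statistics: for u = sum_k c_k He_k(xi),
# E[He_k] = 0 for k >= 1 and E[He_k^2] = k!, so mean and variance come
# straight from the deterministic coefficients, with no sampling at all.
def whe_mean(coeffs):
    return coeffs[0]                  # only the k = 0 term has nonzero mean

def whe_variance(coeffs):
    return sum(c ** 2 * math.factorial(k)
               for k, c in enumerate(coeffs) if k >= 1)

c = [2.0, 0.5, 0.1]                   # c_0, c_1, c_2 from a deterministic solve
mean, var = whe_mean(c), whe_variance(c)
```

The same idea extends to covariances and higher moments, each a closed-form combination of the coefficients.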

Dynamical low-rank approximation of time-dependent PDEs with random data (2014-01-06) [Poster]

Optimal Experimental Design for Large-Scale Bayesian Inverse Problems (2014-01-06) [Presentation] We develop a Bayesian framework for the optimal experimental design of the shock tube experiments being carried out at the KAUST Clean Combustion Research Center. The unknown parameters are the pre-exponential parameters and the activation energies in the reaction rate expressions. The control parameters are the initial mixture composition and the temperature. The approach is based on first building a polynomial-based surrogate model for the observables relevant to the shock tube experiments. Based on these surrogates, a novel MAP-based approach is used to estimate the expected information gain in the proposed experiments and to select the best experimental setups, namely those yielding the optimal expected information gains. The validity of the approach is tested using synthetic data generated by sampling the PC surrogate. We finally outline a methodology for validation using actual laboratory experiments, and for extending the experimental design methodology to cases where the control parameters are noisy.
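As a toy stand-in for the shock-tube setting, the expected information gain has a closed form in a linear-Gaussian model, which already captures how candidate designs are ranked. All names and numbers here are illustrative; the design choice enters only through the noise level:

```python
import math

# Linear-Gaussian toy model: observation y = theta + eps, eps ~ N(0, s_n^2),
# prior theta ~ N(0, s_p^2). The expected information gain (KL divergence
# from prior to posterior, averaged over data) is then
#   EIG = 0.5 * ln(1 + s_p^2 / s_n^2).
def expected_information_gain(sigma_prior, sigma_noise):
    return 0.5 * math.log(1.0 + sigma_prior ** 2 / sigma_noise ** 2)

# Hypothetical setups, characterized only by their observation noise level;
# the best design is the one maximizing the expected information gain.
designs = {"setup_A": 0.5, "setup_B": 2.0}
best = max(designs, key=lambda d: expected_information_gain(1.0, designs[d]))
```

In the real problem the gain has no closed form, which is why surrogate models and MAP-based estimates of the gain are needed.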

Multilevel variance estimators in MLMC and application for random obstacle problems (2014-01-06) [Presentation] The Multilevel Monte Carlo method (MLMC) is a recently established sampling approach for uncertainty propagation in problems with random parameters. In this talk we present new convergence theorems for the multilevel variance estimators. As a result, we prove that, under certain assumptions on the parameters, the variance can be estimated at essentially the same cost as the mean, and consequently at the cost required for the solution of one forward problem for a fixed deterministic set of parameters. We comment on the fast and stable evaluation of the estimators, suitable for parallel large-scale computations. The suggested approach is applied to a class of scalar random obstacle problems, a prototype of contact between deformable bodies. In particular, we are interested in rough random obstacles modelling contact between car tires and variable road surfaces. Numerical experiments support and complete the theoretical analysis.
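The cost claims rest on the telescoping structure of the MLMC estimator, which can be sketched in a few lines. The `toy` solver and all numbers below are illustrative, not the obstacle-problem setting:

```python
import random

def mlmc_estimate(samples_per_level, solver, rng):
    # Telescoping MLMC estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
    # The level-l correction is sampled with the SAME random draw on the
    # fine and coarse levels, which is what makes its variance decay.
    total = 0.0
    for level, n in enumerate(samples_per_level):
        s = 0.0
        for _ in range(n):
            omega = rng.random()                       # one parameter draw
            fine = solver(level, omega)
            coarse = solver(level - 1, omega) if level > 0 else 0.0
            s += fine - coarse
        total += s / n
    return total

# Toy "solver": a level-l discretization of a quantity converging to omega,
# with discretization bias 2^-(l+1). The exact expectation at level 2 is 0.625.
toy = lambda level, omega: omega + 2.0 ** -(level + 1)
rng = random.Random(0)
estimate = mlmc_estimate([4000, 2000, 1000], toy, rng)
```

Note how most samples sit on the cheap coarse level; the corrections need far fewer samples because their variance is small.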

Collocation methods for uncertainty quantification in PDE models with random data (2014-01-06) [Presentation] In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches, where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation is constructed by the least-squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that this condition is sharp in the monovariate case but seems to be over-constraining in higher dimensions. The regression technique therefore seems attractive in higher dimensions.
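The regression approach can be sketched in one dimension: evaluate a model at random points drawn from the uniform measure and fit a Legendre expansion by least squares, with the number of samples scaling quadratically in the dimension of the polynomial space, as the stability result suggests. The model `np.exp` is a toy stand-in for a PDE solve:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4                                   # polynomial degree
n = p + 1                               # dimension of the 1-D polynomial space
M = n ** 2                              # quadratic oversampling, M ~ n^2
x = rng.uniform(-1.0, 1.0, M)           # random evaluation points (uniform measure)
y = np.exp(x)                           # toy "PDE" evaluations at those points

# Legendre design matrix and least-squares fit of the chaos coefficients
V = np.polynomial.legendre.legvander(x, p)
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Evaluate the polynomial surrogate at a new point
approx = np.polynomial.legendre.legval(0.3, coef)
```

In higher dimensions the same recipe applies with a multivariate polynomial basis, and the numerical evidence cited in the talk suggests milder oversampling may suffice there.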

Higher-order Solution of the Stochastic Diffusion Equation with Nonlinear Losses Using the WHEP Technique (2014-01-06) [Poster] Using the Wiener-Hermite expansion with perturbation (WHEP) technique in the solution of stochastic partial differential equations (SPDEs) has the advantage of converting the problem to a system of deterministic equations that can be solved efficiently using standard deterministic numerical methods [1]. The Wiener-Hermite expansion is the only known expansion that handles white/colored noise exactly. The main statistics, such as the mean, covariance, and higher-order statistical moments, can be calculated by simple formulae involving only the deterministic Wiener-Hermite coefficients. In this poster, the WHEP technique is used to solve the 2D diffusion equation with nonlinear losses, excited with white noise. The solution will be obtained numerically and will be validated against the analytical solution obtainable from a symbolic mathematics package such as Mathematica.

Data-Driven Model Order Reduction for Bayesian Inverse Problems (2014-01-06) [Poster] One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven, projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
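The poster does not detail the reduction technique; a common data-driven, projection-based choice is proper orthogonal decomposition (POD) of solution snapshots, sketched here on purely illustrative data:

```python
import numpy as np

# POD sketch: collect snapshots of an expensive model, extract a
# low-dimensional basis via SVD, and project new evaluations onto it.
t = np.linspace(0.0, 1.0, 50)
snapshots = np.array([np.sin(k * t) for k in (1.0, 1.1, 1.2, 1.3)]).T  # 50 x 4

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = int(np.sum(s / s[0] > 1e-6))        # truncation rank from singular-value decay
basis = U[:, :r]                        # reduced basis

# A "new" solution at an unseen parameter value, approximated by projection
new = np.sin(1.15 * t)
reduced = basis @ (basis.T @ new)
err = np.linalg.norm(new - reduced) / np.linalg.norm(new)
```

In the MCMC context, each proposal is then evaluated with the cheap reduced model instead of the full PDE solve.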

Cooperative HARQ with Poisson Interference and Opportunistic Routing (2014-01-06) [Presentation] This presentation considers reliable transmission of data from a source to a destination, aided cooperatively by wireless relays selected opportunistically and utilizing hybrid forward error correction/detection and automatic repeat request (Hybrid ARQ, or HARQ). Specifically, we present a performance analysis of the cooperative HARQ protocol in a wireless ad hoc multi-hop network employing spatial ALOHA. We model the nodes in such a network by a homogeneous 2D Poisson point process. We study the trade-off between the per-hop rate, spatial density, and range of transmissions inherent in the network by optimizing the transport capacity with respect to the network design parameters, namely the HARQ coding rate and the medium access probability. We obtain an approximate analytic expression for the expected progress of opportunistic routing and optimize the capacity approximation by convex optimization. By way of numerical results, we show that the network design parameters obtained by optimizing the analytic approximation of the transport capacity closely follow those of exact Monte Carlo based transport capacity optimization. As a result of the analysis, we argue that the optimal HARQ coding rate and medium access probability are independent of the node density in the network.
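The spatial model underlying the analysis can be sampled directly. A minimal sketch of a homogeneous 2-D Poisson point process (window size and intensity are illustrative, not taken from the talk):

```python
import math
import random

def poisson_point_process(intensity, width, height, rng):
    # Homogeneous 2-D PPP on [0, width] x [0, height]: the node count is
    # Poisson(intensity * area); given the count, positions are independent
    # and uniform over the window.
    limit = math.exp(-intensity * width * height)
    count, p = 0, 1.0
    while True:                        # Knuth's inversion method for the count
        p *= rng.random()
        if p <= limit:
            break
        count += 1
    return [(rng.uniform(0.0, width), rng.uniform(0.0, height))
            for _ in range(count)]

rng = random.Random(42)
nodes = poisson_point_process(2.0, 5.0, 5.0, rng)       # one realization
avg_count = sum(len(poisson_point_process(2.0, 5.0, 5.0, rng))
                for _ in range(200)) / 200              # mean should be near 50
```

Monte Carlo transport-capacity optimization then amounts to evaluating routing progress over many such realizations while sweeping the coding rate and access probability.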

Numerical Methods for Bayesian Inverse Problems (2014-01-06) [Presentation] We present recent results on Bayesian inversion for a groundwater flow problem with an uncertain conductivity field. In particular, we show how direct and indirect measurements can be used to obtain a stochastic model for the unknown. The main tool here is Bayes’ theorem, which merges the indirect data with the stochastic prior model for the conductivity field obtained by the direct measurements. Further, we demonstrate how the resulting posterior distribution of the quantity of interest, in this case travel times of radionuclide contaminants, can be obtained by Markov Chain Monte Carlo (MCMC) simulations. Moreover, we investigate new, promising MCMC methods which exploit geometrical features of the posterior and which are suited to infinite dimensions.
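A minimal random-walk Metropolis sketch of the MCMC machinery, with a standard normal toy posterior standing in for the travel-time posterior (all parameters illustrative):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step, rng):
    # Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    # with probability min(1, post(x') / post(x)).
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.gauss(0.0, 1.0)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

rng = random.Random(3)
chain = metropolis_hastings(lambda t: -0.5 * t * t, 0.0, 20000, 1.0, rng)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The geometry-exploiting and dimension-robust variants mentioned in the talk replace the isotropic random walk with better-informed proposals, but the accept/reject skeleton is the same.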

Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging (2014-01-06) [Poster] Newton-type algorithms have been extensively studied in nonlinear microwave imaging due to their quadratic convergence rate and their ability to recover images with high contrast values. In the past, Newton methods have been implemented in conjunction with smoothness-promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied to imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm is formulated and implemented in conjunction with a linear sparse optimization scheme. A novel preconditioning technique is proposed to increase the convergence rate of the optimization problem. Numerical results demonstrate that the proposed framework produces sharper and more accurate images when applied in sparse/sparsified domains.

Advances in Spectral Methods for UQ in Incompressible Navier-Stokes Equations (2014-01-06) [Presentation] In this talk, I will present two recent contributions to the development of efficient methodologies for uncertainty propagation in the incompressible Navier-Stokes equations. The first concerns the reduced basis approximation of stochastic steady solutions, using Proper Generalized Decompositions (PGD). An Arnoldi problem is projected to obtain a low-dimensional Galerkin problem. The construction then amounts to the resolution of a sequence of uncoupled deterministic Navier-Stokes-like problems and simple quadratic stochastic problems, followed by the resolution of a low-dimensional coupled quadratic stochastic problem, with a resulting complexity that has to be contrasted with the dimension of the whole Galerkin problem for classical spectral approaches. An efficient algorithm for the approximation of the stochastic pressure field is also proposed. Computations are presented for uncertain viscosity and forcing term to demonstrate the effectiveness of the reduced method. The second contribution concerns the computation of stochastic periodic solutions to the Navier-Stokes equations. The objective is to circumvent the well-known limitation of spectral methods for long-time integration. We propose to directly determine the stochastic limit cycles through the definition of their stochastic period and an initial condition over the cycle. A modified Newton method is constructed to compute iteratively both the period and the initial conditions. Owing to the periodic character of the solution, and by introducing an appropriate time scaling, the solution can be approximated using low-degree polynomial expansions with large computational savings as a result. The methodology is illustrated for the von Kármán flow around a cylinder with stochastic inflow conditions.

Multivariate Max-Stable Spatial Processes (2014-01-06) [Presentation] Analysis of spatial extremes is currently based on univariate processes. Max-stable processes allow the spatial dependence of extremes to be modelled and explicitly quantified, and they are therefore widely adopted in applications. For a better understanding of extreme events of real processes, such as environmental phenomena, it may be useful to study several spatial variables simultaneously. To this end, we extend some theoretical results and applications of max-stable processes to the multivariate setting to analyze extreme events of several variables observed across space. In particular, we study the maxima of independent replicates of multivariate processes, both in the Gaussian and Student-t cases. Then, we define a Poisson process construction in the multivariate setting and introduce multivariate versions of the Smith Gaussian extreme-value, the Schlather extremal-Gaussian and extremal-t, and the Brown-Resnick models. Inferential aspects of these models based on composite likelihoods are developed. We present results of various Monte Carlo simulations and of an application to a dataset of summer daily temperature maxima and minima in Oklahoma, U.S.A., highlighting the utility of working with multivariate models in contrast to the univariate case. Based on joint work with Simone Padoan and Huiyan Sang.
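The defining property behind these models is closure of the distribution under componentwise maxima. A small Monte Carlo illustration for unit Fréchet margins, where max-stability is exact (sample sizes are arbitrary):

```python
import math
import random

# If X_1, ..., X_n are independent unit-Frechet, P[X <= z] = exp(-1/z),
# then max(X_1, ..., X_n) / n is again exactly unit-Frechet:
#   P[max/n <= z] = exp(-1/(n z))^n = exp(-1/z).
# Max-stable processes lift this closure property to the spatial setting.
def unit_frechet(rng):
    return -1.0 / math.log(rng.random())   # inverse-CDF sampling

rng = random.Random(7)
n, reps = 50, 5000
below = sum(max(unit_frechet(rng) for _ in range(n)) / n <= 1.0
            for _ in range(reps))
frac = below / reps          # empirical P[max/n <= 1], close to exp(-1)
```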

On the Predictability of Computer Simulations: Advances in Verification and Validation (2014-01-06) [Presentation] We will present recent advances on the topics of Verification and Validation in order to assess the reliability and predictability of computer simulations. The first part of the talk will focus on goal-oriented error estimation for nonlinear boundary-value problems and nonlinear quantities of interest, in which case the error representation consists of two contributions: 1) a first contribution, involving the residual and the solution of the linearized adjoint problem, which quantifies the discretization or modeling error; and 2) a second contribution, combining higher-order terms that describe the linearization error. The linearization error contribution is in general neglected with respect to the discretization or modeling error. However, when nonlinear effects are significant, it is unclear whether ignoring linearization effects may produce poor convergence of the adaptive process. The objective will be to show how both contributions can be estimated and employed in an adaptive scheme that simultaneously controls the two errors in a balanced manner. In the second part of the talk, we will present a novel approach for the calibration of model parameters. The proposed inverse problem not only involves the minimization of the misfit between experimental observables and their theoretical estimates, but also an objective function that takes into account design goals in specific design scenarios. The method can be viewed as a regularization approach to the inverse problem, one, however, that best respects the design goals for which the mathematical models are intended. The inverse problem is solved by a Bayesian method to account for uncertainties in the data. We will show that it shares the same structure as the deterministic problem that one would obtain by multi-objective optimization theory. The method is illustrated on an example of heat transfer in a two-dimensional fin. The main benefit of the proposed approach is that it increases confidence in the predictive capabilities of mathematical models.
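The first contribution can be illustrated in the linear case, where the adjoint-weighted residual reproduces the error in the quantity of interest exactly and only the nonlinear case incurs a linearization error. The matrices below are purely illustrative:

```python
import numpy as np

# For A u = b and a quantity of interest Q(u) = q^T u, any approximation
# u_h satisfies Q(u) - Q(u_h) = z^T (b - A u_h), where z solves the
# adjoint problem A^T z = q. For nonlinear problems the same identity
# holds only up to higher-order linearization terms.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # toy well-conditioned operator
b = rng.standard_normal(5)
q = rng.standard_normal(5)

u = np.linalg.solve(A, b)                           # exact solution
u_h = u + 0.01 * rng.standard_normal(5)             # perturbed "discrete" solution
z = np.linalg.solve(A.T, q)                         # adjoint solution

estimate = z @ (b - A @ u_h)                        # adjoint-weighted residual
true_err = q @ u - q @ u_h
```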

Mean field games (2014-01-06) [Presentation] In this talk we will report on new results concerning the existence of smooth solutions for time-dependent mean-field games. This result is established through a combination of tools, including several a priori estimates for time-dependent mean-field games together with new techniques for the regularity of Hamilton-Jacobi equations.

Inverse Problems and Uncertainty Quantification (2014-01-06) [Presentation] In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of a conditional expectation, the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free nonlinear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian updates on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
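In the scalar linear-Gaussian case the conditional-expectation update reduces to the familiar Kalman formula, which gives a feel for the linear sampling-free update; the actual method applies the analogous update to the coefficients of a functional (e.g. polynomial chaos) representation. Numbers here are illustrative:

```python
# Scalar linear Bayesian update: x_post = x_prior + K * (y - H x_prior),
# with H = 1 and the gain K chosen to minimize the posterior variance.
sigma_prior, sigma_obs = 2.0, 1.0      # prior and observation std deviations
x_prior, y = 0.0, 1.5                  # prior mean and one observation

K = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_obs ** 2)   # Kalman gain
x_post = x_prior + K * (y - x_prior)
var_post = (1.0 - K) * sigma_prior ** 2
```

The quadratic Bayesian update compared in the talk enriches this linear map with higher-order terms in the observation.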