## Search

Now showing items 1-5 of 5

Risk assessment of salt contamination of groundwater under uncertain aquifer properties

Litvinenko, Alexander; Keyes, David E.; Logashenko, Dmitry; Tempone, Raul; Wittum, Gabriel (2017-10-01) [Poster]

One of the central topics in hydrogeology and environmental science is the investigation of salinity-driven groundwater flow in heterogeneous porous media. Our goals are to model and to predict the pollution of water resources.
We simulate density-driven groundwater flow with uncertain porosity and permeability. This strongly nonlinear model describes the unstable transport of salt water, which forms finger-shaped patterns. The computation requires a very fine unstructured mesh and, therefore, substantial computational resources.
We run a highly parallel multigrid solver, based on ug4, on the Shaheen II supercomputer. An MPI-based parallelization is applied in both the geometric and the stochastic spaces. Each scenario is computed on 32 cores and requires a mesh with ~8M grid points and 1500 or more time steps. 200 scenarios are computed concurrently, so the parallel computation uses 200x32 = 6400 cores in total. The main goals of this work are to estimate the propagation of uncertainties through the model and to investigate the sensitivity of the solution to the uncertain input parameters. Additionally, we demonstrate how the ug4-based multigrid solver can be used as a black box in an uncertainty quantification framework.
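The black-box usage described above can be sketched as a plain Monte Carlo loop: sample the uncertain inputs, call the solver unchanged, and collect statistics of a quantity of interest. This is only an illustrative sketch; `solve_density_driven_flow` is a hypothetical stand-in for the expensive ug4 PDE solve, and the input distributions are assumptions, not the poster's.

```python
import numpy as np

rng = np.random.default_rng(42)

def solve_density_driven_flow(porosity, permeability):
    # Hypothetical placeholder for the expensive black-box solver;
    # returns a scalar quantity of interest (e.g., salt mass fraction at a well).
    return porosity * np.log1p(permeability)

n_scenarios = 200  # the poster computes 200 scenarios concurrently
porosity = rng.uniform(0.2, 0.4, n_scenarios)        # assumed input range
permeability = rng.lognormal(0.0, 0.5, n_scenarios)  # assumed input distribution

# Each call is independent, which is what makes the per-scenario
# parallelization (32 cores per solve, 200 solves at once) straightforward.
qoi = np.array([solve_density_driven_flow(p, k)
                for p, k in zip(porosity, permeability)])

# Statistics of the propagated uncertainty
print("mean QoI:", qoi.mean())
print("std  QoI:", qoi.std())
```

In the actual framework each sample evaluation is a full PDE solve on its own set of cores; the structure of the loop is unchanged.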

Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

Litvinenko, Alexander; Chavez Chavez, Gustavo Ivan; Keyes, David E.; Ltaief, Hatem; Yokota, Rio (2015-01-05) [Poster]

H-matrices and the Fast Multipole Method (FMM) are powerful techniques for approximating linear operators arising from partial differential and integral equations; they reduce the computational cost from quadratic or cubic to log-linear, O(n log n), where n is the number of degrees of freedom in the discretization. Storage is reduced to log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared- and distributed-memory systems was pioneered by R. Kriemann in 2005. Since then, parallel architectures and software have been developing very fast. Progress in GPUs and many-core systems (e.g., the Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].
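The log-linear complexity rests on one observation: matrix blocks coupling well-separated point clusters are numerically low-rank, so they can be stored as rank-k factors instead of dense blocks. A minimal sketch of that idea with a truncated SVD (the names, kernel, and tolerance here are illustrative, not any H-matrix library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 64))   # cluster A
y = np.sort(rng.uniform(2.0, 3.0, 64))   # cluster B, well separated from A

# Kernel block K_ij = 1 / |x_i - y_j| between the two clusters
K = 1.0 / np.abs(x[:, None] - y[None, :])

# Truncated SVD reveals the numerical rank at a given tolerance
U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s / s[0] > 1e-8))
K_lr = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(K - K_lr) / np.linalg.norm(K)
print(f"numerical rank: {k}, relative error: {rel_err:.2e}")
# Storage drops from 64*64 dense entries to k*(64+64) factor entries.
```

An H-matrix applies this recursively over a cluster tree of admissible blocks, which is where the O(n log n) cost and storage come from.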

Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E. (2017-11-01) [Poster]

The main goal of this article is to introduce the parallel hierarchical matrix library HLIBpro to the statistical community.
We describe the HLIBCov package, an extension of the HLIBpro library for approximating large covariance matrices and maximizing likelihood functions. We show that an approximate Cholesky factorization of a dense matrix of size $2M \times 2M$ can be computed on a modern multi-core desktop in a few minutes.
Further, HLIBCov is used to estimate unknown parameters, such as the covariance length, variance, and smoothness of a Matérn covariance function, by maximizing the joint Gaussian log-likelihood function. The computational bottleneck is the expensive linear algebra on large, dense covariance matrices. Therefore, the covariance matrices are approximated in the hierarchical ($\mathcal{H}$-) matrix format, with computational cost $\mathcal{O}(k^2 n \log^2 n / p)$ and storage $\mathcal{O}(k n \log n)$, where the rank $k$ is a small integer (typically $k < 25$), $p$ is the number of cores, and $n$ is the number of locations on a fairly general mesh. We demonstrate a synthetic example in which the true parameter values are known.
For reproducibility, we provide the C++ code, the documentation, and the synthetic data.
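The quantity being maximized above is the joint Gaussian log-likelihood under a Matérn covariance. A small dense sketch of that computation (HLIBCov replaces the dense Cholesky with an $\mathcal{H}$-matrix factorization to reach large $n$; the parameter names and values here are generic assumptions, not HLIBCov's interface):

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import kv, gamma

def matern_cov(D, sigma2=1.0, ell=0.3, nu=1.5):
    """Matérn covariance for a distance matrix D (variance sigma2,
    covariance length ell, smoothness nu)."""
    r = np.sqrt(2.0 * nu) * D / ell
    r = np.where(r == 0.0, 1e-12, r)  # K_nu diverges at 0; the limit is sigma2
    return sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * r ** nu * kv(nu, r)

def gaussian_loglik(z, C):
    """Joint Gaussian log-likelihood via a Cholesky factorization of C."""
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L, z)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    n = len(z)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + alpha @ alpha)

rng = np.random.default_rng(1)
pts = rng.uniform(size=(100, 2))                       # 100 random locations
C = matern_cov(cdist(pts, pts)) + 1e-6 * np.eye(100)   # small jitter for stability
z = np.linalg.cholesky(C) @ rng.standard_normal(100)   # synthetic field sample
print("log-likelihood:", gaussian_loglik(z, C))
```

Maximizing this function over `(sigma2, ell, nu)` recovers the parameters; the Cholesky step is the $O(n^3)$ bottleneck that the $\mathcal{H}$-matrix approximation reduces to the quoted log-linear cost.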

Likelihood Approximation With Parallel Hierarchical Matrices For Large Spatial Datasets

Litvinenko, Alexander; Sun, Ying; Genton, Marc G.; Keyes, David E. (2017-03-13) [Poster]

Scalable Hierarchical Algorithms for stochastic PDEs and UQ

Litvinenko, Alexander; Chavez Chavez, Gustavo Ivan; Keyes, David E.; Ltaief, Hatem; Yokota, Rio (2015-01-07) [Poster]

H-matrices and the Fast Multipole Method (FMM) are powerful techniques for approximating linear operators arising from partial differential and integral equations; they reduce the computational cost from quadratic or cubic to log-linear, O(n log n), where n is the number of degrees of freedom in the discretization. Storage is reduced to log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared- and distributed-memory systems was pioneered by Kriemann [1,2]. Since 2005, parallel architectures and software have been developing very fast. Progress in GPUs and many-core systems (e.g., the Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].
