
• #### A Matrix Splitting Method for Composite Function Minimization

(arXiv, 2016-12-07)
Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
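As a point of reference for the splitting viewpoint above, here is a minimal sketch of the classical Gauss-Seidel iteration for a linear system $Ax=b$ (the textbook method that MSM generalizes, not the paper's algorithm; the function name and example system are illustrative):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Classical Gauss-Seidel iteration for Ax = b.

    Implicitly splits A into its lower-triangular part (including the
    diagonal) and the strict upper part, and sweeps through the
    coordinates, using updated values as soon as they are available.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant toy system, so Gauss-Seidel converges.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gauss_seidel(A, b)
```

For this 2x2 system the sweep contracts the error by a fixed factor each pass, so a few dozen iterations reach machine precision.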
• #### Measurement of the surface susceptibility and the surface conductivity of atomically thin $\rm MoS_2$ by spectroscopic ellipsometry

(arXiv, 2017-10-01)
We show how to correctly extract from the ellipsometric data the surface susceptibility and the surface conductivity that describe the optical properties of monolayer $\rm MoS_2$. Theoretically, these parameters stem from modelling a single-layer two-dimensional crystal as a surface current, a truly two-dimensional model. Current experimental practice is to consider this model equivalent to a homogeneous slab with an effective thickness given by the interlayer spacing of the exfoliated bulk material. We prove that the error in the evaluation of the surface susceptibility of monolayer $\rm MoS_2$, owing to the use of the slab model, is at least 10%, a significant discrepancy in the determination of the optical properties of this material.
• #### Measurement Selection: A Random Matrix Theory Approach

(Institute of Electrical and Electronics Engineers (IEEE), 2018-05-15)
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $n\choose k$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one consists of applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the ability of the proposed blind methods to approach the performance of channel-aware algorithms.
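To illustrate the flavor of the greedy approach above, here is a generic selection sketch that adds, at each step, the measurement row maximizing a log-determinant accuracy proxy. The criterion, regularizer, and function names are illustrative assumptions, not the paper's exact error measures:

```python
import numpy as np

def greedy_select(H, k, eps=1e-6):
    """Greedily pick k of the n rows of a measurement matrix H (n x m).

    Each step adds the row that most increases
    log det(H_S^T H_S + eps*I), a common proxy for estimation
    accuracy (illustrative; the paper considers several measures).
    """
    n, m = H.shape
    selected, rest = [], list(range(n))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in rest:
            S = selected + [i]
            G = H[S].T @ H[S] + eps * np.eye(m)
            val = np.linalg.slogdet(G)[1]   # log|det|, numerically stable
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
        rest.remove(best)
    return selected

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 3))   # n=8 candidate measurements, m=3 parameters
S = greedy_select(H, 4)
```

Each outer step costs one pass over the remaining rows, so the whole procedure is polynomial, in contrast to the $n\choose k$ exhaustive search.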
• #### Measuring Canopy Structure and Condition Using Multi-Spectral UAS Imagery in a Horticultural Environment

(MDPI AG, 2018-12-29)
Tree condition, pruning and orchard management practices within intensive horticultural tree crop systems can be determined via measurements of tree structure. Multi-spectral imagery acquired from an unmanned aerial system (UAS) has been demonstrated as an accurate and efficient platform for measuring various tree structural attributes, but research in complex horticultural environments has been limited. This research established a methodology for accurately estimating tree crown height, extent, plant projective cover (PPC) and condition of avocado tree crops, from a UAS platform. Individual tree crowns were delineated using object-based image analysis. In comparison to field measured canopy heights, an image-derived canopy height model provided a coefficient of determination (R2) of 0.65 and relative root mean squared error of 6%. Tree crown length perpendicular to the hedgerow was accurately mapped. PPC was measured using spectral and textural image information and produced an R2 value of 0.62 against field data. A random forest classifier was applied to assign tree condition into four categories in accordance with industry standards, producing out-of-bag accuracies >96%. Our results demonstrate the potential of UAS-based mapping for the provision of information to support the horticulture industry and facilitate orchard-based assessment and management.
• #### Meta-analysis reveals host-dependent nitrogen recycling as a mechanism of symbiont control in Aiptasia

(Cold Spring Harbor Laboratory, 2018-02-22)
The metabolic symbiosis with photosynthetic algae of the genus Symbiodinium allows corals to thrive in the oligotrophic environments of tropical seas. Many aspects of this relationship have been investigated using transcriptomic analyses in the emerging model organism Aiptasia. However, previous studies identified thousands of putatively symbiosis-related genes, making it difficult to disentangle symbiosis-induced responses from undesired experimental parameters. Using a meta-analysis approach, we identified a core set of 731 high-confidence symbiosis-associated genes that reveal host-dependent recycling of waste ammonium and amino acid synthesis as central processes in this relationship. Combining transcriptomic and metabolomic analyses, we show that symbiont-derived carbon enables host recycling of ammonium into nonessential amino acids. We propose that this provides a regulatory mechanism to control symbiont growth through a carbon-dependent negative feedback of nitrogen availability to the symbiont. The dependence of this mechanism on symbiont-derived carbon highlights the susceptibility of this symbiosis to changes in carbon translocation, as imposed by environmental stress.
• #### Microscopic Origin of Interfacial Dzyaloshinskii-Moriya Interaction

(arXiv, 2017-04-10)
Chiral spin textures at the interface between ferromagnetic and heavy nonmagnetic metals, such as Néel-type domain walls and skyrmions, have been studied intensively because of their great potential for future nanomagnetic devices. The Dzyaloshinskii-Moriya interaction (DMI) is an essential phenomenon for the formation of such chiral spin textures. In spite of recent theoretical progress aiming at understanding the microscopic origin of the DMI, an experimental investigation unravelling the physics at play is still required. Here, we experimentally demonstrate the close correlation of the DMI with the anisotropy of the orbital magnetic moment and with the magnetic dipole moment of the ferromagnetic metal. Density functional theory and tight-binding model calculations reveal that asymmetric electron occupation in orbitals gives rise to this correlation.
• #### Model-based Quantile Regression for Discrete Data

(arXiv, 2018-04-10)
Quantile regression is a class of methods devoted to the modelling of conditional quantiles. In a Bayesian framework, quantile regression has typically been carried out by exploiting the Asymmetric Laplace Distribution as a working likelihood. Despite the fact that this leads to a proper posterior for the regression coefficients, the resulting posterior variance is affected by an unidentifiable parameter, hence any inferential procedure besides point estimation is unreliable. We propose a model-based approach for quantile regression that considers quantiles of the generating distribution directly, and thus allows for proper uncertainty quantification. We then create a link between quantile regression and generalised linear models by mapping the quantiles to the parameter of the response variable, and we exploit it to fit the model with R-INLA. We also extend the approach to discrete responses, where there is no one-to-one relationship between quantiles and the distribution's parameters, by introducing continuous generalisations of the most common discrete variables (Poisson, Binomial and Negative Binomial) to be exploited in the fitting.
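For contrast with the model-based approach above, here is a minimal sketch of classical quantile regression via subgradient descent on the pinball (check) loss; this is the frequentist baseline, not the paper's R-INLA method, and the function names and toy demo are illustrative:

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss whose minimizer is the tau-th quantile."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def fit_quantile(x, y, tau, lr=0.05, iters=5000):
    """Linear quantile regression y ~ a + b*x by subgradient descent
    on the pinball loss (classical baseline, illustrative only)."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        u = y - (a + b * x)
        g = np.where(u >= 0, -tau, 1.0 - tau)   # subgradient w.r.t. the prediction
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

# Intercept-only demo: the fitted intercept approaches the sample median.
y = np.arange(1.0, 11.0)
a, b = fit_quantile(np.zeros_like(y), y, tau=0.5)
```

Because the pinball loss is piecewise linear, the iterates settle into a small neighborhood of the empirical quantile rather than converging exactly, which is sufficient for illustration.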
• #### Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

(arXiv, 2017-10-29)
Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first-principles methods. The first uses the constrained moment method to build a spin spiral in real space, while the second uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of the DMI on the spin spiral wavelength is dramatically different. We suggest that the long-range magnetic interactions that determine itinerant magnetism in transition metals are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is better suited to modeling DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.
• #### Modeling high dimensional multichannel brain signals

(arXiv, 2017-03-27)
In this paper, our goal is to model functional and effective (directional) connectivity in networks of multichannel brain physiological signals (e.g., electroencephalograms, local field potentials). The primary challenges here are twofold: first, there are major statistical and computational difficulties in modeling and analyzing high-dimensional multichannel brain signals; second, there is no set of universally agreed measures for characterizing connectivity. To model multichannel brain signals, our approach is to fit a vector autoregressive (VAR) model with sufficiently high order so that complex lead-lag temporal dynamics between the channels can be accurately characterized. However, such a model contains a large number of parameters. Thus, we estimate the high-dimensional VAR parameter space with our proposed hybrid LASSLE method (LASSO+LSE), which imposes regularization in the first step (to control for sparsity) and constrained least squares estimation in the second step (to improve the bias and mean-squared error of the estimator). Then, to characterize connectivity between channels in a brain network, we use various measures but put an emphasis on partial directed coherence (PDC) in order to capture directional connectivity between channels. PDC is a directed frequency-specific measure that explains the extent to which the present oscillatory activity in a sender channel influences the future oscillatory activity in a specific receiver channel relative to all possible receivers in the network. Using the proposed modeling approach, we gained insights into learning in a rat engaged in a non-spatial memory task.
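The two-step LASSO+LSE idea can be sketched as follows. This toy version uses proximal gradient (ISTA) for the LASSO step and an ordinary, unconstrained least-squares refit on the recovered support, so it illustrates the hybrid principle rather than the paper's exact constrained estimator; all names and the synthetic data are illustrative:

```python
import numpy as np

def lassle(X, y, lam, iters=3000):
    """Two-step sparse estimator in the spirit of LASSO+LSE:
    (1) solve the LASSO by ISTA to find a sparse support;
    (2) refit by ordinary least squares on that support to reduce
    the LASSO's shrinkage bias."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the (1/2n)||y - Xb||^2 loss
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    support = np.flatnonzero(beta)
    refit = np.zeros(p)
    if support.size:
        refit[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return refit, support

# Sparse ground truth: only coefficients 0 and 3 are nonzero.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
beta_true = np.zeros(10)
beta_true[[0, 3]] = [2.0, -1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat, support = lassle(X, y, lam=0.1)
```

The refit step is what reduces the bias: the LASSO shrinks the surviving coefficients toward zero, while the least-squares pass on the selected support restores them to (nearly) unbiased values.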
• #### Modeling of Viral Aerosol Transmission and Detection

(2018)
The objective of this work is to investigate the spread mechanism of diseases in the atmosphere as an engineering problem. Among the viral transmission mechanisms that do not involve physical contact, aerosol transmission is the most significant mode, in which virus-laden droplets are carried over long distances by wind. In this work, we focus on aerosol transmission of viruses and introduce the idea of viewing virus transmission through aerosols and their transport as a molecular communication problem, where one has no control over the transmission source but a robust receiver can be designed using nano-biosensors. To investigate this idea, a complete system is presented and an end-to-end mathematical model for the aerosol transmission channel is derived under certain constraints and boundary conditions. In addition to the transmitter and channel, a receiver architecture composed of an air sampler and a silicon nanowire field-effect transistor is also discussed. Furthermore, a detection problem is formulated, for which the maximum likelihood decision rule and the corresponding missed detection probability are discussed. Finally, simulation results are presented to investigate the parameters that affect the performance and to justify the feasibility of the proposed setup in related applications.
• #### Modeling soil organic carbon with Quantile Regression: Dissecting predictors' effects on carbon stocks

(arXiv, 2017-08-13)
Soil Organic Carbon (SOC) estimation is crucial to managing both natural and anthropic ecosystems and has recently been put under the magnifying glass after the 2016 Paris Agreement due to its relationship with greenhouse gases. Statistical applications have dominated SOC stock mapping at the regional scale so far. However, the community has hardly ever attempted to implement Quantile Regression (QR) to spatially predict the SOC distribution. In this contribution, we test QR to estimate SOC stock (0-30 $cm$ depth) in the agricultural areas of a highly variable semi-arid region (Sicily, Italy, around 25,000 $km^2$) by using topographic and remotely sensed predictors. We also compare the results with those from available SOC stock measurements. The QR models produced robust performances and allowed us to recognize dominant effects among the predictors with respect to the considered quantile. This information, currently lacking, suggests that QR can discern predictor influences on SOC stock at specific sub-domains of each predictor. In this work, the predictive map generated at the median shows lower errors than the Joint Research Centre and International Soil Reference and Information Centre benchmarks. The results suggest the use of QR as a comprehensive and effective method to map SOC using legacy data in agro-ecosystems. The R code scripted in this study for QR is included.
• #### Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

(arXiv, 2017-12-27)
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesàro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
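The stochastic heavy ball method discussed above can be sketched on a toy least-squares problem. The step size, momentum value, and consistent-system demo are illustrative choices, not the paper's analyzed setting:

```python
import numpy as np

def sgd_heavy_ball(A, b, lr=0.02, beta=0.5, epochs=500, seed=0):
    """Stochastic gradient descent with heavy-ball momentum on
    min_x ||Ax - b||^2, sampling one row per step:
        x_{k+1} = x_k - lr * g_k + beta * (x_k - x_{k-1})."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    x_prev = x.copy()
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = 2.0 * (A[i] @ x - b[i]) * A[i]   # gradient of row i's squared residual
            x, x_prev = x - lr * g + beta * (x - x_prev), x
    return x

# Consistent (noise-free) system: every per-row gradient vanishes at the
# solution, so the stochastic iterates can converge to it exactly.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
x_hat = sgd_heavy_ball(A, A @ x_true)
```

The noise-free demo mirrors the interpolation regime in which linear rates for the stochastic heavy ball method are plausible; with noisy targets a decaying step size would be needed.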
• #### Monotone numerical methods for finite-state mean-field games

(arXiv, 2017-04-29)
Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in an MFG modeling the paradigm-shift problem.
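The contraction-flow idea can be illustrated with the plain Banach fixed-point iteration on a scalar toy map; the MFG flow itself acts on a much richer space, so this is only the underlying principle:

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=1000):
    """Banach fixed-point iteration: if phi is a contraction, the
    iterates x_{k+1} = phi(x_k) converge to its unique fixed point,
    which is the principle behind building a contracting flow whose
    fixed points solve the problem of interest."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy contraction: phi(x) = cos(x) maps [0, 1] into itself with
# |phi'(x)| <= sin(1) < 1, so the iteration converges.
x_star = fixed_point(math.cos, 1.0)
```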
• #### Motif signatures of transcribed enhancers

(Cold Spring Harbor Laboratory, 2017-09-14)
In mammalian cells, transcribed enhancers (TrEn) play important roles in the initiation of gene expression and the maintenance of gene expression levels in a spatiotemporal manner. One of the most challenging questions in biology today is how the genomic characteristics of enhancers relate to enhancer activities. This is particularly critical, as several recent studies have linked enhancer sequence motifs to specific functional roles. To date, only a limited number of enhancer sequence characteristics have been investigated, leaving space for exploring the enhancers' genomic code in a more systematic way. To address this problem, we developed a novel computational method, TELS, aimed at identifying predictive cell type/tissue specific motif signatures. We used TELS to compile a comprehensive catalog of motif signatures for all known TrEn identified by the FANTOM5 consortium across 112 human primary cells and tissues. Our results confirm that distinct cell type/tissue specific motif signatures characterize TrEn. These signatures successfully discriminate a) TrEn from random controls, a proxy for non-enhancer activity, and b) cell type/tissue specific TrEn from enhancers expressed and transcribed in different cell types/tissues. TELS codes and datasets are publicly available at http://www.cbrc.kaust.edu.sa/TELS.
• #### Multi-Branch Fully Convolutional Network for Face Detection

(arXiv, 2017-07-20)
Face detection is a fundamental problem in computer vision. It is still a challenging task in unconstrained conditions due to significant variations in scale, pose, expressions, and occlusion. In this paper, we propose a multi-branch fully convolutional network (MB-FCN) for face detection, which considers both efficiency and effectiveness in the design process. Our MB-FCN detector can deal with faces at all scale ranges with only a single pass through the backbone network. As such, our MB-FCN model saves computation and is thus more efficient than previous methods that make multiple passes. For each branch, the specific skip connections of the convolutional feature maps at different layers are exploited to represent faces in specific scale ranges. Specifically, small faces can be represented with both shallow fine-grained and deep powerful coarse features. With this representation, substantial performance gains are achieved for detecting small faces. We test our MB-FCN detector on two public face detection benchmarks, FDDB and WIDER FACE. Extensive experiments show that our detector outperforms state-of-the-art methods on both datasets, and by a substantial margin on the most challenging subsets (e.g. the WIDER FACE Hard subset). Also, MB-FCN runs at 15 FPS on a GPU for images of size 640 x 480 with no assumption on the minimum detectable face size.
• #### A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

(arXiv, 2017-12-06)
Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow a multivariate skew-normal distribution. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing the skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
• #### Multi-Scale Factor Analysis of High-Dimensional Brain Signals

(arXiv, 2017-05-18)
In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of large networks. The proposed approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
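A minimal sketch of the RV-coefficient used above as the cross-dependence measure between clusters; the demo data are synthetic, not fMRI:

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient: a matrix correlation in [0, 1] between two
    data sets observed on the same n samples (rows). Columns are
    centered, then the n x n inner-product matrices are compared."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
r_self = rv_coefficient(X, X)                            # identical sets: RV = 1
r_ind = rv_coefficient(X, rng.standard_normal((50, 6)))  # unrelated sets: small RV
```

Because it compares the n x n configurations rather than the variables directly, the RV coefficient is well defined even when the two clusters have different numbers of factors.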
• #### Multilevel Double Loop Monte Carlo and Stochastic Collocation Methods with Importance Sampling for Bayesian Optimal Experimental Design

(arXiv, 2018-11-28)
An optimal experimental set-up maximizes the value of data for statistical inference and prediction, which is particularly important for experiments that are time consuming or expensive to perform. In the context of partial differential equations (PDEs), multilevel methods have been proven in many cases to dramatically reduce the computational complexity of their single-level counterparts. Here, two multilevel methods are proposed to efficiently compute the expected information gain using a Kullback-Leibler divergence measure in simulation-based Bayesian optimal experimental design. The first method is a multilevel double loop Monte Carlo (MLDLMC) with importance sampling, which greatly reduces the computational work of the inner loop. The second proposed method is a multilevel double loop stochastic collocation (MLDLSC) with importance sampling, which performs high-dimensional integration by deterministic quadrature on sparse grids. In both methods, the Laplace approximation is used as an effective means of importance sampling, and the optimal values for method parameters are determined by minimizing the average computational work subject to a desired error tolerance. The computational efficiencies of the methods are demonstrated for computing the expected information gain for Bayesian inversion to infer the fiber orientation in composite laminate materials by an electrical impedance tomography experiment, given a particular set-up of the electrode configuration. MLDLSC shows a better performance than MLDLMC by exploiting the regularity of the underlying computational model with respect to the additive noise and the unknown parameters to be statistically inferred.
• #### Multilevel ensemble Kalman filtering for spatio-temporal processes

(arXiv, 2018-02-02)
This work concerns state-space models, in which the state-space is an infinite-dimensional spatial field, and the evolution is in continuous time, hence requiring approximation in space and time. The multilevel Monte Carlo (MLMC) sampling strategy is leveraged in the Monte Carlo step of the ensemble Kalman filter (EnKF), thereby yielding a multilevel ensemble Kalman filter (MLEnKF) for spatio-temporal models, which has provably superior asymptotic error/cost ratio. A practically relevant stochastic partial differential equation (SPDE) example is presented, and numerical experiments with this example support our theoretical findings.
• #### Multilevel Monte Carlo Acceleration of Seismic Wave Propagation under Uncertainty

(arXiv, 2018-11-28)
We interpret uncertainty in a model for seismic wave propagation by treating the model parameters as random variables, and apply the Multilevel Monte Carlo (MLMC) method to reduce the cost of approximating expected values of selected, physically relevant, quantities of interest (QoI) with respect to the random variables. Targeting source inversion problems, where the source of an earthquake is inferred from ground motion recordings on the Earth's surface, we consider two QoI that measure the discrepancies between computed seismic signals and given reference signals: one QoI, QoI_E, is defined in terms of the L^2-misfit, which is directly related to maximum likelihood estimates of the source parameters; the other, QoI_W, is based on the quadratic Wasserstein distance between probability distributions, and represents one possible choice in a class of such misfit functions that have become increasingly popular to solve seismic inversion in recent years. We simulate seismic wave propagation, including seismic attenuation, using a publicly available code in widespread use, based on the spectral element method. Using random coefficients and deterministic initial and boundary data, we present benchmark numerical experiments with synthetic data in a two-dimensional physical domain and a one-dimensional velocity model where the assumed parameter uncertainty is motivated by realistic Earth models. Here, the computational cost of the standard Monte Carlo method was reduced by up to 97% for QoI_E, and up to 78% for QoI_W, using a relevant range of tolerances. Shifting to three-dimensional domains is straightforward and will further increase the relative computational work reduction.
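The MLMC principle used above can be sketched on a toy quantity of interest. Here the level hierarchy is a midpoint-rule quadrature rather than a seismic wave solve, and all names and sample counts are illustrative:

```python
import numpy as np

def q_level(u, level):
    """Level-l approximation of Q(u) = integral of cos(u*t) over
    t in [0, 1], via the midpoint rule with 2**level subintervals
    (a toy stand-in for a discretized PDE solve)."""
    m = 2 ** level
    t = (np.arange(m) + 0.5) / m
    return np.cos(np.outer(u, t)).mean(axis=1)

def mlmc_estimate(max_level, n_samples, seed=0):
    """Multilevel Monte Carlo via the telescoping sum
        E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}],
    where each correction term uses the SAME random inputs on its two
    coupled levels, so its variance (and needed sample count) is small."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for level in range(max_level + 1):
        u = rng.random(n_samples[level])      # random model parameter u ~ U(0,1)
        if level == 0:
            est += q_level(u, 0).mean()
        else:
            est += (q_level(u, level) - q_level(u, level - 1)).mean()
    return est

# Sample counts decay with level, mirroring the decaying correction variance:
# many cheap coarse solves, few expensive fine ones.
estimate = mlmc_estimate(5, [4000, 2000, 1000, 500, 250, 125])
```

The exact expectation here is the integral of sin(u)/u over u in (0, 1), about 0.9461, and the cost saving comes from pushing most samples to the cheap coarse levels, as in the 97%/78% reductions reported above.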