Browsing Other/General Submission by Title
Now showing items 1-20 of 201

A fast and cost-effective microsampling protocol incorporating reduced animal usage for time-series transcriptomics in rodent malaria parasites (Cold Spring Harbor Laboratory, 2018-06-21) The transcriptional regulation occurring in malaria parasites during the clinically important life stages within host erythrocytes can be studied in vivo with rodent malaria parasites propagated in mice. Time-series transcriptome profiling commonly involves the euthanasia of groups of mice at specific time points, followed by the extraction of parasite RNA from whole blood samples. Current methodologies for parasite RNA extraction involve several steps, and when multiple time points are profiled, these protocols are laborious, time consuming, and require the euthanasia of large cohorts of mice. We designed a simplified protocol for parasite RNA extraction from blood volumes as low as 20 microliters (microsamples), serially bled from mice via tail snips and directly lysed with TRIzol reagent. Gene expression data derived from microsampling using RNA-seq closely matched those derived from larger volumes of leucocyte-depleted and saponin-treated blood obtained from euthanized mice, and also correlated tightly between biological replicates. Transcriptome profiling of microsamples taken at different time points during the intraerythrocytic developmental cycle of the rodent malaria parasite Plasmodium vinckei revealed the transcriptional cascade commonly observed in malaria parasites. Microsampling is a quick, robust and cost-efficient approach to sample collection for in vivo time-series transcriptomic studies in rodent malaria parasites.

Ab initio Algorithmic Causal Deconvolution of Intertwined Programs and Networks by Generative Mechanism (arXiv, 2018-02-18) Extracting and learning representations that lead to generative mechanisms from data, especially without making arbitrary decisions and biased assumptions, is a central challenge in most areas of scientific research, particularly in connection with current major limitations of influential topics and methods of machine and deep learning, which have often lost sight of the model component. Complex data are usually produced by interacting sources with different mechanisms. Here we introduce a parameter-free model-based approach, based upon the seminal concept of Algorithmic Probability, that decomposes an observation or signal into its most likely algorithmic generative mechanisms. Our methods use a causal calculus to infer model representations. We demonstrate the method's ability to distinguish interacting mechanisms and deconvolve them, regardless of whether the objects produce strings, space-time evolution diagrams, images or networks. We numerically test and evaluate our method and find that it can disentangle observations from discrete dynamical systems, and from random and complex networks. We think that these causal inference techniques can contribute key pieces of information for estimating probability distributions, complementing more statistically oriented techniques that otherwise lack model inference capabilities.

Absolute spectroscopy near 7.8 μm with a comb-locked extended-cavity quantum cascade laser (arXiv, 2017-07-31) We report the first experimental demonstration of frequency locking of an extended-cavity quantum cascade laser (EC-QCL) to a near-infrared frequency comb. The locking scheme is applied to carry out absolute spectroscopy of N2O lines near 7.87 μm with an accuracy of ~60 kHz. Thanks to single-mode operation over more than 100 cm^{-1}, the comb-locked EC-QCL shows great potential for the accurate retrieval of line center frequencies in a spectral region that is currently outside the reach of broadly tunable cw sources based on either difference frequency generation or optical parametric oscillation. The approach described here can be straightforwardly extended up to 12 μm, which is the current wavelength limit for commercial cw EC-QCLs.

Accelerated Optimization in the PDE Framework: Formulations for the Active Contour Case (arXiv, 2017-11-27) Following the seminal work of Nesterov, accelerated optimization methods have been used to powerfully boost the performance of first-order, gradient-based parameter estimation in scenarios where second-order optimization strategies are either inapplicable or impractical. Not only does accelerated gradient descent converge considerably faster than traditional gradient descent, but it also performs a more robust local search of the parameter space by initially overshooting and then oscillating back as it settles into a final configuration, thereby selecting only local minimizers with a basin of attraction large enough to contain the initial overshoot. This behavior has made accelerated and stochastic gradient search methods particularly popular within the machine learning community. In their recent PNAS 2016 paper, Wibisono, Wilson, and Jordan demonstrate how a broad class of accelerated schemes can be cast in a variational framework formulated around the Bregman divergence, leading to continuum-limit ODEs. We show how their formulation may be further extended to infinite-dimensional manifolds (starting here with the geometric space of curves and surfaces) by substituting the Bregman divergence with inner products on the tangent space and explicitly introducing a distributed mass model which evolves in conjunction with the object of interest during the optimization process. The co-evolving mass model, which is introduced purely for the sake of endowing the optimization with helpful dynamics, also links the resulting class of accelerated PDE-based optimization schemes to fluid-dynamical formulations of optimal mass transport.
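As a finite-dimensional illustration of the acceleration described above (the paper itself works on infinite-dimensional spaces of curves and surfaces), the following sketch compares plain gradient descent with Nesterov's accelerated scheme on an ill-conditioned quadratic; the step size, momentum schedule and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

eig = np.array([1.0, 100.0])        # eigenvalues: condition number 100
f = lambda x: 0.5 * np.sum(eig * x ** 2)
grad = lambda x: eig * x            # gradient of the quadratic

step = 1.0 / eig.max()              # 1/L, with L the gradient's Lipschitz constant
x_gd = np.array([1.0, 1.0])         # plain gradient-descent iterate
x_acc = np.array([1.0, 1.0])        # accelerated iterate
x_prev = x_acc.copy()
for k in range(100):
    x_gd = x_gd - step * grad(x_gd)
    # Nesterov: take the gradient step from an extrapolated point,
    # which initially overshoots and then oscillates back
    y = x_acc + (k / (k + 3.0)) * (x_acc - x_prev)
    x_prev = x_acc
    x_acc = y - step * grad(y)
```

The slow eigendirection (eigenvalue 1) is where acceleration pays off: plain gradient descent shrinks it only by a factor (1 - 1/L) per step, while the accelerated iterate enjoys the O(1/k^2) Nesterov guarantee.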

Accelerated Optimization in the PDE Framework: Formulations for the Manifold of Diffeomorphisms (arXiv, 2018-04-04) We consider the problem of optimization of cost functionals on the infinite-dimensional manifold of diffeomorphisms. We present a new class of optimization methods, valid for any optimization problem set up on the space of diffeomorphisms, by generalizing Nesterov's accelerated optimization to the manifold of diffeomorphisms. While our framework is general for infinite-dimensional manifolds, we specifically treat the case of diffeomorphisms, motivated by optical flow problems in computer vision. This is accomplished by building on a recent variational approach to a general class of accelerated optimization methods by Wibisono, Wilson and Jordan, which applies in finite dimensions. We generalize that approach to infinite-dimensional manifolds. We derive the surprisingly simple continuum evolution equations, which are partial differential equations, for accelerated gradient descent, and relate them to simple mechanical principles from fluid mechanics. Our approach has natural connections to the optimal mass transport problem, because one can think of it as an evolution of an infinite number of particles endowed with mass (represented with a mass density) that move in an energy landscape. The mass evolves with the optimization variable and endows the particles with dynamics. This differs from the finite-dimensional case, where only a single particle moves and hence the dynamics do not depend on the mass. We derive the theory, compute the PDEs for accelerated optimization, and illustrate the behavior of these new accelerated optimization schemes.

Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization (arXiv, 2018-02-12) We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and to speedups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate the training of machine learning models.
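For context, the classical (non-accelerated) BFGS inverse-Hessian update that such accelerated rules build upon can be sketched as follows; the small quadratic test problem and exact line search are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def bfgs_quadratic(A, x0, iters):
    """Minimize f(x) = 0.5 x^T A x with BFGS, maintaining an
    approximation H of the inverse Hessian A^{-1}."""
    n = len(x0)
    H = np.eye(n)                        # initial inverse-Hessian approximation
    x = x0.astype(float)
    g = A @ x                            # gradient of the quadratic
    for _ in range(iters):
        p = -H @ g                       # quasi-Newton search direction
        alpha = -(g @ p) / (p @ A @ p)   # exact line search for a quadratic
        s = alpha * p                    # step taken
        x = x + s
        g_new = A @ x
        y = g_new - g                    # change in gradient
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # classical BFGS inverse update (the rule being accelerated)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        g = g_new
    return x, H

A = np.array([[4.0, 1.0], [1.0, 3.0]])
x, H = bfgs_quadratic(A, np.array([1.0, 2.0]), iters=2)
```

On a strongly convex quadratic with exact line searches, BFGS terminates in at most n steps and the final H equals the true inverse Hessian, which makes this a convenient sanity check.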

An accurate and rapid continuous wavelet dynamic time warping algorithm for unbalanced global mapping in nanopore sequencing (Cold Spring Harbor Laboratory, 2017-12-24) Long reads, point-of-care use, and PCR-free operation are the promises brought by nanopore sequencing. Among the various steps in nanopore data analysis, the global mapping between the raw electrical current signal sequence and the expected signal sequence from the pore model serves as the key building block for base calling, read mapping, variant identification, and methylation detection. However, the ultra-long reads of nanopore sequencing and an order-of-magnitude difference in the sampling speeds of the two sequences make the classical dynamic time warping (DTW) and its variants infeasible for this problem. Here, we propose a novel multi-level DTW algorithm, cwDTW, based on continuous wavelet transforms of the two signal sequences at different scales. Our algorithm starts from low-resolution wavelet transforms of the two sequences, such that the transformed sequences are short and have similar sampling rates. The peaks and nadirs of the transformed sequences are then extracted to form feature sequences of similar length, which can easily be mapped by the original DTW. Our algorithm then recursively projects the warping path from a lower-resolution level to a higher-resolution one by building a context-dependent boundary and enabling a constrained search for the warping path at the finer level. Comprehensive experiments on two real nanopore datasets, on human and on Pandoraea pnomenusa, as well as two benchmark datasets from previous studies, demonstrate the efficiency and effectiveness of the proposed algorithm. In particular, cwDTW almost always generates warping paths that are very close to those of the original DTW and remarkably more accurate than those of state-of-the-art methods including FastDTW and PrunedDTW. Meanwhile, on the real nanopore datasets, cwDTW is about 440 times faster than FastDTW and 3000 times faster than the original DTW. Our program is available at https://github.com/realbigws/cwDTW.
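The classical DTW recurrence that the multi-level scheme above applies at the coarsest level, and then constrains at finer levels, is in a minimal sketch (the quadratic cost of this recurrence is exactly what cwDTW's wavelet coarsening avoids on ultra-long reads):

```python
import numpy as np

def dtw(a, b):
    """Classical dynamic time warping between 1-D sequences a and b.
    Returns the minimal cumulative |a_i - b_j| alignment cost via the
    O(n*m) dynamic-programming recurrence."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three predecessor alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may repeat indices, a sequence aligns at zero cost with a version of itself in which samples are duplicated, which is the property exploited when the two sequences have very different sampling rates.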

Action Search: Learning to Search for Human Activities in Untrimmed Videos (arXiv, 2017-06-13) Traditional approaches for action detection use trimmed data to learn sophisticated action detector models. Although these methods have achieved great success at detecting human actions, we argue that a wealth of information is discarded when the process through which this trimmed data is obtained is ignored. In this paper, we propose Action Search, a novel approach that mimics the way people annotate activities in video sequences. Using a recurrent neural network, Action Search can efficiently explore a video and determine the time boundaries during which an action occurs. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently but also to accurately find human activities, outperforming state-of-the-art methods.

An Adjoint-based Numerical Method for a class of nonlinear Fokker-Planck Equations (arXiv, 2017-03-22) Here, we introduce a numerical approach for a class of Fokker-Planck (FP) equations. These equations are the adjoint of the linearization of Hamilton-Jacobi (HJ) equations. Using this structure, we show how to transfer the properties of schemes for HJ equations to the FP equations. Hence, we obtain numerical schemes with desirable features such as positivity and mass preservation. We illustrate this approach in examples that include mean-field games and a crowd motion model.

Advanced Multilevel Monte Carlo Methods (arXiv, 2017-04-24) This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error relative to i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from the couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.
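The telescoping idea can be sketched on a toy problem; the geometric Brownian motion, option payoff, and sample allocations below are illustrative assumptions, not from the article.

```python
import numpy as np

def mlmc_estimate(levels, samples_per_level, T=1.0, sigma=0.2, s0=1.0, K=1.0,
                  rng=None):
    """Toy multilevel Monte Carlo estimate of E[max(S_T - K, 0)] for
    dS = sigma * S dW, using Euler steps of size T / 2**l at level l.
    The telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is
    estimated with coupled coarse/fine paths sharing Brownian increments."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for l, n_samp in zip(range(levels + 1), samples_per_level):
        nf = 2 ** l                       # number of fine steps at level l
        dt = T / nf
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_samp, nf))
        sf = np.full(n_samp, s0)
        for i in range(nf):               # fine Euler path
            sf = sf + sigma * sf * dW[:, i]
        pf = np.maximum(sf - K, 0.0)
        if l == 0:
            total += pf.mean()            # coarsest term of the telescope
        else:
            sc = np.full(n_samp, s0)      # coarse path, same Brownian motion
            dWc = dW[:, 0::2] + dW[:, 1::2]
            for i in range(nf // 2):
                sc = sc + sigma * sc * dWc[:, i]
            pc = np.maximum(sc - K, 0.0)
            total += (pf - pc).mean()     # correction term of the telescope
    return total

est = mlmc_estimate(levels=4, samples_per_level=[20000, 10000, 5000, 2500, 1250])
```

Because the coarse and fine paths at each level share the same Brownian increments, the correction terms have small variance and need few samples, which is the source of the cost reduction.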

Aiptasia as a model to study metabolic diversity and specificity in cnidarian-dinoflagellate symbioses (Cold Spring Harbor Laboratory, 2017-11-23) The symbiosis between cnidarian hosts and microalgae of the genus Symbiodinium provides the foundation of coral reefs in oligotrophic waters. Understanding the nutrient exchange between these partners is key to identifying the fundamental mechanisms behind this symbiosis. However, deciphering the individual roles of host and algal partners in the uptake and cycling of nutrients has proven difficult, given the endosymbiotic nature of this relationship. In this study, we highlight the advantages of the emerging model system Aiptasia for investigating the metabolic diversity and specificity of the cnidarian-dinoflagellate symbiosis. For this, we combined traditional measurements with nanoscale secondary ion mass spectrometry (NanoSIMS) and stable isotope labeling to investigate carbon and nitrogen cycling at both the organismal and the cellular scale. Our results suggest that individual nutrient assimilation by hosts and symbionts depends on the identity of their respective symbiotic partner. Further, δ13C enrichment patterns revealed that alterations in carbon fixation rates affected carbon assimilation only in the cnidarian host, not in the algal symbiont, suggesting a 'selfish' character of this symbiotic association. Based on our findings, we identify new avenues for future research regarding the role and regulation of nutrient exchange in the cnidarian-dinoflagellate symbiosis. In this context, the model system approach outlined in this study constitutes a powerful tool set for addressing these questions.

An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems (Cold Spring Harbor Laboratory, 2017-09-08) We introduce a conceptual framework and an interventional calculus to steer and manipulate systems based on their intrinsic algorithmic probability, using the universal principles of the theory of computability and algorithmic information. By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive or negative shifts towards, or away from, randomness. The strong connection between approximations to algorithmic complexity (the size of the shortest generating mechanism) and causality induces a sequence of perturbations that ranks the network elements by their steering capability. This new dimension unmasks a separation between causal and non-causal components, providing a suite of powerful parameter-free algorithms of wide applicability, ranging from optimal dimension reduction and maximal randomness analysis to system control. We introduce methods for reprogramming systems that require neither full knowledge of, nor access to, the system's actual kinetic equations or any probability distributions. A causal interventional analysis of synthetic and regulatory biological networks reveals how algorithmic reprogramming qualitatively reshapes the system's dynamic landscape. For example, during cellular differentiation we find a decrease in the number of elements corresponding to a transition away from randomness, and a combination of the system's intrinsic properties and its capability to be algorithmically reprogrammed can reconstruct an epigenetic landscape. The interventional calculus is broadly applicable to predictive causal inference in systems such as networks, and is of relevance to a variety of machine and causal learning techniques driving model-based approaches to better understanding and manipulating complex systems.

Algorithmic Information Dynamics of Persistent Patterns and Colliding Particles in the Game of Life (arXiv, 2018-02-18) We demonstrate how to apply and exploit the concept of algorithmic information dynamics in the characterization and classification of dynamic and persistent patterns, motifs and colliding particles in, without loss of generality, Conway's Game of Life (GoL) cellular automaton as a case study. We analyze the distribution of prevailing motifs that occur in GoL from the perspective of algorithmic probability. We demonstrate how the tools introduced are an alternative to computable measures such as entropy and compression algorithms, which are often insensitive to small changes and to features of a non-statistical nature in the study of evolving complex systems and their emergent structures.

An analog of Hölder's inequality for the spectral radius of Hadamard products (arXiv, 2017-12-03) We prove new inequalities related to the spectral radius ρ of Hadamard products (denoted by ∘) of complex matrices. Let p, q ∈ [1, ∞] satisfy 1/p + 1/q = 1. We show an analog of Hölder's inequality on the space of n × n complex matrices: ρ(A ∘ B) ≤ ρ(|A|^(∘p))^(1/p) ρ(|B|^(∘q))^(1/q) for all A, B ∈ C^(n×n), where |·| denotes the entrywise absolute value and (·)^(∘p) the entrywise (Hadamard) power. We derive a sharper inequality for the special case p = q = 2: given A, B ∈ C^(n×n), there exists β ∈ (0, 1], depending on A and B, such that ρ(A ∘ B) ≤ β ρ(|A ∘ A|)^(1/2) ρ(|B ∘ B|)^(1/2). Analysis of the other special case, p = 1 and q = ∞, is also included.
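The p = q = 2 case with β = 1 can be checked numerically for random entrywise-nonnegative matrices, where the absolute values are vacuous; this is only an illustration of the bound, not the paper's proof or its sharper β < 1 refinement.

```python
import numpy as np

def spectral_radius(M):
    # largest modulus among the eigenvalues of M
    return np.max(np.abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(1)
worst_gap = -np.inf
for _ in range(20):
    # entrywise-nonnegative matrices, so |A| = A and |B| = B
    A = rng.random((5, 5))
    B = rng.random((5, 5))
    lhs = spectral_radius(A * B)   # '*' is the entrywise (Hadamard) product
    rhs = np.sqrt(spectral_radius(A * A) * spectral_radius(B * B))
    worst_gap = max(worst_gap, lhs - rhs)   # should never be positive
```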

An approximate fractional Gaussian noise model with ${\mathcal O}(n)$ computational cost (arXiv, 2017-09-18) Fractional Gaussian noise (fGn) is a stationary time series model with long-memory properties, applied in fields such as econometrics, hydrology and climatology. The computational cost of fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, with either direct or indirect Gaussian observations, and with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary and Markov, and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
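The approximation idea, matching the fGn autocorrelation with a weighted sum of AR(1) correlation curves, can be sketched with a simple least-squares fit; the AR(1) parameter grid and lag range below are illustrative choices, not the fitted values from the paper.

```python
import numpy as np

def fgn_acf(k, H):
    """Autocorrelation of fractional Gaussian noise at integer lag k."""
    k = np.abs(k).astype(float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

H = 0.9                         # Hurst parameter: strong long memory
lags = np.arange(0, 51)
target = fgn_acf(lags, H)

# a small grid of AR(1) memory parameters (an illustrative choice)
phis = np.array([0.30, 0.70, 0.90, 0.99])
basis = phis[None, :] ** lags[:, None]     # AR(1) acf at lag k is phi**|k|

# least-squares weights so that sum_i w_i * phi_i**|k| matches the fGn acf
w, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ w
max_err = np.max(np.abs(approx - target))
```

Even this crude four-component fit tracks the slowly decaying fGn autocorrelation closely over the chosen lag range, which conveys why a four-component Markov approximation can work so well.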

Assessing Potential Wind Energy Resources in Saudi Arabia with a Skew-t Distribution (arXiv, 2017-03-13) Facing increasing domestic energy consumption from population growth and industrialization, Saudi Arabia is aiming to reduce its reliance on fossil fuels and to broaden its energy mix by expanding investment in renewable energy sources, including wind energy. A preliminary task in the development of wind energy infrastructure is the assessment of wind energy potential, a key aspect of which is the characterization of its spatio-temporal behavior. In this study, we examine the impact of internal climate variability on seasonal wind power density fluctuations using 30 simulations from the Large Ensemble Project (LENS) developed at the National Center for Atmospheric Research. Furthermore, a spatio-temporal model for daily wind speed is proposed, with neighbor-based cross-temporal dependence and a multivariate skew-t distribution to capture the spatial patterns of higher-order moments. The model can be used to generate synthetic time series over the entire spatial domain that adequately reproduce the internal variability of the LENS dataset.

A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices (arXiv, 2017-07-02) Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
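The low-rank idea underlying PCP-style background estimation (though not the paper's weighted batch-incremental algorithm) can be sketched on synthetic frames: a static background makes the stacked-frames matrix nearly rank one, and the sparse residual is the foreground. All sizes and magnitudes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic video: 100-pixel static background over 20 frames,
# with a bright "moving object" at a different pixel in each frame
n_pix, n_frames = 100, 20
background = 1.0 + rng.random(n_pix)            # static scene, values in [1, 2]
frames = np.tile(background[:, None], (1, n_frames))
for t in range(n_frames):
    frames[5 * t % n_pix, t] += 5.0             # sparse foreground spike

# rank-1 approximation: the dominant singular pair captures the
# static background shared by all frames
U, s, Vt = np.linalg.svd(frames, full_matrices=False)
lowrank = s[0] * np.outer(U[:, 0], Vt[0, :])
bg_est = lowrank[:, 0]                          # estimated background, frame 0
fg = frames - lowrank                           # residual = foreground

rel_err = np.linalg.norm(bg_est - background) / np.linalg.norm(background)
```

The residual is largest exactly at the moving object's pixel in each frame, which is the separation that robust, weighted formulations make reliable on real video.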

Bayesian Modeling of Air Pollution Extremes Using Nested Multivariate Max-Stable Processes (arXiv, 2018-03-18) Capturing the potentially strong dependence among the peak concentrations of multiple air pollutants across a spatial region is crucial for assessing the related public health risks. In order to investigate the multivariate spatial dependence properties of air pollution extremes, we introduce a new class of multivariate max-stable processes. Our proposed model admits a hierarchical tree-based formulation, in which the data are conditionally independent given some latent nested $\alpha$-stable random factors. The hierarchical structure facilitates Bayesian inference and offers a convenient and interpretable characterization. We fit this nested multivariate max-stable model to the maxima of air pollution concentrations and temperatures recorded at a number of sites in the Los Angeles area, showing that the proposed model succeeds in capturing their complex tail dependence structure.

Bayesian Parameter Estimation via Filtering and Functional Approximations (arXiv, 2016-11-25) The inverse problem of determining parameters in a model by comparing some output of the model with observations is addressed. This is a description of what has to be done to use the Gauss-Markov-Kalman filter for the Bayesian estimation and updating of parameters in a computational model. This is a filter acting on random variables, and while its Monte Carlo variant, the Ensemble Kalman Filter (EnKF), is fairly straightforward, we subsequently only sketch its implementation with the help of functional representations.
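A minimal ensemble Kalman (EnKF) parameter update of the kind mentioned above can be sketched for a linear-Gaussian toy problem; the forward map, noise levels and ensemble size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linear inverse problem: observation d = G(q) + noise, with G(q) = 2q
q_true = 1.5
sigma_obs = 0.1
d = 2.0 * q_true + rng.normal(0.0, sigma_obs)

# prior ensemble for the unknown parameter q
n_ens = 500
q = rng.normal(0.0, 1.0, n_ens)
y = 2.0 * q                                  # predicted observations G(q_i)

# Kalman gain from ensemble covariances: K = C_qy / (C_yy + R)
c_qy = np.cov(q, y)[0, 1]
c_yy = np.var(y, ddof=1)
K = c_qy / (c_yy + sigma_obs ** 2)

# EnKF analysis step: pull each member towards its perturbed observation
d_pert = d + rng.normal(0.0, sigma_obs, n_ens)
q_post = q + K * (d_pert - y)
```

The updated ensemble mean moves close to the true parameter and the ensemble spread contracts, mirroring the Bayesian posterior that the functional (non-sampling) variants of the filter approximate without Monte Carlo.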

Blind Measurement Selection: A Random Matrix Theory Approach (arXiv, 2016-12-14) This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $n\choose k$ possible choices would require a very high computational complexity and is thus not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most commonly used error measures. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values of the asymptotic error measures. Two heuristic algorithms are proposed: the first merely consists in applying the convex optimization artifice to the asymptotic error measure; the second is a low-complexity greedy algorithm that attempts to find a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multiuser systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in approaching the performance of channel-aware algorithms.
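A greedy selection of the kind described can be sketched with an A-optimality-style error measure, trace((H_S^T H_S)^{-1}); this measure and the small regularization are illustrative assumptions, not necessarily the paper's exact error function.

```python
import numpy as np

def greedy_select(H, k, eps=1e-6):
    """Greedily pick k rows (measurements) of H, at each step adding the
    row that most reduces trace((H_S^T H_S + eps*I)^{-1}), an A-optimality
    proxy for the parameter-estimation error of the selected set."""
    n, m = H.shape
    selected = []
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            Hs = H[selected + [i], :]
            val = np.trace(np.linalg.inv(Hs.T @ Hs + eps * np.eye(m)))
            if val < best_val:
                best_i, best_val = i, val
        selected.append(best_i)
    return selected, best_val

rng = np.random.default_rng(4)
H = rng.normal(size=(30, 4))     # 30 candidate sensors, 4 parameters
sel, err = greedy_select(H, k=8)
```

Each greedy step costs one small matrix inversion per remaining candidate, and since adding rows can only grow H_S^T H_S in the Loewner order, the error of any selected subset is bounded below by that of using all n measurements.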