    Filter by Category

Author: Gao, Xin (5); Han, Renmin (2); Li, Yu (2); Alazmi, Meshari (1); Chen, Wei (1)
Department: Computational Bioscience Research Center (CBRC) (5); Computer Science Program (5); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division (5)
Journal: Bioinformatics (5)
KAUST Grant Number: URF/1/2602-01 (5); URF/1/3007-01 (5); URF/1/3450-01 (4); FCC/1/1976-04 (3); URF/1/3412-01 (3)
Publisher: Oxford University Press (OUP) (5)
Type: Article (5)
Year (Issue Date): 2018 (4); 2017 (1)
Item Availability: Open Access (5)


    Search

    Now showing items 1-5 of 5

    A fast fiducial marker tracking model for fully automatic alignment in electron tomography

    Han, Renmin; Zhang, Fa; Gao, Xin (Bioinformatics, Oxford University Press (OUP), 2017-10-20) [Article]
Automatic alignment, especially fiducial marker-based alignment, has become increasingly important due to the high demand for subtomogram averaging and the rapid development of large-field electron microscopy. Among the alignment steps, fiducial marker tracking is a crucial one that determines the quality of the final alignment. Yet it remains challenging to track fiducial markers accurately and effectively in a fully automatic manner. In this paper, we propose a robust and efficient scheme for fiducial marker tracking. First, we theoretically prove the upper bound of the transformation deviation of aligning the positions of fiducial markers on two micrographs by an affine transformation. Second, we design an automatic algorithm based on the Gaussian mixture model to accelerate the procedure of fiducial marker tracking. Third, we propose a divide-and-conquer strategy against lens distortions to ensure the reliability of our scheme. To our knowledge, this is the first attempt to theoretically relate the projection model to the tracking model. The real-world experimental results further support our theoretical bound and demonstrate the effectiveness of our algorithm. This work facilitates fully automatic tracking for datasets with a massive number of fiducial markers. The C/C++ source code that implements the fast fiducial marker tracking is available at https://github.com/icthrm/gmm-marker-tracking. Markerauto version 1.6 or later (also integrated in the AuTom platform at http://ear.ict.ac.cn/) offers a complete implementation for fast alignment, in which fast fiducial marker tracking is available by the
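
As a rough illustration of the geometric step this abstract describes, the following minimal Python sketch (not the authors' markerauto implementation; all parameters are toy values) fits a 2D affine transformation between matched marker positions on two micrographs by least squares and reports the residual deviation, the quantity the paper's theoretical bound concerns.

```python
# Minimal sketch (not the authors' implementation): estimate the 2D affine
# transform that maps fiducial marker positions on one micrograph onto the
# corresponding positions on another, and report the per-marker residuals.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) arrays of matched marker coordinates."""
    n = src.shape[0]
    # Homogeneous design matrix: each point contributes [x, y, 1].
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M ~= dst for the 3x2 affine parameter matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    residuals = dst - A @ M
    return M, np.linalg.norm(residuals, axis=1)

# Toy data: markers on micrograph 1 and their (noisy) positions on micrograph 2.
rng = np.random.default_rng(0)
src = rng.uniform(0, 4096, size=(30, 2))
true_M = np.array([[0.98, -0.05], [0.04, 1.01], [12.0, -7.5]])
dst = np.hstack([src, np.ones((30, 1))]) @ true_M + rng.normal(0, 0.5, (30, 2))

M, dev = fit_affine(src, dst)
print("max transformation deviation (px):", dev.max())
```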

    DeeReCT-PolyA: a robust and generic deep learning method for PAS identification

    Xia, Zhihao; Li, Yu; Zhang, Bin; Li, Zhongxiao; Hu, Yuhui; Chen, Wei; Gao, Xin (Bioinformatics, Oxford University Press (OUP), 2018-11-30) [Article]
Motivation: Polyadenylation is a critical step for gene expression regulation during the maturation of mRNA. An accurate and robust method for poly(A) signal (PAS) identification is not only desired for better annotation of transcript ends, but can also help us gain deeper insight into the underlying regulatory mechanism. Although many methods have been proposed for PAS recognition, most of them are PAS motif-specific and human-specific, which leads to high risks of overfitting, low generalization power, and inability to reveal the connections between the underlying mechanisms of different mammals.
Results: In this work, we propose a robust, PAS motif-agnostic, and highly interpretable and transferable deep learning model for accurate PAS recognition, which requires no prior knowledge or human-designed features. We show that our single model trained over all human PAS motifs not only outperforms the state-of-the-art methods trained on specific motifs, but also generalizes well to two mouse datasets. Moreover, we further increase the prediction accuracy by transferring the deep learning model trained on the data of one species to the data of a different species. Several novel underlying poly(A) patterns are revealed through the visualization of important oligomers and positions in our trained models. Finally, we interpret the deep learning models by converting the convolutional filters into sequence logos and quantitatively compare the sequence logos between human and mouse datasets.
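
For readers unfamiliar with this class of models, the sketch below shows a generic, motif-agnostic 1D convolutional classifier over one-hot-encoded DNA windows, written in PyTorch. It is a toy under stated assumptions, not the published DeeReCT-PolyA architecture; the window length and all layer sizes are illustrative only.

```python
# A minimal sketch (assumed architecture, not DeeReCT-PolyA itself) of a
# motif-agnostic CNN classifier over one-hot-encoded DNA windows.
import torch
import torch.nn as nn

class PASClassifier(nn.Module):
    def __init__(self, window_len=206):  # window length is an assumption
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=8),   # filters act like learned motifs
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        conv_out = 32 * ((window_len - 8 + 1) // 4)
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(conv_out, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                  # true PAS vs. pseudo PAS
        )

    def forward(self, x):  # x: (batch, 4, window_len), one-hot A/C/G/T
        return self.head(self.conv(x))

model = PASClassifier()
logits = model(torch.randn(8, 4, 206))  # dummy batch in place of real sequences
print(logits.shape)                     # torch.Size([8, 2])
```

Because the filters are learned rather than tied to a fixed motif, a model of this shape can be retrained or fine-tuned on another species' data, which is the transfer setting the abstract describes.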

    Systematic selection of chemical fingerprint features improves the Gibbs energy prediction of biochemical reactions

    Alazmi, Meshari; Kuwahara, Hiroyuki; Soufan, Othman; Ding, Lizhong; Gao, Xin (Bioinformatics, Oxford University Press (OUP), 2018-12-24) [Article]
Motivation: Accurate and wide-ranging prediction of thermodynamic parameters for biochemical reactions can facilitate deeper insights into the workings and the design of metabolic systems.
Results: Here, we introduce a machine learning method with chemical fingerprint-based features for the prediction of the Gibbs free energy of biochemical reactions. From a large pool of 2D fingerprint-based features, this method systematically selects a small number of relevant ones and uses them to construct a regularized linear model. Since a manual selection of 2D structure-based features can be a tedious and time-consuming task requiring expert knowledge about the structure-activity relationship of chemical compounds, the systematic feature selection step in our method offers a convenient means to identify relevant 2D fingerprint-based features. By comparing our method with state-of-the-art linear regression-based methods for standard Gibbs free energy prediction, we demonstrate that its prediction accuracy and prediction coverage are the most favorable. Our results show direct evidence that a number of 2D fingerprints collectively provide useful information about the Gibbs free energy of biochemical reactions and that our systematic feature selection procedure provides a convenient way to identify them.
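
The feature-selection-plus-regularized-linear-model recipe can be illustrated with a short scikit-learn sketch. This is a hypothetical setup with synthetic fingerprint bits, not the paper's pipeline; it uses an L1 (lasso) penalty as one concrete way that irrelevant fingerprint features end up with zero weight.

```python
# Minimal sketch (assumed setup, not the paper's exact method): fit a
# regularized linear model for Gibbs free energy over binary 2D-fingerprint
# features, letting the L1 penalty select a small relevant subset.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_reactions, n_bits = 400, 1024
X = rng.integers(0, 2, size=(n_reactions, n_bits)).astype(float)  # fingerprint bits
true_w = np.zeros(n_bits)
true_w[:12] = rng.normal(0, 5.0, 12)              # only a few bits matter
y = X @ true_w + rng.normal(0, 1.0, n_reactions)  # synthetic "measured" dG

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_bits} fingerprint features selected")
```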

    DLBI: deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy

    Li, Yu; Xu, Fan; Zhang, Fa; Xu, Pingyong; Zhang, Mingshu; Fan, Ming; Li, Lihua; Gao, Xin; Han, Renmin (Bioinformatics, Oxford University Press (OUP), 2018-06-27) [Article]
Super-resolution fluorescence microscopy, with a resolution beyond the diffraction limit of light, has become an indispensable tool to directly visualize biological structures in living cells at a nanometer-scale resolution. Despite advances in high-density super-resolution fluorescent techniques, existing methods still have bottlenecks, including extremely long execution time, artificial thinning and thickening of structures, and lack of ability to capture latent structures. Here, we propose a novel deep learning guided Bayesian inference (DLBI) approach for the time-series analysis of high-density fluorescent images. Our method combines the strengths of deep learning and statistical inference: deep learning captures the underlying distribution of the fluorophores that is consistent with the observed time-series fluorescent images by exploring local features and correlation along the time axis, and statistical inference further refines the ultrastructure extracted by deep learning and endows the final image with physical meaning. In particular, our method contains three main components. The first is a simulator that takes a high-resolution image as the input and simulates time-series low-resolution fluorescent images based on experimentally calibrated parameters, which provides supervised training data for the deep learning model. The second is a multi-scale deep learning module that captures both spatial information in each input low-resolution image and temporal information among the time-series images. The third is a Bayesian inference module that takes the image from the deep learning module as the initial localization of fluorophores and removes artifacts by statistical inference. Comprehensive experimental results on both real and simulated datasets demonstrate that our method provides more accurate and realistic local-patch and large-field reconstruction than the state-of-the-art method, the 3B analysis, while being more than two orders of magnitude faster. The main program is available at https://github.com/lykaust15/DLBI. Supplementary data are available at Bioinformatics online.
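
The first component, the training-data simulator, lends itself to a compact illustration. The sketch below is a toy version under assumed parameters (blinking probability, PSF width, noise model), not the experimentally calibrated simulator from the paper: it turns a high-resolution ground-truth image into a time series of noisy low-resolution frames.

```python
# Minimal sketch (assumed parameters, not the paper's calibrated simulator):
# simulate time-series low-resolution frames from a high-resolution image via
# stochastic fluorophore blinking, PSF blurring, downsampling, and shot noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_frames(hi_res, n_frames=50, p_on=0.1, psf_sigma=4.0, scale=8):
    rng = np.random.default_rng(1)
    frames = []
    for _ in range(n_frames):
        blinking = hi_res * (rng.random(hi_res.shape) < p_on)       # random on/off
        blurred = gaussian_filter(blinking.astype(float), psf_sigma)  # PSF blur
        lo = blurred.reshape(hi_res.shape[0] // scale, scale,
                             hi_res.shape[1] // scale, scale).mean(axis=(1, 3))
        frames.append(rng.poisson(lo * 100) / 100.0)                # shot noise
    return np.stack(frames)

hi_res = np.zeros((256, 256))
hi_res[128, 32:224] = 1.0          # a thin filament as toy ground truth
frames = simulate_frames(hi_res)
print(frames.shape)                # (50, 32, 32)
```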

    OPA2Vec: combining formal and informal content of biomedical ontologies to improve similarity-based prediction

    Smaili, Fatima Z.; Gao, Xin; Hoehndorf, Robert (Bioinformatics, Oxford University Press (OUP), 2018-11-08) [Article]
Motivation: Ontologies are widely used in biology for data annotation, integration, and analysis. In addition to formally structured axioms, ontologies contain meta-data in the form of annotation axioms which provide valuable pieces of information that characterize ontology classes. Annotation axioms commonly used in ontologies include class labels, descriptions, or synonyms. Despite being a rich source of semantic information, the ontology meta-data are generally unexploited by ontology-based analysis methods.
Results: We propose a novel method, OPA2Vec, to generate vector representations of biological entities in ontologies by combining formal ontology axioms and annotation axioms from the ontology meta-data. We apply a Word2Vec model that has been pre-trained on either a corpus of abstracts or full-text articles to produce feature vectors from our collected data. We validate our method in two different ways: first, we use the obtained vector representations of proteins in a similarity measure to predict protein-protein interactions on two different datasets. Second, we evaluate our method on predicting gene-disease associations based on phenotype similarity by generating vector representations of genes and diseases using a phenotype ontology, and applying the obtained vectors to predict gene-disease associations using mouse model phenotypes. We demonstrate that OPA2Vec significantly outperforms existing methods for predicting gene-disease associations. Using evidence from mouse models, we apply OPA2Vec to identify candidate genes for several thousand rare and orphan diseases. OPA2Vec can be used to produce vector representations of any biomedical entity given any type of biomedical ontology.
Availability: https://github.com/bio-ontology-research-group/opa2vec
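
The core idea, serializing formal axioms and annotation meta-data into token sequences and embedding them with Word2Vec, can be sketched with gensim. The corpus and identifiers below are invented toy examples, not the OPA2Vec pipeline or its training corpus.

```python
# Minimal sketch (toy corpus, not the OPA2Vec pipeline): treat each serialized
# axiom or annotation as a "sentence", train Word2Vec over them, and read off
# vectors for ontology entities.
from gensim.models import Word2Vec

# Each sentence is one serialized axiom or annotation (identifiers invented).
corpus = [
    ["GO:0006915", "SubClassOf", "GO:0012501"],            # formal axiom
    ["GO:0006915", "rdfs:label", "apoptotic", "process"],  # annotation axiom
    ["P53_HUMAN", "hasFunction", "GO:0006915"],            # entity-class link
] * 200  # repeat to give the toy corpus some bulk

model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1, epochs=5)
vec = model.wv["GO:0006915"]       # embedding for an ontology class
print(vec.shape)                   # (64,)
print(model.wv.similarity("GO:0006915", "P53_HUMAN"))
```

Downstream, such entity vectors feed a similarity measure or a classifier, which is how the abstract's protein-protein interaction and gene-disease association evaluations are set up.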

    Export search results

The export option allows you to export the current search results of the entered query to a file. Different formats are available for download. To export the items, click the button corresponding to the preferred download format.

By default, clicking an export button downloads the maximum allowed number of items. For anonymous users, the maximum is 50 search results.

To select a subset of the search results, click the "Selective Export" button and select the items you want to export. The number of items that can be exported at once is subject to the same limit as a full export.

After making a selection, click one of the export format buttons. The number of items that will be exported is indicated in the bubble next to each export format button.