Feature selection and multi-kernel learning for sparse representation on a manifold

Handle URI:
http://hdl.handle.net/10754/575708
Title:
Feature selection and multi-kernel learning for sparse representation on a manifold
Authors:
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin (ORCID: 0000-0002-7108-3574)
Abstract:
Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.
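For context, the Laplacian sparse coding model of Gao et al., which this paper extends, combines a reconstruction term, an L1 sparsity penalty, and a graph-regularization term over an affinity graph. A minimal sketch of that baseline objective, in illustrative notation that may differ from the paper's own, is

\[
\min_{D,\,S}\; \|X - D S\|_F^2 \;+\; \alpha \sum_{i=1}^{n} \|s_i\|_1 \;+\; \beta\, \mathrm{Tr}\!\left(S L S^{\top}\right),
\]

where \(X\) holds the data samples, \(D\) is the dictionary, \(S = [s_1, \dots, s_n]\) are the sparse codes, and \(L\) is the Laplacian of the affinity graph. The contribution described in the abstract is to make that graph reliable by learning feature weights or a combination of kernels jointly with the codes, rather than building the graph directly from the noisy original features.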
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Computational Bioscience Research Center (CBRC); Computer Science Program; Structural and Functional Bioinformatics Group
Publisher:
Elsevier BV
Journal:
Neural Networks
Issue Date:
Mar-2014
DOI:
10.1016/j.neunet.2013.11.009
PubMed ID:
24333479
Type:
Article
ISSN:
0893-6080
Sponsors:
The study was supported by grants from Chongqing Key Laboratory of Computational Intelligence, China (Grant No. CQ-LCI-2013-02), Tianjin Key Laboratory of Cognitive Computing and Application, China, 2011 Qatar Annual Research Forum Award (Grant no. ARF2011), and King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
Appears in Collections:
Articles; Structural and Functional Bioinformatics Group; Computer Science Program; Computational Bioscience Research Center (CBRC); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Wang, Jim Jing-Yan | en
dc.contributor.author | Bensmail, Halima | en
dc.contributor.author | Gao, Xin | en
dc.date.accessioned | 2015-08-24T08:36:17Z | en
dc.date.available | 2015-08-24T08:36:17Z | en
dc.date.issued | 2014-03 | en
dc.identifier.issn | 08936080 | en
dc.identifier.pmid | 24333479 | en
dc.identifier.doi | 10.1016/j.neunet.2013.11.009 | en
dc.identifier.uri | http://hdl.handle.net/10754/575708 | en
dc.description.abstract | Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd. | en
dc.description.sponsorship | The study was supported by grants from Chongqing Key Laboratory of Computational Intelligence, China (Grant No. CQ-LCI-2013-02), Tianjin Key Laboratory of Cognitive Computing and Application, China, 2011 Qatar Annual Research Forum Award (Grant no. ARF2011), and King Abdullah University of Science and Technology (KAUST), Saudi Arabia. | en
dc.publisher | Elsevier BV | en
dc.subject | Data representation | en
dc.subject | Feature selection | en
dc.subject | Manifold | en
dc.subject | Multiple kernel learning | en
dc.subject | Sparse coding | en
dc.title | Feature selection and multi-kernel learning for sparse representation on a manifold | en
dc.type | Article | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Computational Bioscience Research Center (CBRC) | en
dc.contributor.department | Computer Science Program | en
dc.contributor.department | Structural and Functional Bioinformatics Group | en
dc.identifier.journal | Neural Networks | en
dc.contributor.institution | Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China | en
dc.contributor.institution | Qatar Computing Research Institute, Doha 5825, Qatar | en
kaust.author | Wang, Jim Jing-Yan | en
kaust.author | Gao, Xin | en
