Show simple item record

dc.contributor.author: Wang, Jingbin
dc.contributor.author: Zhou, Yihua
dc.contributor.author: Duan, Kanghong
dc.contributor.author: Wang, Jim Jing-Yan
dc.contributor.author: Bensmail, Halima
dc.date.accessioned: 2016-05-11T09:12:01Z
dc.date.available: 2016-05-11T09:12:01Z
dc.date.issued: 2016-01-15
dc.identifier.doi: 10.1109/SMC.2015.329
dc.identifier.uri: http://hdl.handle.net/10754/609037
dc.description.abstract: In this paper we study the problem of learning from multiple modal data for the purpose of document classification. In this problem, each document is composed of two different modalities of data, i.e., an image and a text. Cross-modal factor analysis (CFA) has been proposed to project the two different modalities of data to a shared data space, so that the classification of an image or a text can be performed directly in this space. A disadvantage of CFA is that it ignores the supervision information. In this paper, we improve CFA by incorporating the supervision information to represent and classify both image and text modalities of documents. We project both image and text data to a shared data space by factor analysis, and then train a class label predictor in the shared space to use the class label information. The factor analysis parameters and the predictor parameter are learned jointly by solving one single objective function. With this objective function, we minimize the distance between the projections of the image and text of the same document, and the classification error of the projections measured by the hinge loss function. The objective function is optimized by an alternating optimization strategy in an iterative algorithm. Experiments on two different multiple modal document data sets show the advantage of the proposed algorithm over other CFA methods.
dc.description.sponsorship: The research reported in this publication was supported by competitive research funding from King Abdullah University of Science and Technology (KAUST), Saudi Arabia.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.url: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7379461
dc.rights: (c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
dc.subject: Cross-modal factor analysis
dc.subject: Multiple modal learning
dc.subject: Supervised learning
dc.title: Supervised Cross-Modal Factor Analysis for Multiple Modal Data Classification
dc.type: Conference Paper
dc.contributor.department: Computational Bioscience Research Center (CBRC)
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.identifier.journal: 2015 IEEE International Conference on Systems, Man, and Cybernetics
dc.conference.date: 9-12 Oct. 2015
dc.conference.name: Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on
dc.conference.location: Kowloon
dc.eprint.version: Post-print
dc.contributor.institution: National Time Service Center, Chinese Academy of Sciences, Xi’an 710600, China
dc.contributor.institution: Graduate University of Chinese Academy of Sciences, Beijing 100039, China
dc.contributor.institution: Department of Mechanical Engineering and Mechanics, Lehigh University, Bethlehem, PA 18015, USA
dc.contributor.institution: North China Sea Marine Technical Support Center, State Oceanic Administration, Qingdao 266033, China
dc.contributor.institution: Qatar Computing Research Institute, Doha 5825, Qatar
kaust.person: Wang, Jim Jing-Yan
refterms.dateFOA: 2018-06-13T11:49:51Z
dc.date.published-online: 2016-01-15
dc.date.published-print: 2015-10
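The abstract describes a single joint objective: a cross-modal matching term (projections of the same document's image and text should be close) plus a hinge-loss classification term on the projections, minimized by alternating optimization. The following is a minimal NumPy sketch of that idea; all variable names, dimensions, step sizes, and the plain subgradient updates are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a supervised cross-modal factor analysis objective;
# dimensions and data are synthetic, not the paper's setup.
rng = np.random.default_rng(0)
n, d_img, d_txt, k = 40, 20, 30, 5            # documents, modal dims, shared dim

X_img = rng.normal(size=(n, d_img))           # image features per document
X_txt = rng.normal(size=(n, d_txt))           # text features per document
y = rng.choice([-1.0, 1.0], size=n)           # binary class labels

W_img = 0.1 * rng.normal(size=(d_img, k))     # factor-analysis projections
W_txt = 0.1 * rng.normal(size=(d_txt, k))
w = np.zeros(k)                               # shared-space label predictor
lam, lr = 1.0, 1e-3                           # hinge weight, step size (assumed)

def objective():
    P, Q = X_img @ W_img, X_txt @ W_txt
    match = np.sum((P - Q) ** 2)              # same-document projections close
    hinge = (np.maximum(0, 1 - y * (P @ w)).sum()
             + np.maximum(0, 1 - y * (Q @ w)).sum())
    return match + lam * hinge

start = objective()
for _ in range(200):                          # alternating optimization
    # step 1: update the projections with the predictor w fixed
    P, Q = X_img @ W_img, X_txt @ W_txt
    a_p = (1 - y * (P @ w) > 0) * y           # active hinge subgradient signs
    a_q = (1 - y * (Q @ w) > 0) * y
    W_img -= lr * (2 * X_img.T @ (P - Q) - lam * np.outer(X_img.T @ a_p, w))
    W_txt -= lr * (-2 * X_txt.T @ (P - Q) - lam * np.outer(X_txt.T @ a_q, w))
    # step 2: update the predictor with the projections fixed
    P, Q = X_img @ W_img, X_txt @ W_txt
    a_p = (1 - y * (P @ w) > 0) * y
    a_q = (1 - y * (Q @ w) > 0) * y
    w -= lr * (-lam * (P.T @ a_p + Q.T @ a_q))

final = objective()                           # should be below the start value
```

The alternation mirrors the abstract's strategy: each pass holds one block of parameters fixed while taking a descent step on the other, so the combined matching and classification objective decreases iteratively.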


Files in this item

Name: final.pdf
Size: 88.76 KB
Format: PDF
Description: Accepted Manuscript
