
dc.contributor.author: Li, Huibin
dc.contributor.author: Ding, Huaxiong
dc.contributor.author: Huang, Di
dc.contributor.author: Wang, Yunhong
dc.contributor.author: Zhao, Xi
dc.contributor.author: Morvan, Jean-Marie
dc.contributor.author: Chen, Liming
dc.date.accessioned: 2015-08-02T10:24:32Z
dc.date.available: 2015-08-02T10:24:32Z
dc.date.issued: 2015-07-29
dc.identifier.citation: An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition. Computer Vision and Image Understanding, 2015.
dc.identifier.issn: 1077-3142
dc.identifier.doi: 10.1016/j.cviu.2015.07.005
dc.identifier.uri: http://hdl.handle.net/10754/561399
dc.description.abstract: We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks is localized on the 2D face images and their corresponding 3D face scans using a novel algorithm, the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second-Order Gradients (HSOG) local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors built from first-order and second-order surface differential-geometry quantities, namely the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (meshHOS), a curvature quantization. Finally, the Support Vector Machine (SVM) recognition results of all 2D and 3D descriptors are fused at both the feature level and the score level to further improve accuracy. Comprehensive experimental results demonstrate that the 2D and 3D descriptors exhibit impressive complementary characteristics. On the BU-3DFE benchmark, our multimodal feature-based approach outperforms the state-of-the-art methods, achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is demonstrated on the Bosphorus database.
dc.language.iso: en
dc.publisher: Elsevier BV
dc.relation.url: http://linkinghub.elsevier.com/retrieve/pii/S1077314215001587
dc.rights: NOTICE: this is the author’s version of a work that was accepted for publication in Computer Vision and Image Understanding. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Computer Vision and Image Understanding, 29 July 2015. DOI: 10.1016/j.cviu.2015.07.005
dc.subject: Facial expression recognition
dc.subject: Local texture descriptor
dc.subject: Local shape descriptor
dc.subject: Multimodal fusion
dc.title: An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition
dc.type: Article
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Visual Computing Center (VCC)
dc.identifier.journal: Computer Vision and Image Understanding
dc.eprint.version: Post-print
dc.contributor.institution: School of Mathematics and Statistics, Xi’an Jiaotong University, Xi’an, China
dc.contributor.institution: Ecole Centrale de Lyon, LIRIS UMR5205, Lyon, France
dc.contributor.institution: State Key Laboratory of Software Development Environment, School of Computer Science and Engineering, Beihang University, Beijing, China
dc.contributor.institution: School of Management, Xi’an Jiaotong University, Xi’an, China
dc.contributor.institution: Université Lyon 1, Institut Camille Jordan, Lyon, France
dc.contributor.affiliation: King Abdullah University of Science and Technology (KAUST)
kaust.person: Morvan, Jean-Marie
dc.date.published-online: 2015-07-29
dc.date.published-print: 2015-11
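
Note: the abstract above describes fusing the SVM results of the 2D (SIFT, HSOG) and 3D (meshHOG, meshHOS) descriptors at both the feature level and the score level. The Python listing below is a minimal, hypothetical sketch of one common way score-level fusion can be realized (weighted averaging of per-descriptor class probabilities); the function names, the RBF kernel, the uniform weights, and the probability-based scores are illustrative assumptions, not the authors' implementation.

    from sklearn.svm import SVC

    def train_descriptor_svms(features_by_descriptor, labels):
        # Illustrative sketch: train one SVM per descriptor
        # (e.g. "SIFT", "HSOG", "meshHOG", "meshHOS" -> feature matrix).
        models = {}
        for name, X in features_by_descriptor.items():
            clf = SVC(kernel="rbf", probability=True)  # probability outputs enable score fusion
            clf.fit(X, labels)
            models[name] = clf
        return models

    def score_level_fusion(models, features_by_descriptor, weights=None):
        # Weighted average of per-descriptor class-probability scores.
        names = list(models)
        if weights is None:
            weights = {n: 1.0 / len(names) for n in names}  # uniform weights (an assumption)
        fused = None
        for n in names:
            scores = models[n].predict_proba(features_by_descriptor[n]) * weights[n]
            fused = scores if fused is None else fused + scores
        # All models were fit on the same labels, so their classes_ orderings agree.
        return models[names[0]].classes_[fused.argmax(axis=1)]

Feature-level fusion, the other scheme mentioned in the abstract, would instead concatenate the descriptor vectors of each sample before training a single classifier.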


Files in this item

Name: 1-s2.0-S1077314215001587-main.pdf
Size: 1.898 MB
Format: PDF
Description: Accepted Manuscript
