Show simple item record

dc.contributor.author: Qi, Yuankai
dc.contributor.author: Qin, Lei
dc.contributor.author: Zhang, Jian
dc.contributor.author: Zhang, Shengping
dc.contributor.author: Huang, Qingming
dc.contributor.author: Yang, Ming-Hsuan
dc.date.accessioned: 2018-02-01T11:45:54Z
dc.date.available: 2018-02-01T11:45:54Z
dc.date.issued: 2018-01-24
dc.identifier.citation: Qi Y, Qin L, Zhang J, Zhang S, Huang Q, et al. (2018) Structure-aware Local Sparse Coding for Visual Tracking. IEEE Transactions on Image Processing: 1–1. Available: http://dx.doi.org/10.1109/tip.2018.2797482.
dc.identifier.issn: 1057-7149
dc.identifier.issn: 1941-0042
dc.identifier.doi: 10.1109/tip.2018.2797482
dc.identifier.uri: http://hdl.handle.net/10754/627018
dc.description.abstract: Sparse coding has been applied to visual tracking and related vision problems with demonstrated success in recent years. Existing tracking methods based on local sparse coding sample patches from a target candidate and sparsely encode them using a dictionary consisting of patches sampled from target template images. The discriminative strength of these methods is limited because spatial structure constraints among the template patches are not exploited. To address this problem, we propose a structure-aware local sparse coding algorithm which encodes a target candidate using templates with both global and local sparsity constraints. For robust tracking, we show that local regions of a candidate should be encoded only with the corresponding local regions of the target templates that are the most similar from the global view. Thus, a more precise and discriminative sparse representation is obtained to account for appearance changes. To alleviate tracking drift, we design an effective template update scheme. Extensive experiments on challenging image sequences demonstrate the effectiveness of the proposed algorithm against numerous state-of-the-art methods.
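The core operation the abstract refers to, sparsely encoding a vectorized patch over a dictionary of template patches, can be sketched as below. This is a generic l1-regularized least-squares (lasso) solver using ISTA, not the paper's structure-aware formulation with global and local constraints; the function name and parameters are invented for illustration.

```python
import numpy as np

def sparse_encode_patch(patch, dictionary, lam=0.1, n_iter=200):
    """Illustrative sketch: encode one vectorized patch over a patch
    dictionary with an l1 sparsity penalty, solved by ISTA.

    `dictionary` has one vectorized template patch per column; the
    returned coefficient vector is sparse, so the patch is explained
    by only a few template patches.
    """
    D = dictionary
    # Step size from the Lipschitz constant of the quadratic term's gradient
    L = np.linalg.norm(D.T @ D, 2)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - patch)          # gradient of 0.5*||patch - D a||^2
        a = a - grad / L                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a
```

In a tracking loop, the reconstruction error of a candidate patch under the template dictionary would then serve as a similarity score; the paper's contribution is to further constrain which template regions each candidate region may be encoded with.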
dc.description.sponsorship: This work was supported in part by National Natural Science Foundation of China: 61620106009, 61332016, U1636214, 61650202, 61572465, 61390510, 61732007, 61672188; in part by Key Research Program of Frontier Sciences, CAS: QYZDJ-SSWSYS013; in part by the NSF CAREER Grant 1149783, and gifts from Adobe, Verisk, and Nvidia.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.url: http://ieeexplore.ieee.org/document/8268563/
dc.rights: (c) 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
dc.subject: Visual tracking
dc.subject: local sparse coding
dc.subject: spatial structure information
dc.subject: template update
dc.title: Structure-aware Local Sparse Coding for Visual Tracking
dc.type: Article
dc.contributor.department: Visual Computing Center (VCC)
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.identifier.journal: IEEE Transactions on Image Processing
dc.eprint.version: Post-print
dc.contributor.institution: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, 150001, China.
dc.contributor.institution: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology of Chinese Academy of Sciences, Beijing, 100190, China.
dc.contributor.institution: School of Computer Science and Technology, Harbin Institute of Technology, Weihai, 264209, China.
dc.contributor.institution: School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China.
dc.contributor.institution: School of Engineering, University of California at Merced, Merced, CA 95344 USA.
kaust.person: Zhang, Jian
refterms.dateFOA: 2018-06-13T12:11:14Z
dc.date.published-online: 2018-01-24
dc.date.published-print: 2018-08


Files in this item

Name: 08268563.pdf
Size: 16.59Mb
Format: PDF
Description: Accepted Manuscript

