Robust Visual Tracking via Exclusive Context Modeling

Handle URI:
http://hdl.handle.net/10754/556124
Title:
Robust Visual Tracking via Exclusive Context Modeling
Authors:
Zhang, Tianzhu; Ghanem, Bernard (0000-0002-5534-587X); Liu, Si; Xu, Changsheng; Ahuja, Narendra
Abstract:
In this paper, we formulate particle filter-based object tracking as an exclusive sparse learning problem that exploits contextual information. To achieve this goal, we propose the context-aware exclusive sparse tracker (CEST) to model particle appearances as linear combinations of dictionary templates that are updated dynamically. Learning the representation of each particle is formulated as an exclusive sparse representation problem, where the overall dictionary is composed of multiple group dictionaries that can contain contextual information. With context, CEST is less prone to tracker drift. Interestingly, we show that the popular L₁ tracker [1] is a special case of our CEST formulation. The proposed learning problem is efficiently solved using an accelerated proximal gradient method that yields a sequence of closed-form updates. To make the tracker much faster, we reduce the number of learning problems to be solved by using the dual problem to quickly and systematically rank and prune particles in each frame. We test our CEST tracker on challenging benchmark sequences that involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that CEST consistently outperforms state-of-the-art trackers.
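As a rough illustration of the representation problem described in the abstract (not the authors' implementation): the abstract notes that the L₁ tracker is a special case of CEST, so the sketch below solves that special case for one particle with an accelerated proximal gradient (FISTA-style) iteration of closed-form soft-thresholding updates. The names D, y, lam, and n_iter are illustrative assumptions; the full CEST tracker replaces the plain L₁ penalty with an exclusive penalty over group dictionaries that carry contextual information.

    # Minimal sketch, assuming the L1 special case of the CEST representation:
    #   min_c  0.5 * ||y - D c||_2^2 + lam * ||c||_1
    # where y is one particle's appearance vector and the columns of D are
    # dynamically updated dictionary templates. Not the authors' code.
    import numpy as np

    def soft_threshold(x, t):
        """Closed-form proximal operator of t * ||.||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista_l1(D, y, lam=0.01, n_iter=100):
        """Accelerated proximal gradient (FISTA) for one particle's code."""
        L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
        c = np.zeros(D.shape[1])
        z, t = c.copy(), 1.0
        for _ in range(n_iter):
            grad = D.T @ (D @ z - y)               # gradient of the quadratic term
            c_new = soft_threshold(z - grad / L, lam / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = c_new + ((t - 1.0) / t_new) * (c_new - c)   # momentum step
            c, t = c_new, t_new
        return c

    # Particles would then be scored by reconstruction error ||y - D c||_2^2,
    # which is one plausible way to rank and prune candidates per frame.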
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Citation:
Robust Visual Tracking via Exclusive Context Modeling 2015:1 IEEE Transactions on Cybernetics
Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Journal:
IEEE Transactions on Cybernetics
Issue Date:
9-Feb-2015
DOI:
10.1109/TCYB.2015.2393307
Type:
Article
ISSN:
2168-2267; 2168-2275
Additional Links:
http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7036101
Appears in Collections:
Articles; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field: Value [Language]
dc.contributor.author: Zhang, Tianzhu [en]
dc.contributor.author: Ghanem, Bernard [en]
dc.contributor.author: Liu, Si [en]
dc.contributor.author: Xu, Changsheng [en]
dc.contributor.author: Ahuja, Narendra [en]
dc.date.accessioned: 2015-06-01T14:59:54Z [en]
dc.date.available: 2015-06-01T14:59:54Z [en]
dc.date.issued: 2015-02-09 [en]
dc.identifier.citation: Robust Visual Tracking via Exclusive Context Modeling 2015:1 IEEE Transactions on Cybernetics [en]
dc.identifier.issn: 2168-2267 [en]
dc.identifier.issn: 2168-2275 [en]
dc.identifier.doi: 10.1109/TCYB.2015.2393307 [en]
dc.identifier.uri: http://hdl.handle.net/10754/556124 [en]
dc.description.abstract: In this paper, we formulate particle filter-based object tracking as an exclusive sparse learning problem that exploits contextual information. To achieve this goal, we propose the context-aware exclusive sparse tracker (CEST) to model particle appearances as linear combinations of dictionary templates that are updated dynamically. Learning the representation of each particle is formulated as an exclusive sparse representation problem, where the overall dictionary is composed of multiple group dictionaries that can contain contextual information. With context, CEST is less prone to tracker drift. Interestingly, we show that the popular L₁ tracker [1] is a special case of our CEST formulation. The proposed learning problem is efficiently solved using an accelerated proximal gradient method that yields a sequence of closed-form updates. To make the tracker much faster, we reduce the number of learning problems to be solved by using the dual problem to quickly and systematically rank and prune particles in each frame. We test our CEST tracker on challenging benchmark sequences that involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that CEST consistently outperforms state-of-the-art trackers. [en]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en]
dc.relation.url: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7036101 [en]
dc.rights: (c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. [en]
dc.title: Robust Visual Tracking via Exclusive Context Modeling [en]
dc.type: Article [en]
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division [en]
dc.identifier.journal: IEEE Transactions on Cybernetics [en]
dc.eprint.version: Post-print [en]
dc.contributor.institution: Advanced Digital Sciences Center, Singapore [en]
dc.contributor.institution: National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China [en]
dc.contributor.institution: Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100190, China [en]
dc.contributor.institution: Coordinated Science Laboratory, Department of Electrical and Computer Engineering, Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA [en]
kaust.author: Ghanem, Bernard [en]