Robust visual tracking via multi-task sparse learning

Handle URI:
http://hdl.handle.net/10754/564560
Title:
Robust visual tracking via multi-task sparse learning
Authors:
Zhang, Tianzhu; Ghanem, Bernard (ORCID: 0000-0002-5534-587X); Liu, Si; Ahuja, Narendra
Abstract:
In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing Lp,q mixed norms (p ≥ 1, q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates, which makes MTT computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
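As a rough illustration of the joint-sparsity idea in the abstract, the sketch below solves an L2,1-regularized multi-task representation problem with a FISTA-style accelerated proximal gradient loop: each column of C represents one particle over the template dictionary, and the row-wise shrinkage couples the particles so they select the same few templates. This is a minimal sketch under assumed names and shapes (D for the dictionary, X for stacked particle observations, lam for the regularization weight), not the authors' implementation, and it covers only the p = 2, q = 1 member of the L_{p,1} family discussed in the paper.

```python
import numpy as np

def row_soft_threshold(C, tau):
    """Proximal operator of tau * ||C||_{2,1}: shrink the L2 norm of each row."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return C * scale

def mtt_l21(D, X, lam=0.1, n_iter=200):
    """Jointly represent observations X (d x n, one column per particle)
    over a template dictionary D (d x m), minimizing
        0.5 * ||D C - X||_F^2 + lam * ||C||_{2,1}
    with accelerated proximal gradient (FISTA-style) updates."""
    m, n = D.shape[1], X.shape[1]
    C = np.zeros((m, n))            # coefficient matrix
    Y = C.copy()                    # momentum (extrapolation) point
    t = 1.0
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the smooth gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ Y - X)                         # gradient step
        C_new = row_soft_threshold(Y - grad / L, lam / L)  # closed-form prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = C_new + ((t - 1) / t_new) * (C_new - C)       # momentum update
        C, t = C_new, t_new
    return C
```

The closed-form proximal step is what makes the APG iteration cheap: each update is a matrix multiply followed by row-wise shrinkage, with no inner solver.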
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Electrical Engineering Program; Visual Computing Center (VCC); VCC Analytics Research Group
Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Journal:
2012 IEEE Conference on Computer Vision and Pattern Recognition
Conference/Event name:
2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012
Issue Date:
Jun-2012
DOI:
10.1109/CVPR.2012.6247908
Type:
Conference Paper
ISSN:
1063-6919
ISBN:
978-1-4673-1226-4
Appears in Collections:
Conference Papers; Electrical Engineering Program; Visual Computing Center (VCC); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Zhang, Tianzhu | en
dc.contributor.author | Ghanem, Bernard | en
dc.contributor.author | Liu, Si | en
dc.contributor.author | Ahuja, Narendra | en
dc.date.accessioned | 2015-08-04T07:03:59Z | en
dc.date.available | 2015-08-04T07:03:59Z | en
dc.date.issued | 2012-06 | en
dc.identifier.isbn | 9781467312264 | en
dc.identifier.issn | 10636919 | en
dc.identifier.doi | 10.1109/CVPR.2012.6247908 | en
dc.identifier.uri | http://hdl.handle.net/10754/564560 | en
dc.description.abstract | In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing Lp,q mixed norms (p ≥ 1, q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates, which makes MTT computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE. | en
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | en
dc.title | Robust visual tracking via multi-task sparse learning | en
dc.type | Conference Paper | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Electrical Engineering Program | en
dc.contributor.department | Visual Computing Center (VCC) | en
dc.contributor.department | VCC Analytics Research Group | en
dc.identifier.journal | 2012 IEEE Conference on Computer Vision and Pattern Recognition | en
dc.conference.date | 16 June 2012 through 21 June 2012 | en
dc.conference.name | 2012 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2012 | en
dc.conference.location | Providence, RI | en
dc.contributor.institution | Advanced Digital Sciences Center of Illinois, Singapore, Singapore | en
dc.contributor.institution | Institute of Automation, Chinese Academy of Sciences, China | en
dc.contributor.institution | University of Illinois at Urbana-Champaign, Urbana, IL, United States | en
kaust.author | Ghanem, Bernard | en
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.