KAUST Department: Electrical Engineering Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Physical Science and Engineering (PSE) Division
Permanent link to this record: http://hdl.handle.net/10754/556107
Abstract: Sparse representation has been applied to visual tracking by finding the best target candidate with minimal reconstruction error using target templates. However, most sparse representation based trackers consider only holistic or local representations and do not make full use of the intrinsic structure among and inside target candidates, making the representation less effective when similar objects appear or under occlusion. In this paper, we propose a novel Structural Sparse Tracking (SST) algorithm, which not only exploits the intrinsic relationship among target candidates and their local patches to learn their sparse representations jointly, but also preserves the spatial layout structure among the local patches inside each target candidate. We show that our SST algorithm accommodates most existing sparse trackers with their respective merits. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed SST algorithm performs favorably against several state-of-the-art methods.
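The abstract builds on the baseline sparse-representation tracker: each candidate region is sparsely coded over a dictionary of target templates, and the candidate with the smallest reconstruction error is selected. The sketch below illustrates that baseline idea (not the joint structural formulation of SST itself); the function names, the ISTA solver, and the regularization weight `lam` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(y, D, lam=0.01, n_iter=200):
    """Solve min_c 0.5*||y - D c||^2 + lam*||c||_1 with plain ISTA.

    y : observed candidate (feature vector), D : template dictionary (columns).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)           # gradient of the quadratic term
        c = soft_threshold(c - grad / L, lam / L)
    return c

def select_candidate(candidates, D, lam=0.01):
    """Return the index of the candidate with minimal reconstruction error."""
    errors = [np.linalg.norm(y - D @ sparse_code(y, D, lam)) for y in candidates]
    return int(np.argmin(errors))
```

A candidate that closely matches one of the target templates reconstructs with near-zero residual, while background clutter does not; SST extends this by coding all candidates and their local patches jointly so the selection also respects spatial layout.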
Citation: Zhang, T., Liu, S., Xu, C., Yan, S., Ghanem, B., Ahuja, N., & Yang, M.-H. (2015). Structural Sparse Tracking. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). doi:10.1109/cvpr.2015.7298610
Sponsors: This study is supported by the research grant for the Human Sixth Sense Programme at the Advanced Digital Sciences Center from Singapore’s Agency for Science, Technology and Research (A∗STAR). C. Xu is supported by 973 Program Project No. 2012CB316304 and NSFC 61225009, 61432019, 61303173, U1435211, 173211KYSB20130018. M.-H. Yang is supported in part by NSF CAREER Grant #1149783 and NSF IIS Grant #1152576.
Conference/Event name: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015