Type
Conference Paper
KAUST Department
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical Engineering Program
Visual Computing Center (VCC)
Date
2017-11-09
Online Publication Date
2017-11-09
Print Publication Date
2017-07
Permanent link to this record
http://hdl.handle.net/10754/626983
Abstract
Despite recent advances in large-scale video analysis, action detection remains one of the most challenging unsolved problems in computer vision. This difficulty stems in part from the large volume of data that must be analyzed to detect actions in videos. Existing approaches have mitigated the computational cost, but they still lack the rich high-level semantics that would help them localize actions quickly. In this paper, we introduce a Semantic Cascade Context (SCC) model that aims to detect actions in long video sequences. By embracing semantic priors associated with human activities, SCC produces high-quality class-specific action proposals and prunes unrelated activities in a cascade fashion. Experimental results on ActivityNet show that SCC achieves state-of-the-art performance for action detection while operating in real time.
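For intuition only, the cascade-style pruning described in the abstract can be sketched as filtering action proposals through successively stricter scoring stages, so cheap semantic-context cues reject unrelated activities before costlier classification runs. This is a minimal illustrative sketch, not the authors' implementation; the Proposal structure, cascade_prune helper, score functions (semantic_context_score, action_classifier_score), and thresholds are hypothetical placeholders.

# Illustrative sketch of cascade-style pruning of action proposals.
# NOT the paper's implementation; all names below are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Proposal:
    start: float          # proposal start time (seconds)
    end: float            # proposal end time (seconds)
    action_class: str     # candidate action label


def cascade_prune(
    proposals: List[Proposal],
    stages: List[Callable[[Proposal], float]],
    thresholds: List[float],
) -> List[Proposal]:
    """Pass proposals through a cascade of scoring stages.

    Each stage scores every surviving proposal; proposals falling below
    that stage's threshold are discarded early, so later (typically more
    expensive) stages only process the survivors.
    """
    survivors = proposals
    for score_fn, threshold in zip(stages, thresholds):
        survivors = [p for p in survivors if score_fn(p) >= threshold]
    return survivors


# Hypothetical usage: a cheap semantic-context stage (e.g. object/scene
# priors associated with the candidate action) followed by a costlier
# action-classifier stage.
# kept = cascade_prune(
#     proposals,
#     stages=[semantic_context_score, action_classifier_score],
#     thresholds=[0.3, 0.5],
# )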
Citation
Heilbron FC, Barrios W, Escorcia V, Ghanem B (2017) SCC: Semantic Context Cascade for Efficient Action Detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Available: http://dx.doi.org/10.1109/CVPR.2017.338.
Sponsors
Research in this publication was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research.
Conference/Event name
30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Additional Links
http://ieeexplore.ieee.org/document/8099821/
DOI
10.1109/CVPR.2017.338