KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical Engineering Program
Visual Computing Center (VCC)
Online Publication Date: 2017-11-09
Print Publication Date: 2017-07
Permanent link to this record: http://hdl.handle.net/10754/626983
Abstract: Despite recent advances in large-scale video analysis, action detection remains one of the most challenging unsolved problems in computer vision. This difficulty is due in part to the large volume of data that must be analyzed to detect actions in videos. Existing approaches have mitigated the computational cost, but they still lack the rich high-level semantics that would help them localize actions quickly. In this paper, we introduce a Semantic Context Cascade (SCC) model that aims to detect actions in long video sequences. By embracing semantic priors associated with human activities, SCC produces high-quality class-specific action proposals and prunes unrelated activities in a cascade fashion. Experimental results on ActivityNet show that SCC achieves state-of-the-art performance for action detection while operating in real time.
Citation: Heilbron FC, Barrios W, Escorcia V, Ghanem B (2017) SCC: Semantic Context Cascade for Efficient Action Detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Available: http://dx.doi.org/10.1109/CVPR.2017.338.
Sponsors: Research in this publication was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research.
Conference/Event name: 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)