Authors
León Alcázar, Juan
Bravo, María A.
Thabet, Ali Kassem
KAUST Department
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical and Computer Engineering Program
VCC Analytics Research Group
Visual Computing Center (VCC)
Preprint Posting Date
2019-04-11
Online Publication Date
2021-06-24
Print Publication Date
2021-09
Embargo End Date
2023-06-24
Permanent link to this record
http://hdl.handle.net/10754/660665
Abstract
Instance-level video segmentation requires a solid integration of spatial and temporal information. However, current methods rely mostly on domain-specific information (online learning) to produce accurate instance-level segmentations. We propose a novel approach that relies exclusively on the integration of generic spatio-temporal attention cues. Our strategy, named Multi-Attention Instance Network (MAIN), overcomes challenging segmentation scenarios over arbitrary videos without modelling sequence- or instance-specific knowledge. We design MAIN to segment multiple instances in a single forward pass, and optimize it with a novel loss function that favors class-agnostic predictions and assigns instance-specific penalties. We achieve state-of-the-art performance on the challenging YouTube-VOS dataset and benchmark, improving the unseen Jaccard and F-Metric by 6.8% and 12.7% respectively, while operating in real time (30.3 FPS).
Citation
León Alcázar, J., Bravo, M. A., Jeanneret, G., Thabet, A. K., Brox, T., Arbeláez, P., & Ghanem, B. (2021). MAIN: Multi-Attention Instance Network for video segmentation. Computer Vision and Image Understanding, 103240. doi:10.1016/j.cviu.2021.103240
Sponsors
This work was partially supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research, and by the German-Colombian Academic Cooperation between the German Research Foundation (DFG grant BR 3815/9-1) and Universidad de los Andes, Colombia.