Type
Conference Paper
KAUST Department
Applied Mathematics & Computational Science
Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
Computer Science
Computer Science Program
Visual Computing Center (VCC)
Electrical and Computer Engineering Program
Date
2021
Permanent link to this record
http://hdl.handle.net/10754/670874
Abstract
We consider the problem of filling in missing spatiotemporal regions of a video. We provide a novel flow-based solution by introducing a generative model of images in relation to the scene (without missing regions) and mappings from the scene to images. We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings. This ensures consistency of the frame-to-frame flows generated with the underlying scene, reducing geometric distortions in flow-based inpainting. The template is mapped to the missing regions in the video by a new (L$^{2}$-L$^{1}$) interpolation scheme, creating crisp inpaintings and reducing common blur and distortion artifacts. We show on two benchmark datasets that our approach outperforms the state of the art quantitatively and in user studies.
Citation
Lao, D., Zhu, P., Wonka, P., & Sundaramoorthi, G. (2021). Flow-Guided Video Inpainting with Scene Templates. 2021 IEEE/CVF International Conference on Computer Vision (ICCV). https://doi.org/10.1109/iccv48922.2021.01433
Publisher
IEEE
Conference/Event name
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
ISBN
978-1-6654-2813-2
arXiv
2108.12845
Additional Links
https://ieeexplore.ieee.org/document/9710220/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9710220
http://arxiv.org/pdf/2108.12845
DOI
10.1109/ICCV48922.2021.01433