Self-Occlusions and Disocclusions in Causal Video Object Segmentation
Type
Conference Paper

KAUST Department
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Electrical Engineering Program; Visual Computing Center (VCC)
Date
2016-02-19

Online Publication Date
2016-02-19

Print Publication Date
2015-12

Permanent link to this record
http://hdl.handle.net/10754/621290
Abstract
We propose a method to detect disocclusion in video sequences of three-dimensional scenes and to partition the disoccluded regions into objects, defined by coherent deformation corresponding to surfaces in the scene. Our method infers deformation fields that are piecewise smooth by construction, without the need for an explicit regularizer and the associated choice of weight. It then partitions the disoccluded region and groups its components with objects by leveraging the complementarity of motion and appearance cues: where appearance changes within an object, motion can usually be reliably inferred and used for grouping; where appearance is close to constant, it can be used for grouping directly. We integrate both cues in an energy minimization framework, incorporate prior assumptions explicitly into the energy, and propose a numerical scheme to minimize it. © 2015 IEEE.
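To make the complementarity of the two cues concrete, the following is a minimal, hypothetical sketch (not the authors' formulation, and omitting the paper's deformation inference and energy minimization): each disoccluded pixel is assigned to the object with the lowest combined cost, where the motion cue is weighted more heavily in textured regions and the appearance cue where appearance is near constant. All array names and the weighting scheme are assumptions for illustration.

```python
import numpy as np

# Toy illustration only: per-pixel labeling of a disoccluded region by
# mixing a motion-based cost and an appearance-based cost, weighted by
# local appearance variation (texture). Hypothetical inputs throughout.

def label_disoccluded(appearance_cost, motion_cost, texture, disoccluded, tau=1.0):
    """
    appearance_cost : (L, H, W) cost of assigning each pixel to each of L objects
                      from an appearance model (e.g. color negative log-likelihood).
    motion_cost     : (L, H, W) cost of assigning each pixel to each object based
                      on agreement with that object's estimated deformation field.
    texture         : (H, W) local appearance variation (e.g. gradient magnitude);
                      where it is high, the motion cue is assumed reliable, where
                      it is low, the appearance cue is used instead.
    disoccluded     : (H, W) boolean mask of disoccluded pixels.
    tau             : scale controlling the texture-to-weight mapping.
    Returns an (H, W) integer label map (-1 outside the disoccluded region).
    """
    # Weight in [0, 1): highly textured pixels trust the motion cue.
    w = texture / (texture + tau)
    combined = w[None] * motion_cost + (1.0 - w[None]) * appearance_cost

    labels = np.full(disoccluded.shape, -1, dtype=int)
    # Independent per-pixel argmin; the paper instead minimizes a joint energy.
    labels[disoccluded] = np.argmin(combined[:, disoccluded], axis=0)
    return labels
```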
Citation
Yang Y, Sundaramoorthi G, Soatto S (2015) Self-Occlusions and Disocclusions in Causal Video Object Segmentation. 2015 IEEE International Conference on Computer Vision (ICCV). Available: http://dx.doi.org/10.1109/ICCV.2015.501.

Conference/Event name
15th IEEE International Conference on Computer Vision, ICCV 2015

DOI
10.1109/ICCV.2015.501