Modeling self-occlusions in dynamic shape and appearance tracking
Type
Conference Paper
KAUST Department
Electrical Engineering Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Visual Computing Center (VCC)
Date
2013-12
Permanent link to this record
http://hdl.handle.net/10754/564821
Abstract
We present a method to track the precise shape of a dynamic object in video. Joint dynamic shape and appearance models, in which a template of the object is propagated to match the object's shape and radiance in the next frame, are advantageous over methods employing global image statistics in cases of complex object radiance and cluttered background. In cases of complex 3D object motion and relative viewpoint change, self-occlusions and disocclusions of the object are prominent, and current methods employing joint shape and appearance models are unable to accurately adapt to new shape and appearance information, leading to inaccurate shape detection. In this work, we model self-occlusions and disocclusions in a joint shape and appearance tracking framework. Experiments on video exhibiting occlusion/disocclusion, complex radiance, and cluttered background show that occlusion/disocclusion modeling leads to superior shape accuracy compared to recent methods employing joint shape/appearance models or global statistics. © 2013 IEEE.
Citation
Yang, Y., & Sundaramoorthi, G. (2013). Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking. 2013 IEEE International Conference on Computer Vision. doi:10.1109/iccv.2013.32
Conference/Event name
2013 14th IEEE International Conference on Computer Vision, ICCV 2013
ISBN
9781479928392
DOI
10.1109/ICCV.2013.32