Modeling Self-Occlusions/Disocclusions in Dynamic Shape and Appearance Tracking for Obtaining Precise Shape

Handle URI:
http://hdl.handle.net/10754/292405
Title:
Modeling Self-Occlusions/Disocclusions in Dynamic Shape and Appearance Tracking for Obtaining Precise Shape
Authors:
Yang, Yanchao
Abstract:
We present a method to determine the precise shape of a dynamic object from video. This problem is fundamental to computer vision and has a number of applications, for example, 3D video/cinema post-production, activity recognition and augmented reality. Current tracking algorithms that determine precise shape can be roughly divided into two categories: 1) global statistics partitioning methods, where the shape of the object is determined by discriminating global image statistics, and 2) joint shape and appearance matching methods, where a template of the object from the previous frame is matched to the next image. The former is limited in cases of complex object appearance and cluttered background, where global statistics cannot distinguish between the object and the background. The latter is able to cope with complex appearance and a cluttered background, but is limited in cases of camera viewpoint change and object articulation, which induce self-occlusions and self-disocclusions of the object of interest. The purpose of this thesis is to model self-occlusion/disocclusion phenomena in a joint shape and appearance tracking framework. We derive a non-linear dynamic model of the object shape and appearance that takes occlusion phenomena into account, and use it to infer self-occlusions/disocclusions, shape and appearance of the object in a variational optimization framework. To ensure robustness to other unmodeled phenomena present in real video sequences, a Kalman filter is used for appearance updating. Experiments show that our method, which incorporates the modeling of self-occlusion/disocclusion, increases the accuracy of shape estimation under viewpoint change and articulation, and outperforms current state-of-the-art methods for shape tracking.
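As an illustration of the appearance-updating step mentioned in the abstract, below is a minimal sketch of a per-pixel Kalman filter for an appearance template. It assumes a random-walk state model with Gaussian process and observation noise, and a visibility mask that excludes self-occluded pixels from the measurement update; all names and the noise model are illustrative assumptions, not the formulation derived in the thesis.

```python
import numpy as np

def kalman_appearance_update(appearance, variance, observation, visible,
                             process_noise=1e-3, obs_noise=1e-2):
    """Per-pixel scalar Kalman update of an appearance template.

    Illustrative sketch only: assumes each pixel's intensity follows a
    random walk and is observed directly when visible; occluded pixels
    receive only the prediction step (no measurement update).
    """
    # Prediction: random-walk state, so the mean is unchanged and the
    # variance grows by the process noise.
    pred_appearance = appearance
    pred_variance = variance + process_noise

    # Measurement update for visible pixels only.
    gain = pred_variance / (pred_variance + obs_noise)
    innovation = observation - pred_appearance
    new_appearance = np.where(visible,
                              pred_appearance + gain * innovation,
                              pred_appearance)
    new_variance = np.where(visible,
                            (1.0 - gain) * pred_variance,
                            pred_variance)
    return new_appearance, new_variance
```

In a tracking loop, the observation would typically be the current frame warped into the template's coordinate frame, and the visibility mask would come from the inferred self-occlusion map.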
Advisors:
Sundaramoorthi, Ganesh
Committee Member:
Alouini, Mohamed-Slim (0000-0003-4827-1793); Pottmann, Helmut (0000-0002-3195-9316)
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Program:
Electrical Engineering
Issue Date:
May-2013
Type:
Thesis
Appears in Collections:
Theses; Electrical Engineering Program; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.advisor | Sundaramoorthi, Ganesh | en
dc.contributor.author | Yang, Yanchao | en
dc.date.accessioned | 2013-05-20T13:12:44Z | -
dc.date.available | 2013-05-20T13:12:44Z | -
dc.date.issued | 2013-05 | en
dc.identifier.uri | http://hdl.handle.net/10754/292405 | en
dc.language.iso | en | en
dc.subject | Tracking | en
dc.subject | Dynamic Model | en
dc.subject | Precise Shape | en
dc.subject | Occlusion | en
dc.title | Modeling Self-Occlusions/Disocclusions in Dynamic Shape and Appearance Tracking for Obtaining Precise Shape | en
dc.type | Thesis | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
thesis.degree.grantor | King Abdullah University of Science and Technology | en_GB
dc.contributor.committeemember | Alouini, Mohamed-Slim | en
dc.contributor.committeemember | Pottmann, Helmut | en
thesis.degree.discipline | Electrical Engineering | en
thesis.degree.name | Master of Science | en
dc.person.id | 118383 | en