3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

Handle URI:
http://hdl.handle.net/10754/556148
Title:
3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes
Authors:
Thabet, Ali Kassem (0000-0001-7513-0748); Lahoud, Jean; Asmar, Daniel; Ghanem, Bernard (0000-0002-5534-587X)
Abstract:
RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements and discard unreliable ones. This paper studies how reliable depth values can be used to correct unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e., infer depth at pixels with unknown depth values), given a prior model of the 3D scene. We consider piecewise planar environments, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g., floor and ceiling) and iteratively complete the depth map where possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values compatible in 3D with the piecewise planar assumption. Extensive experiments on a new large-scale and challenging dataset show that our approach produces more accurate depth maps (with 20% more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D-aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
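
Editor's illustrative sketch (not the authors' code): the Python/NumPy snippet below shows one way to realize the two ideas the abstract describes, fitting a dominant plane with a RANSAC loop whose inlier threshold grows with depth (a stand-in for the sensor's noise profile) and filling pixels with unknown depth by intersecting their viewing rays with that plane. The intrinsics FX, FY, CX, CY, the noise constant sigma, and the function names are assumptions of this sketch, and the direct per-pixel fill replaces the paper's MRF labeling solved with graph cuts.

# Minimal sketch, not the authors' implementation: RANSAC plane fitting with a
# depth-dependent inlier threshold, then depth completion by ray-plane
# intersection. Intrinsics and the noise constant below are assumed values.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed Kinect-style intrinsics

def backproject(depth):
    """Convert a depth map (meters, 0 = unknown) to an N x 3 point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth.ravel()
    x = (u.ravel() - CX) * z / FX
    y = (v.ravel() - CY) * z / FY
    pts = np.column_stack([x, y, z])
    return pts[z > 0]

def fit_plane_ransac(points, iters=200, sigma=0.0012):
    """Fit one plane (n, d) with n.p + d = 0; the inlier threshold grows
    quadratically with depth, mimicking an RGB-D sensor noise profile."""
    rng = np.random.default_rng(0)
    best_plane, best_count = None, 0
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                                  # degenerate sample
        n = n / norm
        d = -n.dot(p0)
        dist = np.abs(points @ n + d)                 # point-to-plane distance
        thresh = sigma * points[:, 2] ** 2            # adaptive, depth-dependent
        count = int(np.sum(dist < thresh))
        if count > best_count:
            best_plane, best_count = (n, d), count
    return best_plane

def complete_depth(depth, plane):
    """Fill unknown pixels by intersecting their viewing rays with the plane,
    a simple stand-in for the paper's graph-cut MRF labeling."""
    n, d = plane
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    rays = np.dstack([(u - CX) / FX, (v - CY) / FY, np.ones((h, w))])
    denom = rays @ n                                  # per-pixel ray . normal
    z = np.zeros((h, w))
    valid = np.abs(denom) > 1e-6
    z[valid] = -d / denom[valid]
    out = depth.copy()
    fill = (depth == 0) & (z > 0)                     # only in front of camera
    out[fill] = z[fill]
    return out

A typical call sequence under these assumptions would be points = backproject(depth), plane = fit_plane_ransac(points), filled = complete_depth(depth, plane), where depth is a float array in meters with zeros at pixels of unknown depth.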
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Publisher:
Springer Science + Business Media
Journal:
Lecture Notes in Computer Science
Conference/Event name:
12th Asian Conference on Computer Vision, ACCV 2014
Issue Date:
16-Apr-2015
DOI:
10.1007/978-3-319-16808-1_16
Type:
Conference Paper
Additional Links:
http://link.springer.com/chapter/10.1007%2F978-3-319-16808-1_16; http://vcc.kaust.edu.sa/Documents/B.%20Ghanem/papers/depth_enhancement_accv2014.pdf
Appears in Collections:
Conference Papers; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Thabet, Ali Kassem | en
dc.contributor.author | Lahoud, Jean | en
dc.contributor.author | Asmar, Daniel | en
dc.contributor.author | Ghanem, Bernard | en
dc.date.accessioned | 2015-06-02T14:12:46Z | en
dc.date.available | 2015-06-02T14:12:46Z | en
dc.date.issued | 2015-04-16 | en
dc.identifier.doi | 10.1007/978-3-319-16808-1_16 | en
dc.identifier.uri | http://hdl.handle.net/10754/556148 | en
dc.description.abstract | RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements and discard unreliable ones. This paper studies how reliable depth values can be used to correct unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e., infer depth at pixels with unknown depth values), given a prior model of the 3D scene. We consider piecewise planar environments, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g., floor and ceiling) and iteratively complete the depth map where possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values compatible in 3D with the piecewise planar assumption. Extensive experiments on a new large-scale and challenging dataset show that our approach produces more accurate depth maps (with 20% more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D-aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method. | en
dc.publisher | Springer Science + Business Media | en
dc.relation.url | http://link.springer.com/chapter/10.1007%2F978-3-319-16808-1_16 | en
dc.relation.url | http://vcc.kaust.edu.sa/Documents/B.%20Ghanem/papers/depth_enhancement_accv2014.pdf | en
dc.rights | The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-16808-1_16 | en
dc.title | 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes | en
dc.type | Conference Paper | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.identifier.journal | Lecture Notes in Computer Science | en
dc.conference.date | 2014-11-01 to 2014-11-05 | en
dc.conference.name | 12th Asian Conference on Computer Vision, ACCV 2014 | en
dc.conference.location | Singapore, SGP | en
dc.eprint.version | Post-print | en
dc.contributor.institution | American University of Beirut (AUB), Lebanon | en
kaust.author | Thabet, Ali K. | en
kaust.author | Lahoud, Jean | en
kaust.author | Ghanem, Bernard | en