KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Computer Science Program
Visual Computing Center (VCC)
Online Publication Date: 2016-09-17
Print Publication Date: 2016
Permanent link to this record: http://hdl.handle.net/10754/622215
Abstract: Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.
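The non-uniform grid partition described in the abstract can be illustrated with a minimal sketch: given detected axis-aligned plane positions along each axis, every cell of the grid they induce becomes one candidate box. This is a hypothetical illustration of the partitioning idea only, not the authors' implementation; the function name and box representation are assumptions.

```python
from itertools import product

def candidate_boxes(x_planes, y_planes, z_planes):
    """Partition space into a non-uniform grid: adjacent plane positions
    along each axis bound one slab, and each grid cell (intersection of
    three slabs) is a candidate box, given as (min_corner, max_corner).
    Hypothetical sketch of the candidate-generation step."""
    xs = sorted(set(x_planes))
    ys = sorted(set(y_planes))
    zs = sorted(set(z_planes))
    boxes = []
    # Every combination of adjacent intervals along x, y, z yields one cell.
    for (x0, x1), (y0, y1), (z0, z1) in product(
            zip(xs, xs[1:]), zip(ys, ys[1:]), zip(zs, zs[1:])):
        boxes.append(((x0, y0, z0), (x1, y1, z1)))
    return boxes

# Two planes per axis enclose exactly one cell; three per axis give 2*2*2 = 8.
print(len(candidate_boxes([0, 1], [0, 2], [0, 3])))        # → 1
print(len(candidate_boxes([0, 1, 2], [0, 1, 2], [0, 1, 2])))  # → 8
```

In the paper's pipeline, a subsequent optimization (the MRF labeling) selects which of these candidates belong to the final reconstruction; the sketch covers only candidate enumeration.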
Citation: Li M, Wonka P, Nan L (2016) Manhattan-World Urban Reconstruction from Point Clouds. Lecture Notes in Computer Science: 54–69. Available: http://dx.doi.org/10.1007/978-3-319-46493-0_4.
Sponsors: We thank the reviewers for their valuable comments. We also thank Dr. Neil Smith for providing us the data used in Fig. 7. This work was supported by the Office of Sponsored Research (OSR) under Award No. OCRF-2014-CGR3-62140401, and the Visual Computing Center at KAUST. Minglei Li was partially supported by NSFC (61272327). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro K5200 GPU used for this research.