Manhattan-World Urban Reconstruction from Point Clouds

Handle URI:
http://hdl.handle.net/10754/622215
Title:
Manhattan-World Urban Reconstruction from Point Clouds
Authors:
Li, Minglei; Wonka, Peter (0000-0003-0627-9746); Nan, Liangliang (0000-0002-5629-9975)
Abstract:
Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.
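The candidate-box step described in the abstract can be illustrated with a minimal sketch: once the detected axis-aligned planes are refined, their positions along each axis partition space into a non-uniform grid, and every cell of that grid is one candidate box for the later MRF selection. The function below is a hypothetical illustration of that partitioning only, assuming the Manhattan frame is already aligned with the coordinate axes; it is not the authors' implementation.

```python
from itertools import product

def candidate_boxes(x_planes, y_planes, z_planes):
    """Partition space into a non-uniform grid whose cell boundaries are
    the detected plane positions along each axis; every grid cell is
    returned as one candidate box ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    xs = sorted(set(x_planes))
    ys = sorted(set(y_planes))
    zs = sorted(set(z_planes))
    boxes = []
    # Each consecutive pair of plane positions bounds one grid interval;
    # the Cartesian product of the intervals enumerates all cells.
    for (x0, x1), (y0, y1), (z0, z1) in product(
            zip(xs, xs[1:]), zip(ys, ys[1:]), zip(zs, zs[1:])):
        boxes.append(((x0, y0, z0), (x1, y1, z1)))
    return boxes

# Two planes per axis bound exactly one candidate box.
print(len(candidate_boxes([0, 4], [0, 3], [0, 5])))        # 1
# A third x-plane splits that box into 2 x 1 x 1 = 2 cells.
print(len(candidate_boxes([0, 2, 4], [0, 3], [0, 5])))     # 2
```

With n, m, and k plane positions along the three axes, this yields (n-1)(m-1)(k-1) candidate boxes, which is why the paper then needs an optimal-subset selection step rather than keeping every cell.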
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Computer Science Program; Visual Computing Center (VCC)
Citation:
Li M, Wonka P, Nan L (2016) Manhattan-World Urban Reconstruction from Point Clouds. Lecture Notes in Computer Science: 54–69. Available: http://dx.doi.org/10.1007/978-3-319-46493-0_4.
Publisher:
Springer International Publishing
Journal:
Lecture Notes in Computer Science
Issue Date:
16-Sep-2016
Embedded Video:
DOI:
10.1007/978-3-319-46493-0_4
Type:
Book Chapter
ISSN:
0302-9743; 1611-3349
Sponsors:
We thank the reviewers for their valuable comments. We also thank Dr. Neil Smith for providing us the data used in Fig. 7. This work was supported by the Office of Sponsored Research (OSR) under Award No. OCRF-2014-CGR3-62140401, and the Visual Computing Center at KAUST. Minglei Li was partially supported by NSFC (61272327). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro K5200 GPU used for this research.
Additional Links:
http://link.springer.com/chapter/10.1007%2F978-3-319-46493-0_4; https://youtu.be/onaqn9p7yW4
Appears in Collections:
Computer Science Program; Visual Computing Center (VCC); Book Chapters; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Li, Minglei | en
dc.contributor.author | Wonka, Peter | en
dc.contributor.author | Nan, Liangliang | en
dc.date.accessioned | 2017-01-02T08:42:38Z | -
dc.date.available | 2017-01-02T08:42:38Z | -
dc.date.issued | 2016-09-16 | en
dc.identifier.citation | Li M, Wonka P, Nan L (2016) Manhattan-World Urban Reconstruction from Point Clouds. Lecture Notes in Computer Science: 54–69. Available: http://dx.doi.org/10.1007/978-3-319-46493-0_4. | en
dc.identifier.issn | 0302-9743 | en
dc.identifier.issn | 1611-3349 | en
dc.identifier.doi | 10.1007/978-3-319-46493-0_4 | en
dc.identifier.uri | http://hdl.handle.net/10754/622215 | -
dc.description.abstract | Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016. | en
dc.description.sponsorship | We thank the reviewers for their valuable comments. We also thank Dr. Neil Smith for providing us the data used in Fig. 7. This work was supported by the Office of Sponsored Research (OSR) under Award No. OCRF-2014-CGR3-62140401, and the Visual Computing Center at KAUST. Minglei Li was partially supported by NSFC (61272327). We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro K5200 GPU used for this research. | en
dc.publisher | Springer International Publishing | en
dc.relation.url | http://link.springer.com/chapter/10.1007%2F978-3-319-46493-0_4 | en
dc.relation.url | https://youtu.be/onaqn9p7yW4 | en
dc.rights | The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-46493-0_4 | en
dc.subject | Box fitting | en
dc.subject | Manhattan-world scenes | en
dc.subject | Reconstruction | en
dc.subject | Urban reconstruction | en
dc.title | Manhattan-World Urban Reconstruction from Point Clouds | en
dc.type | Book Chapter | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Computer Science Program | en
dc.contributor.department | Visual Computing Center (VCC) | en
dc.identifier.journal | Lecture Notes in Computer Science | en
dc.eprint.version | Post-print | en
dc.contributor.institution | College of Electronic and Information Engineering, NUAA, Nanjing, China | en
dc.relation.embedded | <iframe width="560" height="315" src="https://www.youtube.com/embed/onaqn9p7yW4?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe> | -
kaust.author | Li, Minglei | en
kaust.author | Wonka, Peter | en
kaust.author | Nan, Liangliang | en
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.