SmartCanvas: Context-inferred Interpretation of Sketches for Preparatory Design Studies
KAUST Department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Computer Science Program
Permanent link to this record: http://hdl.handle.net/10754/621414
Abstract: In early or preparatory design stages, an architect or designer sketches out rough ideas, not only about the object or structure being considered, but also about its relation to its spatial context. This is an iterative process, where the sketches are the primary means not only for testing and refining ideas, but also for communicating among a design team and to clients. Hence, sketching is the preferred medium for artists and designers during the early stages of design, albeit with a major drawback: sketches are 2D, and effects such as view perturbations or object movement are not supported, thereby inhibiting the design process. We present an interactive system that allows for the creation of a 3D abstraction of a designed space, built primarily by sketching in 2D within the context of an anchoring design or photograph. The system is progressive in the sense that the interpretations are refined as the user continues sketching. As a key technical enabler, we reformulate the sketch interpretation process as a selection optimization from a set of context-generated canvas planes in order to retrieve a regular arrangement of planes. We demonstrate our system (available at http://geometry.cs.ucl.ac.uk/projects/2016/smartcanvas/) with a wide range of sketches and design studies. © 2016 The Author(s) Computer Graphics Forum © 2016 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
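The abstract's key technical idea, selecting one canvas plane per stroke so that the overall arrangement stays regular, can be illustrated with a minimal toy sketch. This is not the authors' actual formulation; the cost matrices, the brute-force search, and all names here are illustrative assumptions standing in for whatever energy and solver the paper uses.

```python
from itertools import product

def select_planes(stroke_costs, regularity):
    """Toy selection optimization (illustrative, not the paper's method).

    stroke_costs[s][p] -- assumed per-stroke cost of assigning stroke s
                          to candidate canvas plane p.
    regularity[p][q]   -- assumed pairwise penalty when two strokes land
                          on planes p and q (low for regular arrangements).

    Brute-forces all assignments and returns the lowest-energy one.
    """
    n_strokes = len(stroke_costs)
    n_planes = len(stroke_costs[0])
    best, best_energy = None, float("inf")
    for assign in product(range(n_planes), repeat=n_strokes):
        # Unary term: how well each stroke fits its chosen plane.
        energy = sum(stroke_costs[s][p] for s, p in enumerate(assign))
        # Pairwise term: penalize irregular plane combinations.
        for i in range(n_strokes):
            for j in range(i + 1, n_strokes):
                energy += regularity[assign[i]][assign[j]]
        if energy < best_energy:
            best, best_energy = assign, energy
    return best, best_energy
```

For example, with two strokes and two candidate planes, `select_planes([[0.1, 1.0], [1.0, 0.1]], [[0.0, 0.5], [0.5, 0.0]])` picks the assignment trading off per-stroke fit against the pairwise regularity penalty. A real system would replace the exhaustive search with a tractable optimizer.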
Citation: Zheng Y, Liu H, Dorsey J, Mitra NJ (2016) SmartCanvas: Context-inferred Interpretation of Sketches for Preparatory Design Studies. Computer Graphics Forum 35: 37–48. Available: http://dx.doi.org/10.1111/cgf.12809.
Sponsors: We thank the anonymous reviewers for their invaluable comments, suggestions, and additional references. We would like to thank Amal Aboulhassan, Pooya Sareh, Wenwen Zhu, Lubing Fan, Cristina Amati, and Dongming Yan for their great help with the user study. We especially thank Luke Pearson (http://www.alephograph.com/) for contributing his sketches using the system. The project was supported in part by the ERC Starting Grant SmartGeometry (StG-2013-335373) and the US National Science Foundation Award (No. 1302267).
Journal: Computer Graphics Forum