KAUST Department: Visual Computing Center (VCC)
Computer Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
KAUST, Vienna University of Technology, Austria
KAUST Grant Number: 62140401
Online Publication Date: 2015-07-28
Print Publication Date: 2015-07-27
Permanent link to this record: http://hdl.handle.net/10754/631864
Abstract: We present a method to learn and propagate shape placements in 2D polygonal scenes from a few examples provided by a user. The placement of a shape is modeled as an oriented bounding box. Simple geometric relationships between this bounding box and nearby scene polygons define a feature set for the placement. The feature sets of all example placements are then used to learn a probabilistic model over all possible placements and scenes. With this model, we can generate a new set of placements with similar geometric relationships in any given scene. We introduce extensions that enable propagation and generation of shapes in 3D scenes, as well as the application of a learned modeling session to large scenes without additional user interaction. These concepts allow us to generate complex scenes with thousands of objects with relatively little user interaction. Copyright is held by the owner/author(s).
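The pipeline the abstract describes can be sketched in simplified form. This is not the paper's method: it assumes a single nearest-vertex feature (distance plus relative orientation of the box), swaps the paper's learned probabilistic model for a fixed-bandwidth Gaussian kernel density over the example features, and picks a new placement by scoring random candidates. All function names and parameters are illustrative.

```python
import numpy as np

def placement_features(center, angle, polygon):
    """Toy feature set for an oriented placement: distance to the nearest
    polygon vertex and the box orientation relative to that vertex."""
    d = polygon - center                      # offsets to each vertex, shape (n, 2)
    dists = np.hypot(d[:, 0], d[:, 1])
    i = np.argmin(dists)
    rel = np.arctan2(d[i, 1], d[i, 0]) - angle
    # encode the angle as (cos, sin) so the feature space is continuous
    return np.array([dists[i], np.cos(rel), np.sin(rel)])

def fit_kde(example_feats, bandwidth=0.5):
    """Stand-in for the learned model: Gaussian KDE over example features."""
    X = np.asarray(example_feats)
    def density(f):
        diff = (X - f) / bandwidth
        return np.mean(np.exp(-0.5 * np.sum(diff ** 2, axis=1)))
    return density

def propagate(polygon, examples, n_candidates=500, seed=0):
    """Score random candidate placements against the example-derived
    density and return the best (center, angle) found."""
    rng = np.random.default_rng(seed)
    density = fit_kde([placement_features(c, a, polygon) for c, a in examples])
    lo, hi = polygon.min(axis=0) - 1.0, polygon.max(axis=0) + 1.0
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        center = rng.uniform(lo, hi)          # random position near the scene
        angle = rng.uniform(0.0, 2.0 * np.pi)
        score = density(placement_features(center, angle, polygon))
        if score > best_score:
            best, best_score = (center, angle), score
    return best, best_score
```

For example, with a unit-square scene polygon and two example placements near its bottom corners, `propagate` returns a candidate whose nearest-vertex feature resembles the examples. The real system uses richer features over all nearby scene polygons and a model learned jointly over placements and scenes.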
Citation: Guerrero P, Jeschke S, Wimmer M, Wonka P (2015) Learning shape placements by example. ACM Transactions on Graphics 34: 108:1–108:13. Available: http://dx.doi.org/10.1145/2766933.
Sponsors: This publication is based upon work supported by the KAUST Office of Competitive Research Funds (OCRF) under Award No. 62140401, the KAUST Visual Computing Center, and the Austrian Science Fund (FWF) projects DEEP PICTURES (no. P24352-N23) and Data-Driven Procedural Modeling of Interiors (no. P24600-N23).
Journal: ACM Transactions on Graphics
Conference/Event name: ACM Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2015