Handle URI:
http://hdl.handle.net/10754/598714
Title:
Learning hatching for pen-and-ink illustration of surfaces
Authors:
Kalogerakis, Evangelos; Nowrouzezahrai, Derek; Breslav, Simon; Hertzmann, Aaron
Abstract:
This article presents an algorithm for learning hatching styles from line drawings. An artist draws a single hatching illustration of a 3D object. Her strokes are analyzed to extract the following per-pixel properties: hatching level (hatching, cross-hatching, or no strokes), stroke orientation, spacing, intensity, length, and thickness. A mapping is learned from input geometric, contextual, and shading features of the 3D object to these hatching properties, using classification, regression, and clustering techniques. Then, a new illustration can be generated in the artist's style, as follows. First, given a new view of a 3D object, the learned mapping is applied to synthesize target stroke properties for each pixel. A new illustration is then generated by synthesizing hatching strokes according to the target properties. © 2012 ACM.
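The abstract outlines a two-stage pipeline: (1) learn a per-pixel mapping from geometric, contextual, and shading features to hatching properties, and (2) for a new view, predict target stroke properties and synthesize strokes that match them. The following is a minimal sketch of that pipeline shape only, assuming random placeholder data and scikit-learn forest models; the feature layout, model choices, and omitted stroke synthesizer are illustrative assumptions, not the authors' implementation.

# Minimal sketch of the two-stage pipeline described in the abstract.
# Feature layout, data, and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Stage 1: fit a mapping from per-pixel features of the exemplar drawing
# to the extracted hatching properties.
n_pixels, n_features = 5000, 12
X = rng.standard_normal((n_pixels, n_features))   # hypothetical per-pixel features
level = rng.integers(0, 3, size=n_pixels)         # 0 = none, 1 = hatching, 2 = cross-hatching
props = rng.random((n_pixels, 5))                 # orientation, spacing, intensity, length, thickness

level_clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, level)
prop_reg = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, props)

# Stage 2: for a new view of the 3D object, predict target stroke
# properties per pixel; a stroke synthesizer (not shown) then places
# hatching strokes according to these targets.
X_new = rng.standard_normal((1000, n_features))
target_level = level_clf.predict(X_new)
target_props = prop_reg.predict(X_new)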
Citation:
Kalogerakis E, Nowrouzezahrai D, Breslav S, Hertzmann A (2012) Learning hatching for pen-and-ink illustration of surfaces. ACM Transactions on Graphics 31: 1–17. Available: http://dx.doi.org/10.1145/2077341.2077342.
Publisher:
Association for Computing Machinery (ACM)
Journal:
ACM Transactions on Graphics
Issue Date:
1-Jan-2012
DOI:
10.1145/2077341.2077342
Type:
Article
ISSN:
0730-0301
Sponsors:
This project was funded by NSERC, CIFAR, CFI, the Ontario MRI, and KAUST Global Collaborative Research.
Appears in Collections:
Publications Acknowledging KAUST Support

Full metadata record

DC Field | Value | Language
dc.contributor.author | Kalogerakis, Evangelos | en
dc.contributor.author | Nowrouzezahrai, Derek | en
dc.contributor.author | Breslav, Simon | en
dc.contributor.author | Hertzmann, Aaron | en
dc.date.accessioned | 2016-02-25T13:34:56Z | en
dc.date.available | 2016-02-25T13:34:56Z | en
dc.date.issued | 2012-01-01 | en
dc.identifier.citation | Kalogerakis E, Nowrouzezahrai D, Breslav S, Hertzmann A (2012) Learning hatching for pen-and-ink illustration of surfaces. ACM Transactions on Graphics 31: 1–17. Available: http://dx.doi.org/10.1145/2077341.2077342. | en
dc.identifier.issn | 0730-0301 | en
dc.identifier.doi | 10.1145/2077341.2077342 | en
dc.identifier.uri | http://hdl.handle.net/10754/598714 | en
dc.description.abstract | This article presents an algorithm for learning hatching styles from line drawings. An artist draws a single hatching illustration of a 3D object. Her strokes are analyzed to extract the following per-pixel properties: hatching level (hatching, cross-hatching, or no strokes), stroke orientation, spacing, intensity, length, and thickness. A mapping is learned from input geometric, contextual, and shading features of the 3D object to these hatching properties, using classification, regression, and clustering techniques. Then, a new illustration can be generated in the artist's style, as follows. First, given a new view of a 3D object, the learned mapping is applied to synthesize target stroke properties for each pixel. A new illustration is then generated by synthesizing hatching strokes according to the target properties. © 2012 ACM. | en
dc.description.sponsorship | This project was funded by NSERC, CIFAR, CFI, the Ontario MRI, and KAUST Global Collaborative Research. | en
dc.publisher | Association for Computing Machinery (ACM) | en
dc.subject | Data-driven hatching | en
dc.subject | Hatching by example | en
dc.subject | Illustrations by example | en
dc.subject | Learning orientation fields | en
dc.subject | Learning surface hatching | en
dc.title | Learning hatching for pen-and-ink illustration of surfaces | en
dc.type | Article | en
dc.identifier.journal | ACM Transactions on Graphics | en
dc.contributor.institution | University of Toronto, Toronto, Canada | en
dc.contributor.institution | Stanford University, Palo Alto, United States | en
dc.contributor.institution | Disney Research Zurich, Zurich, Switzerland | en
dc.contributor.institution | Université de Montréal, Montreal, Canada | en
dc.contributor.institution | Autodesk Research, Toronto, Canada | en