Notice

This is not the latest version of this item. The latest version can be found at: https://repository.kaust.edu.sa/handle/10754/660691

Show simple item record

dc.contributor.author: Zsolnai-Fehér, Károly
dc.contributor.author: Wonka, Peter
dc.contributor.author: Wimmer, Michael
dc.date.accessioned: 2019-12-19T07:21:22Z
dc.date.available: 2019-12-19T07:21:22Z
dc.date.issued: 2019-09-12
dc.identifier.uri: http://hdl.handle.net/10754/660691.1
dc.description.abstract: Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect. This is typically a lengthy process that involves a trained artist with specialized knowledge. In this work, we present a technique that aims to empower novice and intermediate-level users to synthesize high-quality photorealistic materials by only requiring basic image processing knowledge. In the proposed workflow, the user starts with an input image and applies a few intuitive transforms (e.g., colorization, image inpainting) within a 2D image editor of their choice, and in the next step, our technique produces a photorealistic result that approximates this target image. Our method combines the advantages of a neural network-augmented optimizer and an encoder neural network to produce high-quality output results within 30 seconds. We also demonstrate that it is resilient against poorly-edited target images and propose a simple extension to predict image sequences with a strict time budget of 1-2 seconds per image.
dc.description.sponsorship: We would like to thank Reynante Martinez for providing us with the geometry and some of the materials for the Paradigm (Fig. 1) and Genesis scenes (Fig. 3), ianofshields for the Liquify scene that served as a basis for Fig. 9, Robin Marin for the material test scene, Andrew Price and Gábor Mészáros for their help with geometry modeling, Felícia Zsolnai-Fehér for her help improving our figures, and Christian Freude, David Ha, Philipp Erler, and Adam Celarek for their useful comments. We also thank NVIDIA for providing the hardware to train our neural networks. This work was partially funded by the Austrian Science Fund (FWF), project number P27974.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/1909.11622
dc.rights: Archived with thanks to arXiv
dc.title: Photorealistic Material Editing Through Direct Image Manipulation
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Visual Computing Center (VCC)
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.contributor.institution: TU Wien
dc.identifier.arxivid: 1909.11622
kaust.person: Wonka, Peter
refterms.dateFOA: 2019-12-19T07:22:28Z


Files in this item

Name: Preprintfile1.pdf
Size: 8.962 MB
Format: PDF
Description: Pre-print

This item appears in the following Collection(s)

