
dc.contributor.author: Zhu, Peihao
dc.contributor.author: Abdal, Rameen
dc.contributor.author: Femiani, John
dc.contributor.author: Wonka, Peter
dc.date.accessioned: 2021-12-12T10:45:50Z
dc.date.available: 2021-12-12T10:45:50Z
dc.date.issued: 2021-12
dc.identifier.citation: Zhu, P., Abdal, R., Femiani, J., & Wonka, P. (2021). Barbershop. ACM Transactions on Graphics, 40(6), 1–13. doi:10.1145/3478513.3480537
dc.identifier.issn: 0730-0301
dc.identifier.issn: 1557-7368
dc.identifier.doi: 10.1145/3478513.3480537
dc.identifier.uri: http://hdl.handle.net/10754/673981
dc.description.abstract: Seamlessly blending features from multiple images is extremely challenging because of complex relationships in lighting, geometry, and partial occlusion, which cause coupling between different parts of the image. Even though recent work on GANs enables synthesis of realistic hair or faces, it remains difficult to combine them into a single, coherent, and plausible image rather than a disjointed set of image patches. We present a novel solution to image blending, particularly for the problem of hairstyle transfer, based on GAN inversion. We propose a novel latent space for image blending which is better at preserving detail and encoding spatial information, and propose a new GAN-embedding algorithm which is able to slightly modify images to conform to a common segmentation mask. Our novel representation enables the transfer of visual properties from multiple reference images, including specific details such as moles and wrinkles, and because we do image blending in a latent space we are able to synthesize images that are coherent. Our approach avoids blending artifacts present in other approaches and finds a globally consistent image. Our results demonstrate a significant improvement over the current state of the art in a user study, with users preferring our blending solution over 95 percent of the time. Source code for the new approach is available at https://zpdesu.github.io/Barbershop.
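
The abstract describes the core mechanism at a high level: images are embedded into a GAN latent space, slightly modified to conform to a common segmentation mask, and then blended region by region in that space rather than in pixel space. The sketch below is a minimal, hedged illustration of mask-guided blending in a spatial feature space; the generator callable, the (F, S) structure/appearance split, and the layer cutoff are assumptions made for illustration, not the authors' published implementation.

    # Illustrative sketch of mask-guided blending in a GAN feature space.
    # The `generator` callable and the (F, S) structure/appearance codes are
    # assumed to come from a prior GAN-inversion step; these are hypothetical
    # interfaces, not the Barbershop API.
    import torch
    import torch.nn.functional as nnf

    def blend_hair(F_face, S_face, F_hair, S_hair, hair_mask, generator):
        """Copy the hair region of a reference into a target, in feature space.

        F_*       -- (1, C, H, W) spatial structure tensors from GAN inversion
        S_*       -- lists of per-layer appearance (style) vectors
        hair_mask -- (1, 1, Hm, Wm) binary hair mask aligned to the target layout
        generator -- callable decoding (F, S) into an RGB image
        """
        # Resize the segmentation mask to the structure-tensor resolution.
        mask = nnf.interpolate(hair_mask.float(), size=F_face.shape[-2:], mode="nearest")

        # Structure blending: hair features from the reference, rest from the target.
        F_blend = mask * F_hair + (1.0 - mask) * F_face

        # Appearance blending (simplified): early layers from the target face, late
        # layers from the hair reference; the paper instead optimizes a blended code.
        k = len(S_face) // 2
        S_blend = S_face[:k] + S_hair[k:]

        return generator(F_blend, S_blend)

Blending in such a shared representation, rather than compositing pixels, is what allows the approach to avoid seams and lighting mismatches at region boundaries, as the abstract notes.
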
dc.description.sponsorship: We would also like to thank the anonymous reviewers for their insightful comments and constructive remarks. This work was supported by the KAUST Office of Sponsored Research (OSR) and the KAUST Visual Computing Center (VCC).
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.url: https://dl.acm.org/doi/10.1145/3478513.3480537
dc.rights: © ACM, 2021. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Graphics, 40(6), December 2021, http://doi.acm.org/10.1145/3478513.3480537
dc.title: Barbershop
dc.type: Article
dc.contributor.department: Computer Science
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
dc.contributor.department: Visual Computing Center (VCC)
dc.identifier.journal: ACM Transactions on Graphics
dc.eprint.version: Post-print
dc.contributor.institution: Miami University
dc.identifier.volume: 40
dc.identifier.issue: 6
dc.identifier.pages: 1-13
kaust.person: Zhu, Peihao
kaust.person: Abdal, Rameen
kaust.person: Wonka, Peter
kaust.grant.number: OSR
kaust.acknowledged.supportUnit: KAUST Office of Sponsored Research (OSR)
kaust.acknowledged.supportUnit: KAUST Visual Computing Center
kaust.acknowledged.supportUnit: Visual Computing Center (VCC)

