Show simple item record

dc.contributor.author: Zhu, Peihao
dc.contributor.author: Abdal, Rameen
dc.contributor.author: Qin, Yipeng
dc.contributor.author: Wonka, Peter
dc.date.accessioned: 2020-12-21T12:37:02Z
dc.date.available: 2020-12-21T12:37:02Z
dc.date.issued: 2020-12-13
dc.identifier.uri: http://hdl.handle.net/10754/666570
dc.description.abstract: StyleGAN is able to produce photorealistic images that are almost indistinguishable from real ones. Embedding images into the StyleGAN latent space is not a trivial task due to the trade-off between reconstruction quality and editing quality. In this paper, we first introduce a new normalized space to analyze the diversity and quality of reconstructed latent codes. This space can help answer the question of where good latent codes are located in latent space. Second, we propose a framework to analyze the quality of different embedding algorithms. Third, we propose an improved embedding algorithm based on our analysis. We compare our results with the current state-of-the-art methods and achieve a better trade-off between reconstruction quality and editing quality.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/2012.09036
dc.rights: Archived with thanks to arXiv
dc.title: Improved StyleGAN Embedding: Where are the Good Latents?
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer Science
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: KAUST
dc.contributor.department: Visual Computing Center (VCC)
dc.eprint.version: Pre-print
dc.contributor.institution: Cardiff University
dc.identifier.arxivid: 2012.09036
kaust.person: Zhu, Peihao
kaust.person: Abdal, Rameen
kaust.person: Wonka, Peter
refterms.dateFOA: 2020-12-21T12:39:01Z


Files in this item

Name: Preprintfile1.pdf
Size: 25.46 MB
Format: PDF
Description: Pre-print
