
dc.contributor.author: Sun, Ke
dc.contributor.author: Zhang, Xiangliang
dc.date.accessioned: 2017-12-28T07:32:11Z
dc.date.available: 2017-12-28T07:32:11Z
dc.date.issued: 2017-02-25
dc.identifier.uri: http://hdl.handle.net/10754/626472
dc.description.abstract: Variational autoencoders (VAE) often use Gaussian or categorical distributions to model the inference process. This limits variational learning, because the simplified assumption does not match the true posterior distribution, which is usually far more sophisticated. To break this limitation and allow arbitrary parametric distributions during inference, this paper derives a "semi-continuous" latent representation (sketched after this record), which approximates a continuous density up to a prescribed precision and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions, which are universal probability density function generators, as the posterior. Our experimental results show consistent improvements over commonly used VAE models.
dc.publisher: arXiv
dc.relation.url: http://arxiv.org/abs/1702.07904v1
dc.relation.url: http://arxiv.org/pdf/1702.07904v1
dc.rights: Archived with thanks to arXiv
dc.title: Coarse Grained Exponential Variational Autoencoders
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.identifier.arxivid: 1702.07904
kaust.person: Sun, Ke
kaust.person: Zhang, Xiangliang
dc.version: v1
refterms.dateFOA: 2018-06-14T03:36:22Z
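
The abstract's central move, replacing a continuous posterior with a fine discrete grid over the latent space, can be illustrated with a short toy sketch. This is a hypothetical example, not the authors' code: the function name semi_continuous_posterior, the grid range, and the bin count are illustrative choices; only the polynomial exponential family form p(z) ∝ exp(Σ_k θ_k z^k) comes from the abstract.

```python
# Hypothetical toy sketch (not the authors' code): approximate a 1-D
# polynomial exponential family density p(z) ∝ exp(sum_k theta[k] * z**k)
# by a "semi-continuous" latent, i.e. a fine categorical grid.
import numpy as np

def semi_continuous_posterior(theta, z_min=-4.0, z_max=4.0, n_bins=1000):
    """Discretize p(z) ∝ exp(poly(theta, z)) onto n_bins grid points."""
    z = np.linspace(z_min, z_max, n_bins)   # grid over the latent support
    log_p = np.polyval(theta[::-1], z)      # sum_k theta[k] * z**k
    log_p -= log_p.max()                    # stabilize before exponentiating
    p = np.exp(log_p)
    return z, p / p.sum()                   # normalized bin probabilities

# theta = (0, 0, -0.5) gives log p(z) = -z**2 / 2, an unnormalized standard
# Gaussian, so the discrete approximation should yield Gaussian-like samples.
z, p = semi_continuous_posterior(np.array([0.0, 0.0, -0.5]))
samples = np.random.choice(z, size=5, p=p)  # sample the discrete latent
```

Because the latent is fundamentally discrete, quantities such as the entropy or the KL term in a VAE objective reduce to finite sums over the grid, which is the analytical convenience the abstract alludes to.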


Files in this item

Name: 1702.07904v1.pdf
Size: 3.649 MB
Format: PDF
Description: Preprint
