dc.contributor.author  Sun, Ke
dc.contributor.author  Zhang, Xiangliang
dc.date.accessioned  2017-12-28T07:32:11Z
dc.date.available  2017-12-28T07:32:11Z
dc.date.issued  2017-02-25
dc.identifier.uri  http://hdl.handle.net/10754/626472
dc.description.abstract  Variational autoencoders (VAE) often use Gaussian or categorical distributions to model the inference process. This limits variational learning because such simplified assumptions rarely match the true posterior distribution, which is usually much more sophisticated. To break this limitation and allow arbitrary parametric distributions during inference, this paper derives a \emph{semi-continuous} latent representation, which approximates a continuous density up to a prescribed precision and is much easier to analyze than its continuous counterpart because it is fundamentally discrete. We showcase the proposition by applying polynomial exponential family distributions as the posterior, which are universal probability density function generators. Our experimental results show consistent improvements over commonly used VAE models.
dc.publisher  arXiv
dc.relation.url  http://arxiv.org/abs/1702.07904v1
dc.relation.url  http://arxiv.org/pdf/1702.07904v1
dc.rights  Archived with thanks to arXiv
dc.title  Coarse Grained Exponential Variational Autoencoders
dc.type  Preprint
dc.contributor.department  Computer Science Program
dc.contributor.department  Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version  Pre-print
dc.identifier.arxivid  1702.07904
kaust.person  Sun, Ke
kaust.person  Zhang, Xiangliang
dc.version  v1
refterms.dateFOA  2018-06-14T03:36:22Z
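The abstract's core idea is a "semi-continuous" latent representation: a continuous density is approximated, up to a prescribed precision, by a discrete distribution that is easier to analyze. A minimal sketch of that coarse-graining step, assuming a one-dimensional polynomial exponential family density of the form exp(poly(x)) evaluated on a uniform grid (the grid bounds, bin count, and polynomial coefficients below are illustrative, not taken from the paper):

```python
import numpy as np

def coarse_grained_density(log_density, lo, hi, n_bins):
    """Approximate a continuous density on [lo, hi] by a categorical
    distribution over n_bins equal-width bin centers. Finer grids give
    a higher-precision 'semi-continuous' approximation."""
    centers = lo + (np.arange(n_bins) + 0.5) * (hi - lo) / n_bins
    logits = log_density(centers)
    # Subtract the max before exponentiating for numerical stability,
    # then normalize to obtain categorical probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return centers, probs

# Illustrative polynomial exponential family: exp(x^2 - 0.5 x^4),
# a bimodal density (the negative quartic term keeps it integrable).
coeffs = np.array([0.0, 1.0, 0.0, -0.5])
log_p = lambda x: sum(c * x**(k + 1) for k, c in enumerate(coeffs))

centers, probs = coarse_grained_density(log_p, -4.0, 4.0, 256)
```

Because the result is an ordinary categorical distribution, standard discrete tools (exact sampling, entropy, KL divergence between grids of the same resolution) apply directly, which is the analytical convenience the abstract refers to.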
File: 1702.07904v1.pdf (PDF, 3.649 MB) — Preprint