Show simple item record

dc.contributor.author: Li, Yuanpeng
dc.contributor.author: Hestness, Joel
dc.contributor.author: Elhoseiny, Mohamed
dc.contributor.author: Zhao, Liang
dc.contributor.author: Church, Kenneth
dc.date.accessioned: 2022-01-12T12:21:18Z
dc.date.available: 2022-01-12T12:21:18Z
dc.date.issued: 2022-01-06
dc.identifier.uri: http://hdl.handle.net/10754/674925
dc.description.abstract: This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference between conditional probabilities in the original and new distributions. We approximate this difference with the models' generalization abilities so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to the new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that it is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks. The source code is available at \url{https://github.com/yuanpeng16/EDCR}.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/2201.01942.pdf
dc.rights: Archived with thanks to arXiv
dc.title: Efficiently Disentangle Causal Representations
dc.type: Preprint
dc.contributor.department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.identifier.arxivid: 2201.01942
kaust.person: Elhoseiny, Mohamed
refterms.dateFOA: 2022-01-12T12:22:12Z


Files in this item

Name: Preprintfile1.pdf
Size: 625.5 KB
Format: PDF
Description: Pre-print

This item appears in the following Collection(s)
