dc.contributor.author: Li, Yuanpeng
dc.contributor.author: Hestness, Joel
dc.contributor.author: Elhoseiny, Mohamed
dc.contributor.author: Zhao, Liang
dc.contributor.author: Church, Kenneth
dc.date.accessioned: 2022-01-12T12:21:18Z
dc.date.available: 2022-01-12T12:21:18Z
dc.date.issued: 2022-01-06
dc.identifier.uri: http://hdl.handle.net/10754/674925
dc.description.abstract: This paper proposes an efficient approach to learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities between the original and new distributions. We approximate this difference with the model's generalization ability, so that it fits in the standard machine learning framework and can be computed efficiently. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to the new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4$\times$ quicker than the previous method on various tasks. The source code is available at \url{https://github.com/yuanpeng16/EDCR}.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/2201.01942.pdf
dc.rights: Archived with thanks to arXiv
dc.title: Efficiently Disentangle Causal Representations
dc.type: Preprint
dc.contributor.department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.identifier.arxivid: 2201.01942
kaust.person: Elhoseiny, Mohamed
refterms.dateFOA: 2022-01-12T12:22:12Z

### Files in this item

Name: Preprintfile1.pdf
Size: 625.5 KB
Format: PDF
Description: Pre-print