Improving cross-lingual entity alignment via optimal transport

Abstract
Cross-lingual entity alignment identifies entity pairs that share the same meaning but are located in different language knowledge graphs (KGs). This paper addresses two limitations that are widespread in current solutions: 1) alignment loss functions defined at the entity level serve the purpose of aligning labeled entities well, but fail to capture the whole picture of labeled and unlabeled entities across different KGs; 2) the translation from one domain to the other has been considered (e.g., X to Y by M1, or Y to X by M2), but the important duality of alignment between different KGs (X to Y by M1 and Y to X by M2) is ignored. We propose a novel entity alignment framework (OTEA), which dually optimizes the entity-level loss and the group-level loss via optimal transport theory. We also impose a regularizer on the dual translation matrices to mitigate the effect of noise during transformation. Extensive experimental results show that our model consistently outperforms state-of-the-art methods with significant improvements in alignment accuracy.
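
The group-level objective described in the abstract can be read as an optimal-transport (Wasserstein-style) distance between translated source-KG embeddings and target-KG embeddings, combined with a regularizer that ties the two translation matrices M1 and M2 together. The sketch below illustrates this reading with an entropically regularized Sinkhorn solver; the function names, the squared-Euclidean cost, the inverse-consistency form of the regularizer, and all hyper-parameters are assumptions made for illustration and do not reproduce the authors' exact formulation or released code.

    # Illustrative sketch only; not the authors' implementation of OTEA.
    import numpy as np

    def sinkhorn(cost, reg=0.1, n_iters=200):
        """Entropically regularized OT plan between two uniform marginals."""
        cost = cost / (cost.max() + 1e-12)          # rescale costs for numerical stability
        n, m = cost.shape
        a = np.full(n, 1.0 / n)                     # uniform source marginal
        b = np.full(m, 1.0 / m)                     # uniform target marginal
        K = np.exp(-cost / reg)                     # Gibbs kernel
        v = np.ones(m)
        for _ in range(n_iters):                    # alternating scaling updates
            u = a / (K @ v + 1e-12)
            v = b / (K.T @ u + 1e-12)
        return u[:, None] * K * v[None, :]          # transport plan

    def group_level_ot_loss(X, Y, M):
        """OT distance between translated source embeddings X @ M and target embeddings Y."""
        XM = X @ M
        cost = ((XM[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean cost
        plan = sinkhorn(cost)
        return float((plan * cost).sum())

    def dual_translation_regularizer(M_xy, M_yx):
        """Penalize deviation of the two translation matrices from being mutual inverses."""
        d = M_xy.shape[0]
        return float(np.linalg.norm(M_xy @ M_yx - np.eye(d)) ** 2)

    # Toy usage: 50 source and 60 target entities with 16-dimensional embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 16))
    Y = rng.normal(size=(60, 16))
    M_xy = np.eye(16)                               # X -> Y translation (placeholder)
    M_yx = np.eye(16)                               # Y -> X translation (placeholder)
    total = group_level_ot_loss(X, Y, M_xy) + 0.1 * dual_translation_regularizer(M_xy, M_yx)
    print(total)

In practice such group-level terms would be combined with an entity-level loss on the labeled seed pairs and optimized jointly over both translation directions; the weights and the choice of regularizer here are placeholders.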

Citation
Pei, S., Yu, L., & Zhang, X. (2019). Improving Cross-lingual Entity Alignment via Optimal Transport. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. doi:10.24963/ijcai.2019/448

Acknowledgements
The research reported in this publication was supported by funding from King Abdullah University of Science and Technology (KAUST), under award number FCC/1/1976-19-01, and NSFC No. 61828302.

Publisher
International Joint Conferences on Artificial Intelligence

Conference/Event Name
28th International Joint Conference on Artificial Intelligence, IJCAI 2019

DOI
10.24963/ijcai.2019/448

Additional Links
https://www.ijcai.org/proceedings/2019/448
https://www.ijcai.org/proceedings/2019/0448.pdf

Permanent link to this record