Show simple item record

dc.contributor.author: Yu, Tingting
dc.contributor.author: Yu, Guoxian
dc.contributor.author: Wang, Jun
dc.contributor.author: Domeniconi, Carlotta
dc.contributor.author: Zhang, Xiangliang
dc.date.accessioned: 2021-02-28T13:00:03Z
dc.date.available: 2021-02-28T13:00:03Z
dc.date.issued: 2020-11
dc.identifier.citation: Yu, T., Yu, G., Wang, J., Domeniconi, C., & Zhang, X. (2020). Partial Multi-label Learning using Label Compression. 2020 IEEE International Conference on Data Mining (ICDM). doi:10.1109/icdm50108.2020.00085
dc.identifier.isbn: 9781728183169
dc.identifier.issn: 1550-4786
dc.identifier.doi: 10.1109/ICDM50108.2020.00085
dc.identifier.uri: http://hdl.handle.net/10754/667718
dc.description.abstract: Partial multi-label learning (PML) aims to learn a robust multi-label classifier from partial multi-label data, where each sample is annotated with a set of candidate labels of which only a subset is valid. Existing PML algorithms generally suffer from high computational costs when learning with large label spaces. In this paper, we introduce a PML approach (PML-LCom) that uses label compression to learn efficiently from partial multi-label data. PML-LCom first splits the observed label matrix into a latent relevant label matrix and an irrelevant one, and then factorizes the relevant label matrix into two low-rank matrices: one encodes the compressed labels of samples, and the other captures the underlying label correlations. Next, it optimizes the coefficient matrix of the multi-label predictor with respect to the compressed label matrix. In addition, it regularizes the compressed label matrix with respect to the feature similarity of samples, and optimizes the label matrix and the predictor in a coherent manner. Experimental results on both semi-synthetic and real-world PML datasets show that PML-LCom outperforms state-of-the-art solutions at predicting the labels of unlabeled samples with a large label space. Label compression improves both effectiveness and efficiency, and the coherent optimization mutually benefits the label matrix and the predictor.
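The compress-then-predict idea described in the abstract can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's method: it uses plain ridge-regularized alternating least squares on a toy label matrix, and it omits PML-LCom's separation of an irrelevant-label matrix, its feature-similarity regularizer, and its coherent joint optimization. All variable names and sizes below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, q candidate labels, k compressed dims (k << q).
n, d, q, k = 50, 10, 20, 5
X = rng.normal(size=(n, d))
Y = (rng.random((n, q)) < 0.3).astype(float)  # observed candidate label matrix

# Factorize Y ~ U @ V: U (n x k) holds compressed labels of samples,
# V (k x q) captures label correlations. Simple alternating least squares
# with a small ridge term for numerical stability.
U = rng.normal(scale=0.1, size=(n, k))
V = rng.normal(scale=0.1, size=(k, q))
lam = 1e-2
err_start = np.linalg.norm(Y - U @ V)
for _ in range(50):
    U = Y @ V.T @ np.linalg.inv(V @ V.T + lam * np.eye(k))
    V = np.linalg.inv(U.T @ U + lam * np.eye(k)) @ U.T @ Y
err_end = np.linalg.norm(Y - U @ V)

# Fit a linear predictor W so that X @ W approximates the compressed labels U.
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ U)

# Decompress to full label scores for prediction: (X @ W) @ V is n x q.
scores = X @ W @ V
```

Because the predictor is trained against the k-dimensional compressed labels rather than all q labels, its cost scales with k, which is the source of the efficiency gain the abstract claims for large label spaces.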
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.url: https://ieeexplore.ieee.org/document/9338400/
dc.rights: Archived with thanks to IEEE
dc.title: Partial multi-label learning using label compression
dc.type: Conference Paper
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Machine Intelligence & kNowledge Engineering Lab
dc.conference.date: 2020-11-17 to 2020-11-20
dc.conference.name: 20th IEEE International Conference on Data Mining, ICDM 2020
dc.conference.location: Virtual, Sorrento, ITA
dc.eprint.version: Post-print
dc.contributor.institution: 1. School of Software, Shandong University, Jinan, China
dc.contributor.institution: 2. College of Computer and Information Sciences, Southwest University, Chongqing, China
dc.contributor.institution: 3. Joint SDU-NTU Centre for Artificial Intelligence Research, Shandong University, Jinan, China
dc.contributor.institution: 5. Department of Computer Science, George Mason University, VA, USA
dc.identifier.volume: 2020-November
dc.identifier.pages: 761-770
kaust.person: Yu, Guoxian
kaust.person: Zhang, Xiangliang
dc.identifier.eid: 2-s2.0-85100875019
refterms.dateFOA: 2021-03-07T05:44:08Z


Files in this item

Name: PML-LCom0924.pdf
Size: 543.4Kb
Format: PDF
Description: Accepted manuscript
