Show simple item record

dc.contributor.author: Yu, Guoxian
dc.contributor.author: Liu, Xuanwu
dc.contributor.author: Wang, Jun
dc.contributor.author: Domeniconi, Carlotta
dc.contributor.author: Zhang, Xiangliang
dc.date.accessioned: 2020-10-15T12:49:33Z
dc.date.available: 2020-10-15T12:49:33Z
dc.date.issued: 2020-10-14
dc.date.submitted: 2019-07-11
dc.identifier.citation: Yu, G., Liu, X., Wang, J., Domeniconi, C., & Zhang, X. (2020). Flexible Cross-Modal Hashing. IEEE Transactions on Neural Networks and Learning Systems, 1–11. doi:10.1109/tnnls.2020.3027729
dc.identifier.issn: 2162-237X
dc.identifier.issn: 2162-2388
dc.identifier.doi: 10.1109/tnnls.2020.3027729
dc.identifier.uri: http://hdl.handle.net/10754/665599
dc.description.abstract: Hashing has been widely adopted for large-scale data retrieval in many domains due to its low storage cost and high retrieval speed. Existing cross-modal hashing methods optimistically assume that the correspondence between training samples across modalities is readily available. This assumption is unrealistic in practical applications. In addition, existing methods generally require the same number of samples across different modalities, which restricts their flexibility. We propose a flexible cross-modal hashing approach (FlexCMH) to learn effective hashing codes from weakly paired data, whose correspondence across modalities is partially (or even totally) unknown. FlexCMH first introduces a clustering-based matching strategy to explore the structure of each cluster and, thus, to find the potential correspondence between clusters (and the samples therein) across modalities. To reduce the impact of an incomplete correspondence, it jointly optimizes the potential correspondence, the cross-modal hashing functions derived from the correspondence, and a hashing quantization loss in a unified objective function. An alternating optimization technique is also proposed to coordinate the correspondence and hash functions and to reinforce the reciprocal effects of the two objectives. Experiments on public multimodal data sets show that FlexCMH achieves significantly better results than state-of-the-art methods and indeed offers a high degree of flexibility for practical cross-modal hashing tasks.
dc.description.sponsorship: This work was supported in part by the Natural Science Foundation of China under Grant 61872300, Grant 62031003, and Grant 62072380; and in part by the Qilu Scholar Startup Fund of Shandong University.
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.url: https://ieeexplore.ieee.org/document/9223723/
dc.rights: (c) 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
dc.title: Flexible Cross-Modal Hashing
dc.type: Article
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Machine Intelligence & kNowledge Engineering Lab
dc.identifier.journal: IEEE Transactions on Neural Networks and Learning Systems
dc.eprint.version: Post-print
dc.contributor.institution: School of Software, Joint SDU-NTU Center for Artificial Intelligence Research, Shandong University, Jinan 250101, China.
dc.contributor.institution: Alibaba Group, Hangzhou 310000, China.
dc.contributor.institution: Department of Computer Science, George Mason University, Fairfax, VA 22030 USA.
dc.identifier.pages: 1-11
kaust.person: Yu, Guoxian
kaust.person: Zhang, Xiangliang
dc.date.accepted: 2020-09-26
refterms.dateFOA: 2020-10-18T06:01:46Z
dc.date.published-online: 2020-10-14
dc.date.published-print: 2020


Files in this item

Name: TNNLS-2019-P-11759.pdf
Size: 1.762Mb
Format: PDF
Description: Accepted manuscript
