Show simple item record

dc.contributor.author: Chen, Tong
dc.contributor.author: Yin, Hongzhi
dc.contributor.author: Zhang, Xiangliang
dc.contributor.author: Huang, Zi
dc.contributor.author: Wang, Yang
dc.contributor.author: Wang, Meng
dc.date.accessioned: 2021-04-07T06:30:12Z
dc.date.available: 2021-04-07T06:30:12Z
dc.date.issued: 2021-04-05
dc.identifier.uri: http://hdl.handle.net/10754/668577
dc.description.abstract: As a well-established approach, the factorization machine (FM) can automatically learn high-order interactions among features to make predictions without manual feature engineering. With the prominent development of deep neural networks (DNNs), there is a recent and ongoing trend of enhancing the expressiveness of FM-based models with DNNs. However, although DNN-based FM variants obtain better results, such performance gains come at the cost of an enormous number (usually millions) of extra model parameters on top of the plain FM. Consequently, this heavy parameterization impedes the real-life practicality of those deep models, especially efficient deployment on resource-constrained IoT and edge devices. In this paper, we move beyond the traditional real space in which most deep FM-based models are defined, and seek solutions from quaternion representations within the hypercomplex space. Specifically, we propose the quaternion factorization machine (QFM) and the quaternion neural factorization machine (QNFM), two novel lightweight and memory-efficient quaternion-valued models for sparse predictive analytics. By introducing a new take on FM-based models through quaternion algebra, our models not only enable expressive inter-component feature interactions, but also significantly reduce parameter size owing to the lower degrees of freedom of the hypercomplex Hamilton product compared with real-valued matrix multiplication. Extensive experimental results on three large-scale datasets demonstrate that QFM achieves a 4.36% performance improvement over the plain FM without introducing any extra parameters, while QNFM outperforms all baselines and reduces parameter size by up to two orders of magnitude compared with state-of-the-art peer methods.
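The abstract attributes the parameter saving to the Hamilton product's lower degrees of freedom. A minimal illustrative sketch (not the authors' implementation): multiplying two quaternions mixes four components using a single 4-parameter quaternion weight, whereas an unconstrained real-valued map between two 4-dimensional vectors needs a 4x4 matrix with 16 parameters.

```python
def hamilton_product(q, p):
    """Hamilton product of two quaternions q = a + bi + cj + dk,
    each given as a tuple (a, b, c, d)."""
    a1, b1, c1, d1 = q
    a2, b2, c2, d2 = p
    return (
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,  # real part
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,  # i component
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,  # j component
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,  # k component
    )

# Sanity check against quaternion algebra: i * j = k
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hamilton_product(i, j) == k

# Degrees of freedom: a quaternion "weight" maps one 4-vector to
# another with 4 parameters; an unconstrained real 4x4 matrix
# performing the same-shaped map has 16 parameters (a 4x saving,
# which compounds across the layers of a deep FM variant).
quaternion_params = 4
real_matrix_params = 4 * 4
print(real_matrix_params // quaternion_params)  # 4
```

This 4x per-layer reduction is the structural constraint that lets the quaternion-valued models described above stay lightweight while still mixing information across all four components.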
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/2104.01716.pdf
dc.rights: Archived with thanks to arXiv
dc.title: Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modelling
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.contributor.institution: School of Information Technology and Electrical Engineering, The University of Queensland
dc.contributor.institution: Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, School of Computer Science and Information Engineering, Hefei University of Technology, China
dc.identifier.arxivid: 2104.01716
kaust.person: Zhang, Xiangliang
refterms.dateFOA: 2021-04-07T06:30:52Z


Files in this item

Name: Preprintfile1.pdf
Size: 582.5 KB
Format: PDF
Description: Pre-print

