Type
Article
KAUST Department
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Electrical Engineering Program
Visual Computing Center (VCC)
VCC Analytics Research Group
Date
2013-04
Permanent link to this record
http://hdl.handle.net/10754/562701
Abstract
Low-rank matrix approximation is an attractive model for large-scale machine learning problems because it not only reduces memory and runtime complexity but also provides a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems that require a low-rank positive semidefinite solution. Despite their nonconvexity, we exploit the structure of these problems to derive an efficient solver that converges to a local optimum. Furthermore, we show that the proposed solution dramatically enhances the efficiency and scalability of a variety of concrete problems of significant interest to the machine learning community, including the top-k eigenvalue problem, distance learning, and kernel learning. Extensive experiments on UCI benchmarks demonstrate the effectiveness and efficiency of the proposed method. © 2012.
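The record does not reproduce the paper's formulation or solver; as a rough, hypothetical illustration of the problem class named in the abstract, the Python sketch below enforces the low-rank positive semidefinite constraint through the factorization X = V V^T and minimizes a simple quadratic objective, ||X - C||_F^2, by gradient descent on the factor V. The objective, function names, and step-size rule are assumptions for illustration only, not the paper's method.

import numpy as np

def low_rank_psd_approx(C, k, iters=2000, seed=0):
    # Hypothetical sketch: minimize ||V V^T - C||_F^2 over V in R^{n x k},
    # so X = V V^T is positive semidefinite with rank(X) <= k by construction.
    n = C.shape[0]
    rng = np.random.default_rng(seed)
    V = 0.1 * rng.standard_normal((n, k))
    lr = 0.1 / np.linalg.norm(C, 2)  # conservative step size tied to C's spectral norm
    for _ in range(iters):
        grad = 4.0 * (V @ V.T - C) @ V  # gradient w.r.t. V, valid for symmetric C
        V -= lr * grad
    return V @ V.T

# Toy usage: recover a low-rank PSD matrix from a noisy symmetric input.
rng = np.random.default_rng(1)
U = rng.standard_normal((30, 3))
C = U @ U.T + 0.05 * rng.standard_normal((30, 30))
C = (C + C.T) / 2  # symmetrize the noisy input
X = low_rank_psd_approx(C, k=3)
print(np.linalg.matrix_rank(X), np.linalg.norm(X - C) / np.linalg.norm(C))

Because the iterate is parameterized as V V^T, the rank and positive semidefiniteness constraints hold at every step without any projection, which is the structural feature that makes such nonconvex factorized formulations attractive at scale.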
Citation
Yuan, G., Zhang, Z., Ghanem, B., & Hao, Z. (2013). Low-rank quadratic semidefinite programming. Neurocomputing, 106, 51–60. doi:10.1016/j.neucom.2012.10.014
Sponsors
Yuan and Hao are supported by NSF-China (61070033, 61100148), NSF-Guangdong (9251009001000005, S2011040004804), and the Key Technology Research and Development Programs of Guangdong Province (2010B050400011).
Publisher
Elsevier BV
Journal
Neurocomputing
DOI
10.1016/j.neucom.2012.10.014