KAUST Department: Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Preprint Posting Date: 2020-10-02
Online Publication Date: 2020-10-16
Print Publication Date: 2020-11
Permanent link to this record: http://hdl.handle.net/10754/665943
Abstract: Stochastic optimization lies at the heart of machine learning, and its cornerstone is stochastic gradient descent (SGD), a method introduced over 60 years ago. The last eight years have seen an exciting new development: variance reduction for stochastic optimization methods. These variance-reduced (VR) methods excel in settings where more than one pass through the training data is allowed, achieving faster convergence than SGD in theory and practice. These speedups underline the surge of interest in VR methods and the fast-growing body of work on this topic. This review covers the key principles and main developments behind VR methods for optimization with finite data sets and is aimed at nonexpert readers. We focus mainly on the convex setting and leave pointers for readers interested in extensions for minimizing nonconvex functions.
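To make the abstract's central idea concrete, below is a minimal illustrative sketch of one well-known VR method, SVRG, applied to a least-squares problem. This is not code from the article; the objective, step size, and loop counts are assumptions chosen only for demonstration.

import numpy as np

def svrg(A, b, step_size=0.01, n_outer=20, n_inner=None, seed=0):
    """Illustrative SVRG sketch for the finite-sum objective (1/2n) * ||A w - b||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    n_inner = n_inner or n                       # roughly one pass over the data per outer loop
    w = np.zeros(d)
    for _ in range(n_outer):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n   # full gradient at the snapshot point
        for _ in range(n_inner):
            i = rng.integers(n)                  # sample one data point uniformly
            a_i = A[i]
            # variance-reduced gradient estimate: unbiased, and its variance shrinks
            # as the iterate and the snapshot both approach the solution
            g = a_i * (a_i @ w - b[i]) - a_i * (a_i @ w_snap - b[i]) + full_grad
            w -= step_size * g
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    w_true = rng.standard_normal(5)
    b = A @ w_true
    w = svrg(A, b)
    print("distance to solution:", np.linalg.norm(w - w_true))

The key line is the variance-reduced gradient estimate: it remains an unbiased estimate of the full gradient while its variance vanishes as the method converges, which is what allows a constant step size and the faster convergence over plain SGD highlighted in the abstract.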
Citation: Gower, R. M., Schmidt, M., Bach, F., & Richtarik, P. (2020). Variance-Reduced Methods for Machine Learning. Proceedings of the IEEE, 108(11), 1968–1983. doi:10.1109/jproc.2020.3028013
Sponsors: The authors would like to thank Quanquan Gu, Julien Mairal, Tong Zhang, and Lin Xiao for valuable suggestions and comments on an earlier draft of this article. In particular, Quanquan's recommendations for the nonconvex section improved the organization of our Section IV-G.
Journal: Proceedings of the IEEE