Type
Article
Authors
Loizou, Nicolas
Richtárik, Peter

KAUST Department
Computer Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Date
2020-12-15
Submitted Date
2019-03-25
Permanent link to this record
http://hdl.handle.net/10754/660271
Abstract
In this paper we present a convergence rate analysis of inexact variants of several randomized iterative methods for solving three closely related problems: a convex stochastic quadratic optimization problem, a best approximation problem, and its dual, a concave quadratic maximization problem. Among the methods studied are stochastic gradient descent, stochastic Newton, stochastic proximal point, and stochastic subspace ascent. A common feature of these methods is that in their update rule a certain subproblem needs to be solved exactly. We relax this requirement by allowing for the subproblem to be solved inexactly. We provide iteration complexity results under several assumptions on the inexactness error. Inexact variants of many popular and some more exotic methods, including randomized block Kaczmarz, Gaussian block Kaczmarz, and randomized block coordinate descent, can be cast as special cases. Numerical experiments demonstrate the benefits of allowing inexactness.
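To illustrate the exact-versus-inexact distinction described in the abstract, below is a minimal sketch (not the authors' implementation) of an inexact randomized block Kaczmarz iteration for a consistent linear system Ax = b: the exact update projects the iterate onto the solution set of a sampled block of equations, which requires solving a small linear system, while the inexact variant replaces that solve with a few conjugate gradient iterations. Function names, the uniform block sampling, the block size, and the inner-solver budget are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

def inexact_block_kaczmarz(A, b, num_iters=200, block_size=10, inner_iters=3):
    """Hypothetical sketch of inexact randomized block Kaczmarz for A x = b."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        # Sample a block of rows uniformly at random (one of many possible samplings).
        S = rng.choice(m, size=block_size, replace=False)
        A_S, b_S = A[S], b[S]
        r = A_S @ x - b_S              # residual on the sampled block
        M = A_S @ A_S.T                # small block_size-by-block_size subproblem
        # Inexact subproblem solve: a few CG iterations instead of an exact solve of M lam = r.
        lam, _ = cg(M, r, maxiter=inner_iters)
        x = x - A_S.T @ lam            # (approximate) projection onto the sampled equations
    return x

# Sanity check on a small consistent system.
A = rng.standard_normal((300, 50))
x_star = rng.standard_normal(50)
b = A @ x_star
x = inexact_block_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))
```

Setting inner_iters large enough to solve the block system exactly recovers the exact method; smaller budgets trade per-iteration cost against the inexactness error analyzed in the paper.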
Citation
Loizou, N., & Richtárik, P. (2020). Convergence Analysis of Inexact Randomized Iterative Methods. SIAM Journal on Scientific Computing, 42(6), A3979–A4016. doi:10.1137/19m125248x
Sponsors
The authors would like to acknowledge Robert Mansel Gower, Georgios Loizou, Aritra Dutta, and Rachael Tappenden for useful discussions.
arXiv
1903.07971
Additional Links
https://epubs.siam.org/doi/10.1137/19M125248X
DOI
10.1137/19M125248X
Except where otherwise noted, this item's license is described as Published by SIAM under the terms of the Creative Commons 4.0 license.