Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
Type
Dissertation
Authors
Hanzely, Filip
Advisors
Richtarik, Peter
Committee members
Tempone, Raul
Ghanem, Bernard
Wright, Stephen
Zhang, Tong

Program
Computer Science
Date
2020-08-20
Permanent link to this record
http://hdl.handle.net/10754/664789
Abstract
Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. As the volume of data and the size and complexity of the statistical models grow, these often ill-conditioned optimization tasks demand new, efficient algorithms. In this thesis, we address each of these sources of difficulty in a different way. To cope with the big data issue, we develop methods that in each iteration examine only a small random subset of the training data. To handle the big model issue, we develop methods that in each iteration update only a random subset of the model parameters. Finally, to deal with ill-conditioned problems, we devise methods that incorporate either higher-order information or Nesterov's acceleration/momentum. In all cases, randomness is viewed as a powerful algorithmic tool that we tune, both in theory and in experiments, to achieve the best results. Our algorithms find their primary application in training supervised machine learning models via regularized empirical risk minimization, the dominant paradigm for training such models. Owing to their generality, however, our methods can be applied in many other fields, including but not limited to data science, engineering, scientific computing, and statistics.
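The three algorithmic ideas summarized in the abstract can be illustrated together on a toy ridge-regression objective. The sketch below is purely illustrative and is not any of the thesis's actual algorithms: each step samples a random minibatch of data (big data), updates only a random block of coordinates (big model), and carries a heavy-ball momentum term (ill conditioning). All function and parameter names are hypothetical.

```python
import numpy as np

def randomized_momentum_step(X, y, w, v, lam=0.1, lr=0.05, beta=0.9,
                             batch=32, block=5, rng=None):
    """One illustrative step minimizing
    f(w) = 1/(2n) ||Xw - y||^2 + (lam/2) ||w||^2."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    i = rng.choice(n, size=min(batch, n), replace=False)  # random data subset
    j = rng.choice(d, size=min(block, d), replace=False)  # random coordinate block
    Xb, yb = X[i], y[i]
    g = Xb.T @ (Xb @ w - yb) / len(i) + lam * w           # stochastic gradient
    v, w = v.copy(), w.copy()
    v[j] = beta * v[j] + g[j]                             # momentum on block j only
    w[j] -= lr * v[j]                                     # update block j only
    return w, v

# Usage on synthetic data: the empirical risk decreases over iterations.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = X @ rng.standard_normal(10)
w, v = np.zeros(10), np.zeros(10)
for _ in range(2000):
    w, v = randomized_momentum_step(X, y, w, v, rng=rng)
```

Tuning the sampling distributions (minibatch size, block size) and the momentum parameter is exactly the kind of design freedom the thesis exploits, in theory and in experiments.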
Citation
Hanzely, F. (2020). Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters. KAUST Research Repository. https://doi.org/10.25781/KAUST-4F2DH
DOI
10.25781/KAUST-4F2DH