Distributed Learning with Compressed Gradient Differences
dc.contributor.author | Mishchenko, Konstantin | |
dc.contributor.author | Gorbunov, Eduard | |
dc.contributor.author | Takáč, Martin | |
dc.contributor.author | Richtarik, Peter | |
dc.date.accessioned | 2019-05-28T14:07:40Z | |
dc.date.available | 2019-05-28T14:07:40Z | |
dc.date.issued | 2019-01-26 | |
dc.identifier.uri | http://hdl.handle.net/10754/653106 | |
dc.description.abstract | Training very large machine learning models requires a distributed computing approach, with communication of the model updates often being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of the updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which means that they necessarily suffer from several issues, such as the inability to converge to the true optimum in the batch mode, inability to work with a nonsmooth regularizer, and slow convergence rates. In this work we propose a new distributed learning method---DIANA---which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are vastly superior to existing rates. Our analysis of block-quantization and differences between $\ell_2$ and $\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method. | |
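The abstract's key mechanism, having each worker compress and transmit the difference between its current gradient and a locally maintained reference vector rather than the gradient itself, can be illustrated with a minimal sketch. The sketch below is based only on the description above: the random-sparsification compressor, the step sizes lr and alpha, the helper names compress and diana_style_step, and the synthetic quadratic objective are all illustrative assumptions, not the paper's actual implementation.

import numpy as np

# Toy unbiased compressor: keep k random coordinates, scaled by d/k so that
# E[compress(v)] = v. It stands in for the quantization/sparsification
# operators mentioned in the abstract (QSGD, TernGrad, etc.).
def compress(v, k, rng):
    d = len(v)
    mask = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    mask[idx] = d / k
    return v * mask

# One synchronous round of a DIANA-style update: each worker communicates only
# the compressed difference g_i - h_i, and then both the worker and the master
# shift their reference vectors by alpha times that compressed difference.
def diana_style_step(grads, h, h_master, alpha, k, rng):
    deltas = []
    for i, g in enumerate(grads):
        delta = compress(g - h[i], k, rng)  # the only quantity sent over the network
        h[i] = h[i] + alpha * delta         # worker's local reference vector
        deltas.append(delta)
    mean_delta = np.mean(deltas, axis=0)
    g_hat = h_master + mean_delta           # master's gradient estimate
    h_master = h_master + alpha * mean_delta
    return g_hat, h, h_master

# Usage on a synthetic least-squares problem split across 4 workers
# (dimensions, step sizes, and iteration count are placeholders).
rng = np.random.default_rng(0)
d, n_workers = 20, 4
A = [rng.standard_normal((30, d)) for _ in range(n_workers)]
b = [rng.standard_normal(30) for _ in range(n_workers)]
x = np.zeros(d)
h = [np.zeros(d) for _ in range(n_workers)]
h_master = np.zeros(d)
lr, alpha, k = 0.05, 0.2, 5

for _ in range(500):
    grads = [Ai.T @ (Ai @ x - bi) / len(bi) for Ai, bi in zip(A, b)]
    g_hat, h, h_master = diana_style_step(grads, h, h_master, alpha, k, rng)
    x = x - lr * g_hat

Because the differences g_i - h_i shrink as the iterates approach the optimum, the compression error vanishes as well, which is what allows convergence to the true optimum in the batch setting, in contrast to methods that compress the gradients directly.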
dc.description.sponsorship | The work of Peter Richtarik was supported by the KAUST baseline funding scheme. The work of Martin Takac was partially supported by the U.S. National Science Foundation, under award numbers NSF:CCF:1618717, NSF:CMMI:1663256 and NSF:CCF:1740796. | |
dc.publisher | arXiv | |
dc.relation.url | https://arxiv.org/pdf/1901.09269 | |
dc.rights | Archived with thanks to arXiv | |
dc.title | Distributed Learning with Compressed Gradient Differences | |
dc.type | Preprint | |
dc.contributor.department | Computer Science | |
dc.contributor.department | Computer Science Program | |
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | |
dc.eprint.version | Pre-print | |
dc.contributor.institution | Moscow Institute of Physics and Technology, Russian Federation | |
dc.contributor.institution | Lehigh University, USA | |
dc.contributor.institution | University of Edinburgh, United Kingdom | |
dc.identifier.arxivid | 1901.09269 | |
kaust.person | Mishchenko, Konstantin | |
kaust.person | Richtarik, Peter | |
refterms.dateFOA | 2019-05-28T14:08:12Z |
This item appears in the following Collection(s)
- Preprints
- Computer Science Program (for more information visit: https://cemse.kaust.edu.sa/cs)
- Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division (for more information visit: https://cemse.kaust.edu.sa/)