
dc.contributor.author: Mishchenko, Konstantin
dc.contributor.author: Gorbunov, Eduard
dc.contributor.author: Takáč, Martin
dc.contributor.author: Richtarik, Peter
dc.date.accessioned: 2019-05-28T14:07:40Z
dc.date.available: 2019-05-28T14:07:40Z
dc.date.issued: 2019-01-26
dc.identifier.uri: http://hdl.handle.net/10754/653106
dc.description.abstract: Training very large machine learning models requires a distributed computing approach, with communication of the model updates often being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of the updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which means that they necessarily suffer from several issues, such as the inability to converge to the true optimum in the batch mode, inability to work with a nonsmooth regularizer, and slow convergence rates. In this work we propose a new distributed learning method---DIANA---which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are vastly superior to existing rates. Our analysis of block-quantization and differences between $\ell_2$ and $\ell_\infty$ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method. (A sketch of the gradient-difference idea follows this record.)
dc.description.sponsorship: The work of Peter Richtarik was supported by the KAUST baseline funding scheme. The work of Martin Takac was partially supported by the U.S. National Science Foundation, under award numbers NSF:CCF:1618717, NSF:CMMI:1663256 and NSF:CCF:1740796.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/1901.09269
dc.rights: Archived with thanks to arXiv
dc.title: Distributed Learning with Compressed Gradient Differences
dc.type: Preprint
dc.contributor.department: Computer Science
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.contributor.institution: Moscow Institute of Physics and Technology, Russian Federation
dc.contributor.institution: Lehigh University, USA
dc.contributor.institution: University of Edinburgh, United Kingdom
dc.identifier.arxivid: 1901.09269
kaust.person: Mishchenko, Konstantin
kaust.person: Richtarik, Peter
refterms.dateFOA: 2019-05-28T14:08:12Z
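
To make the abstract's central mechanism concrete, here is a minimal sketch of compressing gradient differences rather than gradients themselves. This is not the authors' implementation: the one-level $\ell_2$ random-dithering quantizer, the `local_grads` interface, and the step sizes `lr` and `alpha` are simplified, hypothetical stand-ins for the paper's quantizers and parameters.

```python
import numpy as np

def quantize(v, rng):
    """Simplified one-level l2 random-dithering quantizer (a stand-in for the
    paper's block / l_p quantizers). Unbiased: E[quantize(v)] = v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    # Coordinate i becomes sign(v_i) * norm with probability |v_i| / norm,
    # and 0 otherwise.
    mask = rng.random(v.shape) < np.abs(v) / norm
    return norm * np.sign(v) * mask

def diana_step(x, h, local_grads, lr=0.1, alpha=0.5, rng=None):
    """One DIANA-style step. `local_grads` is a list of per-worker gradients
    at x (hypothetical interface); h[i] is worker i's gradient-learning state,
    mirrored by the server, so only the compressed difference is sent."""
    rng = rng if rng is not None else np.random.default_rng(0)
    estimates = []
    for i, g in enumerate(local_grads):
        delta_hat = quantize(g - h[i], rng)  # the only message communicated
        estimates.append(h[i] + delta_hat)   # server's estimate of g
        h[i] = h[i] + alpha * delta_hat      # h[i] gradually learns g
    g_hat = sum(estimates) / len(estimates)
    return x - lr * g_hat, h
```

A toy run under the same assumptions, with two workers holding quadratic losses $f_i(x) = \tfrac{1}{2}\|x - b_i\|^2$:

```python
# Two workers; the true optimum of the average loss is [0.5, 0.5].
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x = np.zeros(2)
h = [np.zeros(2), np.zeros(2)]
rng = np.random.default_rng(0)
for _ in range(200):
    x, h = diana_step(x, h, [x - bi for bi in b], rng=rng)
```

With `alpha = 0` this reduces to compressing raw gradients, as in QSGD or TernGrad, and the constant-step iterates keep oscillating around the optimum due to quantization noise; with `alpha > 0` each `h[i]` tracks the local gradient at the optimum, the differences shrink, and the method can reach the true optimum in the batch mode, which is the issue the abstract highlights.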


Files in this item

Name: 1901.09269.pdf
Size: 1.905 MB
Format: PDF
Description: Preprint

