Distributed Learning with Compressed Gradient Differences

Abstract
Training very large machine learning models requires a distributed computing approach, with communication of the model updates often being the bottleneck. For this reason, several methods based on the compression (e.g., sparsification and/or quantization) of the updates were recently proposed, including QSGD (Alistarh et al., 2017), TernGrad (Wen et al., 2017), SignSGD (Bernstein et al., 2018), and DQGD (Khirirat et al., 2018). However, none of these methods are able to learn the gradients, which means that they necessarily suffer from several issues, such as the inability to converge to the true optimum in the batch mode, the inability to work with a nonsmooth regularizer, and slow convergence rates. In this work we propose a new distributed learning method, DIANA, which resolves these issues via compression of gradient differences. We perform a theoretical analysis in the strongly convex and nonconvex settings and show that our rates are vastly superior to existing rates. Our analysis of block-quantization and of the differences between ℓ2 and ℓ∞ quantization closes the gaps in theory and practice. Finally, by applying our analysis technique to TernGrad, we establish the first convergence rate for this method.
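
To make the core idea concrete, the following is a minimal NumPy sketch of gradient-difference compression as described in the abstract: each worker compresses the difference between its local gradient and a locally maintained shift vector, and both worker and master update that shift so it gradually learns the local gradient. The quantizer (a simple QSGD-style random dithering), the function names (random_dithering_quantize, diana_style_step), and the step-size values are illustrative assumptions, not the paper's exact algorithm or notation.

import numpy as np


def random_dithering_quantize(v, rng):
    # A simple unbiased compression operator (QSGD-style ternary quantization):
    # transmit ||v|| plus, for each coordinate, its sign with probability |v_j| / ||v||.
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    keep = rng.random(v.shape) < np.abs(v) / norm  # P[keep coordinate j] = |v_j| / ||v||
    return norm * np.sign(v) * keep


def diana_style_step(x, h, grads, gamma=0.1, alpha=0.5, rng=None):
    # One synchronous iteration of the gradient-difference idea (illustrative sketch):
    # worker i compresses g_i - h_i rather than g_i itself, and both worker and master
    # maintain the shift h_i, which over time "learns" the local gradient.
    #
    # x     : current iterate, shape (d,)
    # h     : list of per-worker shift vectors h_i, each of shape (d,)
    # grads : list of local (stochastic) gradients g_i evaluated at x
    rng = rng or np.random.default_rng(0)
    n = len(grads)

    # Workers: compress the difference and send it (the only communicated vectors).
    deltas = [random_dithering_quantize(g_i - h_i, rng) for g_i, h_i in zip(grads, h)]

    # Master: unbiased gradient estimate built from the old shifts plus compressed diffs.
    g_hat = sum(h_i + d_i for h_i, d_i in zip(h, deltas)) / n

    # Gradient step (a prox step would replace this if a nonsmooth regularizer is present).
    x_new = x - gamma * g_hat

    # Both sides apply the same shift update, so the shifts stay synchronized.
    h_new = [h_i + alpha * d_i for h_i, d_i in zip(h, deltas)]
    return x_new, h_new

Because the shifts converge to the local gradients, the compressed differences shrink over iterations, which is what allows convergence to the true optimum in the batch mode, unlike methods that compress the gradients themselves.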

Acknowledgements
The work of Peter Richtárik was supported by the KAUST baseline funding scheme. The work of Martin Takáč was partially supported by the U.S. National Science Foundation, under award numbers NSF:CCF:1618717, NSF:CMMI:1663256 and NSF:CCF:1740796.

Publisher
arXiv

arXiv
1901.09269

Additional Links
https://arxiv.org/pdf/1901.09269
