On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Name: technical report.pdf
Size: 497.5 KB
Format: PDF
Description: Technical report version - released 2019-11-19
Type
Conference Paper
Authors
Dutta, Aritra
Bergou, El Houcine
Abdelmoniem, Ahmed M.
Ho, Chen-Yu
Sahu, Atal Narayan
Canini, Marco
Kalnis, Panos

KAUST Department
Computer Science
Computer Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
InfoCloud Research Group
Date
2020-04-03
Preprint Posting Date
2019-11-19
Permanent link to this record
http://hdl.handle.net/10754/660127
Abstract
Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better, because the convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite the theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the actual trained model and compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to include support for both layer-wise and entire-model compression.
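To make the contrast in the abstract concrete, the following is a minimal sketch, not the paper's implementation, of Top-k gradient sparsification applied once to the entire model versus separately to each layer. The PyTorch setting, the 10% ratio, and the helper names are illustrative assumptions.

```python
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float = 0.1) -> torch.Tensor:
    """Keep the `ratio` fraction of entries with largest magnitude; zero the rest."""
    flat = grad.flatten()
    k = max(1, int(ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)

def compress_entire_model(grads, ratio=0.1):
    # Entire-model compression (the setting most theory assumes):
    # concatenate all layer gradients, compress once, split back per layer.
    flat = torch.cat([g.flatten() for g in grads])
    compressed = topk_sparsify(flat, ratio)
    out, offset = [], 0
    for g in grads:
        n = g.numel()
        out.append(compressed[offset:offset + n].view_as(g))
        offset += n
    return out

def compress_layer_wise(grads, ratio=0.1):
    # Layer-wise compression (what many implementations do):
    # compress each layer's gradient independently.
    return [topk_sparsify(g, ratio) for g in grads]
```

Under entire-model Top-k, the k largest entries are selected over the concatenated gradient, so a layer whose entries are uniformly small can be zeroed out entirely; layer-wise Top-k always retains the largest entries of every layer. This is the structural difference between the two settings the paper compares.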
Citation
Dutta, A., Bergou, E. H., Abdelmoniem, A. M., Ho, C.-Y., Sahu, A. N., Canini, M., & Kalnis, P. (2020). On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3817–3824. doi:10.1609/aaai.v34i04.5793
arXiv
1911.08250
Additional Links
https://aaai.org/ojs/index.php/AAAI/article/view/5793
Relations
Is Supplemented By: [Software]
Title: sands-lab/layer-wise-aaai20: Code repository for AAAI'20 paper: On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
Publication Date: 2019-11-17
GitHub: sands-lab/layer-wise-aaai20
Handle: 10754/667393
DOI
10.1609/aaai.v34i04.5793