Show simple item record

dc.contributor.author: Dutta, Aritra
dc.contributor.author: Bergou, El Houcine
dc.contributor.author: Abdelmoniem, Ahmed M.
dc.contributor.author: Ho, Chen-Yu
dc.contributor.author: Sahu, Atal Narayan
dc.contributor.author: Canini, Marco
dc.contributor.author: Kalnis, Panos
dc.date.accessioned: 2020-07-28T06:15:03Z
dc.date.available: 2019-11-19T09:36:56Z
dc.date.available: 2020-07-28T06:15:03Z
dc.date.issued: 2020-04-03
dc.identifier.citation: Dutta, A., Bergou, E. H., Abdelmoniem, A. M., Ho, C.-Y., Sahu, A. N., Canini, M., & Kalnis, P. (2020). On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3817–3824. doi:10.1609/aaai.v34i04.5793
dc.identifier.issn: 2374-3468
dc.identifier.issn: 2159-5399
dc.identifier.doi: 10.1609/aaai.v34i04.5793
dc.identifier.uri: http://hdl.handle.net/10754/660127
dc.description.abstract: Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there exists a discrepancy between theory and practice: while theoretical analysis of most existing compression methods assumes compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better, because the convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite the theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the actual trained model and compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to include support for both layer-wise and entire-model compression.
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI)
dc.relation.url: https://aaai.org/ojs/index.php/AAAI/article/view/5793
dc.rights: This is the technical report version of a paper later published in the Proceedings of the AAAI Conference on Artificial Intelligence
dc.subject: distributed deep learning
dc.subject: gradient compression
dc.subject: gradient sparsification
dc.subject: gradient quantization
dc.subject: layer-wise gradient compression
dc.title: On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning
dc.type: Conference Paper
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Computer Science Program
dc.identifier.journal: Proceedings of the AAAI Conference on Artificial Intelligence
dc.eprint.version: Pre-print
dc.contributor.institution: INRA
dc.identifier.volume: 34
dc.identifier.issue: 04
dc.identifier.pages: 3817-3824
dc.identifier.arxivid: arXiv:1911.08250
kaust.person: Dutta, Aritra
kaust.person: Abdelmoniem, Ahmed M.
kaust.person: Ho, Chen-Yu
kaust.person: Sahu, Atal Narayan
kaust.person: Canini, Marco
kaust.person: Kalnis, Panos
refterms.dateFOA: 2019-11-19T00:00:00Z
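
The abstract above contrasts layer-wise gradient compression with entire-model compression. The sketch below is only an illustration of that distinction, not the authors' implementation: Top-k sparsification is used as one example compressor, and all function names and the compression ratio are assumptions chosen for illustration.

# Minimal sketch (Python/NumPy): the same compressor applied per layer
# versus once to the concatenation of all layers' gradients.
import numpy as np

def topk(vec, ratio):
    """Keep the `ratio` fraction of entries with largest magnitude; zero the rest."""
    k = max(1, int(ratio * vec.size))
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def compress_layerwise(grads, ratio):
    """Apply the compressor to each layer's gradient tensor independently."""
    return [topk(g.ravel(), ratio).reshape(g.shape) for g in grads]

def compress_entire_model(grads, ratio):
    """Flatten and concatenate all gradients, compress once, then split back per layer."""
    flat = topk(np.concatenate([g.ravel() for g in grads]), ratio)
    out, offset = [], 0
    for g in grads:
        out.append(flat[offset:offset + g.size].reshape(g.shape))
        offset += g.size
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = [rng.standard_normal(shape) for shape in [(4, 3), (3,), (3, 2)]]
    lw = compress_layerwise(grads, ratio=0.25)
    em = compress_entire_model(grads, ratio=0.25)
    # Under the same ratio, entire-model compression may keep nothing from a
    # small-magnitude layer, whereas this layer-wise variant keeps at least
    # one entry per layer.
    print([int(np.count_nonzero(g)) for g in lw])
    print([int(np.count_nonzero(g)) for g in em])

The difference is where the compressor sees the gradient: per tensor, or once over the concatenation of all tensors, which changes which entries survive at the same nominal compression ratio.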


Files in this item

Name: technical report.pdf
Size: 497.5 KB
Format: PDF
Description: Technical report version - released 2019-11-19
