Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning
Type
Conference Paper
Authors
Gajjala, Rishikesh R.
Banchhor, Shashwat
Abdelmoniem, Ahmed M.
Dutta, Aritra
Canini, Marco
Kalnis, Panos
KAUST Department
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Computer Science Program
Date
2020-12
Permanent link to this record
http://hdl.handle.net/10754/666175
Abstract
Distributed stochastic algorithms, equipped with gradient compression techniques such as codebook quantization, are becoming increasingly popular and are considered state-of-the-art for training large deep neural network (DNN) models. However, communicating the quantized gradients over the network requires efficient encoding techniques. For this, practitioners generally use Elias-based encoding techniques without considering their computational overhead or data volume. In this paper, we propose several Huffman-coding-based lossless encoding techniques that exploit different characteristics of the quantized gradients during distributed DNN training. We then show their effectiveness on five DNN models across three datasets and compare them with state-of-the-art Elias-based encoding techniques. Our results show that the proposed Huffman-based encoders (i.e., RLH, SH, and SHS) can reduce the encoded data volume by up to 5.1×, 4.32×, and 3.8×, respectively, compared to the Elias-based encoders.
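As a rough illustration of the idea behind such encoders, the sketch below builds a plain Huffman code over the symbols of a codebook-quantized gradient and compares the resulting bit count against a fixed-length encoding. This is a minimal, generic example under the assumption of a small quantization codebook; it does not implement the paper's RLH, SH, or SHS variants, and all function and variable names here are illustrative.

```python
# Minimal sketch: plain Huffman coding of codebook-quantized gradient symbols.
# Illustrative only -- not the paper's RLH/SH/SHS encoders; names are made up.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code (symbol -> bitstring) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate case: single symbol
        return {next(iter(freq)): "0"}
    # Heap items: (frequency, unique tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def encode(symbols, code):
    """Concatenate the codewords of each quantized-gradient symbol."""
    return "".join(code[s] for s in symbols)

if __name__ == "__main__":
    # Toy "quantized gradient": ternary levels {-1, 0, +1}, heavily skewed
    # toward 0, which is where Huffman coding beats a fixed 2-bit encoding.
    grad = [0] * 90 + [1] * 6 + [-1] * 4
    code = huffman_code(grad)
    bits = encode(grad, code)
    print("codebook:", code)
    print("encoded bits:", len(bits), "vs fixed-length:", 2 * len(grad))
```

In a distributed training setting, each worker would transmit such an encoded bitstream (plus the codebook) instead of the raw quantized gradient, which is where the data-volume savings reported above come from.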
Citation
Gajjala, R. R., Banchhor, S., Abdelmoniem, A. M., Dutta, A., Canini, M., & Kalnis, P. (2020). Huffman Coding Based Encoding Techniques for Fast Distributed Deep Learning. Proceedings of the 1st Workshop on Distributed Machine Learning. doi:10.1145/3426745.3431334
Publisher
ACM
ISBN
9781450381826
Additional Links
https://dl.acm.org/doi/10.1145/3426745.3431334
DOI
10.1145/3426745.3431334