Efficient sparse collective communication and its application to accelerate distributed deep learning
Type
Conference Paper
Authors
Fei, Jiawei
Ho, Chen-Yu
Sahu, Atal N.
Canini, Marco
Sapio, Amedeo
KAUST Department
Computer Science
Computer Science Program
Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
KAUST Grant Number
OSR-CRG2020-4382
Online Publication Date
2021-08-09
Print Publication Date
2021-08-09
Date
2021-08-09
Abstract
Efficient collective communication is crucial to parallel-computing applications such as distributed training of large-scale recommendation systems and natural language processing models. Existing collective communication libraries focus on optimizing operations for dense inputs, resulting in transmissions of many zeros when inputs are sparse. This counters current trends that see increasing data sparsity in large models. We propose OmniReduce, an efficient streaming aggregation system that exploits sparsity to maximize effective bandwidth use by sending only non-zero data blocks. We demonstrate that this idea is beneficial and accelerates distributed training by up to 8.2x. Even at 100 Gbps, OmniReduce delivers 1.4-2.9x better performance for network-bottlenecked DNNs.
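To make the block-sparsity idea in the abstract concrete, the following is a minimal, single-process sketch in Python/NumPy of what "sending only non-zero data blocks" means: split a flat gradient into fixed-size blocks, keep only the blocks containing non-zero values, and sum those blocks into the aggregate. The block size, function names, and in-process aggregation loop are illustrative assumptions, not OmniReduce's actual implementation, which streams blocks over the network.

```python
import numpy as np

def nonzero_blocks(tensor, block_size=256):
    # Yield (block_index, block) pairs for blocks that contain any non-zero
    # value; zero-only blocks are skipped entirely. Block size is arbitrary
    # here and chosen purely for illustration.
    flat = tensor.ravel()
    for start in range(0, flat.size, block_size):
        block = flat[start:start + block_size]
        if np.any(block):
            yield start // block_size, block

def aggregate(worker_tensors, block_size=256):
    # Sum sparse gradients block by block. This stands in, within one
    # process, for the streaming aggregation a real system would perform
    # across the network: only non-zero blocks contribute traffic.
    result = np.zeros_like(worker_tensors[0]).ravel()
    for t in worker_tensors:
        for idx, block in nonzero_blocks(t, block_size):
            start = idx * block_size
            result[start:start + len(block)] += block
    return result.reshape(worker_tensors[0].shape)

# Example: two workers with mostly-zero gradients; only a few blocks matter.
rng = np.random.default_rng(0)
grads = []
for _ in range(2):
    g = np.zeros(4096, dtype=np.float32)
    idx = rng.choice(4096, size=64, replace=False)
    g[idx] = rng.standard_normal(64).astype(np.float32)
    grads.append(g)

assert np.allclose(aggregate(grads), grads[0] + grads[1])
```

In this sketch, the fraction of blocks actually transmitted tracks the input's sparsity, which is the mechanism behind the bandwidth savings described in the abstract.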