DualFL: A Duality-based Federated Learning Algorithm with Communication Acceleration in the General Convex Regime

Abstract
We propose a novel training algorithm called DualFL (Dualized Federated Learning) for solving a distributed optimization problem in federated learning. Our approach is based on a specific dual formulation of the federated learning problem. DualFL achieves communication acceleration under various settings of smoothness and strong convexity of the problem. Moreover, it theoretically supports the use of inexact local solvers, preserving its optimal communication complexity even with inexact local solutions. DualFL is the first federated learning algorithm to achieve communication acceleration even when the cost function is nonsmooth or non-strongly convex. Numerical results demonstrate that the practical performance of DualFL is comparable to that of state-of-the-art federated learning algorithms, and that it is robust with respect to hyperparameter tuning.

Acknowledgements
This work was supported by the KAUST Baseline Research Fund.

Publisher
arXiv

arXiv ID
2305.10294

Additional Links
https://arxiv.org/pdf/2305.10294.pdf

Permanent link to this record