Federated learning (FL) is a communication-efficient machine learning paradigm that leverages distributed data at the network edge. Nevertheless, FL usually fails to train a high-quality model in networks where the edge nodes collect noisily labeled data. To tackle this challenge, this paper develops a robust FL framework and considers two kinds of networks with different data distributions. First, we design a reweighted FL scheme for a full-data network, in which every edge node holds both a large noisily labeled dataset and a small clean dataset. The key idea is that edge devices learn to assign local weights to the loss functions of the noisily labeled samples and cooperate with the central server to update global weights. Second, we consider a part-data network in which some edge nodes lack a clean dataset and therefore cannot compute the weights locally. A broadcast of the global weights is added to help these nodes reweight their noisy loss functions. Both designs achieve a convergence rate of O(1/T^2). Simulation results illustrate that both proposed training processes improve prediction accuracy through proper weight assignment for the noisy loss functions.
Chen, L., Ang, F., Chen, Y., & Wang, W. (2023). Robust Federated Learning With Noisy Labeled Data Through Loss Function Correction. IEEE Transactions on Network Science and Engineering, 10(3), 1501–1511. https://doi.org/10.1109/TNSE.2022.3227287
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFA0701603, in part by the National Natural Science Foundation of China under Grant 62071445, and in part by King Abdullah University of Science and Technology Research Funding (KRF) under Award ORA-2021-CRG10-4696.