
dc.contributor.author: Abdelmoniem, Ahmed M.
dc.contributor.author: Sahu, Atal Narayan
dc.contributor.author: Canini, Marco
dc.contributor.author: Fahmy, Suhaib A.
dc.date.accessioned: 2021-11-03T11:18:44Z
dc.date.available: 2021-11-03T11:18:44Z
dc.date.issued: 2021-11-01
dc.identifier.uri: http://hdl.handle.net/10754/673095
dc.description.abstract: Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication. However, it presents numerous challenges relating to the heterogeneity of the data distribution, device capabilities, and participant availability as deployments scale, which can impact both model convergence and bias. Existing FL schemes use random participant selection to improve fairness; however, this can result in inefficient use of resources and lower-quality training. In this work, we systematically address the question of resource efficiency in FL, showing the benefits of intelligent participant selection and the incorporation of updates from straggling participants. We demonstrate how these factors enable resource efficiency while also improving trained model quality.
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/2111.01108.pdf
dc.rights: Archived with thanks to arXiv
dc.title: Resource-Efficient Federated Learning
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.identifier.arxivid: 2111.01108
kaust.person: Abdelmoniem, Ahmed M.
kaust.person: Sahu, Atal Narayan
kaust.person: Canini, Marco
kaust.person: Fahmy, Suhaib A.
refterms.dateFOA: 2021-11-03T11:20:41Z


Files in this item

Name: Preprintfile1.pdf
Size: 1.562 MB
Format: PDF
Description: Pre-print
