
dc.contributor.author: Khaled, Ahmed
dc.contributor.author: Richtarik, Peter
dc.date.accessioned: 2019-11-27T12:54:34Z
dc.date.available: 2019-11-27T12:54:34Z
dc.date.issued: 2019-09-10
dc.identifier.uri: http://hdl.handle.net/10754/660288
dc.description.abstract: We propose and analyze a new type of stochastic first-order method: gradient descent with compressed iterates (GDCI). In each iteration, GDCI first compresses the current iterate using a lossy randomized compression technique and then takes a gradient step. This method is a distillation of a key ingredient in the current practice of federated learning, where a model needs to be compressed by a mobile device before it is sent back to a server for aggregation. Our analysis provides a step towards closing the gap between the theory and practice of federated learning, and opens the possibility of many extensions. (A sketch of this iteration appears after the record below.)
dc.publisher: arXiv
dc.relation.url: https://arxiv.org/pdf/1909.04716
dc.rights: Archived with thanks to arXiv
dc.title: Gradient Descent with Compressed Iterates
dc.type: Preprint
dc.contributor.department: Computer Science Program
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.eprint.version: Pre-print
dc.contributor.institution: Cairo University
dc.identifier.arxivid: 1909.04716
kaust.person: Richtarik, Peter
refterms.dateFOA: 2019-11-27T12:55:19Z
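
Below is a minimal NumPy sketch of the iteration described in the abstract. It is an illustration under stated assumptions, not the paper's definitive method: the compressor (unbiased random sparsification with keep-probability p) is one common lossy randomized compressor among several the paper could use, the update rule x_{k+1} = x_k - step * grad_f(C(x_k)) is one plausible reading of "compress, then take a gradient step", and the names compress, gdci, grad_f, step, and p are all hypothetical.

import numpy as np

def compress(x, p=0.5, rng=None):
    # Unbiased random sparsification: keep each coordinate with
    # probability p and rescale by 1/p, so that E[C(x)] = x.
    # (Assumed compressor; other unbiased quantizers fit the same template.)
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) < p
    return np.where(mask, x / p, 0.0)

def gdci(grad_f, x0, step=0.1, iters=200, p=0.5, seed=0):
    # Gradient descent with compressed iterates: compress the current
    # iterate, then take a gradient step evaluated at the compressed point:
    #   x_{k+1} = x_k - step * grad_f(C(x_k))
    # (Assumed update rule, based on the abstract's description.)
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - step * grad_f(compress(x, p, rng))
    return x

# Toy usage: f(x) = 0.5 * ||x||^2 has grad_f(x) = x, so iterates shrink toward 0.
x = gdci(lambda v: v, x0=np.ones(10))
print(np.linalg.norm(x))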


Files in this item

Name: Preprintfile1.pdf
Size: 374.8 KB
Format: PDF
Description: Pre-print
