Type
Article
KAUST Department
Visual Computing Center (VCC); Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Date
2020-07-14
Online Publication Date
2020-07-14
Print Publication Date
2021-03
Embargo End Date
2021-07-14
Submitted Date
2019-08-19
Permanent link to this record
http://hdl.handle.net/10754/664383
Abstract
A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions/operators entering the optimization problem are given as statistical expectations. These expectations are unknown but revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. In the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
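As a rough illustration of the setting described in the abstract, the sketch below runs a generic stochastic primal-dual (Arrow-Hurwicz-type) iteration on a toy instance of convex optimization under stochastic linear constraints, where the objective gradient, the constraint matrix, and the right-hand side are only observed through i.i.d. samples. The toy objective, sampling model, and step-size schedule are assumptions chosen for illustration; the paper's precise updates and conditions are given in the article itself.

```python
# Minimal sketch (not the paper's exact scheme): stochastic primal-dual iteration
# for  min_x E[0.5*||x - xi||^2]  subject to  E[A(xi)] x = E[b(xi)],
# where all expectations are revealed only through i.i.d. realizations.
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 2                       # primal dimension, number of linear constraints

# Ground-truth expectations (unknown to the algorithm, only sampled).
A_mean = rng.normal(size=(m, d))
b_mean = rng.normal(size=m)
c_mean = rng.normal(size=d)       # E[xi]; the objective is 0.5*||x - E[xi]||^2 up to a constant

def sample():
    """Return one i.i.d. realization: (xi, A(xi), b(xi))."""
    xi = c_mean + rng.normal(scale=0.5, size=d)
    A_xi = A_mean + rng.normal(scale=0.5, size=(m, d))
    b_xi = b_mean + rng.normal(scale=0.5, size=m)
    return xi, A_xi, b_xi

x = np.zeros(d)                   # primal iterate
lam = np.zeros(m)                 # dual iterate (multipliers of the linear constraint)

for k in range(200_000):
    gamma = 1.0 / (k + 1) ** 0.6  # decreasing step size (assumed schedule)
    xi, A_xi, b_xi = sample()
    grad = x - xi                 # unbiased stochastic gradient of 0.5*||x - xi||^2
    # Primal descent on the sampled Lagrangian, then dual ascent on the constraint.
    x = x - gamma * (grad + A_xi.T @ lam)
    lam = lam + gamma * (A_xi @ x - b_xi)

print("constraint residual:", np.linalg.norm(A_mean @ x - b_mean))
```

With a step size that decreases slowly enough, the pair (x, lam) drifts toward a saddle point of the Lagrangian of the averaged problem, which is the type of guarantee the paper establishes in a much more general composite setting.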
Citation
Bianchi, P., Hachem, W., & Salim, A. (2020). A fully stochastic primal-dual algorithm. Optimization Letters. doi:10.1007/s11590-020-01614-y
Publisher
Springer Nature
Journal
Optimization Letters
arXiv
1901.08170
Additional Links
http://link.springer.com/10.1007/s11590-020-01614-y
DOI
10.1007/s11590-020-01614-y