Show simple item record

dc.contributor.author: Bibi, Adel
dc.contributor.author: Bergou, El Houcine
dc.contributor.author: Sener, Ozan
dc.contributor.author: Ghanem, Bernard
dc.contributor.author: Richtarik, Peter
dc.date.accessioned: 2021-01-21T06:54:58Z
dc.date.available: 2019-05-29T06:31:01Z
dc.date.available: 2019-11-27T08:54:51Z
dc.date.available: 2021-01-21T06:54:58Z
dc.date.issued: 2020-04-03
dc.identifier.citation: Bibi, A., Bergou, E. H., Sener, O., Ghanem, B., & Richtarik, P. (2020). A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3275–3282. doi:10.1609/aaai.v34i04.5727
dc.identifier.issn: 2374-3468
dc.identifier.issn: 2159-5399
dc.identifier.doi: 10.1609/aaai.v34i04.5727
dc.identifier.uri: http://hdl.handle.net/10754/653108
dc.description.abstract: We consider the problem of unconstrained minimization of a smooth objective function in ℝ^n in a setting where only function evaluations are possible. While importance sampling is one of the most popular techniques used by machine learning practitioners to accelerate the convergence of their models when applicable, there is little existing theory for this acceleration in the derivative-free setting. In this paper, we propose the first derivative-free optimization method with importance sampling and derive new, improved complexity results for non-convex, convex, and strongly convex functions. We conduct extensive experiments on various synthetic and real LIBSVM datasets confirming our theoretical results. We also test our method on a collection of continuous control tasks in MuJoCo environments of varying difficulty. Experiments show that our algorithm is practical for high-dimensional continuous control problems, where importance sampling yields a significant sample complexity improvement.
dc.description.sponsorship: This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research.
dc.publisher: Association for the Advancement of Artificial Intelligence (AAAI)
dc.relation.url: https://aaai.org/ojs/index.php/AAAI/article/view/5727
dc.rights: Archived with thanks to Proceedings of the AAAI Conference on Artificial Intelligence
dc.title: A Stochastic Derivative-Free Optimization Method with Importance Sampling: Theory and Learning to Control
dc.type: Conference Paper
dc.contributor.department: Electrical Engineering Program
dc.contributor.department: Electrical Engineering
dc.contributor.department: Physical Science and Engineering (PSE) Division
dc.contributor.department: Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
dc.contributor.department: Computer Science Program
dc.identifier.journal: Proceedings of the AAAI Conference on Artificial Intelligence
dc.eprint.version: Publisher's Version/PDF
dc.contributor.institution: MaIAGE, INRA, Universite Paris-Saclay
dc.contributor.institution: Intel Labs
dc.contributor.institution: Moscow Institute of Physics and Technology
dc.identifier.volume: 34
dc.identifier.issue: 04
dc.identifier.pages: 3275-3282
dc.identifier.arxivid: 1902.01272
kaust.person: Bibi, Adel
kaust.person: Bergou, El Houcine
kaust.person: Ghanem, Bernard
kaust.person: Richtarik, Peter
refterms.dateFOA: 2019-05-29T06:31:27Z
kaust.acknowledged.supportUnit: Office of Sponsored Research
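As an illustration of the class of method the abstract above describes, the following is a minimal Python sketch of derivative-free minimization with importance sampling: each iteration samples a coordinate direction with probability proportional to a coordinate-wise smoothness constant and keeps the best of three candidate points. This is a sketch of the general idea only, not the paper's exact algorithm; the function name, the smoothness constants L, and the step-size schedule are illustrative assumptions.

import numpy as np

def dfo_importance_sampling(f, x0, L, step0=0.5, iters=1000, seed=0):
    # Sketch of stochastic three-point search with importance sampling.
    # Coordinates are drawn with probability proportional to L[i], the
    # (assumed known) coordinate-wise smoothness constants. Illustrative
    # only; not the authors' exact method or step-size rule.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    p = np.asarray(L, dtype=float) / np.sum(L)   # sampling distribution
    fx = f(x)
    for k in range(iters):
        i = rng.choice(len(x), p=p)              # biased coordinate choice
        step = step0 / np.sqrt(k + 1)            # decaying step size
        e = np.zeros_like(x)
        e[i] = step
        f_plus, f_minus = f(x + e), f(x - e)     # two extra function evals
        # keep the best of {x, x + step*e_i, x - step*e_i}
        if f_plus <= min(fx, f_minus):
            x, fx = x + e, f_plus
        elif f_minus < fx:
            x, fx = x - e, f_minus
    return x, fx

# Example: an ill-conditioned quadratic, where biased coordinate
# sampling concentrates function evaluations on the stiff coordinate.
if __name__ == "__main__":
    L = np.array([100.0, 1.0])                   # coordinate smoothness
    f = lambda x: 0.5 * (L * x * x).sum()
    x_star, f_star = dfo_importance_sampling(f, x0=[1.0, 1.0], L=L)
    print(x_star, f_star)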


Files in this item

Name: 5727-Article Text-8952-1-10-20200513.pdf
Size: 3.201 MB
Format: PDF
Description: Conference Paper
