Deep Relaxation of Controlled Stochastic Gradient Descent via Singular Perturbations


Bardi, Martino
Kouhkouh, Hicham



We consider a singularly perturbed system of stochastic differential equations proposed by Chaudhari et al. (Res. Math. Sci. 2018) to approximate, via homogenisation, the Entropic Gradient Descent used in the optimization of deep neural networks. We embed it in a much larger class of two-scale stochastic control problems and rely on convergence results for Hamilton-Jacobi-Bellman equations with unbounded data that we proved recently (arXiv:2208.00655). We show that the limit of the value functions is itself the value function of an effective control problem with extended controls, and that the trajectories of the perturbed system converge in a suitable sense to the trajectories of the limiting effective control system. These rigorous results improve the understanding of the convergence of the algorithms used by Chaudhari et al., as well as of their possible extensions in which some tuning parameters are modelled as dynamic controls.
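To illustrate the kind of two-scale system the abstract refers to, the sketch below simulates, via Euler-Maruyama, a fast Langevin variable y coupled quadratically to a slow variable x that relaxes toward it. This is only a minimal illustration in the spirit of the Chaudhari et al. construction, not the system studied in the paper: the function names, parameter names, and parameter values (`eps`, `gamma`, `beta`, step size, step count) are all illustrative assumptions.

```python
import numpy as np

def two_scale_sgd(grad_f, x0, eps=1e-3, gamma=1.0, beta=10.0,
                  dt=1e-4, n_steps=200_000, seed=0):
    """Euler-Maruyama sketch of a singularly perturbed two-scale system:
    a fast Langevin variable y explores the landscape of f around the
    slow variable x, while x drifts toward y on an O(1) timescale.
    All parameters here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    y = x.copy()
    noise_scale = np.sqrt(2.0 / beta)  # temperature of the fast dynamics
    for _ in range(n_steps):
        # fast variable: Langevin dynamics on f plus quadratic coupling to x,
        # running on the accelerated timescale dt / eps
        dW = rng.normal(size=y.shape) * np.sqrt(dt / eps)
        y = y + (-grad_f(y) - (y - x) / gamma) * (dt / eps) + noise_scale * dW
        # slow variable: relaxes toward the fast variable
        x = x + (-(x - y) / gamma) * dt
    return x

# Hypothetical example: f(x) = 0.5*|x|^2, whose gradient is x;
# the slow variable should settle near the minimizer at the origin.
x_final = two_scale_sgd(lambda x: x, x0=[2.0, -1.5])
```

As eps shrinks, the fast variable equilibrates between slow updates, and x effectively follows an averaged (homogenised) drift; the paper's results make this limit rigorous in a controlled setting.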

The first author is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). He also participates in the King Abdullah University of Science and Technology (KAUST) project CRG2021-4674 "Mean-Field Games: models, theory, and computational aspects". The second author is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Projektnummer 320021702/GRK2326, Energy, Entropy, and Dissipative Dynamics (EDDy). The results of this paper are part of his Ph.D. thesis [18], written while he was a Ph.D. student at the University of Padova.


