Type
Article

Authors
Liu, Hailiang
Markowich, Peter A.

KAUST Department
Applied Mathematics and Computational Science Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Date
2020-09-21

Preprint Posting Date
2019-05-22

Online Publication Date
2020-09-21

Print Publication Date
2020-12

Submitted Date
2019-10-29

Permanent link to this record
http://hdl.handle.net/10754/660838
Abstract
This paper presents a partial differential equation framework for deep residual neural networks and for the associated learning problem. This is done by carrying out the continuum limits of neural networks with respect to width and depth. We study the well-posedness, the large-time solution behavior, and the characterization of the steady states of the forward problem. Several useful time-uniform estimates and stability/instability conditions are presented. We state and prove optimality conditions for the inverse deep learning problem, using standard variational calculus, the Hamilton-Jacobi-Bellman equation, and the Pontryagin maximum principle. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between neural networks, PDE theory, variational analysis, optimal control, and deep learning.

Citation
Liu, H., & Markowich, P. (2020). Selection dynamics for deep neural networks. Journal of Differential Equations, 269(12), 11540–11574. doi:10.1016/j.jde.2020.08.041

Sponsors
We are grateful to Michael Herty (RWTH) for his interest, which motivated us to investigate this problem and eventually led to this paper. Liu was partially supported by the National Science Foundation under Grant DMS1812666 and by NSF Grant RNMS (Ki-Net) 1107291.

Publisher
Elsevier BV

arXiv
1905.09076

Additional Links
https://linkinghub.elsevier.com/retrieve/pii/S002203962030485X

DOI
10.1016/j.jde.2020.08.041
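The depth continuum limit mentioned in the abstract can be illustrated with a minimal scalar sketch (not taken from the paper; the function `resnet_forward` and the toy weight are illustrative assumptions): a residual update x_{k+1} = x_k + h·f(x_k) with step h = T/depth is a forward-Euler step for the ODE dx/dt = f(x), so increasing the depth drives the network output toward the continuum flow.

```python
import numpy as np

def resnet_forward(x0, weight, depth, T=1.0):
    """Toy scalar residual network:
    x_{k+1} = x_k + h * tanh(weight * x_k),  h = T / depth.
    This is forward-Euler integration of dx/dt = tanh(weight * x),
    so deeper networks approximate the continuum (ODE) solution."""
    x = x0
    h = T / depth
    for _ in range(depth):
        x = x + h * np.tanh(weight * x)
    return x

# Successive depth refinements agree, indicating convergence
# to the depth-continuum limit.
coarse = resnet_forward(1.0, 0.5, depth=16)
fine = resnet_forward(1.0, 0.5, depth=4096)
print(abs(coarse - fine))
```

Forward Euler has O(h) global error for smooth right-hand sides, so the gap between the two depths shrinks roughly in proportion to 1/depth; the paper's framework studies the limiting PDE/ODE problem directly.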