ML-descent: An optimization algorithm for FWI using machine learning
dc.contributor.author | Sun, Bingbing | |
dc.contributor.author | Alkhalifah, Tariq Ali | |
dc.date.accessioned | 2020-03-05T08:35:54Z | |
dc.date.available | 2020-03-05T08:35:54Z | |
dc.date.issued | 2019-08-10 | |
dc.identifier.citation | Sun, B., & Alkhalifah, T. (2019). ML-descent: An optimization algorithm for FWI using machine learning. SEG Technical Program Expanded Abstracts 2019. doi:10.1190/segam2019-3215304.1 | |
dc.identifier.doi | 10.1190/segam2019-3215304.1 | |
dc.identifier.uri | http://hdl.handle.net/10754/661903 | |
dc.description.abstract | Full-waveform inversion (FWI) is a nonlinear inversion problem, and a typical optimization algorithm, such as nonlinear conjugate gradient or L-BFGS, iteratively updates the model along the gradient-descent direction of the misfit function or a slight modification of it. Rather than using a hand-designed optimization algorithm, we train a machine to learn an optimization algorithm, which we refer to as "ML-descent", and apply it to FWI. Using a recurrent neural network (RNN), we take the gradient of the misfit function as input for training, and the hidden states of the RNN store the history of the gradient, similar to a BFGS algorithm. However, unlike the fixed BFGS update, the ML version evolves as the gradient directs it to. The loss function for training is formulated as the sum, over iterations, of the FWI misfit function, i.e., the L2-norm of the data residual. Any well-defined nonlinear inverse problem can be locally approximated by a linear convex problem; thus, to accelerate training, we train the neural network on the solutions of randomly generated quadratic functions instead of the time-consuming FWI gradient. We use the Marmousi example to demonstrate that ML-descent outperforms the steepest-descent method, and that the energy in the deeper part of the model is compensated well by ML-descent even when the pseudo-inverse of the Hessian is not incorporated in the gradient of FWI. | |
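The abstract's core idea, a learned optimizer meta-trained on random convex quadratics as cheap stand-ins for the FWI misfit, can be illustrated with a minimal sketch. This is not the authors' RNN architecture: the RNN state is simplified here to two learnable parameters (a log step size and a momentum logit, giving the update a gradient-history memory loosely analogous to BFGS), and the meta-gradient is estimated by finite differences rather than backpropagation through the unrolled trajectory. All function names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_quadratic(dim):
    """Sample a random convex quadratic f(x) = 0.5 x'Ax - b'x, a local
    stand-in for the FWI misfit (as suggested in the abstract)."""
    M = rng.standard_normal((dim, dim))
    A = M @ M.T / dim + 0.1 * np.eye(dim)   # symmetric positive definite
    b = rng.standard_normal(dim)
    return A, b

def rollout(theta, A, b, steps=20):
    """Unroll the learned update rule for a fixed number of iterations and
    return the final misfit value.  theta = (log step size, momentum logit)
    is a deliberately tiny simplification of the paper's RNN state."""
    lr = np.exp(theta[0])
    mom = 1.0 / (1.0 + np.exp(-theta[1]))
    x = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(steps):
        g = A @ x - b                       # gradient of the quadratic
        v = mom * v - lr * g                # history-aware update
        x = x + v
    return 0.5 * x @ A @ x - b @ x          # final misfit (lower is better)

def meta_train(theta, iters=200, dim=10, eps=1e-3, meta_lr=0.05):
    """Meta-optimize the update rule over freshly sampled quadratics,
    using central finite differences for the meta-gradient."""
    for _ in range(iters):
        A, b = random_quadratic(dim)
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (rollout(theta + e, A, b)
                       - rollout(theta - e, A, b)) / (2 * eps)
        # Clip and sanitize: diverged rollouts would otherwise blow up theta.
        grad = np.nan_to_num(np.clip(grad, -1.0, 1.0))
        theta = theta - meta_lr * grad
    return theta

theta0 = np.array([-3.0, 0.0])              # initial rule: tiny step, mild momentum
theta = meta_train(theta0.copy())
A, b = random_quadratic(10)                 # held-out test problem
print(rollout(theta, A, b), rollout(theta0, A, b))
```

On a held-out quadratic, the meta-trained rule typically reaches a lower misfit in the same number of iterations than the untrained one, which mirrors the paper's comparison of ML-descent against steepest descent on Marmousi.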
dc.description.sponsorship | We thank KAUST for the funding of this research and the members of SWAG group for useful discussions. | |
dc.publisher | Society of Exploration Geophysicists | |
dc.relation.url | https://library.seg.org/doi/10.1190/segam2019-3215304.1 | |
dc.rights | Archived with thanks to Society of Exploration Geophysicists | |
dc.title | ML-descent: An optimization algorithm for FWI using machine learning | |
dc.type | Conference Paper | |
dc.contributor.department | Earth Science and Engineering Program | |
dc.contributor.department | Physical Science and Engineering (PSE) Division | |
dc.contributor.department | Seismic Wave Analysis Group | |
dc.conference.date | 2019-09-15 to 2019-09-20 | |
dc.conference.name | Society of Exploration Geophysicists International Exposition and Annual Meeting 2019, SEG 2019 | |
dc.conference.location | San Antonio, TX, USA | |
dc.eprint.version | Pre-print | |
kaust.person | Sun, Bingbing | |
kaust.person | Alkhalifah, Tariq Ali | |
kaust.acknowledged.supportUnit | SWAG group |
This item appears in the following Collection(s)
- Conference Papers
- Physical Science and Engineering (PSE) Division
  For more information visit: http://pse.kaust.edu.sa/
- Earth Science and Engineering Program
  For more information visit: https://pse.kaust.edu.sa/study/academic-programs/earth-science-and-engineering/Pages/home.aspx