Annealing evolutionary stochastic approximation Monte Carlo for global optimization
Type: Article
Authors: Liang, Faming
KAUST Grant Number: KUS-C1-016-04
Date: 2010-04-08
Online Publication Date: 2010-04-08
Print Publication Date: 2011-07
Permanent link to this record: http://hdl.handle.net/10754/597576
Abstract: In this paper, we propose a new algorithm, the annealing evolutionary stochastic approximation Monte Carlo (AESAMC) algorithm, as a general optimization technique and study its convergence. AESAMC possesses a self-adjusting mechanism: its target distribution is adapted at each iteration according to the current samples, so AESAMC falls into the class of adaptive Monte Carlo methods. This mechanism also makes AESAMC less prone to becoming trapped in local energy minima than nonadaptive MCMC algorithms. Under mild conditions, we show that AESAMC can converge weakly toward a neighboring set of global minima in the space of energy. AESAMC is tested on multiple optimization problems. The numerical results indicate that AESAMC can potentially outperform simulated annealing, the genetic algorithm, annealing stochastic approximation Monte Carlo, and some other metaheuristics in function optimization. © 2010 Springer Science+Business Media, LLC.
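The abstract describes AESAMC as a stochastic approximation Monte Carlo sampler whose working distribution adapts to the samples drawn so far, combined with population (evolutionary) moves and an annealed, progressively truncated sample space. The sketch below illustrates those ingredients only; it is not the paper's algorithm. The test energy, partition grid, gain sequence, proposal scale, truncation offset, and crossover rule are all assumed for illustration.

    # Minimal illustrative sketch of a SAMC-style sampler with annealed truncation
    # and a simple population move. All tuning constants are assumptions, not
    # values from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):
        # Illustrative multimodal test surface (Rastrigin-style), assumed here.
        return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

    dim, pop_size, n_iter = 2, 5, 20000
    edges = np.linspace(0.0, 80.0, 41)        # energy partition of the sample space
    theta = np.zeros(len(edges) + 1)          # adaptive log-weights, one per subregion
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    u = np.array([energy(x) for x in pop])
    best_x, best_u = pop[np.argmin(u)].copy(), u.min()

    def region(e):
        # Index of the energy subregion containing energy e.
        return int(np.searchsorted(edges, e))

    for t in range(1, n_iter + 1):
        gain = 10.0 / max(10.0, t)            # decreasing gain (step-size) sequence
        max_region = region(best_u + 10.0)    # annealed truncation around the best energy
        for k in range(pop_size):
            # Metropolis-Hastings move targeting exp(-U(x)) / exp(theta[J(x)]).
            prop = pop[k] + rng.normal(scale=0.5, size=dim)
            u_prop = energy(prop)
            j_old, j_new = region(u[k]), region(u_prop)
            if j_new <= max_region:           # reject moves outside the annealed space
                log_r = (u[k] - u_prop) + (theta[j_old] - theta[j_new])
                if np.log(rng.uniform()) < log_r:
                    pop[k], u[k] = prop, u_prop
            # Stochastic-approximation update of the adaptive weights: frequently
            # visited subregions are penalized, which discourages local trapping.
            indicator = np.zeros_like(theta)
            indicator[region(u[k])] = 1.0
            theta += gain * (indicator - 1.0 / (max_region + 1))
        # Crude "evolutionary" interaction between two population members,
        # accepted greedily here purely for illustration.
        i, j = rng.choice(pop_size, size=2, replace=False)
        child = pop[i].copy()
        child[: max(dim // 2, 1)] = pop[j][: max(dim // 2, 1)]
        u_child = energy(child)
        if u_child < u[i]:
            pop[i], u[i] = child, u_child
        if u.min() < best_u:
            best_u, best_x = u.min(), pop[np.argmin(u)].copy()

    print("best energy found:", round(best_u, 4), "at", np.round(best_x, 3))

The essential adaptive step is the theta update: subregions visited too often receive larger weights and are subsequently down-weighted in the acceptance ratio, which is what lets this class of samplers escape local energy minima, while the annealed truncation concentrates later iterations near the lowest energies found so far.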
Citation: Liang F (2010) Annealing evolutionary stochastic approximation Monte Carlo for global optimization. Stat Comput 21: 375–393. Available: http://dx.doi.org/10.1007/s11222-010-9176-1.
Sponsors: The author's research was supported in part by the grant (DMS-0607755) made by the National Science Foundation and the award (KUS-C1-016-04) made by King Abdullah University of Science and Technology (KAUST). The author thanks the editor, the associate editor and the referees for their comments, which have led to significant improvement of this paper.
Publisher: Springer Nature
Journal: Statistics and Computing
DOI: 10.1007/s11222-010-9176-1