Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games
Type
Article
Authors
Jaleel, Hassan; Shamma, Jeff S.

KAUST Department
Electrical Engineering Program; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Date
2020
Preprint Posting Date
2018-04-08
Permanent link to this record
http://hdl.handle.net/10754/627530
Abstract
Stochastic stability is an important solution concept for stochastic learning dynamics in games. However, a limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We identify this limitation and develop a framework for the comparative analysis of the transient behavior of stochastic learning dynamics. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics lead to the same steady-state behavior, they correspond to different behavioral models for decision making. In this work, we propose multiple criteria to analyze and quantify the differences in the short- and medium-run behavior of stochastic learning dynamics. We derive upper bounds on the expected hitting time of the set of Nash equilibria for both LLL and ML. For the medium- to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that yields a hierarchical decomposition of the state space into collections of states called cycles.
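For reference, the sketch below illustrates the two standard update rules named in the abstract, generic log-linear and Metropolis dynamics, on a toy two-player coordination game, together with a crude empirical estimate of the hitting time of the set of Nash equilibria. This is not the authors' code: the game, the temperature parameter tau, and all function names are assumptions made here for illustration, and the paper's exact models and bounds may be parameterized differently.

```python
import math
import random

# Illustrative two-player coordination game (a simple potential game):
# each player picks action 0 or 1; matching pays 1, mismatching pays 0.
N_PLAYERS = 2
ACTIONS = [0, 1]

def utility(player, profile):
    """Payoff of `player` under the joint action `profile`."""
    return 1.0 if profile[0] == profile[1] else 0.0

def log_linear_step(profile, tau):
    """One log-linear learning step: a uniformly chosen player re-samples its
    action with probability proportional to exp(utility / tau)."""
    i = random.randrange(N_PLAYERS)
    weights = []
    for a in ACTIONS:
        trial = list(profile)
        trial[i] = a
        weights.append(math.exp(utility(i, trial) / tau))
    r, acc = random.random() * sum(weights), 0.0
    for a, w in zip(ACTIONS, weights):
        acc += w
        if r <= acc:
            new_profile = list(profile)
            new_profile[i] = a
            return tuple(new_profile)
    return tuple(profile)

def metropolis_step(profile, tau):
    """One Metropolis learning step: a uniformly chosen player proposes a
    uniformly random action and accepts it with probability
    min(1, exp((u_new - u_old) / tau))."""
    i = random.randrange(N_PLAYERS)
    proposal = list(profile)
    proposal[i] = random.choice(ACTIONS)
    gain = utility(i, proposal) - utility(i, profile)
    if gain >= 0 or random.random() < math.exp(gain / tau):
        return tuple(proposal)
    return tuple(profile)

def hitting_time(step_fn, tau=0.1, start=(0, 1), max_steps=10_000):
    """Steps until a Nash equilibrium (a coordinated profile) is first reached."""
    profile = start
    for t in range(max_steps):
        if profile[0] == profile[1]:
            return t
        profile = step_fn(profile, tau)
    return max_steps

if __name__ == "__main__":
    random.seed(0)
    trials = 1000
    for name, fn in [("log-linear", log_linear_step), ("Metropolis", metropolis_step)]:
        avg = sum(hitting_time(fn) for _ in range(trials)) / trials
        print(f"{name:10s} average hitting time: {avg:.2f} steps")
```

In potential games, both rules are reversible with respect to the same Gibbs-like stationary distribution, which is why they share the same stochastically stable states while their transient behavior, such as the hitting times estimated above, can differ.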
Citation
Jaleel, H., & Shamma, J. S. (2020). Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games. IEEE Transactions on Automatic Control, 1–1. doi:10.1109/tac.2020.3039485
arXiv
1804.02693
Additional Links
https://ieeexplore.ieee.org/document/9265240/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9265240
DOI
10.1109/TAC.2020.3039485