Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games
KAUST Department: Electrical Engineering Program
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Preprint Posting Date: 2018-04-08
Permanent link to this record: http://hdl.handle.net/10754/627530
Abstract: Stochastic stability is an important solution concept for stochastic learning dynamics in games. However, a limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We identify this limitation and develop a framework for the comparative analysis of the transient behavior of stochastic learning dynamics. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics lead to the same steady-state behavior, they correspond to different behavioral models for decision making. In this work, we propose multiple criteria to analyze and quantify the differences in the short- and medium-run behaviors of stochastic learning dynamics. We derive upper bounds on the expected hitting time of the set of Nash equilibria for both LLL and ML. For the medium- to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that result in a hierarchical decomposition of the state space into collections of states called cycles.
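To illustrate the distinction the abstract draws, here is a minimal sketch (not from the paper) of the two update rules on a hypothetical 2-player common-interest coordination game, which is a potential game. Under LLL the revising player samples an action with log-linear probabilities; under ML the player proposes a random alternative and accepts it with the Metropolis probability. Both share the Gibbs stationary distribution, yet their step-by-step behavior differs; the game, temperature `TAU`, and step counts are all illustrative choices, not values from the paper.

```python
import math
import random

# Hypothetical 2-player common-interest game (illustrative, not from the paper):
# both players receive the same payoff, so the potential equals the payoff.
PAYOFF = {(0, 0): 1.0, (1, 1): 2.0, (0, 1): 0.0, (1, 0): 0.0}
ACTIONS = (0, 1)
TAU = 0.5  # temperature / noise level (illustrative)

def util(a, b):
    return PAYOFF[(a, b)]

def lll_update(own, other):
    """Log-Linear Learning: sample an action with probability
    proportional to exp(utility / TAU), current action included."""
    weights = [math.exp(util(a, other) / TAU) for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights)[0]

def ml_update(own, other):
    """Metropolis Learning: propose a uniformly random alternative action
    and accept it with probability min(1, exp(payoff_gain / TAU))."""
    proposal = random.choice([a for a in ACTIONS if a != own])
    gain = util(proposal, other) - util(own, other)
    if gain >= 0 or random.random() < math.exp(gain / TAU):
        return proposal
    return own

def occupancy(update, steps=20000, seed=1):
    """Asynchronous revisions: one randomly chosen player updates per step.
    Returns how often each joint action profile was visited."""
    random.seed(seed)
    state = [0, 0]
    counts = {}
    for _ in range(steps):
        i = random.randrange(2)
        state[i] = update(state[i], state[1 - i])
        key = tuple(state)
        counts[key] = counts.get(key, 0) + 1
    return counts

if __name__ == "__main__":
    for name, upd in [("LLL", lll_update), ("ML", ml_update)]:
        c = occupancy(upd)
        # Both dynamics should spend most time at the efficient equilibrium (1, 1).
        print(name, max(c, key=c.get))
```

Because the proposal in `ml_update` is symmetric, detailed balance gives both chains the same Gibbs stationary distribution over the potential, which is exactly the steady-state equivalence the paper's transient-behavior analysis looks past.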
Citation: Jaleel, H., & Shamma, J. S. (2020). Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games. IEEE Transactions on Automatic Control, 1–1. doi:10.1109/tac.2020.3039485