Game-Theoretic Learning in Distributed Control

Handle URI:
http://hdl.handle.net/10754/626970
Title:
Game-Theoretic Learning in Distributed Control
Authors:
Marden, Jason R.; Shamma, Jeff S. (ORCID: 0000-0001-5638-9551)
Abstract:
In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design.
KAUST Department:
Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division; Electrical Engineering Program
Citation:
Marden JR, Shamma JS (2017) Game-Theoretic Learning in Distributed Control. Handbook of Dynamic Game Theory: 1–36. Available: http://dx.doi.org/10.1007/978-3-319-27335-8_9-1.
Publisher:
Springer International Publishing
Journal:
Handbook of Dynamic Game Theory
Issue Date:
5-Jan-2018
DOI:
10.1007/978-3-319-27335-8_9-1
Type:
Book Chapter
Sponsors:
This work was supported by ONR Grant #N00014-17-1-2060 and NSF Grant #ECCS-1638214 and by funding from King Abdullah University of Science and Technology (KAUST).
Additional Links:
http://link.springer.com/chapter/10.1007/978-3-319-27335-8_9-1
Appears in Collections:
Electrical Engineering Program; Book Chapters; Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division

Full metadata record

DC Field | Value | Language
dc.contributor.author | Marden, Jason R. | en
dc.contributor.author | Shamma, Jeff S. | en
dc.date.accessioned | 2018-02-01T07:25:00Z | -
dc.date.available | 2018-02-01T07:25:00Z | -
dc.date.issued | 2018-01-05 | en
dc.identifier.citation | Marden JR, Shamma JS (2017) Game-Theoretic Learning in Distributed Control. Handbook of Dynamic Game Theory: 1–36. Available: http://dx.doi.org/10.1007/978-3-319-27335-8_9-1. | en
dc.identifier.doi | 10.1007/978-3-319-27335-8_9-1 | en
dc.identifier.uri | http://hdl.handle.net/10754/626970 | -
dc.description.abstract | In distributed architecture control problems, there is a collection of interconnected decision-making components that seek to realize desirable collective behaviors through local interactions and by processing local information. Applications range from autonomous vehicles to energy to transportation. One approach to control of such distributed architectures is to view the components as players in a game. In this approach, two design considerations are the components’ incentives and the rules that dictate how components react to the decisions of other components. In game-theoretic language, the incentives are defined through utility functions, and the reaction rules are online learning dynamics. This chapter presents an overview of this approach, covering basic concepts in game theory, special game classes, measures of distributed efficiency, utility design, and online learning rules, all with the interpretation of using game theory as a prescriptive paradigm for distributed control design. | en
dc.description.sponsorship | This work was supported by ONR Grant #N00014-17-1-2060 and NSF Grant #ECCS-1638214 and by funding from King Abdullah University of Science and Technology (KAUST). | en
dc.publisher | Springer International Publishing | en
dc.relation.url | http://link.springer.com/chapter/10.1007/978-3-319-27335-8_9-1 | en
dc.subject | Learning in games | en
dc.subject | Evolutionary games | en
dc.subject | Multiagent systems | en
dc.subject | Distributed decision systems | en
dc.title | Game-Theoretic Learning in Distributed Control | en
dc.type | Book Chapter | en
dc.contributor.department | Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division | en
dc.contributor.department | Electrical Engineering Program | en
dc.identifier.journal | Handbook of Dynamic Game Theory | en
dc.contributor.institution | Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA | en
kaust.author | Shamma, Jeff S. | en
All Items in KAUST are protected by copyright, with all rights reserved, unless otherwise indicated.