Cracking open the black box: What observations can tell us about reinforcement learning agents
KAUST Department: Computer Science Program, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division
Online Publication Date: 2019-08-14
Print Publication Date: 2019
Permanent link to this record: http://hdl.handle.net/10754/658663
Abstract: Machine learning (ML) solutions to challenging networking problems, while promising, are hard to interpret; the uncertainty about how they would behave in untested scenarios has hindered their adoption. Using a case study of an ML-based video rate adaptation model, we show that carefully applying interpretability tools and systematically exploring the model's inputs can identify unwanted or anomalous behaviors of the model, hinting at a potential path toward increasing trust in ML-based solutions.
Sponsors: We thank the anonymous reviewers for their feedback. We are grateful to Nikolaj Bjørner, Bernard Ghanem, Hao Wang and Xiaojin Zhu for their valuable comments and suggestions. We also thank the Pensieve authors, in particular Mohammad Alizadeh and Hongzi Mao, for their help and feedback.
Conference/Event Name: 2019 ACM SIGCOMM Workshop on Network Meets AI and ML (NetAI 2019), part of SIGCOMM 2019