Solving Finite Horizon Decentralized POMDPs By Distributed Reinforcement Learning
Computing Sciences and Computer Engineering
Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. We propose a distributed reinforcement learning approach, where agents take turns learning best responses to each other's policies. This promotes decentralization of the policy computation problem and relaxes reliance on full knowledge of the problem parameters. We derive the relation between the sample complexity of best response learning and error tolerance. Our key contribution is to show that even the "per-leaf" sample complexity could grow exponentially with the problem horizon. We show empirically that even if the sample requirement is set lower than what theory demands, our learning approach can produce (near) optimal policies in some benchmark Dec-POMDP problems. We also propose a slight modification that empirically appears to significantly reduce the learning time with relatively little impact on the quality of learned policies.
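The alternating best-response idea described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: a tiny two-agent problem where each agent picks one of three fixed policies, `PAYOFF` stands in for the (unknown) joint Dec-POMDP return, and `sampled_return` mimics learning from noisy rollouts rather than from the model; it is not the paper's algorithm, only the turn-taking structure it describes.

```python
import random

# Hypothetical toy payoff for the joint policies of two agents
# (stand-in for the true Dec-POMDP value, which agents cannot read directly).
PAYOFF = [[3, 0, 1],
          [0, 2, 0],
          [1, 0, 4]]

def sampled_return(p0, p1, n=200, noise=0.1):
    """Monte Carlo estimate of the joint return from n noisy rollouts,
    mirroring the sample-complexity setting: more samples, less error."""
    base = PAYOFF[p0][p1]
    return sum(base + random.gauss(0, noise) for _ in range(n)) / n

def best_response(fixed, agent, n=200):
    """The given agent learns a best response to the other's fixed policy
    purely from sampled returns."""
    if agent == 0:
        return max(range(3), key=lambda a: sampled_return(a, fixed, n))
    return max(range(3), key=lambda a: sampled_return(fixed, a, n))

def alternate(start=(0, 0), rounds=10):
    """Agents take turns computing best responses until policies stop changing."""
    p0, p1 = start
    for _ in range(rounds):
        new_p0 = best_response(p1, agent=0)
        new_p1 = best_response(new_p0, agent=1)
        if (new_p0, new_p1) == (p0, p1):
            break
        p0, p1 = new_p0, new_p1
    return p0, p1
```

Note that alternating best responses only guarantees a local optimum: from the starting point `(0, 0)` the sketch settles on the equilibrium with value 3 and never discovers the better joint policy `(2, 2)` with value 4, which is exactly why initialization and sample budgets matter in this style of learning.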
Seventh Annual Workshop On Multiagent Sequential Decision-Making Under Uncertainty
(2012). Solving Finite Horizon Decentralized POMDPs By Distributed Reinforcement Learning. Seventh Annual Workshop On Multiagent Sequential Decision-Making Under Uncertainty, 9-16.
Available at: https://aquila.usm.edu/fac_pubs/20620