Solving Finite Horizon Decentralized POMDPs By Distributed Reinforcement Learning

Document Type

Conference Proceeding

Publication Date

6-1-2012

School

Computing Sciences and Computer Engineering

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful modeling technique for realistic multi-agent coordination problems under uncertainty. Prevalent solution techniques are centralized and assume prior knowledge of the model. We propose a distributed reinforcement learning approach, where agents take turns to learn best responses to each other's policies. This promotes decentralization of the policy computation problem and relaxes reliance on full knowledge of the problem parameters. We derive the relation between the sample complexity of best response learning and error tolerance. Our key contribution is to show that even the "per-leaf" sample complexity could grow exponentially with the problem horizon. We show empirically that even if the sample requirement is set lower than what theory demands, our learning approach can produce (near) optimal policies in some benchmark Dec-POMDP problems. We also propose a slight modification that empirically appears to significantly reduce the learning time with relatively little impact on the quality of learned policies.
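
The abstract describes agents taking turns to learn best responses to each other's fixed policies. The following is a minimal illustrative sketch of that alternating best-response scheme, not the authors' implementation: it assumes a hypothetical environment interface (`env.agents`, `env.actions`, `env.reset`, `env.step`) and uses simple tabular Q-learning over observation histories as the best-response learner.

```python
# Hypothetical sketch of alternating best-response learning for a
# finite-horizon Dec-POMDP. The environment interface and hyperparameters
# are assumptions for illustration, not the paper's actual algorithm details.
import random
from collections import defaultdict


def learn_best_response(env, agent, fixed_policies, horizon,
                        episodes=10_000, alpha=0.1, epsilon=0.1):
    """Tabular Q-learning over the learning agent's observation histories,
    holding all other agents' policies fixed."""
    Q = defaultdict(float)  # key: (observation_history, action)

    for _ in range(episodes):
        obs = env.reset()  # dict: agent -> initial observation
        histories = {i: (obs[i],) for i in env.agents}
        for t in range(horizon):
            # Fixed agents act according to their current policies.
            joint_action = {i: fixed_policies[i](histories[i])
                            for i in env.agents if i != agent}
            # Learning agent acts epsilon-greedily w.r.t. its Q-values.
            h = histories[agent]
            if random.random() < epsilon:
                a = random.choice(env.actions(agent))
            else:
                a = max(env.actions(agent), key=lambda x: Q[(h, x)])
            joint_action[agent] = a

            next_obs, reward, done = env.step(joint_action)
            histories = {i: histories[i] + (joint_action[i], next_obs[i])
                         for i in env.agents}
            # One-step Q-learning update on the history-based MDP induced
            # by fixing the other agents' policies.
            h_next = histories[agent]
            best_next = 0.0 if (done or t == horizon - 1) else \
                max(Q[(h_next, x)] for x in env.actions(agent))
            Q[(h, a)] += alpha * (reward + best_next - Q[(h, a)])
            if done:
                break

    # Extract the greedy policy from the learned Q-values.
    return lambda hist: max(env.actions(agent), key=lambda x: Q[(hist, x)])


def alternating_best_responses(env, horizon, rounds=5):
    """Agents take turns learning best responses to one another."""
    policies = {i: (lambda hist, i=i: random.choice(env.actions(i)))
                for i in env.agents}
    for _ in range(rounds):
        for agent in env.agents:
            policies[agent] = learn_best_response(env, agent, policies, horizon)
    return policies
```

Because each best response is learned only from sampled episodes, the number of episodes needed per reachable observation history (the "per-leaf" sample requirement discussed in the abstract) governs both the accuracy of the learned responses and the overall learning time.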

Publication Title

Seventh Annual Workshop On Multiagent Sequential Decision-Making Under Uncertainty

First Page

9

Last Page

16
