Reinforcement Learning For Decentralized Planning Under Uncertainty

Document Type

Conference Proceeding

Publication Date

2013

School

Computing Sciences and Computer Engineering

Abstract

Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation and full knowledge of the underlying model. In real-world scenarios, however, model parameters may not be known a priori, or may be difficult to specify. We propose to address these limitations with distributed reinforcement learning (RL). Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
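The following is a minimal sketch, in Python, of the kind of distributed RL the abstract proposes: each agent runs independent Q-learning over only its own local observations, with no centralized computation and no access to the underlying model. The learner class, the hyperparameters, and the toy two-agent coordination task are all illustrative assumptions made for this page, not the algorithm or benchmark from the paper itself.

import random
from collections import defaultdict

class IndependentQLearner:
    """One agent's learner; it sees only its own local observation."""
    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(lambda: [0.0] * n_actions)  # Q[obs][action]
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, obs):
        # Epsilon-greedy action selection from the local Q-table.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        qs = self.q[obs]
        return qs.index(max(qs))

    def update(self, obs, action, reward, next_obs):
        # Standard one-step Q-learning update, driven only by local data.
        best_next = max(self.q[next_obs])
        td_target = reward + self.gamma * best_next
        self.q[obs][action] += self.alpha * (td_target - self.q[obs][action])

# Hypothetical decentralized task: both agents earn reward +1 only when they
# choose the same action, and each observes just its own previous action.
def run_episode(agents, steps=50):
    obs = [0] * len(agents)  # initial local observations
    total = 0.0
    for _ in range(steps):
        actions = [agent.act(o) for agent, o in zip(agents, obs)]
        reward = 1.0 if len(set(actions)) == 1 else 0.0
        next_obs = actions[:]  # each agent sees only its own last action
        for agent, o, a, no in zip(agents, obs, actions, next_obs):
            agent.update(o, a, reward, no)
        obs = next_obs
        total += reward
    return total

if __name__ == "__main__":
    agents = [IndependentQLearner(n_actions=2) for _ in range(2)]
    for episode in range(200):
        ret = run_episode(agents)
    print("final-episode return:", ret)

Because each agent treats its teammates as part of the environment, this independent-learner scheme is simple but can be unstable; its appeal for Dec-POMDPs is that, unlike prevalent centralized solvers, it requires neither the transition model nor the observation model in advance.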

Publication Title

12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013

Volume

2

First Page

1439

Last Page

1440
