Reinforcement Learning for Decentralized Planning Under Uncertainty
Document Type
Conference Proceeding
Publication Date
1-1-2013
School
Computing Sciences and Computer Engineering
Abstract
Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation and full knowledge of the underlying model. In real-world scenarios, however, model parameters may not be known a priori, or may be difficult to specify. We propose to address these limitations with distributed reinforcement learning (RL). Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
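To make the abstract's proposal concrete, the sketch below shows independent tabular Q-learning, a common baseline form of distributed RL in Dec-POMDP-like settings: each agent maintains its own value table and learns from only its local observations and a reward signal, with no access to the underlying transition or observation model. This is an illustrative sketch, not the paper's algorithm; the class name, method names, and hyperparameters are all assumptions introduced here.

```python
# Illustrative sketch only: independent tabular Q-learning as one baseline
# for distributed RL without a known model. Not the method of this paper.
import random
from collections import defaultdict

class IndependentQLearner:
    """One learner per agent; each agent sees only its own local observation."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)  # (observation, action) -> estimated value
        self.actions = actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate

    def act(self, obs):
        # Epsilon-greedy selection over this agent's local Q-values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(obs, a)])

    def update(self, obs, action, reward, next_obs):
        # Standard Q-learning backup, using only locally available information:
        # no transition model, no observation model, no other agents' tables.
        best_next = max(self.q[(next_obs, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])

# Hypothetical usage: one independent learner per agent in a two-agent
# problem with binary actions (the environment itself is not shown).
learners = [IndependentQLearner(actions=[0, 1]) for _ in range(2)]
```

Note that each learner updates from experience alone, which matches the abstract's motivation of avoiding a priori model knowledge; the well-known caveat is that concurrently learning agents make the environment non-stationary from any single agent's perspective, which is a central difficulty distributed RL approaches in this setting must address.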
Publication Title
12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013
Volume
2
First Page
1439
Last Page
1440
Recommended Citation
Kraemer, L. (2013). Reinforcement learning for decentralized planning under uncertainty. 12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013, 2, 1439-1440.
Available at: https://aquila.usm.edu/fac_pubs/20422