Coordination Confidence Based Human-Multi-Agent Transfer Learning For Collaborative Teams
Computing Sciences and Computer Engineering
Among the array of techniques proposed to speed up reinforcement learning (RL), learning from human demonstration has a proven record of success. A related technique, Human Agent Transfer (HAT), and its confidence-based derivatives have been applied successfully to single-agent RL. This paper investigates their application to collaborative multi-agent RL problems. We show that a first-cut extension may leave room for improvement in some domains, and propose a new algorithm called coordination confidence (CC). CC analyzes the difference in perspective between a human demonstrator (global view) and the learning agents (local views), and informs the agents' action choices when that difference is critical and simply following the human demonstration could lead to miscoordination. Experiments in two domains, one where the difference in perspectives is critical and one where it is not, investigate the performance of CC in comparison with relevant baselines.
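The confidence-gated action selection the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual CC algorithm; the confidence measure, function names, and threshold below are all illustrative assumptions. The idea shown is only the gating pattern: an agent follows the human demonstration when its local view is sufficiently consistent with the global view the demonstration was recorded from, and otherwise falls back to its own learned action.

```python
def coordination_confidence(local_obs, global_demo_obs):
    """Toy confidence measure (illustrative, not from the paper):
    fraction of features on which the agent's local view agrees
    with the demonstrator's global view."""
    matches = sum(1 for a, b in zip(local_obs, global_demo_obs) if a == b)
    return matches / len(local_obs)

def choose_action(local_obs, global_demo_obs, demo_action, own_action,
                  threshold=0.8):
    """Follow the human demonstration only when the local view is
    consistent enough with the global view; otherwise fall back to
    the agent's own learned action to avoid miscoordination."""
    if coordination_confidence(local_obs, global_demo_obs) >= threshold:
        return demo_action
    return own_action
```

With full agreement between views the demonstrated action is returned; with substantial disagreement the agent's own action is preferred.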
ALA 2018 - Adaptive Learning Agents - Workshop at the Federated AI Meeting 2018
(2020). Coordination Confidence Based Human-Multi-Agent Transfer Learning For Collaborative Teams. ALA 2018 - Adaptive Learning Agents - Workshop at the Federated AI Meeting 2018.
Available at: https://aquila.usm.edu/fac_pubs/19305