Date of Award

Spring 2019

Degree Type

Master's Thesis

Degree Name

Master of Science (MS)

School

Computing Sciences and Computer Engineering

Committee Chair

Bikramjit Banerjee

Committee Chair School

Computing Sciences and Computer Engineering

Committee Member 2

Beddhu Murali

Committee Member 2 School

Computing Sciences and Computer Engineering

Committee Member 3

Dia Ali

Committee Member 3 School

Computing Sciences and Computer Engineering

Abstract

Among an array of techniques proposed to speed up reinforcement learning (RL), learning from human demonstration has a proven record of success. A related technique, called Human Agent Transfer (HAT), and its confidence-based derivatives have been successfully applied to single-agent RL. This thesis investigates their application to collaborative multi-agent RL problems. We show that a first-cut extension may leave room for improvement in some domains, and propose a new algorithm called coordination confidence (CC). CC analyzes the difference in perspectives between a human demonstrator (global view) and the learning agents (local view), and informs the agents' action choices when the difference is critical and simply following the human demonstration can lead to miscoordination. We conduct experiments in three domains to investigate the performance of CC in comparison with relevant baselines.
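To make the abstract's idea of a confidence-gated action choice concrete, the sketch below illustrates one way an agent might defer to a human demonstration only when coordination looks safe from its local view. This is purely illustrative: the abstract does not specify CC's internals, so the names (demo_policy, coordination_confidence, q_values) and the threshold value are hypothetical assumptions, not the thesis's actual algorithm.

```python
import random

def choose_action(agent_id, local_obs, demo_policy, q_values, actions,
                  coordination_confidence, threshold=0.5, epsilon=0.1):
    """Pick an action for one agent in a collaborative multi-agent task.

    demo_policy: maps a local observation to the action suggested by the human
        demonstrator (who saw the global state), e.g. a classifier trained on
        demonstration data in the spirit of HAT-style transfer.
    coordination_confidence: a hypothetical estimate of how reliable that
        suggestion is given only this agent's local view; low values flag
        states where blindly following the demonstration could cause
        miscoordination.
    q_values: the agent's own learned action values, used as the fallback.
    """
    suggested = demo_policy(local_obs)
    confidence = coordination_confidence(agent_id, local_obs, suggested)

    if confidence >= threshold:
        # The demonstrator's suggestion looks safe from this agent's perspective.
        return suggested
    if random.random() < epsilon:
        # Otherwise explore occasionally ...
        return random.choice(actions)
    # ... or fall back to the agent's own reinforcement-learned estimates.
    return max(actions, key=lambda a: q_values[(local_obs, a)])
```

Under these assumptions, the demonstration guides learning where the global and local perspectives agree, while the agent's own values take over in states the abstract describes as critical.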

Available for download on Thursday, May 10, 2029
