Model-Free IRL Using Maximum Likelihood Estimation
Document Type
Conference Proceeding
Publication Date
7-17-2019
School
Computing Sciences and Computer Engineering
Abstract
Inverse reinforcement learning (IRL) investigates the problem of learning an expert's unknown reward function from a limited number of demonstrations recorded from the expert's behavior. To gain traction in this challenging and underconstrained problem, IRL methods predominantly represent the expert's reward function as a linear combination of known features. Most existing IRL algorithms either assume the availability of a transition function or provide a complex and inefficient approach to learning it. In this paper, we present a model-free approach to IRL, which casts IRL in the maximum likelihood framework. We present modifications of model-free Q-learning that replace its maximization step so that the gradient of the Q-function can be computed, and we use gradient ascent on the feature weights to maximize the likelihood of the expert's trajectories. We demonstrate on two problem domains that our approach improves the likelihood compared to previous methods.
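To make the abstract's idea concrete, the following is a minimal, illustrative Python sketch of model-free, MLE-based IRL with a linear reward and a softened Q-learning update. It is not the paper's implementation: the feature array phi, the trajectory format, the log-sum-exp backup, and all hyperparameters are assumptions made for illustration only.

import numpy as np

def soft_q_learning(w, phi, trajectories, n_states, n_actions,
                    gamma=0.9, alpha=0.1, sweeps=50):
    """Tabular Q-learning on sampled expert transitions, with a soft
    (log-sum-exp) backup in place of the hard max so that Q stays
    differentiable in the feature weights w; dQ/dw is tracked alongside Q.
    Rewards are assumed linear in known features: r(s, a) = w . phi[s, a]."""
    k = len(w)
    Q = np.zeros((n_states, n_actions))
    dQ = np.zeros((n_states, n_actions, k))  # gradient of Q w.r.t. w
    for _ in range(sweeps):
        for traj in trajectories:            # traj: list of (s, a, s_next)
            for (s, a, s_next) in traj:
                r = w @ phi[s, a]
                v_next = np.log(np.sum(np.exp(Q[s_next])))   # soft value
                p_next = np.exp(Q[s_next] - v_next)           # softmax weights
                td_target = r + gamma * v_next
                td_grad = phi[s, a] + gamma * (p_next @ dQ[s_next])
                Q[s, a] += alpha * (td_target - Q[s, a])
                dQ[s, a] += alpha * (td_grad - dQ[s, a])
    return Q, dQ

def log_likelihood_and_grad(Q, dQ, trajectories):
    """Log-likelihood of the expert's actions under a Boltzmann policy
    pi(a|s) proportional to exp(Q(s, a)), and its gradient w.r.t. w."""
    ll, grad = 0.0, np.zeros(dQ.shape[-1])
    for traj in trajectories:
        for (s, a, _) in traj:
            logz = np.log(np.sum(np.exp(Q[s])))
            pi = np.exp(Q[s] - logz)
            ll += Q[s, a] - logz
            grad += dQ[s, a] - pi @ dQ[s]
    return ll, grad

def mle_irl(phi, trajectories, n_states, n_actions, lr=0.05, steps=100):
    """Gradient ascent on feature weights to maximize the likelihood
    of the expert's trajectories (illustrative hyperparameters)."""
    w = np.zeros(phi.shape[-1])
    for _ in range(steps):
        Q, dQ = soft_q_learning(w, phi, trajectories, n_states, n_actions)
        ll, grad = log_likelihood_and_grad(Q, dQ, trajectories)
        w += lr * grad
    return w

The log-sum-exp backup used here is one common differentiable surrogate for the hard max in Q-learning; the specific modification of the maximization proposed in the paper may differ.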
Publication Title
Proceedings of the AAAI Conference on Artificial Intelligence
Recommended Citation
Jain, V., Doshi, P., & Banerjee, B. (2019). Model-Free IRL Using Maximum Likelihood Estimation. Proceedings of the AAAI Conference on Artificial Intelligence.
Available at: https://aquila.usm.edu/fac_pubs/17150