Adaptive Multi-Robot Team Reconfiguration Using a Policy-Reuse Reinforcement Learning Approach

Document Type

Conference Proceeding

Publication Date

1-1-2011

School

Computing Sciences and Computer Engineering

Abstract

We consider the problem of dynamically adjusting the formation and size of robot teams performing distributed area coverage when they encounter obstacles or occlusions along their path. Based on our earlier formulation of the robotic team formation problem as a coalitional game called a weighted voting game (WVG), we show that the robot team size can be dynamically adapted by adjusting the WVG's quota parameter. We use a Q-learning algorithm to learn the value of the quota parameter and a policy reuse mechanism to adapt the learning process to changes in the underlying environment. Experimental results using simulated e-puck robots within the Webots simulator show that our Q-learning algorithm converges within a finite number of steps in different types of environments. The learning algorithm also improves the performance of an area coverage application, in which multiple robot teams move in formation to explore an initially unknown environment, by 5-10%.
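The abstract describes the approach only at a high level. As a rough illustration, the Python sketch below shows one way a tabular Q-learner could adjust a discretised WVG quota parameter, with a simple pi-reuse rule that occasionally follows a previously learned policy after the environment changes. The quota levels, action set, reward function, and reuse probability psi are illustrative assumptions, not the authors' implementation.

    import random
    from collections import defaultdict

    # Hypothetical sketch: tabular Q-learning over a discretised WVG quota,
    # plus a pi-reuse rule that biases exploration toward an old policy.

    QUOTA_LEVELS = [0.3, 0.4, 0.5, 0.6, 0.7]   # assumed quota discretisation
    ACTIONS = [-1, 0, +1]                       # lower / keep / raise the quota index

    class QuotaQLearner:
        def __init__(self, alpha=0.2, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)         # (state, action) -> value
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def greedy(self, state):
            return max(ACTIONS, key=lambda a: self.q[(state, a)])

        def act(self, state, reuse_policy=None, psi=0.0):
            # pi-reuse: with probability psi follow the previously learned policy,
            # otherwise act epsilon-greedily on the current Q-table.
            if reuse_policy is not None and random.random() < psi:
                return reuse_policy(state)
            if random.random() < self.epsilon:
                return random.choice(ACTIONS)
            return self.greedy(state)

        def update(self, s, a, r, s_next):
            best_next = max(self.q[(s_next, b)] for b in ACTIONS)
            self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    def run_episode(learner, env_step, start_idx=2, steps=50,
                    reuse_policy=None, psi=0.0):
        # env_step(quota) -> (reward, done); a stand-in for the coverage simulator.
        idx = start_idx
        for _ in range(steps):
            a = learner.act(idx, reuse_policy, psi)
            next_idx = min(max(idx + a, 0), len(QUOTA_LEVELS) - 1)
            reward, done = env_step(QUOTA_LEVELS[next_idx])
            learner.update(idx, a, reward, next_idx)
            idx = next_idx
            psi *= 0.95                          # decay reliance on the reused policy
            if done:
                break
        return idx

    if __name__ == "__main__":
        # Toy stand-in for the simulator: reward peaks at a quota of 0.5.
        def toy_env(quota):
            return 1.0 - abs(quota - 0.5), False

        learner = QuotaQLearner()
        for _ in range(200):
            run_episode(learner, toy_env)
        learner.epsilon = 0.0                    # evaluate greedily after training
        print("converged quota:", QUOTA_LEVELS[run_episode(learner, toy_env)])

When the environment changes (for example, new obstacles appear), the same loop would be restarted with reuse_policy set to the greedy policy of the previous Q-table and a nonzero psi, so earlier experience seeds exploration rather than being discarded.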

Publication Title

International Conference on Autonomous Agents and Multiagent Systems

First Page

330

Last Page

345
