Coordinated Reinforcement Learning (2002)

Carlos Guestrin, Michail Lagoudakis, and Ronald Parr

Abstract -- We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appear at the core of both the learning algorithm and the execution architecture. Our experimental results, comparing our approach to other RL methods, illustrate both the quality of the policies obtained and the additional benefits of coordination.
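The key computational idea behind coordinated action selection is that when the joint value function decomposes into local terms, each coupling only a few agents, the jointly optimal action can be found by eliminating agents one at a time rather than enumerating the exponential joint action space. The sketch below illustrates this on a hypothetical three-agent chain with made-up payoff tables (the names `Q1`, `Q2`, and the numbers are illustrative assumptions, not from the paper):

```python
# Sketch of variable elimination for coordinated action selection on a
# chain-structured coordination graph.  Payoff tables are illustrative
# numbers, not taken from the paper.
import itertools

ACTIONS = [0, 1]  # each agent chooses a binary action

# Local Q-functions: Q1 couples agents 1 and 2, Q2 couples agents 2 and 3.
Q1 = {(a1, a2): float(a1 == a2) for a1 in ACTIONS for a2 in ACTIONS}
Q2 = {(a2, a3): 2.0 * float(a2 != a3) for a2 in ACTIONS for a3 in ACTIONS}

def coordinated_argmax():
    """Maximize Q1(a1,a2) + Q2(a2,a3) by eliminating one agent at a time."""
    # Eliminate agent 3: for each a2, record its best response and value.
    best3 = {a2: max(ACTIONS, key=lambda a3: Q2[(a2, a3)]) for a2 in ACTIONS}
    e3 = {a2: Q2[(a2, best3[a2])] for a2 in ACTIONS}
    # Eliminate agent 2 against the induced payoff Q1 + e3.
    best2 = {a1: max(ACTIONS, key=lambda a2: Q1[(a1, a2)] + e3[a2])
             for a1 in ACTIONS}
    e2 = {a1: Q1[(a1, best2[a1])] + e3[best2[a1]] for a1 in ACTIONS}
    # Agent 1 maximizes the remaining one-variable function; then backtrack
    # through the recorded best responses to recover the joint action.
    a1 = max(ACTIONS, key=lambda a: e2[a])
    a2 = best2[a1]
    a3 = best3[a2]
    return (a1, a2, a3), e2[a1]

joint, value = coordinated_argmax()

# Brute-force check over all 2**3 joint actions agrees with elimination.
brute = max(itertools.product(ACTIONS, repeat=3),
            key=lambda a: Q1[(a[0], a[1])] + Q2[(a[1], a[2])])
```

Elimination touches only the local tables (here, two 2x2 tables plus small induced functions), so the cost scales with the size of the largest induced function rather than with the number of joint actions, which is the savings the abstract refers to.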

download information

Carlos Guestrin, Michail Lagoudakis, and Ronald Parr (2002). "Coordinated Reinforcement Learning." The Nineteenth International Conference on Machine Learning (ICML-2002) (pp. 227-234).

bibtex citation

@inproceedings{Guestrin+al:2002b,
   author = {Carlos Guestrin and Michail Lagoudakis and Ronald Parr},
   title = {Coordinated Reinforcement Learning},
   year = {2002},
   month = {July},
   booktitle = {The Nineteenth International Conference on Machine Learning (ICML-2002)},
   address = {Sydney, Australia},
   pages = {227--234},
}