Title: Distributed relational temporal difference learning
Authors: Lau, Q.P.
Lee, M.L. 
Hsu, W. 
Keywords: Distributed; Reinforcement learning
Issue Date: 2013
Citation: Lau, Q.P., Lee, M.L., Hsu, W. (2013). Distributed relational temporal difference learning. 12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013, 2: 1077-1084. ScholarBank@NUS Repository.
Abstract: Relational representations have great potential for rapidly generalizing learned knowledge in large Markov decision processes such as multi-agent problems. In this work, we introduce relational temporal difference learning for the distributed case where the communication links among agents are dynamic. Thus no critical components of the system should reside in any one agent. Relational generalization among agents' learning is achieved through the use of partially bound relational features and a message passing scheme. We further describe how the proposed concepts can be applied to distributed reinforcement learning methods that use value functions. Experiments were conducted on soccer and real-time strategy game domains with dynamic communication. Results show that our methods improve goal achievement in online learning with a greatly decreased number of parameters to learn when compared with existing distributed learning methods. Copyright © 2013, International Foundation for Autonomous Agents and Multiagent Systems. All rights reserved.
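The method summarized in the abstract builds on temporal difference learning with a value function represented as a weighted sum of (relational) features. As background only, a minimal linear TD(0) update can be sketched as follows; the function name, parameters, and feature encoding here are illustrative assumptions and are not taken from the paper, which extends this idea with partially bound relational features and message passing among agents:

```python
def td0_update(weights, phi_s, phi_s_next, reward, alpha=0.1, gamma=0.9):
    """One linear TD(0) step for a value function V(s) = w . phi(s).

    weights    -- current feature weights
    phi_s      -- feature vector of the current state
    phi_s_next -- feature vector of the successor state
    reward     -- immediate reward observed on the transition
    alpha      -- learning rate; gamma -- discount factor
    """
    v_s = sum(w * f for w, f in zip(weights, phi_s))
    v_next = sum(w * f for w, f in zip(weights, phi_s_next))
    delta = reward + gamma * v_next - v_s  # TD error
    # Gradient step: move each weight along its feature activation.
    return [w + alpha * delta * f for w, f in zip(weights, phi_s)]
```

In the relational setting described by the abstract, the same feature weights would be shared across agents and state objects, which is what reduces the number of parameters to learn relative to per-agent tabular methods.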
Source Title: 12th International Conference on Autonomous Agents and Multiagent Systems 2013, AAMAS 2013
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
