Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/36152
Title: Effective reinforcement learning for collaborative multi-agent domains
Authors: LAU QIANGFENG PETER
Keywords: reinforcement learning, coordination constraints, multi-agent, guiding exploration, relational learning, retinal image analysis
Issue Date: 14-Sep-2012
Citation: LAU QIANGFENG PETER (2012-09-14). Effective reinforcement learning for collaborative multi-agent domains. ScholarBank@NUS Repository.
Abstract: Online reinforcement learning (RL) in collaborative multi-agent domains is difficult in general. The number of available joint actions is exponential in the number of agents, which poses serious problems for online learning because the exploration requirements are massive. Consequently, the learning system has fewer opportunities to exploit. Furthermore, the learning models for multiple agents can quickly become complex as the number of agents increases, and agents may have fluctuating communication links. To improve online exploration, we devise coordination guided RL (CGRL). CGRL employs expert knowledge of coordination, expressed as coordination constraints, to dynamically guide exploration. Next, we present distributed CGRL for domains where communication links among agents fluctuate. Then, we introduce distributed relational temporal difference learning, which greatly reduces the number of learning parameters for distributed RL. These methods improve learning performance in various domains. Last, we investigate an application of multi-agent learning to the real-world domain of automating retinal image analysis.
URI: http://scholarbank.nus.edu.sg/handle/10635/36152
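To make the abstract's central idea concrete, the following Python sketch illustrates, under stated assumptions, how expert-supplied coordination constraints might prune the joint-action space during epsilon-greedy exploration in tabular multi-agent Q-learning. This is not the thesis's actual algorithm; the constraint function, state names, and all other identifiers (e.g. satisfies_constraints, "narrow") are hypothetical and chosen only for illustration.

```python
# Illustrative sketch only: coordination-constrained exploration on joint actions.
import itertools
import random
from collections import defaultdict

N_AGENTS = 2
ACTIONS = [0, 1, 2]                                   # per-agent action set
JOINT_ACTIONS = list(itertools.product(ACTIONS, repeat=N_AGENTS))

def satisfies_constraints(state, joint_action):
    """Hypothetical coordination constraint: in a 'narrow' state, agents must
    choose distinct actions (e.g. to avoid collisions)."""
    if state == "narrow":
        return len(set(joint_action)) == N_AGENTS
    return True

Q = defaultdict(float)                                # Q[(state, joint_action)]

def select_joint_action(state, epsilon=0.1):
    # Explore and exploit only within the constraint-satisfying joint actions.
    allowed = [a for a in JOINT_ACTIONS if satisfies_constraints(state, a)]
    candidates = allowed or JOINT_ACTIONS             # fall back if everything is pruned
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[(state, a)])

def td_update(state, joint_action, reward, next_state, alpha=0.1, gamma=0.95):
    """One-step temporal difference (Q-learning) update on the joint-action value."""
    best_next = max(Q[(next_state, a)] for a in JOINT_ACTIONS)
    target = reward + gamma * best_next
    Q[(state, joint_action)] += alpha * (target - Q[(state, joint_action)])
```

In the thesis, constraints guide exploration dynamically and the later methods are distributed and relational; this centralized, tabular sketch only illustrates the general idea of using expert coordination knowledge to cut down the exponential joint-action space.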
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
| File | Description | Size | Format | Access Settings | Version |
|---|---|---|---|---|---|
| LauQP_phd_thesis.pdf | | 5.21 MB | Adobe PDF | OPEN | None |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.