Please use this identifier to cite or link to this item: https://doi.org/10.1109/ADPRL.2007.368202
Title: Coordinated reinforcement learning for decentralized optimal control
Authors: Yagan, D.; Tham, C.-K.
Issue Date: 2007
Source: Yagan, D., Tham, C.-K. (2007). Coordinated reinforcement learning for decentralized optimal control. Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, ADPRL 2007 : 296-302. ScholarBank@NUS Repository. https://doi.org/10.1109/ADPRL.2007.368202
Abstract: We consider a multi-agent system in which overall performance depends on the joint actions or policies of the agents, while each agent observes only a partial view of the global state. This model is known as a Decentralized Partially Observable Markov Decision Process (DEC-POMDP) and applies to real-world settings such as communication networks. Solving a DEC-POMDP exactly is known to be NEXP-complete, and memory requirements grow exponentially even for finite-horizon problems. In this paper, we address these issues with an online, model-free technique that exploits the locality of interaction among agents to approximate the joint optimal policy. Simulation results show the effectiveness and convergence of the proposed algorithm in the context of resource allocation for multi-agent wireless multi-hop networks. © 2007 IEEE.
Source Title: Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, ADPRL 2007
URI: http://scholarbank.nus.edu.sg/handle/10635/69749
ISBN: 1424407060
DOI: 10.1109/ADPRL.2007.368202
Appears in Collections: Staff Publications
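
The approach summarized in the abstract is model-free and exploits the locality of interaction among agents. Since the full text is not deposited here, the following Python sketch is only one plausible reading of that idea, not the authors' algorithm: each agent learns a local Q-component over its own observation and the actions of its graph neighbors, and a few rounds of iterative best response stand in for an exact joint-action coordination step. The line topology, interference-style reward, and best-response coordination are all assumptions made for the sketch.

import random
from collections import defaultdict

class LocalAgent:
    """An agent that learns a local Q-component Q_i(o_i, a_local), where
    a_local is the joint action of the agent and its graph neighbors."""
    def __init__(self, agent_id, neighbors, actions, alpha=0.1, gamma=0.9):
        self.id = agent_id
        self.neighbors = neighbors          # locality of interaction
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)         # key: (obs, local joint action)

    def joint_key(self, obs, joint_action):
        # Only the actions of this agent and its neighbors matter locally.
        local = tuple(joint_action[j] for j in sorted(self.neighbors | {self.id}))
        return (obs, local)

    def best_response(self, obs, joint_action):
        # Choose the action maximizing the local Q given neighbors' choices.
        def value(a):
            trial = dict(joint_action)
            trial[self.id] = a
            return self.q[self.joint_key(obs, trial)]
        return max(self.actions, key=value)

    def update(self, obs, joint_action, reward, next_obs, next_joint_action):
        # Model-free, SARSA-style TD(0) update on the local Q-component.
        key = self.joint_key(obs, joint_action)
        next_key = self.joint_key(next_obs, next_joint_action)
        target = reward + self.gamma * self.q[next_key]
        self.q[key] += self.alpha * (target - self.q[key])

def coordinate(agents, observations, epsilon=0.1, sweeps=3):
    """Approximate the greedy joint action by a few rounds of iterative
    best response (an illustrative stand-in for exact coordination)."""
    joint = {ag.id: random.choice(ag.actions) for ag in agents}
    for _ in range(sweeps):
        for ag in agents:
            if random.random() < epsilon:
                joint[ag.id] = random.choice(ag.actions)   # explore
            else:
                joint[ag.id] = ag.best_response(observations[ag.id], joint)
    return joint

if __name__ == "__main__":
    # Three agents on a line graph 0 -- 1 -- 2 with binary actions
    # (e.g. two radio channels); observations kept trivially static.
    actions = [0, 1]
    agents = [LocalAgent(0, {1}, actions),
              LocalAgent(1, {0, 2}, actions),
              LocalAgent(2, {1}, actions)]
    obs = {ag.id: 0 for ag in agents}
    joint = coordinate(agents, obs)
    for _ in range(2000):
        # Hypothetical interference reward: an agent is paid for each
        # neighbor that picks a different action than its own.
        rewards = {ag.id: sum(joint[ag.id] != joint[j] for j in ag.neighbors)
                   for ag in agents}
        next_joint = coordinate(agents, obs)
        for ag in agents:
            ag.update(obs[ag.id], joint, rewards[ag.id], obs[ag.id], next_joint)
        joint = next_joint
    print("learned joint action:", coordinate(agents, obs, epsilon=0.0))

Under these toy assumptions the agents settle on an alternating action pattern such as (0, 1, 0), the joint optimum for this interference reward, without any agent ever seeing the full global state.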


Scopus™ citations: 9 (checked on Dec 14, 2017)
Web of Science™ citations: 1 (checked on Nov 20, 2017)
Page view(s): 25 (checked on Dec 10, 2017)
