Title: Coordinated reinforcement learning for decentralized optimal control
Citation: Yagan, D., Tham, C.-K. (2007). Coordinated reinforcement learning for decentralized optimal control. Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, ADPRL 2007: 296-302. ScholarBank@NUS Repository. https://doi.org/10.1109/ADPRL.2007.368202
Abstract: We consider a multi-agent system in which overall performance depends on the joint actions or policies of the agents, yet each agent observes only a partial view of the global state. This model is known as a Decentralized Partially Observable Markov Decision Process (DEC-POMDP), which applies to real-world settings such as communication networks. Solving a DEC-POMDP exactly is NEXP-complete, and memory requirements grow exponentially even for finite-horizon problems. In this paper, we address these issues with an online, model-free technique that exploits the locality of interaction among agents to approximate the joint optimal policy. Simulation results show the effectiveness and convergence of the proposed algorithm in the context of resource allocation for multi-agent wireless multi-hop networks. © 2007 IEEE.
Source Title: Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, ADPRL 2007
Appears in Collections: Staff Publications
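The abstract's core idea, approximating the global value with local per-agent terms and coordinating on joint actions, can be illustrated with a minimal sketch. The toy problem, hyperparameters, and variable names below are illustrative assumptions, not taken from the paper: two agents share a single state and earn reward 1 only when their actions match; each agent keeps a local Q-term over the joint action, and brute-force maximisation over the 4 joint actions stands in for variable elimination on a coordination graph.

```python
import random
import itertools

random.seed(0)

# Hypothetical toy setup (not from the paper): two agents, one shared state,
# actions {0, 1}. Reward 1 when the actions match, 0 otherwise.
ACTIONS = [0, 1]
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

# Each agent keeps a local Q-term over the joint action of itself and its
# neighbour; the global value is approximated as the sum of local terms.
q = {i: {a: 0.0 for a in itertools.product(ACTIONS, ACTIONS)} for i in (0, 1)}

def joint_value(joint):
    # Factored approximation of the global Q-value.
    return sum(q[i][joint] for i in (0, 1))

def select_joint():
    # Epsilon-greedy over joint actions; brute force is feasible here
    # because there are only 4 joint actions.
    if random.random() < EPS:
        return (random.choice(ACTIONS), random.choice(ACTIONS))
    return max(itertools.product(ACTIONS, ACTIONS), key=joint_value)

for _ in range(2000):
    joint = select_joint()
    reward = 1.0 if joint[0] == joint[1] else 0.0
    best_next = max(joint_value(j) for j in itertools.product(ACTIONS, ACTIONS))
    for i in (0, 1):
        # Each agent updates its local term toward its share of the TD target
        # (online, model-free: no transition or reward model is maintained).
        target = reward / 2 + GAMMA * best_next / 2
        q[i][joint] += ALPHA * (target - q[i][joint])

greedy = max(itertools.product(ACTIONS, ACTIONS), key=joint_value)
print(greedy)
```

After training, the greedy joint action has both agents choosing the same action, i.e. the factored learners coordinate without any agent seeing the full picture; in larger coordination graphs the brute-force maximisation would be replaced by message passing or variable elimination.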
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.