Title: Distributed reinforcement learning frameworks for cooperative retransmission in wireless networks
Authors: Naddafzadeh-Shirazi, G.
Kong, P.-Y. 
Tham, C.-K. 
Keywords: Distributed Markov decision process (MDP) for wireless networks
media access control (MAC) cooperative retransmission
reinforcement learning (RL)
Issue Date: Oct-2010
Citation: Naddafzadeh-Shirazi, G., Kong, P.-Y., Tham, C.-K. (2010-10). Distributed reinforcement learning frameworks for cooperative retransmission in wireless networks. IEEE Transactions on Vehicular Technology 59 (8) : 4157-4162. ScholarBank@NUS Repository.
Abstract: We address the problem of cooperative retransmission in the media access control (MAC) layer of a distributed wireless network with spatial reuse, where there can be multiple concurrent transmissions from the source and relay nodes. We propose a novel Markov decision process (MDP) framework for adjusting the transmission powers and transmission probabilities at the source and relay nodes to achieve the highest network throughput per unit of consumed energy. We also propose distributed methods that avoid solving a centralized MDP model with a large number of states by employing model-free reinforcement learning (RL) algorithms. We show convergence to a local solution and compute a lower bound on the performance of the proposed RL algorithms. We further empirically confirm that the proposed learning schemes are robust to collisions, scale with network size, and provide significant cooperative diversity while enjoying low complexity and fast convergence. © 2010 IEEE.
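To illustrate the kind of model-free learning the abstract refers to, the sketch below shows a single relay node using tabular Q-learning to pick a (transmit power, retransmission probability) action that maximizes a reward proportional to throughput per unit of consumed energy. This is a minimal toy illustration, not the paper's algorithm: the state space, action grid, channel model, and reward shaping are all assumptions made for the example.

```python
import random

# Toy sketch (assumed, not the paper's exact scheme): a relay overhears
# failed packets and learns whether/how to retransmit. Actions are
# (power level, retransmission probability) pairs; reward approximates
# throughput per unit of consumed energy.
POWER_LEVELS = [1.0, 2.0, 4.0]          # transmit power choices (assumed)
RETX_PROBS = [0.25, 0.5, 1.0]           # retransmission probability choices
ACTIONS = [(p, q) for p in POWER_LEVELS for q in RETX_PROBS]
STATES = ["idle", "overheard"]          # has the relay overheard a failed packet?

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration


def simulate_step(state, action, rng):
    """Assumed toy channel: higher power raises delivery odds but costs energy."""
    power, retx_prob = action
    reward = 0.0
    if state == "overheard" and rng.random() < retx_prob:
        success = rng.random() < min(0.9, 0.3 * power)   # delivery probability
        reward = (1.0 / power) if success else -0.1      # throughput per energy
    next_state = "overheard" if rng.random() < 0.5 else "idle"
    return reward, next_state


def train(steps=2000, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "idle"
    for _ in range(steps):
        # epsilon-greedy action selection over the joint power/probability grid
        if rng.random() < EPSILON:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        reward, nxt = simulate_step(state, action, rng)
        # standard Q-learning update (model-free: no channel model is learned)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
    return Q
```

Because each node updates only its own Q-table from local observations, the same loop can run independently at every relay, which is the sense in which such learning avoids solving one large centralized MDP.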
Source Title: IEEE Transactions on Vehicular Technology
ISSN: 0018-9545
DOI: 10.1109/TVT.2010.2059055
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
