DC Field: Value
dc.title: Markov decision process frameworks for cooperative retransmission in wireless networks
dc.contributor.author: Shirazi, G.N.
dc.contributor.author: Kong, P.-Y.
dc.contributor.author: Tham, C.-K.
dc.identifier.citation: Shirazi, G.N., Kong, P.-Y., Tham, C.-K. (2009). Markov decision process frameworks for cooperative retransmission in wireless networks. IEEE Wireless Communications and Networking Conference, WCNC. ScholarBank@NUS Repository.
dc.description.abstract: The challenging problem of cooperative retransmission in wireless networks is investigated in this paper. This paper introduces centralized and distributed Markov decision process (MDP) frameworks in the context of cooperative retransmission. Specifically, an MDP model with global channel information is first constructed for the cooperation problem at the MAC layer. It is shown that this global MDP performs optimally, where the objective is to minimize the total number of transmissions required for successful packet delivery to the destination. When global information is unavailable, we show that suitable distributed MDP models can replace the global model with near-optimal performance. Furthermore, reinforcement learning methods are investigated for the case where the MDP model itself is unavailable. Interestingly, simulation results confirm that these learning methods also provide acceptable performance despite their simplicity and low overhead. © 2009 IEEE.
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL & COMPUTER ENGINEERING
dc.description.sourcetitle: IEEE Wireless Communications and Networking Conference, WCNC
Appears in Collections: Staff Publications
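The abstract describes an MDP with global channel information whose objective is to minimize the expected number of transmissions until the destination decodes the packet. As a purely illustrative sketch of that kind of formulation (the topology, link probabilities, and overhearing model below are assumptions for the example, not the paper's actual system), a toy cooperative-retransmission MDP can be solved by value iteration:

```python
import itertools

# Toy cooperative-retransmission MDP (illustrative only; the node set and
# probabilities below are assumptions, not the paper's model).
# A source S and relays R1, R2 transmit toward a destination D.
# p_d[n]      : probability that D decodes a transmission from node n
# p_over[n][m]: probability that relay m overhears node n's transmission
NODES = ["S", "R1", "R2"]
p_d = {"S": 0.3, "R1": 0.6, "R2": 0.5}
p_over = {"S": {"R1": 0.8, "R2": 0.7}}  # only S's packet can be overheard

def expected_cost(iters=500):
    """Value iteration: state = set of nodes holding the packet,
    action = which holder transmits next, cost = 1 per transmission.
    Returns V[state] = minimal expected transmissions until D decodes."""
    states = [frozenset(c) | {"S"}
              for k in range(3)
              for c in itertools.combinations(["R1", "R2"], k)]
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        new_v = {}
        for s in states:
            best = float("inf")
            for a in s:  # choose a current packet holder to transmit
                missing = [m for m in ("R1", "R2") if m not in s]
                cont = 0.0
                # If D fails to decode, some missing relays may overhear
                # and join the set of packet holders.
                for k in range(len(missing) + 1):
                    for gained in itertools.combinations(missing, k):
                        prob = 1.0
                        for m in missing:
                            q = p_over.get(a, {}).get(m, 0.0)
                            prob *= q if m in gained else 1.0 - q
                        cont += prob * V[s | set(gained)]
                best = min(best, 1.0 + (1.0 - p_d[a]) * cont)
            new_v[s] = best
        V = new_v
    return V

V = expected_cost()
# Once every node holds the packet, the optimal policy always picks the
# best relay (R1 here), so the cost approaches 1 / p_d["R1"].
```

Under these assumed numbers the relays' better links to the destination make it worthwhile for the source to let overheard relays take over retransmission, which is the intuition the abstract's optimality claim formalizes.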

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.