Please use this identifier to cite or link to this item: https://doi.org/10.1109/ISSNIP.2007.4496881
Title: Achieving coverage through distributed reinforcement learning in wireless sensor networks
Authors: Seah, M.W.M.
Tham, C.-K. 
Srinivasan, V. 
Xin, A.
Issue Date: 2007
Citation: Seah, M.W.M., Tham, C.-K., Srinivasan, V., Xin, A. (2007). Achieving coverage through distributed reinforcement learning in wireless sensor networks. Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP : 425-430. ScholarBank@NUS Repository. https://doi.org/10.1109/ISSNIP.2007.4496881
Abstract: With the extensive deployment of wireless sensor networks in many areas, it is imperative to manage the coverage and energy consumption of such networks well. These networks consist of a large number of sensor nodes, so a multiagent system approach is needed for a more accurate model. Three coordination algorithms are put to the test in this paper: (i) fully distributed Q-learning, which we refer to as independent learner (IL); (ii) Distributed Value Function (DVF); and (iii) an algorithm we developed as a variation of the IL, the Coordinated algorithm (COORD). The results show that the IL and DVF algorithms performed better at higher sensor node densities, while at low sensor node densities the three algorithms have similar performance. ©2007 IEEE.
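The "independent learner" variant mentioned in the abstract is plain tabular Q-learning run separately at each node. A minimal sketch of one such node follows; the state/action names ("covered"/"uncovered", "sleep"/"sense") and the reward shape are illustrative assumptions for a coverage task, not taken from the paper:

```python
import random

ACTIONS = ["sleep", "sense"]

class ILNode:
    """One sensor node running independent tabular Q-learning (IL sketch)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}              # (state, action) -> estimated value
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor
        self.epsilon = epsilon   # exploration probability

    def choose(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy training loop: reward sensing when the area is uncovered
# (a hypothetical reward signal, chosen for illustration only).
node = ILNode(epsilon=0.0)
for _ in range(100):
    node.update("uncovered", "sense", reward=1.0, next_state="covered")
    node.update("covered", "sleep", reward=0.5, next_state="uncovered")
print(node.choose("uncovered"))  # prints "sense"
```

Each node learns from its own local reward with no message exchange, which is what distinguishes IL from DVF, where nodes additionally share value estimates with neighbors.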
Source Title: Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP
URI: http://scholarbank.nus.edu.sg/handle/10635/69146
ISBN: 1424415020
DOI: 10.1109/ISSNIP.2007.4496881
Appears in Collections: Staff Publications



