Title: Achieving coverage through distributed reinforcement learning in wireless sensor networks
Source: Seah, M.W.M., Tham, C.-K., Srinivasan, V., Xin, A. (2007). Achieving coverage through distributed reinforcement learning in wireless sensor networks. Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP: 425-430. ScholarBank@NUS Repository. https://doi.org/10.1109/ISSNIP.2007.4496881
Abstract: With the extensive deployment of wireless sensor networks in many areas, it is imperative to better manage the coverage and energy consumption of such networks. These networks consist of a large number of sensor nodes, so a multiagent system approach is needed for a more accurate model. Three coordination algorithms are evaluated in this paper: (i) fully distributed Q-learning, which we refer to as independent learner (IL); (ii) Distributed Value Function (DVF); and (iii) an algorithm we developed as a variation of IL, the Coordinated algorithm (COORD). The results show that the IL and DVF algorithms performed better at higher sensor node densities, while at low sensor node densities the three algorithms performed similarly. ©2007 IEEE.
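The abstract only names the algorithms, so the following is a minimal sketch of the "independent learner" (IL) idea: each node runs plain tabular Q-learning on its own local observations, with no value sharing between neighbors. The state abstraction (number of awake neighbors), the sleep/active action set, the learning parameters, and the reward model below are all illustrative assumptions, not the paper's actual formulation.

```python
import random

ACTIONS = ["sleep", "active"]          # assumed per-node action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters

def choose_action(q, state):
    """Epsilon-greedy action selection over the node's local Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def q_update(q, state, action, reward, next_state):
    """One-step Q-learning update, performed locally by each node (IL)."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

def run_episode(steps=1000):
    # Toy state abstraction: how many of this node's neighbors are awake (0-3).
    q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
    state = 0
    for _ in range(steps):
        action = choose_action(q, state)
        awake_neighbors = random.randint(0, 3)  # toy environment dynamics
        # Toy coverage/energy trade-off: be active when no neighbor covers the
        # area, sleep (save energy) when neighbors already provide coverage.
        if action == "active":
            reward = 1.0 if awake_neighbors == 0 else -0.2
        else:
            reward = 0.5 if awake_neighbors > 0 else -1.0
        q_update(q, state, action, reward, awake_neighbors)
        state = awake_neighbors
    return q
```

Under this toy reward, each node independently learns to stay active when it is the sole cover and to sleep when neighbors overlap it. DVF would differ by having each node mix neighbors' value estimates into its own update, which is what the paper contrasts against this fully independent baseline.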
Source Title: Proceedings of the 2007 International Conference on Intelligent Sensors, Sensor Networks and Information Processing, ISSNIP
Appears in Collections: Staff Publications
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.