Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/161030
Title: An on demand reinforcement learning approach for mobile ad hoc networks (MANETs) routing protocol
Authors: LAI KOK SIONG
Keywords: Simulation, Communication system routing, MANET, Q-Learning
Issue Date: 17-Jan-2005
Citation: LAI KOK SIONG (2005-01-17). An on demand reinforcement learning approach for mobile ad hoc networks (MANETs) routing protocol. ScholarBank@NUS Repository.
Abstract: 

This thesis presents a novel dynamic on-demand routing scheme for MANETs that uses the on-demand routing capability of the Ad hoc On-Demand Distance Vector (AODV) routing protocol as the base policy for a Q-learning routing protocol. Due to node mobility and power limitations, the network topology changes frequently, so a route found by AODV may no longer be optimal or may no longer exist. The proposed On-demand Q-learning Routing Protocol (OQRP) is able to dynamically discover and provide multiple alternative routes to the destination without producing control messages beyond those of AODV. In addition, it uses Q-learning to choose the optimal route to the destination dynamically whenever the topology changes. These two properties are not found in other modified versions of the AODV protocol.
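The abstract describes two mechanisms: seeding a per-destination table of alternative next hops from AODV route discovery, and using Q-learning to pick among them as the topology changes. A minimal sketch of that idea in Python is shown below; the class name, reward values, learning rate, and epsilon-greedy selection are illustrative assumptions, not the thesis's actual OQRP design.

```python
import random

class QRouter:
    """Hypothetical sketch of Q-learning next-hop selection for a MANET node."""

    def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.q = {}             # (destination, neighbor) -> estimated route quality
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability

    def add_route(self, dest, neighbor, initial_q=0.0):
        # Alternative routes learned during AODV route discovery seed the
        # Q-table, so no control messages beyond AODV's are generated.
        self.q.setdefault((dest, neighbor), initial_q)

    def choose_next_hop(self, dest):
        # Epsilon-greedy: usually forward via the best-known neighbor,
        # occasionally explore an alternative route.
        options = [(n, v) for (d, n), v in self.q.items() if d == dest]
        if not options:
            return None
        if random.random() < self.epsilon:
            return random.choice(options)[0]
        return max(options, key=lambda nv: nv[1])[0]

    def update(self, dest, neighbor, reward, next_hop_best_q):
        # Standard one-step Q-learning update:
        # Q(d,n) <- Q(d,n) + alpha * (reward + gamma * max Q' - Q(d,n))
        key = (dest, neighbor)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward + self.gamma * next_hop_best_q - old)
```

When a link breaks or delivery feedback arrives, the node updates the affected Q-value and route selection shifts to the best surviving alternative without a new route request.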

In this thesis, it is shown that the proposed OQRP reduces end-to-end delay, the number of route requests, and overhead, and increases connectivity when compared to the AODV and multiple-route AODV protocols.

URI: https://scholarbank.nus.edu.sg/handle/10635/161030
Appears in Collections: Master's Theses (Open)

Files in This Item:
File: Thesis (Lai Kok Siong).pdf
Size: 350.77 kB
Format: Adobe PDF
Access: Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.