Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/245665
DC Field: Value
dc.title: DEEP REINFORCEMENT LEARNING IN MULTI-AGENT PATH PLANNING
dc.contributor.author: YANG TIANZE
dc.date.accessioned: 2023-10-31T18:00:46Z
dc.date.available: 2023-10-31T18:00:46Z
dc.date.issued: 2023-06-26
dc.identifier.citation: YANG TIANZE (2023-06-26). DEEP REINFORCEMENT LEARNING IN MULTI-AGENT PATH PLANNING. ScholarBank@NUS Repository.
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/245665
dc.description.abstract: In this thesis, we introduce the basic concepts of decentralized reinforcement learning (RL) and discuss how they can be applied to multi-agent path planning (MAPP). Our proposed framework consists solely of a decentralized decision-making module, in which each agent learns its own policy without any centralized coordination. The key advantage of decentralized RL is its ability to handle an increasing number of agents without imposing an excessive computational burden; this scalability keeps the system's performance largely unaffected as the number of agents grows. We also assume that communication is available within the team to share relevant information during MAPP.
dc.language.iso: en
dc.subject: Reinforcement Learning; Multi-Agent; Informative Path Planning; Path Finding; Intent-based; Attention Mechanism
dc.type: Thesis
dc.contributor.department: MECHANICAL ENGINEERING
dc.contributor.supervisor: Adrien Sartoretti Guillaume
dc.contributor.supervisor: Chee Meng Chew
dc.description.degree: Master's
dc.description.degreeconferred: MASTER OF ENGINEERING (CDE)
dc.identifier.orcid: 0009-0009-7572-2482
Appears in Collections: Master's Theses (Open)

Files in This Item:
File: YangYT.pdf.pdf, 3.66 MB, Adobe PDF, Access: Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.