Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/245665
Title: DEEP REINFORCEMENT LEARNING IN MULTI-AGENT PATH PLANNING
Authors: YANG TIANZE
ORCID iD: orcid.org/0009-0009-7572-2482
Keywords: Reinforcement Learning; Multi-Agent; Informative Path Planning; Path Finding; Intent-based; Attention Mechanism
Issue Date: 26-Jun-2023
Citation: YANG TIANZE (2023-06-26). DEEP REINFORCEMENT LEARNING IN MULTI-AGENT PATH PLANNING. ScholarBank@NUS Repository.
Abstract: In this thesis, we introduce the basic concepts of decentralized RL and discuss how they can be applied to MAPP. Our proposed framework consists of only a decentralized decision-making module, in which each agent learns its own policy without any centralized coordination. The key advantage of decentralized RL is its ability to handle an increasing number of agents without imposing an excessive computational burden; this scalability keeps the system's performance stable as the number of agents grows. We also assume that communication is available within the team, so agents can share relevant information in MAPP.
URI: https://scholarbank.nus.edu.sg/handle/10635/245665
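The abstract describes decentralized decision-making with intent sharing over team communication. The toy sketch below (not the thesis code; all names such as `GridAgent`, `intent`, and `step` are hypothetical, and the learned policy is replaced by a greedy heuristic) illustrates the control flow: each agent acts independently on its local observation plus the broadcast intents of its teammates.

```python
import random

class GridAgent:
    """A decentralized agent with its own (here, greedy) policy."""
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # moves + wait

    def __init__(self, agent_id, start, goal, seed=0):
        self.agent_id = agent_id
        self.pos = start
        self.goal = goal
        self.rng = random.Random(seed + agent_id)

    def _dist_after(self, action):
        # Manhattan distance to the goal after taking `action`.
        nx, ny = self.pos[0] + action[0], self.pos[1] + action[1]
        return abs(nx - self.goal[0]) + abs(ny - self.goal[1])

    def intent(self):
        """Message shared with teammates: the cell this agent wants next."""
        best = min(self.ACTIONS, key=self._dist_after)
        return (self.pos[0] + best[0], self.pos[1] + best[1])

    def act(self, teammate_intents):
        """Pick the best action whose target cell no teammate has claimed."""
        for a in sorted(self.ACTIONS, key=self._dist_after):
            target = (self.pos[0] + a[0], self.pos[1] + a[1])
            if target not in teammate_intents:
                return a
        return (0, 0)  # every candidate cell is claimed: wait

def step(agents):
    """One synchronous step: broadcast intents, then act independently."""
    intents = {ag.agent_id: ag.intent() for ag in agents}
    for ag in agents:
        others = {v for k, v in intents.items() if k != ag.agent_id}
        dx, dy = ag.act(others)
        ag.pos = (ag.pos[0] + dx, ag.pos[1] + dy)

agents = [GridAgent(0, (0, 0), (2, 0)), GridAgent(1, (1, 0), (1, 2))]
for _ in range(6):
    step(agents)
print([ag.pos == ag.goal for ag in agents])  # → [True, True]
```

Because intents are exchanged peer-to-peer and each `act` call uses only local information, adding more agents does not require any central planner, which is the scalability property the abstract highlights.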
Appears in Collections: Master's Theses (Open)
Files in This Item:
| File | Description | Size | Format | Access Settings | Version |
|---|---|---|---|---|---|
| YangYT.pdf.pdf | | 3.66 MB | Adobe PDF | OPEN | None |