Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/226760
DC Field | Value
---|---
dc.title | GYM_PM: A REINFORCEMENT LEARNING FRAMEWORK FOR PREDICTIVE MAINTENANCE AND SUPPLY CHAIN OPTIMISATION
dc.contributor.author | LIN QIWEI
dc.date.accessioned | 2022-06-08T09:03:38Z
dc.date.available | 2022-06-08T09:03:38Z
dc.date.issued | 2022-04-04
dc.identifier.citation | LIN QIWEI (2022-04-04). GYM_PM: A REINFORCEMENT LEARNING FRAMEWORK FOR PREDICTIVE MAINTENANCE AND SUPPLY CHAIN OPTIMISATION. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/226760
dc.description.abstract | This paper provides an overview of the Python package we have developed for Deep Reinforcement Learning (DRL) in the domains of Predictive Maintenance and Supply Chain Optimisation. The package consists of four environments across two categories: Rolling Stock and Manufacturing. We benchmark algorithms such as PPO and IMPALA against heuristic baselines to evaluate the performance of our agents, and find that on-policy algorithms tend to outperform off-policy algorithms in more complex scenarios, while the opposite holds in simpler environments. Lastly, we propose further enhancements to the design of our environments.
dc.type | Thesis
dc.contributor.department | NUS BUSINESS SCHOOL
dc.contributor.supervisor | JOEL GOH
dc.description.degree | Bachelor's
dc.description.degreeconferred | Bachelor of Business Administration with Honours
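
The abstract describes a Gym-style package of environments benchmarked with on-policy algorithms such as PPO against heuristic baselines. The sketch below illustrates that workflow only in general terms: the abstract does not name the package's environment IDs or the training library, so a standard Gym environment (`CartPole-v1`) stands in for one of the four gym_pm environments and Stable-Baselines3's PPO is used purely as an illustrative on-policy trainer. The classic OpenAI Gym API (pre-0.26) is assumed.

```python
# Hedged usage sketch: a random-action baseline rollout followed by a brief PPO
# benchmark, mirroring the comparison described in the abstract.
# Assumptions: classic Gym API (reset -> obs; step -> obs, reward, done, info),
# Stable-Baselines3 1.x, and a stand-in environment in place of a gym_pm task.
import gym
from stable_baselines3 import PPO

# With the real package one would instead register and load a gym_pm task, e.g.:
#   import gym_pm                       # assumed to register the four environments
#   env = gym.make("<gym_pm env id>")   # a Rolling Stock or Manufacturing scenario
env = gym.make("CartPole-v1")

# Baseline: roll out one episode with uniformly sampled actions.
obs = env.reset()
done, baseline_return = False, 0.0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    baseline_return += reward

# On-policy benchmark: train a PPO agent briefly, then evaluate one episode.
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)

obs = env.reset()
done, ppo_return = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    ppo_return += reward

print(f"Random baseline return: {baseline_return:.1f} | PPO return: {ppo_return:.1f}")
```
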
Appears in Collections: Bachelor's Theses
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
LIN QIWEI_A0183470E_BHD4001.pdf | | 1.25 MB | Adobe PDF | RESTRICTED | None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.