Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/186856
DC Field | Value
---|---
dc.title | ROBOT TRAJECTORY LEARNING FOR DYNAMIC TASKS
dc.contributor.author | LI SHIDI
dc.date.accessioned | 2021-02-28T18:01:01Z
dc.date.available | 2021-02-28T18:01:01Z
dc.date.issued | 2020-08-17
dc.identifier.citation | LI SHIDI (2020-08-17). ROBOT TRAJECTORY LEARNING FOR DYNAMIC TASKS. ScholarBank@NUS Repository.
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/186856
dc.description.abstract | Reinforcement learning can be used for robot trajectory planning. However, existing methods are difficult to apply to physical robots and to dynamic robot tasks. In this thesis, we optimize current approaches and propose new methods for applying reinforcement learning algorithms to robot trajectory planning problems. Our methods plan trajectories efficiently for robot tasks with complex dynamics and are effective in both simulations and real-world experiments.
dc.language.iso | en
dc.subject | Robotics, trajectory planning, reinforcement learning, manipulation, neural network, artificial intelligence
dc.type | Thesis
dc.contributor.department | MECHANICAL ENGINEERING
dc.contributor.supervisor | Chee Meng Chew
dc.contributor.supervisor | Subramaniam Velusamy
dc.description.degree | Ph.D
dc.description.degreeconferred | DOCTOR OF PHILOSOPHY (FOE)
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
LiSD.pdf | | 9.43 MB | Adobe PDF | OPEN | None