Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/186856
DC Field: Value
dc.title: ROBOT TRAJECTORY LEARNING FOR DYNAMIC TASKS
dc.contributor.author: LI SHIDI
dc.date.accessioned: 2021-02-28T18:01:01Z
dc.date.available: 2021-02-28T18:01:01Z
dc.date.issued: 2020-08-17
dc.identifier.citation: LI SHIDI (2020-08-17). ROBOT TRAJECTORY LEARNING FOR DYNAMIC TASKS. ScholarBank@NUS Repository.
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/186856
dc.description.abstract: Reinforcement learning can be used for robot trajectory planning. However, existing methods are difficult to apply to physical robots and to dynamic robot tasks. In this thesis, we improve current approaches and propose new methods for applying reinforcement learning algorithms to robot trajectory planning problems. Our methods efficiently plan trajectories for robot tasks with complex dynamics, and they are effective in both simulations and real-world experiments.
dc.language.iso: en
dc.subject: Robotics, trajectory planning, reinforcement learning, manipulation, neural network, artificial intelligence
dc.type: Thesis
dc.contributor.department: MECHANICAL ENGINEERING
dc.contributor.supervisor: Chee Meng Chew
dc.contributor.supervisor: Subramaniam Velusamy
dc.description.degree: Ph.D.
dc.description.degreeconferred: DOCTOR OF PHILOSOPHY (FOE)
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: LiSD.pdf (9.43 MB, Adobe PDF, Access Settings: OPEN)