Please use this identifier to cite or link to this item:
https://scholarbank.nus.edu.sg/handle/10635/122593
Title: EFFICIENT HIERARCHICAL REINFORCEMENT LEARNING THROUGH CORE TASK ABSTRACTION AND CONTEXT REASONING
Authors: LI ZHUORU
Keywords: hierarchical reinforcement learning, Markov decision process, sequential decision making, options, MAXQ
Issue Date: 25-Sep-2015
Citation: LI ZHUORU (2015-09-25). EFFICIENT HIERARCHICAL REINFORCEMENT LEARNING THROUGH CORE TASK ABSTRACTION AND CONTEXT REASONING. ScholarBank@NUS Repository.
Abstract: In hierarchical reinforcement learning (HRL), an autonomous agent adopts a divide-and-conquer approach to solve large, complex problems by recursively decomposing the root problem into smaller tasks and solving them systematically. We propose Context Sensitive Reinforcement Learning (CSRL), a new model-based approach to HRL that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. CSRL has the following advantages over existing HRL methods. First, CSRL does not require the full set of tasks and primitive actions to be specified. Second, CSRL facilitates efficient experience sharing between similar subtasks or tasks with overlapping features. Third, CSRL can handle problems where multiple tasks are active at the same time. We test the framework on common benchmark problems and complex simulated robotic environments. It compares favorably against state-of-the-art algorithms and scales well in very large p …
URI: http://scholarbank.nus.edu.sg/handle/10635/122593
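For readers unfamiliar with the hierarchical decomposition the abstract and keywords refer to (options, MAXQ), the sketch below illustrates a generic MAXQ-style task hierarchy and value decomposition in Python. It is a hypothetical toy example for context only, not the CSRL algorithm developed in the thesis; the class, function, and task names (Task, value, Navigate, Pickup) and the reward table are all illustrative assumptions.

```python
# Minimal sketch of MAXQ-style hierarchical task decomposition.
# Hypothetical toy example; not the CSRL algorithm from the thesis.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

State = Tuple[int, ...]  # abstract state, e.g. a grid position


@dataclass
class Task:
    """A node in the task hierarchy: a primitive action (no children) or a
    composite task that recursively invokes its children until it terminates."""
    name: str
    children: List["Task"] = field(default_factory=list)
    # Termination test; in a full learner this ends the subtask's episode.
    is_terminal: Callable[[State], bool] = lambda s: True
    # Completion values C(i, s, a), learned in MAXQ; zero-initialised here.
    completion: Dict[Tuple[State, str], float] = field(default_factory=dict)

    @property
    def primitive(self) -> bool:
        return not self.children


def value(task: Task, s: State,
          primitive_reward: Dict[Tuple[str, State], float]) -> float:
    """MAXQ value decomposition:
       V(i, s) = r(i, s)                         if i is primitive,
       V(i, s) = max_a [ V(a, s) + C(i, s, a) ]  if i is composite."""
    if task.primitive:
        return primitive_reward.get((task.name, s), 0.0)
    return max(
        value(child, s, primitive_reward)
        + task.completion.get((s, child.name), 0.0)
        for child in task.children
    )


# Toy hierarchy: Root -> {Navigate, Pickup}; Navigate -> {North, South, East, West}.
north, south, east, west = (Task(n) for n in ("North", "South", "East", "West"))
pickup = Task("Pickup")
navigate = Task("Navigate", children=[north, south, east, west],
                is_terminal=lambda s: s == (0, 0))
root = Task("Root", children=[navigate, pickup])

rewards = {("North", (1, 0)): -1.0, ("Pickup", (0, 0)): 10.0}
print(value(root, (0, 0), rewards))  # 10.0: Pickup's reward dominates at (0, 0)
```

The point of the decomposition is that each task's value is assembled recursively from its children's values plus learned completion terms, so subtask solutions can be reused wherever the same subtask appears in the hierarchy.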
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
LiZR.pdf | | 13.76 MB | Adobe PDF | OPEN | None