Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/236761
Title: LEARNING GENERALIZABLE REPRESENTATIONS IN REINFORCEMENT LEARNING
Authors: WANG, KAIXIN
Keywords: reinforcement learning, representation learning, generalization
Issue Date: 14-Jul-2022
Citation: WANG, KAIXIN (2022-07-14). LEARNING GENERALIZABLE REPRESENTATIONS IN REINFORCEMENT LEARNING. ScholarBank@NUS Repository.
Abstract: Learning a suitable representation is an important step towards building AI in reinforcement learning. In many scenarios, we would like the learned representation to summarize the information shared across similar tasks and have good generalization ability. We focus on two aspects of good generalization ability: fast adaptation and zero-shot transfer. We first study the task-agnostic Laplacian representations and introduce a new generalized graph drawing objective, which greatly improves the quality of the learned Laplacian representations. Second, we propose a new reachability-aware Laplacian representation, which can more reliably capture the inter-state distance. These improvements help the agent more quickly adapt to new tasks in reward shaping. Third, we introduce a mixture regularization to shape the learned representation. On the Procgen benchmark, the mixture regularization greatly boosts the agent's zero-shot transfer performance. We believe our findings and methods will inspire future work in learning generalizable representations in reinforcement learning.
URI: https://scholarbank.nus.edu.sg/handle/10635/236761
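The first contribution named in the abstract, the generalized graph drawing objective, can be sketched as follows. This is an illustration reconstructed from the standard graph drawing formulation for Laplacian representations, not an excerpt from the thesis; the symbols L (graph Laplacian), f_i (the i-th embedding dimension), and the weights c_i are assumed notation.

```latex
% Standard graph drawing objective (sketch; notation assumed):
% the unweighted objective is minimized by ANY rotation of the d
% smallest eigenvectors of the graph Laplacian L.
\min_{f_1,\dots,f_d}\ \sum_{i=1}^{d} f_i^{\top} L f_i
\qquad \text{s.t. } f_i^{\top} f_j = \delta_{ij}

% Generalized version: strictly decreasing per-dimension weights
% c_1 > c_2 > \dots > c_d > 0 break the rotational symmetry, so the
% minimizer is pinned (up to sign) to the individual eigenvectors.
\min_{f_1,\dots,f_d}\ \sum_{i=1}^{d} c_i\, f_i^{\top} L f_i
\qquad \text{s.t. } f_i^{\top} f_j = \delta_{ij}
```

Under these assumptions, the intuition is that with equal weights any orthonormal basis of the eigenspace attains the minimum, so the learned dimensions are only defined up to rotation; decreasing weights make each dimension individually identifiable, which is plausibly what "improves the quality of the learned Laplacian representations" refers to.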
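For the third contribution, mixture regularization on Procgen, below is a minimal sketch of how such a regularizer is commonly implemented: mixup-style convex combinations of pairs of training observations and their supervision signals. The function name mixreg_batch, the Beta(alpha, alpha) sampling, and the choice of target to mix are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def mixreg_batch(obs, targets, alpha=0.2, rng=np.random):
    """Illustrative sketch of mixture regularization for RL training batches.

    obs:     (B, ...) array of observations from (possibly different) levels
    targets: (B,) array of associated supervision signals (e.g. returns)
    alpha:   Beta-distribution parameter controlling interpolation strength
    """
    batch_size = obs.shape[0]
    # Sample one mixing coefficient per example from Beta(alpha, alpha).
    lam = rng.beta(alpha, alpha, size=batch_size)
    # Random pairing: mix each example with another one from the same batch.
    perm = rng.permutation(batch_size)
    # Broadcast lam over the trailing observation dimensions.
    lam_obs = lam.reshape((batch_size,) + (1,) * (obs.ndim - 1))
    mixed_obs = lam_obs * obs + (1.0 - lam_obs) * obs[perm]
    # Mix the supervision signal with the same coefficients.
    mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
    return mixed_obs, mixed_targets

# Example usage (hypothetical shapes): a batch of 8 image observations
# with scalar return targets.
# obs_batch = np.random.rand(8, 64, 64, 3); ret_batch = np.random.rand(8)
# mixed_obs, mixed_ret = mixreg_batch(obs_batch, ret_batch)
```

Training the agent on the mixed batches instead of the raw ones is what, in this kind of scheme, encourages representations that vary smoothly across levels and thereby supports the zero-shot transfer gains the abstract reports.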
Appears in Collections: Ph.D Theses (Open)
Files in This Item:
| File | Description | Size | Format | Access Settings | Version |
| --- | --- | --- | --- | --- | --- |
| WangKX.pdf | | 12.46 MB | Adobe PDF | OPEN | None |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.