Title: Scalable model-based reinforcement learning in complex, heterogeneous environments
Keywords: Model-based reinforcement learning, online feature selection, transfer learning, multinomial logistic regression, group Lasso
Issue Date: 25-Mar-2013
Source: NGUYEN THANH TRUNG (2013-03-25). Scalable model-based reinforcement learning in complex, heterogeneous environments. ScholarBank@NUS Repository.
Abstract: A system that can automatically learn and act based on feedback from the world has many important applications. For example, such a system could replace humans in exploring dangerous environments such as Mars or the ocean, allocate resources in an information network, or drive a car home, all without requiring a programmer to manually specify rules for how to do so. At present, the theoretical framework provided by reinforcement learning (RL) appears quite promising for building such a system.

A large number of studies have focused on applying RL to challenging problems. In complex environments, however, considerable domain knowledge is usually required to carefully design a small feature set that controls the problem's complexity; otherwise, solving the RL problem with state-of-the-art techniques is almost always computationally infeasible. An appropriate representation of the world dynamics is essential to efficient problem solving. Compactly represented world-dynamics models should also be transferable between tasks, further improving the usefulness and performance of an autonomous system.

In this dissertation, we first propose a scalable method for learning the world dynamics of feature-rich environments in model-based RL. The main idea is formalized as a new factored state-transition representation that supports efficient online learning of the relevant features. We construct the transition models by predicting how actions change the world, and we introduce an online sparse-coding technique for feature selection in high-dimensional spaces.

Second, we study how to automatically select and adapt multiple abstractions, or representations, of the world to support model-based RL. We address the challenges of transfer learning in heterogeneous environments with varying tasks.
We present an efficient online method that, over a sequence of tasks, learns a set of relevant representations to be used in future tasks. Without pre-defined mapping strategies, we introduce a general approach that supports transfer learning across different state spaces. We demonstrate that our system yields a jumpstart and faster convergence to near-optimal performance.

Finally, we implement these techniques on a mobile robot to demonstrate their practicality. We show that a robot equipped with the proposed learning system is able to learn, accumulate, and transfer knowledge in real environments to solve tasks quickly.
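The keywords pair multinomial logistic regression with a group Lasso for online feature selection in the transition model. As a minimal sketch of that general idea (not the thesis's actual algorithm), the following code learns a multinomial logistic predictor online and applies a group soft-thresholding step that can zero out a feature's entire row of coefficients, selecting relevant features as data streams in. All names, hyperparameters, and the proximal-gradient update are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class GroupLassoMLR:
    """Online multinomial logistic regression with a group-Lasso penalty.

    Each row of W holds one feature's coefficients across all outcome
    classes; the proximal group soft-thresholding step can drive a whole
    row to zero, discarding that feature online. (Illustrative sketch.)"""

    def __init__(self, n_features, n_classes, lr=0.05, lam=0.1):
        self.W = np.zeros((n_features, n_classes))
        self.lr, self.lam = lr, lam

    def update(self, x, y):
        # stochastic gradient step on the log-loss for one sample (x, y)
        p = softmax(x @ self.W)
        p[y] -= 1.0                       # d(log-loss)/d(logits)
        self.W -= self.lr * np.outer(x, p)
        # proximal step: shrink each feature's row norm by lr * lam
        norms = np.linalg.norm(self.W, axis=1, keepdims=True)
        shrink = 1.0 - self.lr * self.lam / np.maximum(norms, 1e-12)
        self.W *= np.maximum(0.0, shrink)

    def selected(self, tol=1e-6):
        # indices of features whose coefficient rows survived shrinkage
        return np.flatnonzero(np.linalg.norm(self.W, axis=1) > tol)
```

Trained on a stream where only one of many features determines the outcome, the irrelevant rows are repeatedly shrunk toward zero while the predictive feature's row accumulates weight, which is the feature-selection effect the abstract relies on in high-dimensional transition models.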
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
NguyenTT.pdf (8.07 MB, Adobe PDF)
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.