Title: Transferring expectations in model-based reinforcement learning
Authors: Nguyen, T.T.
Silander, T.
Leong, T.-Y.
Issue Date: 2012
Source: Nguyen, T.T., Silander, T., Leong, T.-Y. (2012). Transferring expectations in model-based reinforcement learning. Advances in Neural Information Processing Systems 4: 2555-2563. ScholarBank@NUS Repository.
Abstract: We study how to automatically select and adapt multiple abstractions or representations of the world to support model-based reinforcement learning. We address the challenges of transfer learning in heterogeneous environments with varying tasks. We present an efficient, online framework that, through a sequence of tasks, learns a set of relevant representations to be used in future tasks. Without predefined mapping strategies, we introduce a general approach to support transfer learning across different state spaces. We demonstrate the potential impact of our system through improved jumpstart and faster convergence to a near-optimal policy in two benchmark domains.
Source Title: Advances in Neural Information Processing Systems
ISBN: 9781627480031
ISSN: 10495258
Appears in Collections: Staff Publications

Files in This Item: There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.