DC Field: Value
dc.title: Monte Carlo Bayesian reinforcement learning
dc.contributor.author: Wang, Y.
dc.contributor.author: Won, K.S.
dc.contributor.author: Hsu, D.
dc.contributor.author: Lee, W.S.
dc.identifier.citation: Wang, Y., Won, K.S., Hsu, D., Lee, W.S. (2012). Monte Carlo Bayesian reinforcement learning. Proceedings of the 29th International Conference on Machine Learning, ICML 2012, 2: 1135-1142. ScholarBank@NUS Repository.
dc.description.abstract: Bayesian reinforcement learning (BRL) encodes prior knowledge of the world in a model and represents uncertainty in model parameters by maintaining a probability distribution over them. This paper presents Monte Carlo BRL (MC-BRL), a simple and general approach to BRL. MC-BRL samples a priori a finite set of hypotheses for the model parameter values and forms a discrete partially observable Markov decision process (POMDP) whose state space is the cross product of the state space of the reinforcement learning task and the sampled model parameter space. The POMDP does not require conjugate distributions for belief representation, as earlier works do, and can be solved relatively easily with point-based approximation algorithms. MC-BRL naturally handles both fully and partially observable worlds. Theoretical and experimental results show that the discrete POMDP approximates the underlying BRL task well with guaranteed performance. Copyright 2012 by the author(s)/owner(s).
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.contributor.department: TEMASEK LABORATORIES
dc.description.sourcetitle: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Appears in Collections: Staff Publications
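The abstract describes MC-BRL's core construction: sample a finite set of model-parameter hypotheses from the prior, form a discrete POMDP whose states are (task state, hypothesis) pairs, and maintain a belief over the sampled hypotheses. The following is a minimal Python sketch of those steps under toy assumptions; all function names and the single-parameter example are hypothetical, not the authors' code.

```python
import random

def sample_hypotheses(prior_sampler, k, seed=0):
    """Monte Carlo step: draw k parameter hypotheses a priori from the prior."""
    rng = random.Random(seed)
    return [prior_sampler(rng) for _ in range(k)]

def product_states(task_states, num_hypotheses):
    """Discrete POMDP state space: cross product of task states
    and indices of the sampled parameter hypotheses."""
    return [(s, i) for s in task_states for i in range(num_hypotheses)]

def belief_update(belief, likelihoods):
    """Bayes update of the belief over hypotheses,
    given the likelihood of the latest observation under each hypothesis."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Toy example: the unknown parameter is an action's success probability,
# with a uniform prior on [0, 1].
hyps = sample_hypotheses(lambda rng: rng.random(), 5)
states = product_states(["s0", "s1"], len(hyps))
belief = [1.0 / len(hyps)] * len(hyps)
# Observe one success; hypotheses assigning it higher probability gain weight.
belief = belief_update(belief, hyps)
```

In the paper the resulting discrete POMDP is then solved offline with a point-based approximation algorithm; the sketch above covers only the sampling and belief-tracking structure, not the solver.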
