Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.ins.2011.01.001
DC Field: Value
dc.title: Hessian matrix distribution for Bayesian policy gradient reinforcement learning
dc.contributor.author: Vien, N.A.
dc.contributor.author: Yu, H.
dc.contributor.author: Chung, T.
dc.date.accessioned: 2013-07-04T07:28:46Z
dc.date.available: 2013-07-04T07:28:46Z
dc.date.issued: 2011
dc.identifier.citation: Vien, N.A., Yu, H., Chung, T. (2011). Hessian matrix distribution for Bayesian policy gradient reinforcement learning. Information Sciences 181 (9) : 1671-1685. ScholarBank@NUS Repository. https://doi.org/10.1016/j.ins.2011.01.001
dc.identifier.issn: 00200255
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/38871
dc.description.abstract: Bayesian policy gradient algorithms have recently been proposed for modeling the policy gradient of the performance measure in reinforcement learning as a Gaussian process. These methods are known to reduce the variance and the number of samples needed to obtain accurate gradient estimates, compared with conventional Monte-Carlo policy gradient algorithms. In this paper, we propose an improvement over previous Bayesian frameworks for the policy gradient. We use the Hessian matrix distribution as a learning rate schedule to improve the performance of the Bayesian policy gradient algorithm in terms of the variance and the number of samples. As in the computation of the policy gradient distributions, the Bayesian quadrature method is used to estimate the Hessian matrix distributions. We prove that the posterior mean of the Hessian distribution estimate is symmetric, one of the important properties of the Hessian matrix. Moreover, we prove that with an appropriate choice of kernel, the computational complexity of the Hessian distribution estimate is equal to that of the policy gradient distribution estimates. Using simulations, we show encouraging experimental results comparing the proposed algorithm to the Bayesian policy gradient and the Bayesian policy natural gradient algorithms described in Ghavamzadeh and Engel [10]. © 2011 Elsevier Inc. All rights reserved.
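The Bayesian quadrature step mentioned in the abstract can be illustrated with a minimal one-dimensional sketch. This is not the paper's algorithm (which estimates policy gradient and Hessian distributions over trajectories); it only shows the generic technique, assuming a Gaussian (RBF) kernel and a standard-normal sampling distribution, for which the kernel-mean weights have a closed form:

```python
import numpy as np

def bayesian_quadrature(x, y, length_scale=0.5, jitter=1e-6):
    """Posterior mean of the integral of f(x) N(x; 0, 1) dx under a GP prior on f.

    Assumes an RBF kernel k(x, x') = exp(-(x - x')^2 / (2 l^2)); the weights
    z_i = integral of k(x, x_i) N(x; 0, 1) dx are then available in closed form.
    """
    l2 = length_scale ** 2
    # Gram matrix of the kernel at the sample locations.
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * l2))
    # Closed-form kernel-mean weights for RBF kernel + standard-normal measure.
    z = np.sqrt(l2 / (l2 + 1)) * np.exp(-x ** 2 / (2 * (l2 + 1)))
    # Posterior mean of the integral: z^T (K + jitter I)^{-1} y.
    return z @ np.linalg.solve(K + jitter * np.eye(len(x)), y)

# Estimate E[x^2] under N(0, 1) (true value 1) from 30 function evaluations.
x = np.linspace(-4.0, 4.0, 30)
est = bayesian_quadrature(x, x ** 2)
```

In the paper's setting, the integrand would be gradient (or Hessian) terms of the performance measure rather than a scalar test function, but the structure of the posterior-mean estimate is the same.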
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.ins.2011.01.001
dc.source: Scopus
dc.subject: Bayesian policy gradient
dc.subject: Hessian matrix distribution
dc.subject: Markov decision process
dc.subject: Monte-Carlo policy gradient
dc.subject: Policy gradient
dc.subject: Reinforcement learning
dc.type: Article
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1016/j.ins.2011.01.001
dc.description.sourcetitle: Information Sciences
dc.description.volume: 181
dc.description.issue: 9
dc.description.page: 1671-1685
dc.description.coden: ISIJB
dc.identifier.isiut: 000288774700011
Appears in Collections: Staff Publications

Files in This Item: There are no files associated with this item.

Citations: Scopus 23; Web of Science 20 (checked on Aug 18, 2022)

Page view(s): 203 (checked on Aug 18, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.