Title: DESPOT: Online POMDP planning with regularization
Authors: Somani, A.; Ye, N.; Hsu, D.; Lee, W.S.
Issue Date: 2013
Citation: Somani, A., Ye, N., Hsu, D., Lee, W.S. (2013). DESPOT: Online POMDP planning with regularization. Advances in Neural Information Processing Systems. ScholarBank@NUS Repository.
Abstract: POMDPs provide a principled framework for planning under uncertainty, but are computationally intractable due to the "curse of dimensionality" and the "curse of history". This paper presents an online POMDP algorithm that alleviates these difficulties by focusing the search on a set of randomly sampled scenarios. A Determinized Sparse Partially Observable Tree (DESPOT) compactly captures the execution of all policies on these scenarios. Our Regularized DESPOT (R-DESPOT) algorithm searches the DESPOT for a policy, while optimally balancing the size of the policy and its estimated value under the sampled scenarios. We give an output-sensitive performance bound for all policies derived from a DESPOT, and show that R-DESPOT works well if a small optimal policy exists. We also give an anytime algorithm that approximates R-DESPOT. Experiments show strong results compared with two of the fastest online POMDP algorithms. Source code along with experimental settings is available at
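The abstract's key idea, sampling scenarios that fix both the initial state and all future randomness so that executing any policy becomes deterministic, can be illustrated with a minimal sketch. Everything below (the toy POMDP, the function names, the crude belief summary) is illustrative and not taken from the authors' released code:

```python
import random

# Toy sketch of DESPOT-style determinized scenarios: each scenario fixes
# the hidden initial state and a stream of random numbers, so evaluating
# a policy on it is fully deterministic and repeatable.

def sample_scenario(k, horizon, seed=0):
    # String seed keeps seeding portable across Python versions.
    rng = random.Random(f"{seed}-{k}")
    state = rng.choice([-1, 1])                      # hidden initial state
    noise = [rng.random() for _ in range(horizon)]   # determinized randomness
    return state, noise

def step(state, action, u):
    # Deterministic transition/observation given the scenario's noise u.
    reward = 1.0 if action == state else 0.0
    obs = state if u < 0.85 else -state              # noisy observation, determinized
    return state, obs, reward

def evaluate_policy(policy, num_scenarios=500, horizon=10):
    # Average undiscounted return of `policy` over the sampled scenarios,
    # i.e. the "estimated value under the sampled scenarios".
    total = 0.0
    for k in range(num_scenarios):
        state, noise = sample_scenario(k, horizon)
        belief = 0.0                                 # crude belief: sum of observations
        for t in range(horizon):
            action = policy(belief)
            state, obs, r = step(state, action, noise[t])
            belief += obs
            total += r
    return total / num_scenarios

# A simple policy: act on the sign of the accumulated observations.
greedy = lambda b: 1 if b >= 0 else -1
value = evaluate_policy(greedy)
```

Because the scenarios determinize all randomness, two evaluations of the same policy return the identical value, which is what lets DESPOT compare every policy on the same fixed scenario set.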
Source Title: Advances in Neural Information Processing Systems
ISSN: 10495258
Appears in Collections: Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.