Title: Monte Carlo value iteration for continuous-state POMDPs
Source: Bai, H., Hsu, D., Lee, W.S., Ngo, V.A. (2010). Monte Carlo value iteration for continuous-state POMDPs. Springer Tracts in Advanced Robotics (STAR), 68: 175-191. ScholarBank@NUS Repository. https://doi.org/10.1007/978-3-642-17452-0_11
Abstract: Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot's state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty. © 2010 Springer-Verlag Berlin Heidelberg.
Source Title: Springer Tracts in Advanced Robotics
Appears in Collections: Staff Publications
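The abstract's central idea, representing uncertainty by sampling states rather than discretizing the state space as a grid, can be illustrated with a minimal particle-based belief update. This is a hypothetical sketch for a 1-D robot, not the authors' MCVI implementation; the motion and observation models and all noise parameters are assumptions for illustration only.

```python
import math
import random

def mc_belief_update(particles, action, observation,
                     motion_noise=0.1, obs_noise=0.2):
    """One Monte Carlo belief update for a hypothetical 1-D robot.

    The belief is a set of sampled states (particles), not a grid:
    propagate each particle through a stochastic motion model, weight
    it by the observation likelihood, then resample.
    """
    # Propagate: sample each particle's next state under Gaussian motion noise.
    moved = [s + action + random.gauss(0.0, motion_noise) for s in particles]
    # Weight: Gaussian observation likelihood centered on each predicted state.
    weights = [math.exp(-((observation - s) ** 2) / (2 * obs_noise ** 2))
               for s in moved]
    if sum(weights) == 0.0:
        # Degenerate case (observation far from all particles): uniform weights.
        weights = [1.0] * len(moved)
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
belief = [random.uniform(-1.0, 1.0) for _ in range(500)]  # initial uncertainty
belief = mc_belief_update(belief, action=1.0, observation=1.0)
```

After the update, the particle set concentrates near the observed position; no grid resolution ever had to be chosen, which is the property the abstract highlights.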