Please use this identifier to cite or link to this item:
https://doi.org/10.1007/978-3-642-17452-0_11
Title: Monte Carlo value iteration for continuous-state POMDPs
Authors: Bai, H.; Hsu, D.; Lee, W.S.; Ngo, V.A.
Issue Date: 2010
Citation: Bai, H., Hsu, D., Lee, W.S., Ngo, V.A. (2010). Monte Carlo value iteration for continuous-state POMDPs. Springer Tracts in Advanced Robotics (STAR), 68: 175-191. ScholarBank@NUS Repository. https://doi.org/10.1007/978-3-642-17452-0_11
Abstract: Partially observable Markov decision processes (POMDPs) have been successfully applied to various robot motion planning tasks under uncertainty. However, most existing POMDP algorithms assume a discrete state space, while the natural state space of a robot is often continuous. This paper presents Monte Carlo Value Iteration (MCVI) for continuous-state POMDPs. MCVI samples both a robot's state space and the corresponding belief space, and avoids inefficient a priori discretization of the state space as a grid. Both theoretical and preliminary experimental results indicate that MCVI is a promising new approach for robot motion planning under uncertainty. © 2010 Springer-Verlag Berlin Heidelberg.
Source Title: Springer Tracts in Advanced Robotics
URI: http://scholarbank.nus.edu.sg/handle/10635/40821
ISBN: 9783642174513
ISSN: 1610-7438
DOI: 10.1007/978-3-642-17452-0_11
Appears in Collections: Staff Publications
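The abstract's central idea is that MCVI represents beliefs over a continuous state space by sampling states, rather than by discretizing the state space on a grid. The following is a minimal, illustrative Python sketch of that sampling idea only; it is not the paper's MCVI algorithm, and the toy motion/observation model, function names, and parameters are assumptions made for this example.

import random

# Toy 1-D robot used only for illustration: the state is a position on a line,
# actions shift it with Gaussian noise, and observations are noisy position readings.
def simulate(state, action):
    next_state = state + action + random.gauss(0.0, 0.1)    # noisy motion model
    observation = next_state + random.gauss(0.0, 0.2)       # noisy sensor model
    reward = 1.0 if abs(next_state) < 0.5 else 0.0          # reward for being near the goal at 0
    return reward, next_state, observation

def sample_belief(initial_sampler, num_particles=200):
    # A belief over the continuous state is a set of sampled states (particles),
    # not a probability table over grid cells.
    return [initial_sampler() for _ in range(num_particles)]

def mc_action_value(particles, action, value_fn, num_sims=500, gamma=0.95):
    # Monte Carlo estimate of the value of taking `action` from the particle belief:
    # average one-step simulated reward plus a discounted continuation estimate.
    total = 0.0
    for _ in range(num_sims):
        state = random.choice(particles)                     # sample a state from the belief
        reward, next_state, _obs = simulate(state, action)
        total += reward + gamma * value_fn(next_state)
    return total / num_sims

if __name__ == "__main__":
    belief = sample_belief(lambda: random.uniform(-2.0, 2.0))
    for action in (-0.5, 0.0, 0.5):
        estimate = mc_action_value(belief, action, value_fn=lambda s: 0.0)
        print(f"action {action:+.1f}: estimated value {estimate:.3f}")

The continuation value is stubbed to zero here purely to keep the sketch self-contained; any learned value estimate could be plugged in via value_fn.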