Please use this identifier to cite or link to this item: https://doi.org/10.1177/0278364910369861
Title: Planning under uncertainty for robotic tasks with mixed observability
Authors: Ong, S.C.W. 
Png, S.W.
Hsu, D. 
Lee, W.S. 
Keywords: Markov decision process
motion planning
motion planning with uncertainty
partially observable Markov decision process
Issue Date: 2010
Citation: Ong, S.C.W., Png, S.W., Hsu, D., Lee, W.S. (2010). Planning under uncertainty for robotic tasks with mixed observability. International Journal of Robotics Research 29(8): 1053-1068. ScholarBank@NUS Repository. https://doi.org/10.1177/0278364910369861
Abstract: Partially observable Markov decision processes (POMDPs) provide a principled, general framework for robot motion planning in uncertain and dynamic environments. They have been applied to various robotic tasks. However, solving POMDPs exactly is computationally intractable. A major challenge is to scale up POMDP algorithms for complex robotic tasks. Robotic systems often have mixed observability: even when a robot's state is not fully observable, some components of the state may still be fully observable. We use a factored model to represent separately the fully and partially observable components of a robot's state and derive a compact lower-dimensional representation of its belief space. This factored representation can be combined with any point-based algorithm to compute approximate POMDP solutions. Experimental results show that on standard test problems, our approach improves the performance of a leading point-based POMDP algorithm by many times. © The Author(s), 2010.
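Note: As a brief illustrative sketch of the factorization described in the abstract (the notation below is the standard mixed-observability MDP formulation and is assumed here rather than copied from this record), the state is factored as $s = (x, y)$, where $x$ is the fully observable component and $y$ the partially observable one, so a belief is a pair $(x, b_Y)$ with $b_Y$ a distribution over $Y$ only. After taking action $a$ and receiving the new fully observable value $x'$ and observation $o$, the update to $b_Y$ takes the form

$$ b'_Y(y') \;\propto\; Z(x', y', a, o) \sum_{y \in Y} T_X(x, y, a, x')\, T_Y(x, y, a, x', y')\, b_Y(y), $$

where $T_X$, $T_Y$, and $Z$ denote the factored transition and observation probabilities. The belief thus lies in a union of low-dimensional simplices $\{x\} \times B_Y$ rather than in the full belief simplex over $X \times Y$, which is the source of the reported speedups.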
Source Title: International Journal of Robotics Research
URI: http://scholarbank.nus.edu.sg/handle/10635/40979
ISSN: 0278-3649
DOI: 10.1177/0278364910369861
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
