Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/77999
Title: Active Markov information-theoretic path planning for robotic environmental sensing
Authors: Low, K.H. 
Dolan, J.M.
Khosla, P.
Keywords: Active learning
Adaptive sampling
Gaussian process
Multi-robot exploration and mapping
Non-myopic path planning
Issue Date: 2011
Source: Low, K.H., Dolan, J.M., Khosla, P. (2011). Active Markov information-theoretic path planning for robotic environmental sensing. 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011 2 : 705-712. ScholarBank@NUS Repository.
Abstract: Recent research in multi-robot exploration and mapping has focused on sampling environmental fields, which are typically modeled using the Gaussian process (GP). Existing information-theoretic exploration strategies for learning GP-based environmental field maps adopt the non-Markovian problem structure and consequently scale poorly with the length of history of observations. Hence, it becomes computationally impractical to use these strategies for in situ, real-time active sampling. To ease this computational burden, this paper presents a Markov-based approach to efficient information-theoretic path planning for active sampling of GP-based fields. We analyze the time complexity of solving the Markov-based path planning problem, and demonstrate analytically that it scales better than that of deriving the non-Markovian strategies with increasing length of planning horizon. For a class of exploration tasks called the transect sampling task, we provide theoretical guarantees on the active sampling performance of our Markov-based policy, from which ideal environmental field conditions and sampling task settings can be established to limit its performance degradation due to violation of the Markov assumption. Empirical evaluation on real-world temperature and plankton density field data shows that our Markov-based policy can generally achieve active sampling performance comparable to that of the widely used non-Markovian greedy policies under less favorable realistic field conditions and task settings while enjoying significant computational gain over them. Copyright © 2011, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
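To illustrate the kind of information-theoretic sampling the abstract contrasts against, here is a minimal sketch of a greedy entropy-based criterion over a GP posterior. This is not the paper's Markov-based algorithm; it is a generic illustration, with the squared-exponential kernel, hyperparameter values, and function names all chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    """Squared-exponential covariance between location sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def posterior_variance(X_obs, X_cand, noise_var=0.01):
    """GP posterior variance at candidate locations given observations."""
    K = rbf_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    k = rbf_kernel(X_obs, X_cand)            # cross-covariances
    prior = rbf_kernel(X_cand, X_cand).diagonal()
    # diag(k^T K^{-1} k), computed without forming the full matrix
    reduction = np.einsum('ij,ji->i', k.T, np.linalg.solve(K, k))
    return prior - reduction

def greedy_next_location(X_obs, X_cand, noise_var=0.01):
    """Pick the candidate maximizing Gaussian posterior entropy
    0.5 * log(2*pi*e*sigma^2), which is monotone in the variance."""
    var = posterior_variance(X_obs, X_cand, noise_var)
    return X_cand[np.argmax(var)]
```

Because entropy is monotone in the predictive variance here, the greedy step reduces to choosing the most uncertain candidate; a non-myopic planner, as studied in the paper, would instead optimize over whole observation paths.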
Source Title: 10th International Conference on Autonomous Agents and Multiagent Systems 2011, AAMAS 2011
URI: http://scholarbank.nus.edu.sg/handle/10635/77999
Appears in Collections:Staff Publications

Files in This Item: There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.