Please use this identifier to cite or link to this item: https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-347
Title: Robotic object detection: Learning to improve the classifiers using sparse graphs for path planning
Authors: Jia, Z.; Saxena, A.; Chen, T.
Issue Date: 2011
Citation: Jia, Z., Saxena, A., & Chen, T. (2011). Robotic object detection: Learning to improve the classifiers using sparse graphs for path planning. IJCAI International Joint Conference on Artificial Intelligence: 2072-2078. ScholarBank@NUS Repository. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-347
Abstract: Object detection is a basic skill that a robot needs to perform tasks in human environments. Building a good object classifier requires a large training set of labeled images, typically collected and labeled (often painstakingly) by a human. This approach is not scalable and therefore limits the robot's detection performance. We propose an algorithm for a robot to collect more data in its environment during its training phase so that in the future it can detect objects more reliably. The first step is to plan a path for collecting additional training images, which is hard because previously visited locations affect the decisions about future locations. One key component of our work is path planning by building a sparse graph that captures these dependencies. The other key component is our learning algorithm, which weighs the errors made in the robot's data-collection process while updating the classifier. In our experiments, we show that our algorithms enable the robot to improve its object classifiers significantly.
Source Title: IJCAI International Joint Conference on Artificial Intelligence
URI: http://scholarbank.nus.edu.sg/handle/10635/146151
ISBN: 9781577355120
ISSN: 10450823
DOI: 10.5591/978-1-57735-516-8/IJCAI11-347
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
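The abstract describes two algorithmic components: planning a data-collection path over a sparse graph whose edges capture how already-visited locations change the value of visiting future ones, and a classifier update that weighs the errors in the robot's self-collected labels. The Python sketch below is only a rough illustration of those two ideas under assumed interfaces; the names (plan_path, weighted_logistic_update, expected_gain, dependency, the per-example confidence) and the greedy/logistic formulations are hypothetical and are not taken from the paper.

"""
Illustrative sketch (not the authors' code) of the two components described
in the abstract, under assumed data structures:
  1. a sparse graph over candidate viewpoints whose edges model how visiting
     one location changes the value of visiting another, and
  2. a classifier update that down-weights self-collected examples according
     to an estimated probability that their automatic labels are wrong.
"""
import math
import random

# ---- 1. Sparse-graph path planning (hypothetical formulation) --------------
def plan_path(viewpoints, expected_gain, dependency, budget):
    """Greedily pick a sequence of viewpoints to visit.

    viewpoints    : list of hashable viewpoint ids
    expected_gain : dict id -> expected number of useful training images
    dependency    : dict (id_a, id_b) -> overlap in [0, 1]; the graph is
                    sparse, so missing pairs are treated as independent
    budget        : maximum number of viewpoints the robot may visit
    """
    path, visited = [], set()
    for _ in range(budget):
        best, best_value = None, -math.inf
        for v in viewpoints:
            if v in visited:
                continue
            # Discount a viewpoint's gain by its overlap with already-visited
            # places; only edges stored in the sparse graph contribute.
            discount = 1.0
            for u in visited:
                discount *= 1.0 - dependency.get((u, v), dependency.get((v, u), 0.0))
            value = expected_gain[v] * discount
            if value > best_value:
                best, best_value = v, value
        if best is None:
            break
        path.append(best)
        visited.add(best)
    return path

# ---- 2. Error-weighted classifier update (hypothetical) --------------------
def weighted_logistic_update(w, examples, lr=0.1, epochs=20):
    """One possible way to 'weigh the errors' in self-collected data: each
    example carries a confidence in [0, 1] that its automatic label is
    correct, and that confidence scales its gradient contribution.

    examples: list of (features, label in {0, 1}, confidence)
    """
    for _ in range(epochs):
        for x, y, conf in examples:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))          # predicted probability
            grad = p - y                            # logistic-loss gradient
            w = [wi - lr * conf * grad * xi for wi, xi in zip(w, x)]
    return w

if __name__ == "__main__":
    random.seed(0)
    vps = ["kitchen", "desk", "shelf", "hallway"]
    gain = {"kitchen": 5.0, "desk": 3.0, "shelf": 4.0, "hallway": 1.0}
    dep = {("kitchen", "desk"): 0.6, ("desk", "shelf"): 0.3}  # sparse edges
    print("planned path:", plan_path(vps, gain, dep, budget=3))

    data = [([1.0, x], int(x > 0.5), 0.9 if x > 0.5 else 0.6)
            for x in (random.random() for _ in range(50))]
    print("updated weights:", weighted_logistic_update([0.0, 0.0], data))

In this sketch the sparse graph stores only viewpoint pairs with meaningful overlap (missing edges count as independent), and the per-example confidence simply scales each gradient step; the paper's actual planning objective and error-weighting scheme may differ.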