Title: Semantic segmentation without annotating segments
Authors: Xia, W.; Domokos, C.; Dong, J.; Cheong, L.-F.; Yan, S.
Issue Date: 2013
Citation: Xia, W., Domokos, C., Dong, J., Cheong, L.-F., Yan, S. (2013). Semantic segmentation without annotating segments. Proceedings of the IEEE International Conference on Computer Vision: 2176-2183. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2013.271
Abstract: Many existing object segmentation frameworks use the object bounding box as a prior. In this paper, we address semantic segmentation under the assumption that object bounding boxes are provided by object detectors, but no training data with annotated segments are available. Based on a set of segment hypotheses, we introduce a simple voting scheme to estimate shape guidance for each bounding box. The derived shape guidance is used in a subsequent graph-cut-based figure-ground segmentation, and the final result is obtained by merging the per-bounding-box segmentations. We also conduct an extensive analysis of the effect of bounding-box accuracy. Comprehensive experiments on both the challenging PASCAL VOC object segmentation dataset and the GrabCut-50 image segmentation dataset show that the proposed approach achieves competitive results compared with previous detection- or bounding-box-prior-based methods, as well as other state-of-the-art semantic segmentation methods. © 2013 IEEE.
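
The pipeline summarized in the abstract (pixel-wise voting over segment hypotheses to obtain shape guidance, followed by graph-cut figure-ground segmentation inside each bounding box) can be illustrated with a minimal Python sketch. This is not the authors' implementation: the function name, the 0.5 voting threshold, and the use of OpenCV's GrabCut as a stand-in for the paper's graph-cut step are all assumptions made for illustration.

import numpy as np
import cv2

def shape_guided_segmentation(image_roi, hypotheses):
    # image_roi: 8-bit BGR crop of one detected bounding box.
    # hypotheses: list of binary masks (H x W, values {0, 1}) for that box.

    # Voting: fraction of hypotheses marking each pixel as foreground.
    votes = np.stack(hypotheses).astype(np.float32).mean(axis=0)

    # Shape guidance as a GrabCut trimap: majority-voted pixels become
    # probable foreground, the rest probable background (assumed threshold).
    mask = np.full(votes.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[votes >= 0.5] = cv2.GC_PR_FGD

    # Graph-cut figure-ground segmentation initialized from the trimap.
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_roi, mask, None, bgd_model, fgd_model,
                5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

Per-box masks produced this way would then be merged back into the full image, with class labels taken from the detector, to form the final semantic segmentation.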
Source Title: Proceedings of the IEEE International Conference on Computer Vision
URI: http://scholarbank.nus.edu.sg/handle/10635/84170
ISBN: 9781479928392
DOI: 10.1109/ICCV.2013.271
Appears in Collections: Staff Publications
