Please use this identifier to cite or link to this item: https://doi.org/10.1109/TMM.2010.2064759
Title: Information-theoretic analysis of input strokes in visual object cutout
Authors: Mu, Y. 
Zhou, B.
Yan, S. 
Keywords: Image analysis
information entropy
object segmentation
Issue Date: Dec-2010
Source: Mu, Y., Zhou, B., Yan, S. (2010-12). Information-theoretic analysis of input strokes in visual object cutout. IEEE Transactions on Multimedia 12 (8) : 843-852. ScholarBank@NUS Repository. https://doi.org/10.1109/TMM.2010.2064759
Abstract: Semantic object cutout serves as a basic unit in various image editing systems. In a typical scenario, users provide several strokes that mark some pixels as image background or object. However, most existing approaches are passive in the sense that they accept input strokes without checking their consistency with the user's intention. Here we argue that an active strategy can potentially reduce the interaction burden. Before any actual segmentation is computed, the program can roughly estimate the uncertainty of each image element and actively offer useful suggestions to the user. Such pre-processing is particularly helpful for beginners who are unaware of how to feed the underlying cutout algorithm with optimal strokes. We develop such an active object cutout algorithm, named ActiveCut, which automatically detects ambiguity given the current user-supplied strokes and synthesizes "suggestive strokes" as feedback. Suggestive strokes generally come from ambiguous image parts and have the greatest potential to reduce label uncertainty. Users can continually refine their input by following these suggestive strokes, so the number of user-program interaction rounds can be greatly reduced. Specifically, the uncertainty is modeled as the mutual information between user strokes and unlabeled image regions. To ensure that ActiveCut runs at an interactive rate, we adopt a superpixel-lattice image representation, whose computational cost depends on scene complexity rather than on the original image resolution; moreover, it retains the 2-D lattice topology and is thus well suited to parallel computing. For the most time-consuming step, the calculation of probabilistic entropy, a variational approximation is used for acceleration. Finally, based on submodular function theory, we provide a theoretical analysis of the performance lower bound of the proposed greedy algorithm.
User studies conducted on the MSRC image dataset validate the effectiveness of the proposed algorithm. © 2010 IEEE.
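The abstract's final claim rests on a standard property of greedy maximization of monotone submodular functions: the greedy solution attains at least a (1 - 1/e) fraction of the optimum (Nemhauser et al.). The sketch below illustrates this with a toy coverage objective standing in for the paper's mutual-information criterion; the candidate "strokes", the superpixel sets they cover, and all names are hypothetical, not taken from the paper.

```python
import math
from itertools import combinations

# Hypothetical candidate strokes, each "covering" a set of ambiguous
# superpixels. Coverage is monotone submodular, like the paper's
# mutual-information gain (illustrative stand-in only).
candidates = {
    "s1": {0, 1, 2, 3},
    "s2": {2, 3, 4},
    "s3": {4, 5, 6},
    "s4": {0, 6},
}

def coverage(selected):
    """Number of distinct superpixels covered by the selected strokes."""
    covered = set()
    for name in selected:
        covered |= candidates[name]
    return len(covered)

def greedy_select(k):
    """Pick k strokes, each time taking the largest marginal gain."""
    chosen = []
    for _ in range(k):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: coverage(chosen + [c]))
        chosen.append(best)
    return chosen

k = 2
greedy = greedy_select(k)
optimum = max(coverage(list(s)) for s in combinations(candidates, k))
# Greedy guarantee for monotone submodular maximization:
assert coverage(greedy) >= (1 - 1 / math.e) * optimum
```

Here the greedy choice happens to match the brute-force optimum; in general the (1 - 1/e) factor is the worst-case guarantee the abstract's "performance lower bound" refers to.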
Source Title: IEEE Transactions on Multimedia
URI: http://scholarbank.nus.edu.sg/handle/10635/56336
ISSN: 1520-9210
DOI: 10.1109/TMM.2010.2064759
Appears in Collections:Staff Publications
