|Title:||Towards optimizing human labeling for interactive image tagging|
|Keywords:||Interactive image tagging; Multiscale cluster labeling|
|Citation:||Tang, J., Chen, Q., Wang, M., Yan, S., Chua, T.-S., Jain, R. (2013). Towards optimizing human labeling for interactive image tagging. ACM Transactions on Multimedia Computing, Communications and Applications 9 (4) : -. ScholarBank@NUS Repository. https://doi.org/10.1145/2501643.2501651|
|Abstract:||Interactive tagging is an approach that combines human and computer effort to assign descriptive keywords to image contents in a semi-automatic way. It can avoid the problems of both automatic tagging and purely manual tagging by striking a compromise between tagging performance and manual cost. However, conventional research efforts on interactive tagging mainly focus on sample selection and models for tag prediction. In this work, we investigate interactive tagging from a different aspect. We introduce an interactive image tagging framework that makes fuller use of human labeling effort. That is, it can achieve a specified tagging performance with less manual labeling effort, or achieve better tagging performance at a specified labeling cost. In the framework, hashing is used to enable quick clustering of image regions, and a dynamic multiscale cluster labeling strategy is proposed so that users can label a large group of similar regions at a time. We also employ a tag refinement method so that inappropriate tags can be automatically corrected. Experiments on a large dataset demonstrate the effectiveness of our approach. © 2014 ACM.|
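The abstract mentions using hashing to cluster image regions quickly so that a user can label a whole group of similar regions at once. As an illustration only (the paper's actual hashing scheme, feature dimensions, and data are not given here; everything below is an assumption), a minimal random-hyperplane locality-sensitive hashing sketch that buckets region feature vectors by hash code might look like this:

```python
import numpy as np

# Illustrative sketch: random-hyperplane LSH to bucket image-region
# feature vectors into candidate clusters. The feature dimension,
# number of hash bits, and toy data are assumptions, not the paper's setup.

rng = np.random.default_rng(0)

def lsh_buckets(features, n_bits=8, rng=rng):
    """Group row-vector features by their random-hyperplane hash code."""
    d = features.shape[1]
    planes = rng.standard_normal((d, n_bits))   # random projection directions
    bits = (features @ planes) > 0              # sign pattern = hash code
    buckets = {}
    for i, code in enumerate(map(tuple, bits)):
        buckets.setdefault(code, []).append(i)
    return buckets

# Toy data: two well-separated groups of region descriptors.
group_a = rng.standard_normal((5, 16)) * 0.01 + 1.0
group_b = rng.standard_normal((5, 16)) * 0.01 - 1.0
features = np.vstack([group_a, group_b])

buckets = lsh_buckets(features)
# Nearby regions tend to share a hash code and fall into the same bucket,
# so a user could label an entire bucket in one interaction.
```

In an interactive setting, each bucket would be shown to the annotator as one candidate cluster; the "dynamic multiscale" aspect of the paper presumably varies the cluster granularity, which in this sketch would correspond to varying `n_bits`.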
|Source Title:||ACM Transactions on Multimedia Computing, Communications and Applications|
|Appears in Collections:||Staff Publications|
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.