Please use this identifier to cite or link to this item: https://doi.org/10.1109/TCSVT.2013.2248495
Title: Improving bottom-up saliency detection by looking into neighbors
Authors: Lang, C.; Feng, J.; Liu, G.; Tang, J.; Yan, S.; Luo, J.
Keywords: Multimodal modeling; multitask learning; saliency detection; sparsity and low-rankness; visual attention
Issue Date: 2013
Citation: Lang, C., Feng, J., Liu, G., Tang, J., Yan, S., & Luo, J. (2013). Improving bottom-up saliency detection by looking into neighbors. IEEE Transactions on Circuits and Systems for Video Technology, 23(6), 1016-1028. ScholarBank@NUS Repository. https://doi.org/10.1109/TCSVT.2013.2248495
Abstract: Bottom-up saliency detection aims to detect salient areas within natural images, usually without learning from labeled images. Typically, the saliency map of an image is inferred using only the information within that image (referred to as the "current image"). While efficient, such single-image-based methods may fail to produce reliable results, because the information within a single image may be insufficient for defining saliency. In this paper, we investigate how saliency detection can benefit from the nearest-neighbor structure in the image space. First, we show that existing methods can be improved by extending them to include visual neighborhood information, which verifies the significance of the neighbors. Next, a multitask sparsity pursuit solution is proposed to integrate the current image and its neighbors to collaboratively detect saliency. The integration is done by first representing each image as a feature matrix, and then seeking the consistently sparse elements from the joint decompositions of multiple matrices into pairs of low-rank and sparse matrices. The computational procedure is formulated as a constrained nuclear-norm and ℓ2,1-norm minimization problem, which is convex and can be solved efficiently with the augmented Lagrange multiplier (ALM) method. Besides the nearest-neighbor structure in the visual feature space, the proposed model can also be generalized to handle multiple visual features. Extensive experiments have clearly validated its superiority over other state-of-the-art methods. © 1991-2012 IEEE.
Source Title: IEEE Transactions on Circuits and Systems for Video Technology
URI: http://scholarbank.nus.edu.sg/handle/10635/82513
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2013.2248495
Appears in Collections: Staff Publications
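
As a concrete illustration of the optimization described in the abstract, below is a minimal NumPy sketch of a multitask low-rank-plus-sparse decomposition solved with the augmented Lagrange multiplier method. It assumes the decomposition takes the form X_i = Z_i + E_i (the "pairs of low-rank and sparse matrices" the abstract mentions), i.e. the program min Σ_i ‖Z_i‖_* + λ‖[E_1; …; E_K]‖_{2,1} s.t. X_i = Z_i + E_i. The paper's exact constraints, parameter settings, and saliency scoring may differ; all names and defaults here (multitask_sparsity_pursuit, lam, mu, rho) are illustrative, not taken from the paper.

import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def joint_l21_shrink(Q, tau):
    # Column-wise shrinkage: proximal operator of tau * l2,1 norm.
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return Q * scale

def multitask_sparsity_pursuit(Xs, lam=0.1, mu=1e-2, rho=1.5,
                               mu_max=1e6, tol=1e-7, max_iter=500):
    # ALM sketch for: min sum_i ||Z_i||_* + lam * ||[E_1; ...; E_K]||_{2,1}
    #                 s.t. X_i = Z_i + E_i for each task i.
    # Each X_i is d_i x n, with the n columns (image segments) shared across tasks.
    Zs = [np.zeros_like(X) for X in Xs]
    Es = [np.zeros_like(X) for X in Xs]
    Ys = [np.zeros_like(X) for X in Xs]
    for _ in range(max_iter):
        # Update each low-rank part by singular value thresholding.
        Zs = [svt(X - E + Y / mu, 1.0 / mu) for X, E, Y in zip(Xs, Es, Ys)]
        # Update the sparse parts jointly: stack tasks, shrink columns, unstack.
        Q = np.vstack([X - Z + Y / mu for X, Z, Y in zip(Xs, Zs, Ys)])
        Estack = joint_l21_shrink(Q, lam / mu)
        Es, row = [], 0
        for X in Xs:
            Es.append(Estack[row:row + X.shape[0]])
            row += X.shape[0]
        # Dual ascent on the multipliers, then the usual penalty schedule.
        Rs = [X - Z - E for X, Z, E in zip(Xs, Zs, Es)]
        Ys = [Y + mu * R for Y, R in zip(Ys, Rs)]
        mu = min(rho * mu, mu_max)
        if max(np.linalg.norm(R) for R in Rs) < tol:
            break
    # One plausible scoring (an assumption, not the paper's rule): treat task 0
    # as the current image and score each segment by its l2 column norm in E_1.
    saliency = np.linalg.norm(Es[0], axis=0)
    return Zs, Es, saliency

The joint column-wise shrinkage is what couples the tasks: a segment survives in the sparse part only if its feature columns deviate consistently across the current image and its neighbors, which is one way the ℓ2,1 norm can encode the "consistently sparse elements" the abstract refers to.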