Please use this identifier to cite or link to this item: https://doi.org/10.1109/TCSVT.2010.2051286
Title: Near duplicate identification with spatially aligned pyramid matching
Authors: Xu, D.
Cham, T.J.
Yan, S. 
Duan, L.
Chang, S.-F.
Keywords: Near duplicate detection
near duplicate retrieval
spatially aligned pyramid matching
Issue Date: Aug-2010
Citation: Xu, D., Cham, T.J., Yan, S., Duan, L., Chang, S.-F. (2010-08). Near duplicate identification with spatially aligned pyramid matching. IEEE Transactions on Circuits and Systems for Video Technology 20 (8) : 1068-1079. ScholarBank@NUS Repository. https://doi.org/10.1109/TCSVT.2010.2051286
Abstract: A new framework, termed spatially aligned pyramid matching, is proposed for near duplicate image identification. The proposed method robustly handles spatial shifts as well as scale changes, and is extensible to video data. Images are divided into both overlapped and non-overlapped blocks over multiple levels. In the first matching stage, pairwise distances between blocks from the examined image pair are computed using earth mover's distance (EMD) or a visual-word method with χ2 distance, based on scale-invariant feature transform (SIFT) features. In the second stage, multiple alignment hypotheses that consider piecewise spatial shifts and scale variation are postulated and resolved using integer-flow EMD. Moreover, to compute the distance between two videos, we conduct a third matching stage (i.e., temporal matching) after spatial matching. Two application scenarios are addressed: near duplicate retrieval (NDR) and near duplicate detection (NDD). For retrieval ranking, a pyramid-based scheme is constructed to fuse matching results from different partition levels. For NDD, we also propose a dual-sample approach that uses the multilevel distances as features and a support vector machine for binary classification. The proposed methods are shown to clearly outperform existing methods through extensive testing on the Columbia Near Duplicate Image Database and two new datasets. In addition, we discuss our framework in depth in terms of its extension to video NDR and NDD, its sensitivity to parameters, the utilization of multiscale dense SIFT descriptors, and a test of scalability in image NDD. © 2010 IEEE.
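To illustrate the pyramid idea described in the abstract, the following is a minimal toy sketch of multilevel block partitioning with χ2 distances between visual-word histograms, fused across levels. It uses identity block correspondence only, omitting the paper's EMD-based alignment hypotheses and SIFT extraction; all names and weights are hypothetical and not from the paper.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # χ2 distance between two normalized histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def block_histograms(words, level, n_words):
    # Divide a 2-D grid of visual-word ids into 2^level x 2^level
    # non-overlapped blocks; return one normalized histogram per block.
    H, W = words.shape
    n = 2 ** level
    hists = []
    for i in range(n):
        for j in range(n):
            block = words[i * H // n:(i + 1) * H // n,
                          j * W // n:(j + 1) * W // n]
            hist = np.bincount(block.ravel(), minlength=n_words).astype(float)
            hists.append(hist / max(hist.sum(), 1.0))
    return hists

def pyramid_distance(words_a, words_b, levels=3, n_words=16):
    # Fuse per-level distances; blocks are matched at the same position
    # (the paper instead resolves shift/scale hypotheses via integer-flow EMD).
    total = 0.0
    for level in range(levels):
        ha = block_histograms(words_a, level, n_words)
        hb = block_histograms(words_b, level, n_words)
        d = np.mean([chi2_distance(x, y) for x, y in zip(ha, hb)])
        total += d / (2 ** (levels - level - 1))  # weight finer levels more
    return total
```

An identical image pair yields distance zero, while unrelated visual-word grids yield a strictly positive fused distance.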
Source Title: IEEE Transactions on Circuits and Systems for Video Technology
URI: http://scholarbank.nus.edu.sg/handle/10635/82750
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2010.2051286
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 23 (checked on Sep 20, 2018)
Web of Science™ Citations: 16 (checked on Sep 4, 2018)
Page view(s): 32 (checked on May 18, 2018)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.