Please use this identifier to cite or link to this item: https://doi.org/10.1145/1066157.1066240
Title: Towards effective indexing for very large video sequence database
Authors: Shen, H.T.
Ooi, B.C. 
Zhou, X.
Issue Date: 2005
Source: Shen, H.T., Ooi, B.C., Zhou, X. (2005). Towards effective indexing for very large video sequence database. Proceedings of the ACM SIGMOD International Conference on Management of Data: 730-741. ScholarBank@NUS Repository. https://doi.org/10.1145/1066157.1066240
Abstract: With rapid advances in video processing technologies and ever-increasing network bandwidth, the popularity of video content publishing and sharing has made similarity search an indispensable operation for retrieving videos of interest to users. Video similarity is usually measured by the percentage of similar frames shared by two video sequences, where each frame is typically represented as a high-dimensional feature vector. Unfortunately, the high complexity of video content poses the following major challenges for fast retrieval: (a) effective and compact video representations, (b) efficient similarity measurements, and (c) efficient indexing on the compact representations. In this paper, we propose a number of methods to achieve fast similarity search for very large video databases. First, each video sequence is summarized into a small number of clusters, each of which contains similar frames and is represented by a novel compact model called the Video Triplet (ViTri). A ViTri models a cluster as a tightly bounded hypersphere described by its position, radius, and density. ViTri similarity is measured by the volume of intersection between two hyperspheres multiplied by the smaller density, i.e., the estimated number of similar frames shared by the two clusters. The total number of similar frames is then estimated to derive the overall similarity between two video sequences, greatly reducing the time complexity of the video similarity measure. To further reduce the number of similarity computations on ViTris, we introduce a new one-dimensional transformation technique that rotates and shifts the original axis system using PCA, in such a way that the original inter-distance between two high-dimensional vectors is maximally retained after the mapping. An efficient B+-tree is then built on the transformed one-dimensional values of the ViTris' positions. This transformation enables the B+-tree to achieve its optimal performance by quickly filtering out a large portion of non-similar ViTris. Our extensive experiments on large real video datasets demonstrate the effectiveness of our proposals, which significantly outperform existing methods. Copyright 2005 ACM.
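To make the two core ideas in the abstract concrete, two short Python sketches follow. They are illustrations under stated assumptions, not the authors' implementation: every function name is hypothetical, SciPy and NumPy are assumed available, and the paper's exact density definition and thresholds may differ. The first sketch computes the ViTri similarity as the abstract defines it: the volume of the intersection of two hyperspheres, obtained from hyperspherical-cap volumes via the regularized incomplete beta function, multiplied by the smaller density.

import math
from scipy.special import betainc

def unit_ball_volume(d):
    # Volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1).
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def cap_volume(r, h, d):
    # Volume of a d-dimensional hyperspherical cap of height h (0 <= h <= 2r),
    # using the regularized incomplete beta function I_x(a, b).
    if h <= 0.0:
        return 0.0
    if h >= 2.0 * r:
        return unit_ball_volume(d) * r ** d
    if h > r:
        # A cap taller than a hemisphere is the ball minus the opposite cap.
        return unit_ball_volume(d) * r ** d - cap_volume(r, 2.0 * r - h, d)
    x = (2.0 * r * h - h * h) / (r * r)
    return 0.5 * unit_ball_volume(d) * r ** d * betainc((d + 1) / 2, 0.5, x)

def vitri_similarity(p1, r1, den1, p2, r2, den2):
    # Estimated number of similar frames shared by two ViTri clusters:
    # intersection volume of the two hyperspheres times the smaller density.
    d = len(p1)
    dist = math.dist(p1, p2)
    if dist >= r1 + r2:                  # disjoint hyperspheres
        return 0.0
    if dist <= abs(r1 - r2):             # one hypersphere contains the other
        inter = unit_ball_volume(d) * min(r1, r2) ** d
    else:
        c1 = (dist * dist + r1 * r1 - r2 * r2) / (2.0 * dist)
        inter = cap_volume(r1, r1 - c1, d) + cap_volume(r2, r2 - (dist - c1), d)
    return inter * min(den1, den2)

The second sketch illustrates the one-dimensional filtering step. The first principal component of the ViTri positions stands in for the paper's rotated-and-shifted axis, and a sorted array queried with binary search stands in for the B+-tree. Because projection onto a unit vector never expands distances, the one-dimensional range query safely prunes ViTris that cannot intersect the query hypersphere.

import numpy as np

def fit_axis(positions):
    # positions: (n, d) array of ViTri centers. The first principal component
    # stands in for the paper's rotated axis; the mean supplies the shift.
    mean = positions.mean(axis=0)
    _, _, vt = np.linalg.svd(positions - mean, full_matrices=False)
    return mean, vt[0]                   # unit direction of maximum variance

def keys(positions, mean, axis):
    # One scalar key per ViTri: its position projected onto the learned axis.
    return (positions - mean) @ axis

def candidates(sorted_keys, order, query_key, radius_bound):
    # Projection onto a unit vector is 1-Lipschitz, so
    # |key(p) - key(q)| <= ||p - q||: a ViTri whose key differs from the query
    # key by at least (query radius + its radius) cannot intersect the query.
    # Binary search over sorted keys stands in for the B+-tree range scan.
    lo = np.searchsorted(sorted_keys, query_key - radius_bound, side="left")
    hi = np.searchsorted(sorted_keys, query_key + radius_bound, side="right")
    return order[lo:hi]                  # indices of possibly-similar ViTris

A typical use would sort the keys once (order = np.argsort(k); sorted_keys = k[order]), call candidates with radius_bound set to the query radius plus the largest data radius, and then evaluate vitri_similarity only on the surviving candidates.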
Source Title: Proceedings of the ACM SIGMOD International Conference on Management of Data
URI: http://scholarbank.nus.edu.sg/handle/10635/40620
ISSN: 0730-8078
DOI: 10.1145/1066157.1066240
Appears in Collections: Staff Publications