|Title:||Contorting High Dimensional Data for Efficient Main Memory KNN Processing|
|Citation:||Cui, B., Ooi, B.C., Su, J., Tan, K.-L. (2003). Contorting High Dimensional Data for Efficient Main Memory KNN Processing. Proceedings of the ACM SIGMOD International Conference on Management of Data: 479-490. ScholarBank@NUS Repository.|
|Abstract:||In this paper, we present a novel index structure, called the Δ-tree, to speed up the processing of high-dimensional K-nearest neighbor (KNN) queries in a main memory environment. The Δ-tree is a multi-level structure in which each level represents the data space at a different dimensionality: the number of dimensions increases towards the leaf level, which contains the data at their full dimensionality. The reduced dimensions are obtained using Principal Component Analysis, which has the desirable property that the first few dimensions capture most of the information in the dataset. Each level of the tree serves to prune the search space more efficiently, as the reduced dimensions can better exploit the small cache line size. Moreover, distance computation at lower dimensionality is less expensive. We also propose an extension, called the Δ+-tree, that globally clusters the data space and then further partitions clusters into small regions to reduce the search space. We conducted extensive experiments to evaluate the proposed structures against existing techniques on different kinds of datasets. Our results show that the Δ+-tree is superior in most cases.|
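The pruning described in the abstract rests on a standard property of PCA: because the principal components form an orthonormal rotation of the (centered) data, the Euclidean distance computed on the first k components is a lower bound on the full-dimensional distance, so any candidate whose partial distance already exceeds the current KNN bound can be discarded without touching its remaining dimensions. A minimal sketch of this property (illustrative setup only, not the paper's implementation):

```python
import numpy as np

# Illustrative data: 200 points in 16 dimensions (assumed sizes, not from the paper).
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 16))

# PCA via SVD of the centered data; rows of vt are the principal directions.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
rotated = (data - mean) @ vt.T   # points in full-dimensional PCA coordinates

query = rng.normal(size=16)
q_rot = (query - mean) @ vt.T

# Full Euclidean distances (a rotation preserves them exactly).
full_dist = np.linalg.norm(rotated - q_rot, axis=1)

# Distances on only the first k principal components never exceed the
# full distances, which is what licenses pruning at the upper tree levels.
for k in (2, 4, 8):
    partial_dist = np.linalg.norm(rotated[:, :k] - q_rot[:k], axis=1)
    assert np.all(partial_dist <= full_dist + 1e-9)
```

In a Δ-tree-style index, each level would store points at one such truncation k, so a query descends only into subtrees whose partial distance is below the running K-th-nearest bound.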
|Source Title:||Proceedings of the ACM SIGMOD International Conference on Management of Data|
|Appears in Collections:||Staff Publications|