Title: A knowledge-based approach for duplicate elimination in data cleaning
Authors: Lup Low, W.
Li Lee, M. 
Wang Ling, T. 
Keywords: Data cleaning
Duplicate elimination
Knowledge-based system
Issue Date: 2001
Citation: Lup Low, W., Li Lee, M., Wang Ling, T. (2001). A knowledge-based approach for duplicate elimination in data cleaning. Information Systems 26 (8): 585-606. ScholarBank@NUS Repository.
Abstract: Existing duplicate elimination methods for data cleaning work on the basis of computing the degree of similarity between nearby records in a sorted database. High recall can be achieved by accepting records with low degrees of similarity as duplicates, at the cost of lower precision. High precision can be achieved analogously at the cost of lower recall. This is the recall-precision dilemma. We develop a generic knowledge-based framework for effective data cleaning that can implement any existing data cleaning strategy and more. We propose a new method for computing transitive closure under uncertainty to deal with the merging of groups of inexact duplicate records, and explain why small changes to window sizes have little effect on the results of the sorted neighborhood method. An experimental study with two real-world datasets shows that this approach can accurately identify duplicates and anomalies with high recall and precision, thus effectively resolving the recall-precision dilemma.
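The two techniques the abstract builds on can be sketched briefly: the sorted neighborhood method compares each record only with its neighbors inside a sliding window over the sorted database, and the resulting duplicate pairs are then merged into groups via transitive closure. The sketch below is a minimal illustration of those generic ideas, not the paper's knowledge-based rules; the string similarity measure (`difflib.SequenceMatcher`), the threshold, and the sample records are all illustrative assumptions.

```python
# Minimal sketch: sorted neighborhood method + transitive closure.
# Similarity function, threshold, and records are illustrative assumptions.
from difflib import SequenceMatcher

def snm_duplicates(records, key, window=3, threshold=0.8):
    """Return index pairs judged to be duplicates.

    records:   list of flattened record strings
    key:       function mapping a record to its sort key
    window:    size of the sliding comparison window
    threshold: minimum similarity for a pair to count as a duplicate
    """
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    pairs = []
    for pos, i in enumerate(order):
        # Compare each record only with neighbors inside the window.
        for j in order[pos + 1 : pos + window]:
            sim = SequenceMatcher(None, records[i], records[j]).ratio()
            if sim >= threshold:
                pairs.append((min(i, j), max(i, j)))
    return pairs

def merge_groups(n, pairs):
    """Transitive closure over duplicate pairs via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]

records = ["john smith 42 main st", "jon smith 42 main st", "mary jones 7 oak ave"]
pairs = snm_duplicates(records, key=lambda r: r, window=3, threshold=0.8)
print(merge_groups(len(records), pairs))  # → [[0, 1]]
```

Note how the window size and threshold embody the recall-precision trade-off the abstract describes: a larger window or lower threshold catches more true duplicates (higher recall) but admits more false matches (lower precision).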
Source Title: Information Systems
ISSN: 0306-4379
DOI: 10.1016/S0306-4379(01)00041-2
Appears in Collections: Staff Publications
