Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/99325
dc.title: Learning from multiple sources of inaccurate data
dc.contributor.author: Baliga, G.
dc.contributor.author: Jain, S.
dc.contributor.author: Sharma, A.
dc.date.accessioned: 2014-10-27T06:02:57Z
dc.date.available: 2014-10-27T06:02:57Z
dc.date.issued: 1997-08
dc.identifier.citation: Baliga, G., Jain, S., Sharma, A. (1997-08). Learning from multiple sources of inaccurate data. SIAM Journal on Computing 26 (4): 961-990. ScholarBank@NUS Repository.
dc.identifier.issn: 0097-5397
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/99325
dc.description.abstract: Most theoretical models of inductive inference make the idealized assumption that the data available to a learner come from a single, accurate source. The subject of inaccuracies in data emanating from a single source has been addressed by several authors. The present paper argues for a more realistic learning model in which data emanate from multiple sources, some or all of which may be inaccurate. Three kinds of inaccuracies are considered: spurious data (modeled as noisy texts), missing data (modeled as incomplete texts), and a mixture of spurious and missing data (modeled as imperfect texts). Motivated by this argument, the paper introduces and theoretically analyzes a number of inference criteria in which a learning machine is fed data from multiple sources, some of which may be infected with inaccuracies. The learning situation modeled is the identification in the limit of programs from graphs of computable functions. The main parameters of the investigation are: the kind of inaccuracy, the total number of data sources, the number of faulty data sources that produce data within an acceptable bound, and the bound on the number of errors allowed in the final hypothesis learned by the machine. Sufficient conditions are determined under which, for the same kind of inaccuracy, the same bound on the number of errors in the final hypothesis, and the same bound on the number of inaccuracies, learning from multiple texts, some of which may be inaccurate, is equivalent to learning from a single inaccurate text. The general problem of determining when learning from multiple inaccurate texts is a restriction over learning from a single inaccurate text turns out to be combinatorially very complex; significant partial results are provided for this problem.
Several results are also provided about conditions under which the detrimental effects of multiple texts can be overcome, either by allowing more errors in the final hypothesis or by reducing the number of inaccuracies in the texts. It is also shown that the usual hierarchies hold: allowing extra errors in the final program increases learning power, while allowing extra inaccuracies in the texts decreases it. Finally, it is demonstrated that, in the context of learning from multiple inaccurate texts, spurious data is better than missing data, which in turn is better than a mixture of spurious and missing data.
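As an informal illustration of the flavor of the model the abstract describes (not taken from the paper), the following Python sketch shows a toy limit learner fed data from three "texts" for a shift function f(x) = x + c, where one text injects spurious pairs in the sense of a noisy text. The function and learner names (`make_text`, `learn`) and the majority-vote strategy are illustrative assumptions, not the paper's constructions.

```python
import random

# Toy, hypothetical sketch (not from the paper): identification in the
# limit of a constant-shift function f(x) = x + c from several data
# streams ("texts"), one of which emits spurious pairs (a noisy text).

def make_text(c, spurious_rate=0.0, seed=0):
    """Yield (x, f(x)) pairs for f(x) = x + c; occasionally yield a
    spurious pair with probability spurious_rate."""
    rng = random.Random(seed)
    x = 0
    while True:
        if rng.random() < spurious_rate:
            yield (x, x + c + rng.randint(1, 5))  # spurious datum
        else:
            yield (x, x + c)                      # accurate datum
        x += 1

def learn(texts, rounds=50):
    """Majority-vote learner: pool one datum from each text per round
    and conjecture the shift value observed most often so far."""
    counts = {}
    hypothesis = None
    for _ in range(rounds):
        for t in texts:
            x, y = next(t)
            counts[y - x] = counts.get(y - x, 0) + 1
        hypothesis = max(counts, key=counts.get)  # current conjecture
    return hypothesis

# Three texts for f(x) = x + 7; the third text is 40% spurious.
texts = [make_text(7, 0.0, 1), make_text(7, 0.0, 2), make_text(7, 0.4, 3)]
print(learn(texts))  # prints 7: the learner stabilizes on the true shift
```

Because two of the three texts are accurate, the correct shift dominates the vote and the conjecture converges; with too many faulty sources the same learner can stabilize on a wrong hypothesis, which is the kind of trade-off the paper's parameters (number of sources, number of faulty sources, error bound) quantify.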
dc.source: Scopus
dc.subject: Inaccurate data
dc.subject: Inductive inference
dc.subject: Machine learning
dc.subject: Multiple sources
dc.type: Article
dc.contributor.department: INFORMATION SYSTEMS & COMPUTER SCIENCE
dc.description.sourcetitle: SIAM Journal on Computing
dc.description.volume: 26
dc.description.issue: 4
dc.description.page: 961-990
dc.description.coden: SMJCA
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.