Please use this identifier to cite or link to this item: https://doi.org/10.1016/S0304-3975(01)00084-6
Title: Mind change complexity of learning logic programs
Authors: Jain, S. 
Sharma, A.
Keywords: Computational learning theory
Inductive inference
Logic programs
Mind change complexity
Issue Date: 2002
Source: Jain, S., Sharma, A. (2002). Mind change complexity of learning logic programs. Theoretical Computer Science 284 (1) : 143-160. ScholarBank@NUS Repository. https://doi.org/10.1016/S0304-3975(01)00084-6
Abstract: The present paper motivates the study of mind change complexity for learning minimal models of length-bounded logic programs. It establishes ordinal mind change complexity bounds for learnability of these classes both from positive facts and from positive and negative facts. Building on Angluin's notion of finite thickness and Wright's work on finite elasticity, Shinohara defined the property of bounded finite thickness to give a sufficient condition for learnability of indexed families of computable languages from positive data. This paper shows that an effective version of Shinohara's notion of bounded finite thickness gives sufficient conditions for learnability with an ordinal mind change bound, both in the context of learnability from positive data and for learnability from complete (both positive and negative) data. Let ω be a notation for the first limit ordinal. Then it is shown that if a language defining framework yields a uniformly decidable family of languages and has effective bounded finite thickness, then for each natural number m > 0, the class of languages defined by formal systems of length ≤ m:
• is identifiable in the limit from positive data with an ordinal mind change bound of ω^m;
• is identifiable in the limit from both positive and negative data with an ordinal mind change bound of ω × m.
These sufficient conditions are employed to give ordinal mind change bounds for learnability of minimal models of various classes of length-bounded Prolog programs, including Shapiro's linear programs, Arimura and Shinohara's depth-bounded linearly covering programs, and Krishna Rao's depth-bounded linearly moded programs. It is also noted that the bound for learning from positive data is tight for the example classes considered. © 2002 Elsevier Science B.V. All rights reserved.
Source Title: Theoretical Computer Science
URI: http://scholarbank.nus.edu.sg/handle/10635/39937
ISSN: 0304-3975
DOI: 10.1016/S0304-3975(01)00084-6
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 9 (checked on Dec 6, 2017)
Web of Science™ Citations: 9 (checked on Nov 21, 2017)
Page view(s): 39 (checked on Dec 10, 2017)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.