Please use this identifier to cite or link to this item: https://doi.org/10.1016/S0304-3975(00)00131-6
DC Field                        Value
dc.title                        On the learnability of recursively enumerable languages from good examples
dc.contributor.author           Jain, S.
dc.contributor.author           Lange, S.
dc.contributor.author           Nessel, J.
dc.date.accessioned             2013-07-04T08:18:18Z
dc.date.available               2013-07-04T08:18:18Z
dc.date.issued                  2001
dc.identifier.citation          Jain, S., Lange, S., Nessel, J. (2001). On the learnability of recursively enumerable languages from good examples. Theoretical Computer Science 261 (1) : 3-29. ScholarBank@NUS Repository. https://doi.org/10.1016/S0304-3975(00)00131-6
dc.identifier.issn              03043975
dc.identifier.uri               http://scholarbank.nus.edu.sg/handle/10635/41043
dc.description.abstract         The present paper investigates identification of indexed families L of recursively enumerable languages from good examples. We distinguish between class-preserving learning from good examples (the good examples have to be generated with respect to a hypothesis space having the same range as L) and class-comprising learning from good examples (the good examples have to be selected with respect to a hypothesis space comprising the range of L). A learner is required to learn a target language on every finite superset of the good examples for it. If the learner's first and only conjecture is correct, the underlying learning model is referred to as finite identification from good examples; if the learner makes a finite number of incorrect conjectures before always outputting a correct one, the model is referred to as limit identification from good examples. In the context of class-preserving learning, it is shown that the learning powers of finite and limit identification from good text examples coincide. Where class-comprising learning from good text examples is concerned, limit identification is strictly more powerful than finite identification. Furthermore, if learning from good informant examples is considered, limit identification is superior to finite identification in the class-preserving as well as in the class-comprising case. Finally, we relate the models of learning from good examples to one another as well as to the standard learning models in the context of Gold-style language learning. © 2001 Elsevier Science B.V. All rights reserved.
dc.description.uri              http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/S0304-3975(00)00131-6
dc.source                       Scopus
dc.subject                      Computational learning theory
dc.subject                      Good examples
dc.subject                      Inductive inference
dc.type                         Conference Paper
dc.contributor.department       COMPUTER SCIENCE
dc.description.doi              10.1016/S0304-3975(00)00131-6
dc.description.sourcetitle      Theoretical Computer Science
dc.description.volume           261
dc.description.issue            1
dc.description.page             3-29
dc.description.coden            TCSCD
dc.identifier.isiut             000169420100002
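
As a rough illustration of the learning model summarized in dc.description.abstract above, here is a minimal sketch of finite identification from good text examples on a toy indexed family. Nothing below is taken from the paper: the family L_i = {0, ..., i}, its good-example sets, and the learner are all invented for illustration.

    # Toy indexed family: L_i = {0, 1, ..., i}. The single good example for
    # L_i is i itself; any finite sample S with {i} <= S <= L_i has
    # max(S) = i, so a learner conjecturing max(S) identifies every L_i
    # with its first and only conjecture (finite identification).

    def language(i):
        """The i-th language of the (hypothetical) indexed family."""
        return set(range(i + 1))

    def good_examples(i):
        """Good text examples generated for the target language L_i."""
        return {i}

    def learner(sample):
        """Finite learner: a single conjecture that is never revised."""
        return max(sample)

    # The learner must succeed on every finite superset of the good
    # examples that is still contained in the target language.
    for i in range(10):
        for extra in [set(), {0}, set(range(i + 1))]:
            sample = good_examples(i) | (extra & language(i))
            assert learner(sample) == i

This only illustrates the basic success criterion; the class-comprising and informant variants compared in the abstract require more involved constructions.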
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.