Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.ic.2007.10.005
DC Field: Value
dc.title: When unlearning helps
dc.contributor.author: Baliga, G.
dc.contributor.author: Case, J.
dc.contributor.author: Merkle, W.
dc.contributor.author: Stephan, F.
dc.contributor.author: Wiehagen, R.
dc.date.accessioned: 2014-10-28T02:49:49Z
dc.date.available: 2014-10-28T02:49:49Z
dc.date.issued: 2008-05
dc.identifier.citation: Baliga, G., Case, J., Merkle, W., Stephan, F., Wiehagen, R. (2008-05). When unlearning helps. Information and Computation 206 (5) : 694-709. ScholarBank@NUS Repository. https://doi.org/10.1016/j.ic.2007.10.005
dc.identifier.issn: 08905401
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/104477
dc.description.abstract: Overregularization seen in child language learning, for example in verb tense constructs, involves abandoning correct behaviours for incorrect ones and later reverting to correct behaviours. Quite a number of other child development phenomena also follow this U-shaped form of learning, unlearning and relearning. A decisive learner does not do this and, more generally, never abandons an hypothesis H for an inequivalent one only to later conjecture an hypothesis equivalent to H, where equivalence means semantical or behavioural equivalence. The first main result of the present paper entails that decisiveness is a real restriction on Gold's model of explanatory (or in the limit) learning of grammars for languages from positive data. This result also solves an open problem posed in 1986 by Osherson, Stob and Weinstein. Second-time decisive learners semantically conjecture each of their hypotheses for any language at most twice. By contrast, such learners are shown not to restrict Gold's model of learning. Non U-shaped learning liberalizes the requirement of decisiveness from being a restriction on all hypotheses output to the same restriction but only on correct hypotheses. The situation regarding learning power for non U-shaped learning is a little more complex than that for decisiveness, as explained below. Gold's original model for learning grammars from positive data, called EX-learning, requires, for success, syntactic convergence to a correct grammar. A slight variant, called BC-learning, requires only semantic convergence to a sequence of correct grammars that need not be syntactically identical to one another. The second main result says that non U-shaped learning does not restrict EX-learning. However, from an argument of Fulk, Jain and Osherson, non U-shaped learning does restrict BC-learning. The final section discusses the possible meaning of these results for cognitive science and indicates some avenues worthy of future investigation. © 2008 Elsevier Inc. All rights reserved.
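For readers less familiar with the terminology in the abstract, the learning criteria can be sketched in standard inductive-inference notation (a summary under the field's usual conventions, not quoted from the paper). Here T is a text for L (an enumeration of the members of L, possibly with pauses), T[n] is its length-n initial segment, M(T[n]) is the learner's conjectured grammar, and W_e is the language generated by grammar e; each criterion must hold for every text T for every language L that M learns.

% EX-learning: syntactic convergence to a single correct grammar.
\[
  M \text{ EX-learns } L \iff \exists e\,\exists n_0\,\forall n \ge n_0 :\ M(T[n]) = e \;\wedge\; W_e = L .
\]
% BC-learning: semantic convergence; later correct grammars may differ syntactically.
\[
  M \text{ BC-learns } L \iff \exists n_0\,\forall n \ge n_0 :\ W_{M(T[n])} = L .
\]
% Non U-shaped: a semantically correct conjecture is never abandoned.
\[
  \forall m\,\bigl[\, W_{M(T[m])} = L \;\Longrightarrow\; \forall n \ge m :\ W_{M(T[n])} = L \,\bigr] .
\]
% Decisive: no semantic return to any abandoned hypothesis, correct or not.
\[
  \neg\exists\, m < n < p :\ W_{M(T[m])} = W_{M(T[p])} \;\wedge\; W_{M(T[m])} \ne W_{M(T[n])} .
\]

On this reading, the paper's main results state that decisiveness strictly reduces EX-learning power, that the non U-shaped constraint does not reduce EX-learning power, and that it does reduce BC-learning power.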
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.ic.2007.10.005
dc.source: Scopus
dc.subject: 03D25
dc.subject: 03D80
dc.subject: 68T05
dc.subject: Cognitive science
dc.subject: Computational learning theory
dc.subject: Inductive inference of grammars for languages from positive data
dc.type: Article
dc.contributor.department: MATHEMATICS
dc.description.doi: 10.1016/j.ic.2007.10.005
dc.description.sourcetitle: Information and Computation
dc.description.volume: 206
dc.description.issue: 5
dc.description.page: 694-709
dc.description.coden: INFCE
dc.identifier.isiut: 000256002800011
Appears in Collections: Staff Publications
