ScholarBank@NUS
https://scholarbank.nus.edu.sg
The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.
Thu, 26 May 2022 08:58:52 GMT

- On conservative learning of recursively enumerable languages
https://scholarbank.nus.edu.sg/handle/10635/113938
Title: On conservative learning of recursively enumerable languages
Authors: Gao, Z.; Jain, S.; Stephan, F.
Abstract: Conservative partial learning is a variant of partial learning whereby the learner, on a text for a target language L, outputs one index e with L = We infinitely often and every further hypothesis d is output only finitely often and satisfies L ⊈ Wd. The present paper studies the learning strength of this notion, comparing it with other learnability criteria such as confident partial learning, explanatory learning, as well as behaviourally correct learning. It is further established that for classes comprising infinite sets, conservative partial learnability is in fact equivalent to explanatory learnability relative to the halting problem. © 2013 Springer-Verlag.
Tue, 01 Jan 2013 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/113938

- Index sets and universal numberings
https://scholarbank.nus.edu.sg/handle/10635/113936
Title: Index sets and universal numberings
Authors: Jain, S.; Stephan, F.; Teutsch, J.
Abstract: This paper studies the Turing degrees of various properties defined for universal numberings, that is, for numberings which list all partial-recursive functions. In particular, properties relating to the domains of the corresponding functions are investigated, such as the set DEQ of all pairs of indices of functions with the same domain, the set DMIN of all minimal indices of sets and the set DMIN1 of all indices which are minimal with respect to equality of the domain modulo finitely many differences. A partial solution to a question of Schaefer is obtained by showing that for every universal numbering with the Kolmogorov property, the set DMIN1 is Turing equivalent to the double jump of the halting problem. Furthermore, it is shown that the join of DEQ and the halting problem is Turing equivalent to the jump of the halting problem and that there are numberings for which DEQ itself has 1-generic Turing degree. © 2009 Springer Berlin Heidelberg.
Thu, 01 Jan 2009 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/113936

- Consistent partial identification
https://scholarbank.nus.edu.sg/handle/10635/113931
Title: Consistent partial identification
Authors: Jain, S.; Stephan, F.
Abstract: This study contrasts consistent partial identification with learning in the limit. Here partial identification means that the learner outputs an infinite sequence of conjectures in which one correct hypothesis occurs infinitely often and all other hypotheses occur only finitely often. Consistency means that every conjecture is correct on all the data seen so far. Learning in the limit means that, from some point on, the learner always outputs the same correct hypothesis. As the class of all total-recursive functions can be partially identified, the constraint of consistency has to be added to make a meaningful comparison to learning in the limit. For the version of consistency where the learner has to be defined and consistent on all inputs, it is shown that the power of the learning criterion depends on whether the function to be learnt is fed in canonical order or in arbitrary order. In the first case, consistent partial identification is incomparable to learning in the limit; in the second case, it is equivalent to consistent learning in the limit with arbitrarily fed input. Furthermore, the inference degrees of these criteria are investigated. For the case where the function is fed in canonical order, there are just two inference degrees: the trivial one, which contains all oracles of hyperimmune-free Turing degree, and the omniscient one, which contains all oracles of hyperimmune Turing degree. In the case that the function is fed in arbitrary order, the picture is more complicated and the omniscient inference degree contains exactly all oracles of high Turing degree.
Thu, 01 Jan 2009 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/113931

- Effectivity questions for Kleene's Recursion Theorem
https://scholarbank.nus.edu.sg/handle/10635/113933
Title: Effectivity questions for Kleene's Recursion Theorem
Authors: Case, J.; Jain, S.; Stephan, F.
Abstract: The present paper explores the interaction between two recursion-theoretic notions: program self-reference and learning partial recursive functions in the limit. Kleene's Recursion Theorem formalises the notion of program self-reference: It says that given a partial-recursive function ψp there is an index e such that the e-th function ψe is equal to the e-th slice of ψp. The paper studies constructive forms of Kleene's recursion theorem which are inspired by learning criteria from inductive inference and also relates these constructive forms to notions of learnability. For example, it is shown that a numbering can fail to satisfy Kleene's Recursion Theorem, yet that numbering can still be used as a hypothesis space when learning explanatorily an arbitrary learnable class. The paper provides a detailed picture of numberings separating various versions of Kleene's Recursion Theorem and learnability. © Springer-Verlag 2013.
Tue, 01 Jan 2013 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/113933

- Depth, highness and DNR degrees
https://scholarbank.nus.edu.sg/handle/10635/177530
Title: Depth, highness and DNR degrees
Authors: Moser P.; Stephan F.
Abstract: We study Bennett deep sequences in the context of recursion theory; in particular we investigate the notions of O(1)-deepK, O(1)-deepC, order-deepK and order-deepC sequences. Our main results are that Martin-Löf random sets are not order-deepC, that every many-one degree contains a set which is not O(1)-deepC, that O(1)-deepC sets and order-deepK sets have high or DNR Turing degree and that no K-trivial set is O(1)-deepK. © 2017 by the author(s).
Sun, 01 Jan 2017 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/177530

- Absolute versus probabilistic classification in a logical setting
https://scholarbank.nus.edu.sg/handle/10635/40685
Title: Absolute versus probabilistic classification in a logical setting
Authors: Jain, S.; Martin, E.; Stephan, F.
Abstract: Given a set W of logical structures, or possible worlds, a set D of logical formulas, or possible data, and a logical formula ψ, we consider the classification problem of determining in the limit and almost always correctly whether a possible world ℳ satisfies ψ, from a complete enumeration of the possible data that are true in ℳ. One interpretation of almost always correctly is that the classification might be wrong on a set of possible worlds of measure 0, w.r.t. some natural probability distribution over the set of possible worlds. Another interpretation is that the classifier is only required to classify a set W′ of possible worlds of measure 1, without having to produce any claim in the limit on the truth of ψ in the members of the complement of W′ in W. We compare these notions with absolute classification of W w.r.t. a formula that is almost always equivalent to ψ in W, hence investigate whether the set of possible worlds on which the classification is correct is definable. Finally, in the spirit of the kind of computations considered in Logic programming, we address the issue of computing almost correctly in the limit witnesses to leading existentially quantified variables in existential formulas. © Springer-Verlag Berlin Heidelberg 2005.
Sat, 01 Jan 2005 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/40685

- Automatic linear orders and trees
https://scholarbank.nus.edu.sg/handle/10635/40491
Title: Automatic linear orders and trees
Authors: Khoussainov, B.; Rubin, S.; Stephan, F.
Abstract: We investigate partial orders that are computable, in a precise sense, by finite automata. Our emphasis is on trees and linear orders. We study the relationship between automatic linear orders and trees in terms of rank functions that are related to Cantor-Bendixson rank. We prove that automatic linear orders and automatic trees have finite rank. As an application we provide a procedure for deciding the isomorphism problem for automatic ordinals. We also investigate the complexity and definability of infinite paths in automatic trees. In particular, we show that every infinite path in an automatic tree with countably many infinite paths is a regular language. © 2005 ACM.
Sat, 01 Jan 2005 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/40491

- Kolmogorov-Loveland randomness and stochasticity
https://scholarbank.nus.edu.sg/handle/10635/40418
Title: Kolmogorov-Loveland randomness and stochasticity
Authors: Merkle, W.; Miller, J.; Nies, A.; Reimann, J.; Stephan, F.
Abstract: One of the major open problems in the field of effective randomness is whether Martin-Löf randomness is the same as Kolmogorov-Loveland (or KL) randomness, where an infinite binary sequence is KL-random if there is no computable non-monotonic betting strategy that succeeds on the sequence in the sense of having an unbounded gain in the limit while betting successively on bits of the sequence. Our first main result states that every KL-random sequence has arbitrarily dense, easily extractable subsequences that are Martin-Löf random. A key lemma in the proof of this result is that for every effective split of a KL-random sequence at least one of the halves is Martin-Löf random. We show that this splitting property does not characterize KL-randomness by constructing a sequence that is not even computably random such that every effective split yields subsequences that are 2-random, hence are in particular Martin-Löf random. A sequence X is KL-stochastic if there is no computable non-monotonic selection rule that selects from X an infinite, biased sequence. Our second main result asserts that every KL-stochastic sequence has constructive dimension 1, or equivalently, a sequence cannot be KL-stochastic if it has infinitely many prefixes that can be compressed by a factor of α < 1 with respect to prefix-free Kolmogorov complexity. This improves on a result by Muchnik, who has shown a similar implication where the premise requires that such compressible prefixes can be found effectively. © Springer-Verlag Berlin Heidelberg 2005.
Sat, 01 Jan 2005 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/40418

- Prescribed learning of indexed families
https://scholarbank.nus.edu.sg/handle/10635/43100
Title: Prescribed learning of indexed families
Authors: Jain, S.; Stephan, F.; Nan, Y.
Abstract: This work extends studies of Angluin, Lange and Zeugmann on how learnability of a language class depends on the hypothesis space used by the learner. While previous studies mainly focused on the case where the learner chooses a particular hypothesis space, the goal of this work is to investigate the case where the learner has to cope with all possible hypothesis spaces. In that sense, the present work combines the approach of Angluin, Lange and Zeugmann with the question of how a learner can be synthesized. The investigation for the case of uniformly r.e. classes has been done by Jain, Stephan and Ye [6]. This paper investigates the case for indexed families and pays special attention to the notions of conservative and non U-shaped learning.
Tue, 01 Jan 2008 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/43100

- Universal recursively enumerable sets of strings
https://scholarbank.nus.edu.sg/handle/10635/104666
Title: Universal recursively enumerable sets of strings
Authors: Calude, C.S.; Nies, A.; Staiger, L.; Stephan, F.
Abstract: The present work clarifies the relation between domains of universal machines and r.e. prefix-free supersets of such sets. One such characterisation can be obtained in terms of the spectrum function sW(n), mapping n to the number of all strings of length n in the set W. An r.e. prefix-free set W is the superset of the domain of a universal machine iff there are two constants c, d such that sW(n)+sW(n+1)+...+sW(n+c) is between 2^(n-H(n)-d) and 2^(n-H(n)+d) for all n; W is the domain of a universal machine iff there is a constant c such that for each n the pair of n and sW(n)+sW(n+1)+...+sW(n+c) has prefix-free description complexity at least n. There exists a prefix-free r.e. superset W of a domain of a universal machine which is not the domain of a universal machine; still, the halting probability ΩW of such a set W is Martin-Löf random. Furthermore, it is investigated to which extent these results can be transferred to plain universal machines. © 2008 Springer-Verlag Berlin Heidelberg.
Tue, 01 Jan 2008 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104666
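The spectrum function in the abstract above admits a direct finite illustration. Below is a minimal Python sketch, assuming a small hand-chosen prefix-free set W; the set and its elements are hypothetical examples for illustration, not a construction from the paper:

```python
from collections import Counter

def spectrum(W):
    """Spectrum function s_W: maps a length n to the number of
    strings of length n in the set W (here W is finite)."""
    counts = Counter(len(w) for w in W)
    return lambda n: counts.get(n, 0)

# Hypothetical finite prefix-free set of binary strings.
W = {"0", "10", "110", "111"}
s_W = spectrum(W)
print([s_W(n) for n in range(1, 4)])  # [1, 1, 2]

# For a prefix-free set, the Kraft sum over the spectrum is at most 1:
kraft = sum(s_W(n) / 2**n for n in range(1, 4))
print(kraft)  # 1/2 + 1/4 + 2/8 = 1.0
```

For r.e. sets W the spectrum is only approximable from below, which is where the complexity conditions of the paper come in.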
- Finite state incompressible infinite sequences
https://scholarbank.nus.edu.sg/handle/10635/104565
Title: Finite state incompressible infinite sequences
Authors: Calude, C.S.; Staiger, L.; Stephan, F.
Abstract: In this paper we define finite state complexity of finite strings and infinite sequences and study connections of these complexity notions to randomness and normality. We show that finite state complexity depends not only on the codes for finite transducers, but also on how the codes are mapped to transducers. As a consequence we relate the finite state complexity to the plain (Kolmogorov) complexity, to the process complexity and to prefix-free complexity. Working with prefix-free sets of codes we characterise Martin-Löf random sequences in terms of finite state complexity: the weak power of finite transducers is compensated by the high complexity of enumeration of finite transducers. We also prove that every finite state incompressible sequence is normal, but the converse implication is not true. These results also show that our definition of finite state incompressibility is stronger than all other known forms of finite automata based incompressibility, in particular the notion related to finite automaton based betting systems introduced by Schnorr and Stimm [28]. The paper concludes with a discussion of open questions. © 2014 Springer International Publishing Switzerland.
Wed, 01 Jan 2014 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104565
- How powerful are integer-valued martingales?
https://scholarbank.nus.edu.sg/handle/10635/104661
Title: How powerful are integer-valued martingales?
Authors: Bienvenu, L.; Stephan, F.; Teutsch, J.
Abstract: In the theory of algorithmic randomness, one of the central notions is that of computable randomness. An infinite binary sequence X is computably random if no recursive martingale (strategy) can win an infinite amount of money by betting on the values of the bits of X. In the classical model, the martingales considered are real-valued, that is, the bets made by the martingale can be arbitrary real numbers. In this paper, we investigate a more restricted model, where only integer-valued martingales are considered, and we study the class of random sequences induced by this model. © 2010 Springer-Verlag Berlin Heidelberg.
Fri, 01 Jan 2010 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104661
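The betting model can be simulated in a few lines. This is a minimal sketch assuming the integer-valued convention that each stake is an integer bounded by the current capital; the particular strategy shown is a hypothetical illustration, not one from the paper:

```python
def run_bets(bits, bet):
    """Play an integer-valued betting strategy along a bit sequence.
    bet(history) returns an integer stake on the next bit being 1
    (a negative stake bets on 0); stakes are clamped so that the
    capital, which starts at 1, can never become negative."""
    capital, history = 1, []
    for x in bits:
        b = bet(tuple(history))
        b = max(-capital, min(capital, b))  # cannot risk more than we have
        capital += b if x == 1 else -b
        history.append(x)
    return capital

# Hypothetical strategy: always stake 1 unit on the next bit being 1.
print(run_bets([1, 1, 0, 1], lambda h: 1))  # 3
```

A real-valued martingale may instead stake any real fraction of its capital; the paper studies how much betting power is lost by the integrality restriction.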
- Partial learning of recursively enumerable languages
https://scholarbank.nus.edu.sg/handle/10635/104607
Title: Partial learning of recursively enumerable languages
Authors: Gao, Z.; Stephan, F.; Zilles, S.
Abstract: This paper studies several typical learning criteria in the model of partial learning of r.e. sets in the recursion-theoretic framework of inductive inference. Its main contribution is a complete picture of how the criteria of confidence, consistency and conservativeness in partial learning of r.e. sets separate, also in relation to basic criteria of learning in the limit. Thus this paper constitutes a substantial extension to prior work on partial learning. Further highlights of this work are very fruitful characterisations of some of the inference criteria studied, leading to interesting consequences about the structural properties of the collection of classes learnable under these criteria. In particular a class is consistently partially learnable iff it is a subclass of a uniformly recursive family. © 2013 Springer-Verlag.
Tue, 01 Jan 2013 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104607

- Splitting of learnable classes
https://scholarbank.nus.edu.sg/handle/10635/104634
Title: Splitting of learnable classes
Authors: Li, H.; Stephan, F.
Abstract: A class L is called mitotic if it admits a splitting L0, L1 such that L, L0, L1 are all equivalent with respect to a certain reducibility. Such a splitting might be called a symmetric splitting. In this paper we investigate the possibility of constructing a class which has a splitting and where any splitting of the class is a symmetric splitting. We call such a class a symmetric class. In particular we construct an incomplete symmetric BC-learnable class with respect to strong reducibility. We also introduce the notion of very strong reducibility and construct a complete symmetric BC-learnable class with respect to very strong reducibility. However, for EX-learnability, it is shown that there does not exist a symmetric class with respect to any weak, strong or very strong reducibility. © 2010 Springer-Verlag.
Fri, 01 Jan 2010 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104634

- Selection by recursively enumerable sets
https://scholarbank.nus.edu.sg/handle/10635/104624
Title: Selection by recursively enumerable sets
Authors: Merkle, W.; Stephan, F.; Teutsch, J.; Wang, W.; Yang, Y.
Abstract: For given sets A, B and Z of natural numbers where the members of Z are z0, z1,⋯ in ascending order, one says that A is selected from B by Z if A(i) = B(zi) for all i. Furthermore, say that A is selected from B if A is selected from B by some recursively enumerable set, and that A is selected from B in n steps iff there are sets E0, E1,⋯, En such that E0 = A, En = B, and Ei is selected from Ei+1 for each i < n. The following results on selections are obtained in the present paper. A set is ω-r.e. if and only if it can be selected from a recursive set in finitely many steps if and only if it can be selected from a recursive set in two steps. There is some Martin-Löf random set from which any ω-r.e. set can be selected in at most two steps, whereas no recursive set can be selected from a Martin-Löf random set in one step. Moreover, all sets selected from Chaitin's Ω in finitely many steps are Martin-Löf random. © Springer-Verlag Berlin Heidelberg 2013.
Tue, 01 Jan 2013 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104624
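The selection operation A(i) = B(z_i) is easy to state concretely. A small Python sketch, under the assumption that B is given as a 0/1-valued function and Z as a finite set; both are chosen here purely for illustration:

```python
def select(B, Z, n):
    """Return the first n bits of the set A selected from B by Z,
    i.e. A(i) = B(z_i) where z_0 < z_1 < ... enumerate Z."""
    zs = sorted(Z)[:n]
    return [B(z) for z in zs]

# Hypothetical example: B = the even numbers, Z = the squares.
B = lambda x: 1 if x % 2 == 0 else 0
Z = {i * i for i in range(10)}
print(select(B, Z, 5))  # the bits B(0), B(1), B(4), B(9), B(16)
```

In the paper Z is recursively enumerable rather than finite, so the positions z_i can only be enumerated, not computed outright; that is what makes iterated selection nontrivial.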
- Kolmogorov complexity and the recursion theorem
https://scholarbank.nus.edu.sg/handle/10635/104662
Title: Kolmogorov complexity and the recursion theorem
Authors: Kjos-Hanssen, B.; Merkle, W.; Stephan, F.
Abstract: We introduce the concepts of complex and autocomplex sets, where a set A is complex if there is a recursive, nondecreasing and unbounded lower bound on the Kolmogorov complexity of the prefixes (of the characteristic sequence) of A, and autocomplex is defined likewise with recursive replaced by A-recursive. We observe that exactly the autocomplex sets allow one to compute words of given Kolmogorov complexity and demonstrate that a set computes a diagonally nonrecursive (DNR) function if and only if the set is autocomplex. The class of sets that compute DNR functions is intensively studied in recursion theory and is known to coincide with the class of sets that compute fixed-point free functions. Consequently, the Recursion Theorem fails relative to a set if and only if the set is autocomplex, that is, we have a characterization of a fundamental concept of theoretical computer science in terms of Kolmogorov complexity. Moreover, we obtain that recursively enumerable sets are autocomplex if and only if they are complete, which yields an alternate proof of the well-known completeness criterion for recursively enumerable sets in terms of computing DNR functions. All results on autocomplex sets mentioned in the last paragraph extend to complex sets if the oracle computations are restricted to truth-table or weak truth-table computations, for example, a set is complex if and only if it wtt-computes a DNR function. Moreover, we obtain a set that is complex but does not compute a Martin-Löf random set, which gives a partial answer to the open problem whether all sets of positive constructive Hausdorff dimension compute Martin-Löf random sets. Furthermore, the following questions are addressed: given n, how difficult is it to find a word of length n that (a) has at least prefix-free Kolmogorov complexity n, (b) has at least plain Kolmogorov complexity n or (c) has the maximum possible prefix-free Kolmogorov complexity among all words of length n?
All these questions are investigated with respect to the oracles needed to carry out this task and it is shown that (a) is easier than (b) and (b) is easier than (c). In particular, we argue that for plain Kolmogorov complexity exactly the PA-complete sets compute incompressible words, while the class of sets that compute words of maximum complexity depends on the choice of the universal Turing machine, whereas for prefix-free Kolmogorov complexity exactly the complete sets allow one to compute words of maximum complexity. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104662

- On the parameterised complexity of learning patterns
https://scholarbank.nus.edu.sg/handle/10635/104602
Title: On the parameterised complexity of learning patterns
Authors: Stephan, F.; Yoshinaka, R.; Zeugmann, T.
Abstract: Angluin (1980) showed that there is a consistent and conservative learner for the class of non-erasing pattern languages; however, most of these learners are NP-hard. In the current work, the complexity of consistent polynomial-time learners for the class of non-erasing pattern languages is revisited, with the goal of closing one gap left by Angluin, namely the question of what happens if the learner is not required to output each time a consistent pattern of maximum possible length. It is shown that consistent learners are non-uniformly W[1]-hard inside the fixed-parameter hierarchy of Downey and Fellows (1999), and that there is also a W[1]-complete such learner. Only when one requires that the learner is in addition both conservative and class-preserving can one show that the learning task is NP-hard for certain alphabet sizes. © 2012 Springer-Verlag London Limited.
Sun, 01 Jan 2012 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104602

- Lowness for weakly 1-generic and Kurtz-random
https://scholarbank.nus.edu.sg/handle/10635/104585
Title: Lowness for weakly 1-generic and Kurtz-random
Authors: Stephan, F.; Yu, L.
Abstract: It is shown that a set is low for weakly 1-generic iff it has neither DNR nor hyperimmune Turing degree. As this notion is more general than being recursively traceable, this answers negatively a recent question on the characterization of these sets. Furthermore, it is shown that every set which is low for weakly 1-generic is also low for Kurtz-random. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104585

- Learnability of co-r.e. classes
https://scholarbank.nus.edu.sg/handle/10635/104582
Title: Learnability of co-r.e. classes
Authors: Gao, Z.; Stephan, F.
Abstract: The object of investigation in this paper is the learnability of co-recursively enumerable (co-r.e.) languages based on Gold's [11] original model of inductive inference. In particular, the following learning models are studied: finite learning, explanatory learning, vacillatory learning and behaviourally correct learning. The relative effects of imposing further learning constraints, such as conservativeness and prudence on these various learning models are also investigated. Moreover, an extension of Angluin's [1] characterisation of identifiable indexed families of recursive languages to families of conservatively learnable co-r.e. classes is presented. In this connection, the paper considers the learnability of indexed families of recursive languages, uniformly co-r.e. classes as well as other general classes of co-r.e. languages. A containment hierarchy of co-r.e. learning models is thereby established; while this hierarchy is quite similar to its r.e. analogue, there are some surprising collapses when using a co-r.e. hypothesis space; for example vacillatory learning collapses to explanatory learning. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104582

- Learning families of closed sets in matroids
https://scholarbank.nus.edu.sg/handle/10635/104583
Title: Learning families of closed sets in matroids
Authors: Gao, Z.; Stephan, F.; Wu, G.; Yamamoto, A.
Abstract: In this paper it is studied for which oracles A and which types of A-r.e. matroids the class of all A-r.e. closed sets in the matroid is learnable by an unrelativised learner. The learning criteria considered comprise in particular criteria more general than behaviourally correct learning, namely behaviourally correct learning from recursive texts, partial learning and reliably partial learning. For various natural classes of matroids and learning criteria, characterisations of learnability are obtained. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104583

- Initial segment complexities of randomness notions
https://scholarbank.nus.edu.sg/handle/10635/104498
Title: Initial segment complexities of randomness notions
Authors: Hölzl, R.; Kräling, T.; Stephan, F.; Wu, G.
Abstract: Schnorr famously proved that Martin-Löf randomness of a sequence A can be characterised via the complexity of A's initial segments. Nies, Stephan and Terwijn as well as independently Miller showed that Kolmogorov randomness coincides with Martin-Löf randomness relative to the halting problem K; that is, a set A is Martin-Löf random relative to K iff there is no function f such that for all m and all n > f(m) it holds that C(A(0)A(1)...A(n)) ≤ n - m. In the present work it is shown that characterisations of this style can also be given for other randomness criteria like strongly random, Kurtz random relative to K, PA-incomplete Martin-Löf random and strongly Kurtz random; here one does not just quantify over all functions f but over functions f of a specific form. For example, A is Martin-Löf random and PA-incomplete iff there is no A-recursive function f such that for all m and all n > f(m) it holds that C(A(0)A(1)...A(n)) ≤ n - m. The characterisation for strong randomness relates to functions which are the concatenation of an A-recursive function executed after a K-recursive function; this solves an open problem of Nies. In addition to this, characterisations of a similar style are also given for Demuth randomness and Schnorr randomness relative to K. Although the unrelativised versions of Kurtz randomness and Schnorr randomness do not admit such a characterisation in terms of plain Kolmogorov complexity, Bienvenu and Merkle gave one in terms of Kolmogorov complexity defined by computable machines. © IFIP International Federation for Information Processing 2010.
Fri, 01 Jan 2010 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104498

- Constructive dimension and weak truth-table degrees
https://scholarbank.nus.edu.sg/handle/10635/104547
Title: Constructive dimension and weak truth-table degrees
Authors: Bienvenu, L.; Doty, D.; Stephan, F.
Abstract: This paper examines the constructive Hausdorff and packing dimensions of weak truth-table degrees. The main result is that every infinite sequence S with constructive Hausdorff dimension dimH(S) and constructive packing dimension dimP(S) is weak truth-table equivalent to a sequence R with dimH(R) ≥ dimH(S)/dimP(S) - ε, for arbitrary ε > 0. Furthermore, if dimP(S) > 0, then dimP(R) ≥ 1 - ε. The reduction thus serves as a randomness extractor that increases the algorithmic randomness of S, as measured by constructive dimension. A number of applications of this result shed new light on the constructive dimensions of wtt degrees (and, by extension, Turing degrees). A lower bound of dimH(S)/dimP(S) is shown to hold for the wtt degree of any sequence S. A new proof is given of a previously-known zero-one law for the constructive packing dimension of wtt degrees. It is also shown that, for any regular sequence S (that is, dimH(S) = dimP(S)) such that dimH(S) > 0, the wtt degree of S has constructive Hausdorff and packing dimension equal to 1. Finally, it is shown that no single Turing reduction can be a universal constructive Hausdorff dimension extractor. © Springer-Verlag Berlin Heidelberg 2007.
Mon, 01 Jan 2007 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104547

- Confident and consistent partial learning of recursive functions
https://scholarbank.nus.edu.sg/handle/10635/104546
Title: Confident and consistent partial learning of recursive functions
Authors: Gao, Z.; Stephan, F.
Abstract: Partial learning is a criterion where the learner infinitely often outputs one correct conjecture while every other hypothesis is issued only finitely often. This paper addresses two variants of partial learning in the setting of inductive inference of functions: first, confident partial learning requires that the learner, also on those functions which it does not learn, singles out exactly one hypothesis which is output infinitely often; second, essentially class consistent partial learning is partial learning with the additional constraint that on the functions to be learnt, almost all hypotheses issued are consistent with all the data seen so far. The results of the present work are that confident partial learning is more general than explanatory learning, incomparable with behaviourally correct learning and closed under union; essentially class consistent partial learning is more general than behaviourally correct learning and incomparable with confident partial learning. Furthermore, it is investigated which oracles permit learning all recursive functions under these criteria: for confident partial learning, some non-high oracles are omniscient; for essentially class consistent partial learning, all PA-complete oracles and all oracles of hyperimmune Turing degree are omniscient. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104546

- Automata on ordinals and linear orders
https://scholarbank.nus.edu.sg/handle/10635/104537
Title: Automata on ordinals and linear orders
Authors: Schlicht, P.; Stephan, F.
Abstract: We investigate structures recognizable by α-automata with running time a limit ordinal α. The domain of such a structure consists of finite α-words with gaps. An α-automaton resembles a finite automaton but has a limit rule which maps the set of states which appear cofinally often before the limit to a limit state. We determine the suprema of the α-automatic ordinals and the ranks of α-automatic linear orders. The power of α-automata increases with every power of ω. © 2011 Springer-Verlag.
Sat, 01 Jan 2011 00:00:00 GMT
https://scholarbank.nus.edu.sg/handle/10635/104537

- Applications of Kolmogorov complexity to computable model theory
https://scholarbank.nus.edu.sg/handle/10635/102869
Title: Applications of Kolmogorov complexity to computable model theory
Authors: Khoussainov, B.; Semukhin, P.; Stephan, F.
Abstract: In this paper we answer the following well-known open question in computable model theory. Does there exist a computable, not ℵ₀-categorical, saturated structure with a unique computable isomorphism type? Our answer is affirmative and uses a construction based on Kolmogorov complexity. With a variation of this construction, we also provide an example of an ℵ₁-categorical but not ℵ₀-categorical saturated Σ⁰₁-structure with a unique computable isomorphism type. In addition, using the construction we give an example of an ℵ₁-categorical but not ℵ₀-categorical theory whose only non-computable model is the prime one. © 2007, Association for Symbolic Logic.
Sat, 01 Sep 2007 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1028692007-09-01T00:00:00Z
- Presentations of K-trivial reals and Kolmogorov complexityhttps://scholarbank.nus.edu.sg/handle/10635/40532Title: Presentations of K-trivial reals and Kolmogorov complexity
Authors: Stephan, F.; Wu, G.
Abstract: For a given real α ∈ {0,1}^∞, a presentation V of α is a prefix-free and recursively enumerable subset of {0,1}* such that α = ∑_{σ∈V} 2^{-|σ|}. So, α has a presentation iff α is a left-r.e. real. Let A be the class of all reals which have only computable presentations. Downey and LaForte proved that A has an incomputable member. Call α strongly Kurtz-random if there does not exist any recursive function f with K(α(0) ... α(f(n) - 1)) < f(n) - n for all n. It is shown that every α ∈ A is either computable or strongly Kurtz-random. In particular, all K-trivial members of A are computable, where α is K-trivial iff there is a c such that, for all n, K(α(0) ... α(n - 1)) < K(n) + c. Thus there is a natural and nontrivial Turing ideal of left-r.e. reals not containing any incomputable member of A. © Springer-Verlag Berlin Heidelberg 2005.
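The three definitions used in this entry, restated as displayed formulas (our transcription of the abstract's notation):

```latex
% Presentation of a left-r.e. real:
\[
  \alpha \;=\; \sum_{\sigma \in V} 2^{-|\sigma|},
  \qquad V \subseteq \{0,1\}^{*} \text{ prefix-free and r.e.}
\]
% Strongly Kurtz-random: no recursive f compresses alpha at its own rate:
\[
  \neg \exists \text{ recursive } f \;\forall n\;\;
  K\big(\alpha(0)\cdots\alpha(f(n)-1)\big) \;<\; f(n) - n .
\]
% K-trivial: initial segments are no more complex than their length:
\[
  \exists c \;\forall n\;\;
  K\big(\alpha(0)\cdots\alpha(n-1)\big) \;<\; K(n) + c .
\]
```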
Sat, 01 Jan 2005 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/405322005-01-01T00:00:00Z
- Immunity and hyperimmunity for sets of minimal indiceshttps://scholarbank.nus.edu.sg/handle/10635/103400Title: Immunity and hyperimmunity for sets of minimal indices
Authors: Stephan, F.; Teutsch, J.
Abstract: We extend Meyer's 1972 investigation of sets of minimal indices. Blum showed that minimal index sets are immune, and we show that they are also immune against high levels of the arithmetic hierarchy. We give optimal immunity results for sets of minimal indices with respect to the arithmetic hierarchy, and we illustrate with an intuitive example that immunity is not simply a refinement of arithmetic complexity. Of particular note here are the facts that there are three minimal index sets located in Π₃ - Σ₃ with distinct levels of immunity and that certain immunity properties depend on the choice of underlying acceptable numbering. We show that minimal index sets are never hyperimmune; however, they can be immune against the arithmetic sets. Lastly, we investigate Turing degrees for sets of random strings defined with respect to Bagchi's size-function s. © 2008 by University of Notre Dame.
Tue, 01 Jan 2008 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1034002008-01-01T00:00:00Z
- Learning from streamshttps://scholarbank.nus.edu.sg/handle/10635/43283Title: Learning from streams
Authors: Jain, S.; Stephan, F.; Ye, N.
Abstract: Learning from streams is a process in which a group of learners separately obtain information about the target to be learned, but they can communicate with each other in order to learn the target. We are interested in machine models for learning from streams and study their learning power (as measured by the collection of learnable classes). We study how the power of learning from streams depends on the two parameters m and n, where n is the number of learners which track a single stream of input each and m is the number of learners (among the n learners) which have to find, in the limit, the right description of the target. We study for which combinations m,n and m′,n′ the following inclusion holds: every class learnable from streams with parameters m,n is also learnable from streams with parameters m′,n′. For the learning of uniformly recursive classes, we get a full characterization which depends only on the ratio m/n; but for general classes the picture is more complicated. Most of the noninclusions in team learning carry over to noninclusions with the same parameters in the case of learning from streams; but only a few inclusions are preserved and some additional noninclusions hold. Besides this, we also relate learning from streams to various other closely related and well-studied forms of learning: iterative learning from text, learning from incomplete text and learning from noisy text. © 2009 Springer.
Thu, 01 Jan 2009 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432832009-01-01T00:00:00Z
- Robust learning of automatic classes of languageshttps://scholarbank.nus.edu.sg/handle/10635/43281Title: Robust learning of automatic classes of languages
Authors: Jain, S.; Martin, E.; Stephan, F.
Abstract: This paper adapts and investigates the paradigm of robust learning, originally defined in the inductive inference literature for classes of recursive functions, to learning languages from positive data. Robustness is a very desirable property, as it captures a form of invariance of learnability under admissible transformations on the object of study. The classes of languages of interest are automatic - a formal concept that captures the notion of being recognisable by a finite automaton. A class of first-order definable operators - called translators - is introduced as natural transformations that preserve automaticity of languages in a given class and the inclusion relations between languages in the class. For many learning criteria, we characterise the classes of languages all of whose translations are learnable under that criterion. The learning criteria have been chosen from the literature on both explanatory learning from positive data and query learning, and include consistent and conservative learning, strong-monotonic learning, strong-monotonic consistent learning, finite learning, learning from subset queries, learning from superset queries, and learning from membership queries. © 2011 Springer-Verlag.
Sat, 01 Jan 2011 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432812011-01-01T00:00:00Z
- Uncountable automatic classes and learninghttps://scholarbank.nus.edu.sg/handle/10635/43282Title: Uncountable automatic classes and learning
Authors: Jain, S.; Luo, Q.; Semukhin, P.; Stephan, F.
Abstract: In this paper we consider uncountable classes recognizable by ω-automata and investigate suitable learning paradigms for them. In particular, the counterparts of explanatory, vacillatory and behaviourally correct learning are introduced for this setting. Here the learner reads in parallel the data of a text for a language L from the class plus an ω-index α and outputs a sequence of ω-automata such that all but finitely many of these ω-automata accept the index α iff α is an index for L. It is shown that any class is behaviourally correct learnable if and only if it satisfies Angluin's tell-tale condition. For explanatory learning, such a result needs that a suitable indexing of the class is chosen. On the one hand, every class satisfying Angluin's tell-tale condition is vacillatory learnable in every indexing; on the other hand, there is a fixed class such that the level of the class in the hierarchy of vacillatory learning depends on the indexing of the class chosen. We also consider a notion of blind learning. On the one hand, a class is blind explanatory (vacillatory) learnable if and only if it satisfies Angluin's tell-tale condition and is countable; on the other hand, for behaviourally correct learning there is no difference between the blind and non-blind version. This work establishes a bridge between automata theory and inductive inference (learning theory). © 2009 Springer.
Thu, 01 Jan 2009 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432822009-01-01T00:00:00Z
- Closed left-r.e. setshttps://scholarbank.nus.edu.sg/handle/10635/43289Title: Closed left-r.e. sets
Authors: Jain, S.; Stephan, F.; Teutsch, J.
Abstract: A set is called r-closed left-r.e. iff every set r-reducible to it is also a left-r.e. set. It is shown that some but not all left-r.e. cohesive sets are many-one closed left-r.e. sets. Ascending reductions are many-one reductions via an ascending function; left-r.e. cohesive sets are also ascending closed left-r.e. sets. Furthermore, it is shown that there is a weakly 1-generic many-one closed left-r.e. set. © 2011 Springer-Verlag.
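The central definition of this entry, restated as a formula (our transcription; r ranges over a reducibility notion such as many-one or ascending reducibility):

```latex
% A is r-closed left-r.e. iff everything r-below A is left-r.e.:
\[
  A \text{ is } r\text{-closed left-r.e.} \;\iff\;
  \forall B \,\big[\, B \leq_{r} A \;\Rightarrow\; B \text{ is left-r.e.} \,\big].
\]
```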
Sat, 01 Jan 2011 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432892011-01-01T00:00:00Z
- Input-dependence in function-learninghttps://scholarbank.nus.edu.sg/handle/10635/43292Title: Input-dependence in function-learning
Authors: Jain, S.; Martin, E.; Stephan, F.
Abstract: In the standard literature on inductive inference, a learner sees as input the course of values of the function to be learned. In the present work, it is investigated how reasonable this choice is and how sensitive the model is with respect to variations like the overgraph or undergraph of the function. Several implications and separations are shown and for the basic notions, a complete picture is obtained. Furthermore, relations to oracles, additional information and teams are explored. © Springer-Verlag Berlin Heidelberg 2007.
Mon, 01 Jan 2007 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432922007-01-01T00:00:00Z
- Numberings optimal for learninghttps://scholarbank.nus.edu.sg/handle/10635/43290Title: Numberings optimal for learning
Authors: Jain, S.; Stephan, F.
Abstract: This paper extends previous studies on learnability in non-acceptable numberings by considering the question of which numberings are optimal for which criteria, that is, for which numberings it holds that one can learn every learnable class using the given numbering as hypothesis space. Furthermore, an effective version of optimality is studied as well. It is shown that the effectively optimal numberings for finite learning are just the acceptable numberings. In contrast to this, there are non-acceptable numberings [3] which are optimal for finite learning and effectively optimal for explanatory, vacillatory and behaviourally correct learning. The numberings effectively optimal for explanatory learning are the K-acceptable numberings. A similar characterization is obtained for the numberings which are effectively optimal for vacillatory learning. Furthermore, it is studied which numberings are optimal for one and not for another criterion: among the criteria of finite, explanatory, vacillatory and behaviourally correct learning all separations can be obtained; however every numbering which is optimal for explanatory learning is also optimal for consistent learning. © 2008 Springer-Verlag Berlin Heidelberg.
Tue, 01 Jan 2008 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432902008-01-01T00:00:00Z
- Mitotic classeshttps://scholarbank.nus.edu.sg/handle/10635/43293Title: Mitotic classes
Authors: Jain, S.; Stephan, F.
Abstract: For the natural notion of splitting classes into two disjoint subclasses via a recursive classifier working on texts, the question is addressed of how these splittings can look in the case of learnable classes. Here the strength of the classes is compared using the strong and weak reducibility from intrinsic complexity. It is shown that, for explanatorily learnable classes, the complete classes are also mitotic with respect to weak and strong reducibility, respectively. But there is a weak complete class which cannot be split into two classes which are of the same complexity with respect to strong reducibility. It is shown that for classes complete for behaviourally correct learning, one half of each splitting is complete for this learning notion as well. Furthermore, it is shown that explanatorily learnable and recursively enumerable classes always have a splitting into two incomparable classes; this gives an inductive inference counterpart of the Sacks Splitting Theorem from recursion theory. © Springer-Verlag Berlin Heidelberg 2007.
Mon, 01 Jan 2007 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432932007-01-01T00:00:00Z
- Enlarging learnable classeshttps://scholarbank.nus.edu.sg/handle/10635/43287Title: Enlarging learnable classes
Authors: Jain, S.; Kötzing, T.; Stephan, F.
Abstract: An early result in inductive inference shows that the class of Ex-learnable sets is not closed under unions. In this paper we are interested in the following question: For what classes of functions is the union with an arbitrary Ex-learnable class again Ex-learnable, either effectively (in an index for a learner of an Ex-learnable class) or non-effectively? We show that the effective case and the non-effective case separate, and we give a sufficient criterion for the effective case. Furthermore, we extend our notions to considering unions with classes of single functions, as well as to other learning criteria, such as finite learning and behaviorally correct learning. In addition, we consider the possibility of (effectively) extending learners to learn (infinitely) more functions. It is known that all Ex-learners learning a dense set of functions can be effectively extended to learn infinitely more. It was open whether the learners learning a non-dense set of functions can be similarly extended. We show that this is not possible, but we give an alternative split of all possible learners into two sets such that, for each of the sets, all learners from that set can be effectively extended. We analyze similar concepts also for other learning criteria. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432872012-01-01T00:00:00Z
- Invertible classeshttps://scholarbank.nus.edu.sg/handle/10635/43284Title: Invertible classes
Authors: Jain, S.; Nessel, J.; Stephan, F.
Abstract: This paper considers when one can invert general recursive operators which map a class of functions ℱ to ℱ. In this regard, we study four different notions of inversion. We additionally consider enumeration of operators which cover all general recursive operators which map ℱ to ℱ in the sense that for every general recursive operator ψ mapping ℱ to ℱ, there is a general recursive operator in the enumerated sequence which behaves the same way as ψ on ℱ. Three different possible types of enumeration are studied. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432842006-01-01T00:00:00Z
- Some recent results in U-shaped learninghttps://scholarbank.nus.edu.sg/handle/10635/43285Title: Some recent results in U-shaped learning
Authors: Jain, S.; Stephan, F.
Abstract: U-shaped learning deals with a learner first having the correct hypothesis, then changing it to an incorrect hypothesis and then relearning the correct hypothesis. This phenomenon has been observed by psychologists in various studies of child development. In this survey talk, we will discuss some recent results regarding U-shaped learning and related criteria. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432852006-01-01T00:00:00Z
- Memory-limited U-shaped learninghttps://scholarbank.nus.edu.sg/handle/10635/43286Title: Memory-limited U-shaped learning
Authors: Carlucci, L.; Case, J.; Jain, S.; Stephan, F.
Abstract: U-shaped learning is a learning behaviour in which the learner first learns something, then unlearns it and finally relearns it. Such a behaviour, observed by psychologists, for example, in the learning of past-tenses of English verbs, has been widely discussed among psychologists and cognitive scientists as a fundamental example of the non-monotonicity of learning. Previous theory literature has studied whether or not U-shaped learning, in the context of Gold's formal model of learning languages from positive data, is necessary for learning some tasks. It is clear that human learning involves memory limitations. In the present paper we consider, then, this question of the necessity of U-shaped learning for some learning models featuring memory limitations. Our results show that the question of the necessity of U-shaped learning in this memory-limited setting depends on delicate tradeoffs between the learner's ability to remember its own previous conjecture, to store some values in its long-term memory, to make queries about whether or not items occur in previously seen data and on the learner's choice of hypothesis space. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432862006-01-01T00:00:00Z
- Automatic learners with feedback querieshttps://scholarbank.nus.edu.sg/handle/10635/43206Title: Automatic learners with feedback queries
Authors: Case, J.; Jain, S.; Ong, Y.S.; Semukhin, P.; Stephan, F.
Abstract: Automatic classes are classes of languages for which a finite automaton can decide whether a given element is in a set given by its index. The present work studies the learnability of automatic families by automatic learners which, in each round, output a hypothesis and update a long term memory, depending on the input datum, via an automatic function, that is, via a function whose graph is recognised by a finite automaton. Many variants of automatic learners are investigated: where the long term memory is restricted to be the just prior hypothesis whenever this exists, cannot be of size larger than the size of the longest example or has to consist of a constant number of examples seen so far. Furthermore, learnability is also studied with respect to queries which reveal information about past data or past computation history; the number of queries per round is bounded by a constant. These models are generalisations of the model of feedback queries, given by Lange, Wiehagen and Zeugmann. © 2011 Springer-Verlag.
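As a minimal illustration of the automatic-class setting shared by several entries here, the following sketch (the toy family, names and padding convention are ours, not from the paper) shows a family in which a single finite-state pass over the convolution of index and word decides membership: the language with index a is the set of binary strings extending a.

```python
# Toy sketch of an "automatic family" (our construction, assumed for
# illustration): L_a = { x in {0,1}* : x has prefix a }. One finite
# automaton, reading index a and word x together, decides "x in L_a?".

PAD = "#"  # padding symbol for the shorter of the two strings

def convolution(a, x):
    """Pair up index and word position by position, padding the shorter one."""
    n = max(len(a), len(x))
    return [(a[i] if i < len(a) else PAD,
             x[i] if i < len(x) else PAD) for i in range(n)]

def member(a, x):
    """Finite-state scan of the convolution: accept iff x extends a."""
    state = "matching"  # index symbols are still being checked
    for s, t in convolution(a, x):
        if state == "matching":
            if s == PAD:
                state = "done"   # index exhausted; the rest of x is free
            elif s != t:
                return False     # mismatch, or x ended before the index did
        # in state "done" every further pair is accepted
    return True
```

The point of the sketch is that `member` carries only a constant amount of state across the scan, which is exactly the automaticity requirement; the learners studied in the entry above are additionally constrained to update their long-term memory by such automatic functions.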
Sat, 01 Jan 2011 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432062011-01-01T00:00:00Z
- Behavioural approximations for restricted linear differential hybrid automatahttps://scholarbank.nus.edu.sg/handle/10635/43256Title: Behavioural approximations for restricted linear differential hybrid automata
Authors: Agrawal, M.; Stephan, F.; Thiagarajan, P.S.; Yang, S.
Abstract: We show the regularity of the discrete time behaviour of hybrid automata in which the rates of continuous variables are governed by linear differential operators in a diagonal form and in which the values of the continuous variables can be observed only with finite precision. We do not demand resetting of the values of the continuous variables during mode changes. We can cope with polynomial guards and we can tolerate bounded delays both in sampling the values of the continuous variables and in effecting changes in their rates required by mode switchings. We also show that if the rates are governed by diagonalizable linear differential operators with rational eigenvalues and there is no delay in effecting rate changes, the discrete time behaviour of the hybrid automaton is recursive. However, the control state reachability problem in this setting is undecidable. © Springer-Verlag Berlin Heidelberg 2006.
Sun, 01 Jan 2006 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432562006-01-01T00:00:00Z
- Automatic functions, linear time and learninghttps://scholarbank.nus.edu.sg/handle/10635/43231Title: Automatic functions, linear time and learning
Authors: Case, J.; Jain, S.; Seah, S.; Stephan, F.
Abstract: The present work determines the exact nature of linear time computable notions which characterise automatic functions (those whose graphs are recognised by a finite automaton). The paper also determines which type of linear time notions permit full learnability for learning in the limit of automatic classes (families of languages which are uniformly recognised by a finite automaton). In particular it is shown that a function is automatic iff there is a one-tape Turing machine with a left end which computes the function in linear time where the input before the computation and the output after the computation both start at the left end. It is known that learners realised as automatic update functions are restrictive for learning. In the present work it is shown that one can overcome the problem by providing work-tapes additional to a resource-bounded base tape while keeping the update-time to be linear in the length of the largest datum seen so far. In this model, one additional such worktape provides additional learning power over the automatic learner model and the two-work-tape model gives full learning power. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432312012-01-01T00:00:00Z
- Automatic learning of subclasses of pattern languageshttps://scholarbank.nus.edu.sg/handle/10635/43241Title: Automatic learning of subclasses of pattern languages
Authors: Case, J.; Jain, S.; Le, T.D.; Ong, Y.S.; Semukhin, P.; Stephan, F.
Abstract: Automatic classes are classes of languages for which a finite automaton can decide the membership question for the languages in the class, in a uniform way, given an index for the language. For an alphabet size of at least 4, every automatic class of erasing pattern languages is contained, for some constant n, in the class of all languages generated by patterns which contain (1) every variable only once and (2) at most n symbols after the first occurrence of a variable. It is shown that such a class is automatically learnable using a learner with long-term memory bounded by the length of the first example seen. The study is extended to show the learnability of related classes such as the class of unions of two pattern languages of the above type. © 2011 Springer-Verlag.
Sat, 01 Jan 2011 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432412011-01-01T00:00:00Z
- On the data consumption benefits of accepting increased uncertaintyhttps://scholarbank.nus.edu.sg/handle/10635/40811Title: On the data consumption benefits of accepting increased uncertainty
Authors: Martin, E.; Sharma, A.; Stephan, F.
Abstract: In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret 'desirable' as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds - for hypotheses and for data - is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability. © Springer-Verlag Berlin Heidelberg 2004.
Thu, 01 Jan 2004 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/408112004-01-01T00:00:00Z
- On the amount of nonconstructivity in learning formal languages from positive datahttps://scholarbank.nus.edu.sg/handle/10635/43252Title: On the amount of nonconstructivity in learning formal languages from positive data
Authors: Jain, S.; Stephan, F.; Zeugmann, T.
Abstract: Nonconstructive computations by various types of machines and automata have been considered by, e.g., Karp and Lipton [18] and Freivalds [9, 10]. They make it possible to view more complicated algorithms from the viewpoint of more primitive computational devices. The amount of nonconstructivity is a quantitative characterization of the distance between types of computational devices with respect to solving a specific problem. This paper studies the amount of nonconstructivity needed to learn classes of formal languages from positive data. Different learning types are compared with respect to the amount of nonconstructivity needed to learn indexable classes and recursively enumerable classes, respectively, of formal languages from positive data. Matching upper and lower bounds for the amount of nonconstructivity needed are shown. © 2012 Springer-Verlag.
Sun, 01 Jan 2012 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/432522012-01-01T00:00:00Z
- Variations on U-shaped learninghttps://scholarbank.nus.edu.sg/handle/10635/41123Title: Variations on U-shaped learning
Authors: Carlucci, L.; Jain, S.; Kinber, E.; Stephan, F.
Abstract: The paper deals with the following problem: is returning to wrong conjectures necessary to achieve full power of learning? Returning to wrong conjectures complements the paradigm of U-shaped learning [2, 6, 8, 20, 24] when a learner returns to old correct conjectures. We explore our problem for classical models of learning in the limit: TxtEx-learning - when a learner stabilizes on a correct conjecture, and TxtBc-learning - when a learner stabilizes on a sequence of grammars representing the target concept. In all cases, we show that, surprisingly, returning to wrong conjectures is sometimes necessary to achieve full power of learning. On the other hand it is not necessary to return to old "overgeneralizing" conjectures containing elements not belonging to the target language. We also consider our problem in the context of so-called vacillatory learning when a learner stabilizes to a finite number of correct grammars. In this case we show that both returning to old wrong conjectures and returning to old "overgeneralizing" conjectures is necessary for full learning power. We also show that, surprisingly, learners consistent with the input seen so far can be made decisive [2, 21] - they do not have to return to any old conjectures - wrong or right. © Springer-Verlag Berlin Heidelberg 2005.
Sat, 01 Jan 2005 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/411232005-01-01T00:00:00Z
- Identifying clusters from positive datahttps://scholarbank.nus.edu.sg/handle/10635/41132Title: Identifying clusters from positive data
Authors: Case, J.; Jain, S.; Martin, E.; Sharma, A.; Stephan, F.
Abstract: The relationship between natural topological properties and clusterability has been investigated. The clusterability of a class does not depend on which numbering of the class is used as a hypothesis space for the clusterer. The Turing degrees of maximal oracles which permit solving all computationally intractable aspects of clustering are determined. It is shown that some oracles are trivial in the sense that they do not provide any useful information for clustering at all.
Thu, 01 Jan 2004 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/411322004-01-01T00:00:00Z
- Non U-shaped vacillatory and team learninghttps://scholarbank.nus.edu.sg/handle/10635/39987Title: Non U-shaped vacillatory and team learning
Authors: Carlucci, L.; Case, J.; Jain, S.; Stephan, F.
Abstract: U-shaped learning behaviour in cognitive development involves learning, unlearning and relearning. It occurs, for example, in learning irregular verbs. The prior cognitive science literature is occupied with how humans do it, for example, general rules versus tables of exceptions. This paper is mostly concerned with whether U-shaped learning behaviour may be necessary in the abstract mathematical setting of inductive inference, that is, in the computational learning theory following the framework of Gold. All notions considered are learning from text, that is, from positive data. Previous work showed that U-shaped learning behaviour is necessary for behaviourally correct learning but not for syntactically convergent learning in the limit (= explanatory learning). The present paper establishes the necessity for the whole hierarchy of classes of vacillatory learning where a behaviourally correct learner has to satisfy the additional constraint that it vacillates in the limit between at most k grammars, where k ≥ 1. Non U-shaped vacillatory learning is shown to be restrictive: Every non U-shaped vacillatorily learnable class is already learnable in the limit. Furthermore, if vacillatory learning with the parameter k = 2 is possible then non U-shaped behaviourally correct learning is also possible. But for k = 3, surprisingly, there is a class witnessing that this implication fails. © Springer-Verlag Berlin Heidelberg 2005.
Sat, 01 Jan 2005 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/399872005-01-01T00:00:00Z
- The complexity of verbal languages over groupshttps://scholarbank.nus.edu.sg/handle/10635/155277Title: The complexity of verbal languages over groups
Authors: Jain, S.; Miasnikov, A.; Stephan, F.
Abstract: This paper investigates the complexity of verbal languages and pattern languages of automatic groups in terms of the Chomsky hierarchy (regular languages, context-free languages, context-sensitive languages, recursively enumerable languages). For non-Abelian free groups with finitely many generators, the following is shown: the level of a language in the Chomsky hierarchy is independent of the automatic representation; the context-free verbal languages are only the full group and the language of the empty word; the regular pattern languages are either a singleton or the full group; verbal languages are, however, indexed languages by a result of Ciobanu, Diekert and Elder. For some groups, it depends on the exact automatic representation chosen whether a pattern language is regular, context-free or context-sensitive. © 2018 Elsevier Inc.
Wed, 01 May 2019 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1552772019-05-01T00:00:00Z
- The dot-depth and the polynomial hierarchy correspond on the delta levelshttps://scholarbank.nus.edu.sg/handle/10635/39650Title: The dot-depth and the polynomial hierarchy correspond on the delta levels
Authors: Borchert, B.; Lange, K.-J.; Stephan, F.; Tesson, P.; Thérien, D.
Abstract: The leaf-language mechanism associates a complexity class to a class of regular languages. It is well-known that the Σκ- and Πκ-levels of the dot-depth hierarchy and the polynomial hierarchy correspond in this formalism. We extend this correspondence to the Δκ-levels of these hierarchies: Leaf^P(Δκ^L) = Δκ^P. These results are obtained in part by relating operators on varieties of languages to operators on the corresponding complexity classes. © Springer-Verlag Berlin Heidelberg 2004.
Thu, 01 Jan 2004 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/396502004-01-01T00:00:00Z
- Automatic structures - Recent results and open questionshttps://scholarbank.nus.edu.sg/handle/10635/180907Title: Automatic structures - Recent results and open questions
Authors: Stephan, F.
Abstract: Regular languages are languages recognised by finite automata; automatic structures are a generalisation of regular languages where one also uses automatic relations (which are relations recognised by synchronous finite automata) and automatic functions (which are functions whose graph is an automatic relation). Functions and relations first-order definable from other automatic functions and relations are again automatic. Automatic functions coincide with the functions computed by position-faithful one-tape Turing machines in linear time. This survey addresses recent results and open questions on topics related to automatic structures: How difficult is the isomorphism problem for various types of automatic structures? Which groups are automatic? When are automatic groups Abelian or orderable? How can one overcome some of the limitations to represent rings and fields by weakening the automaticity requirements of a structure? © Published under licence by IOP Publishing Ltd.
Thu, 01 Jan 2015 00:00:00 GMThttps://scholarbank.nus.edu.sg/handle/10635/1809072015-01-01T00:00:00Z