Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/99384

Title: Prediction, Learning, Uniform Convergence, and Scale-Sensitive Dimensions
Authors: Bartlett, P.L.; Long, P.M.
Issue Date: Apr-1998
Citation: Bartlett, P.L., Long, P.M. (1998-04). Prediction, Learning, Uniform Convergence, and Scale-Sensitive Dimensions. Journal of Computer and System Sciences 56 (2): 174-190. ScholarBank@NUS Repository.
Abstract: We present a new general-purpose algorithm for learning classes of [0, 1]-valued functions in a generalization of the prediction model and prove a general upper bound on the expected absolute error of this algorithm in terms of a scale-sensitive generalization of the Vapnik dimension proposed by Alon, Ben-David, Cesa-Bianchi, and Haussler. We give lower bounds implying that our upper bounds cannot be improved by more than a constant factor in general. We apply this result, together with techniques due to Haussler and to Benedek and Itai, to obtain new upper bounds on packing numbers in terms of this scale-sensitive notion of dimension. Using a different technique, we obtain new bounds on packing numbers in terms of Kearns and Schapire's fat-shattering function. We show how to apply both packing bounds to obtain improved general bounds on the sample complexity of agnostic learning. For each ε > 0, we establish weaker sufficient and stronger necessary conditions for a class of [0, 1]-valued functions to be agnostically learnable to within ε and to be an ε-uniform Glivenko-Cantelli class. © 1998 Academic Press.
Source Title: Journal of Computer and System Sciences
URI: http://scholarbank.nus.edu.sg/handle/10635/99384
ISSN: 00220000
Appears in Collections: Staff Publications
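The fat-shattering function named in the abstract can be made concrete. As a minimal sketch (not code from the paper, and with an illustrative toy function class of my own choosing): a finite class F of [0, 1]-valued functions ε-shatters points x_1, ..., x_d if there are witness levels r_1, ..., r_d such that every 0/1 pattern is realized by some f in F, with f(x_i) ≥ r_i + ε where the pattern is 1 and f(x_i) ≤ r_i − ε where it is 0. A brute-force check of this definition:

```python
from itertools import product

def eps_shatters(functions, points, witnesses, eps):
    """Return True iff the finite class `functions` eps-shatters `points`
    with the given witness levels: every 0/1 pattern over the points must
    be realized by some f, above r_i + eps on the 1s, below r_i - eps on the 0s."""
    for pattern in product([0, 1], repeat=len(points)):
        realized = any(
            all(
                (f(x) >= r + eps) if b else (f(x) <= r - eps)
                for x, r, b in zip(points, witnesses, pattern)
            )
            for f in functions
        )
        if not realized:
            return False
    return True

# Toy class (an assumption, purely for illustration): clipped shifts on [0, 1].
fs = [lambda x, c=c: min(1.0, max(0.0, x + c)) for c in (-0.5, 0.0, 0.5)]

# One point at scale eps = 0.25 is shattered with witness 0.5 ...
print(eps_shatters(fs, [0.5], [0.5], 0.25))            # True
# ... but two points are not (pattern "high at 0.25, low at 0.75" fails),
# so this class's fat-shattering dimension at scale 0.25 is 1 here.
print(eps_shatters(fs, [0.25, 0.75], [0.5, 0.5], 0.25))  # False
```

The fat-shattering function fat_F(ε) of the abstract is then the largest d for which some d points and witnesses pass this check.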