Title: Improved bounds on the sample complexity of learning
Authors: Li, Yi; Long, Philip M.; Srinivasan, Aravind
Citation: Li, Yi; Long, Philip M.; Srinivasan, Aravind (2000). Improved bounds on the sample complexity of learning. Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms: 309-318. ScholarBank@NUS Repository.
Abstract: We present two improved bounds on the sample complexity of learning. First, we present a new general upper bound on the number of examples required to estimate all of the expectations of a set of random variables uniformly well. The quality of the estimates is measured using a variant of the relative error proposed by Haussler and Pollard. We also show that our bound is within a constant factor of the best possible. Our upper bound implies improved bounds on the sample complexity of learning according to Haussler's decision theoretic model. Next, we prove a lower bound on the sample complexity of learning according to the prediction model that is optimal to within a factor of 1+o(1).
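To make the abstract's setting concrete, here is a minimal illustrative sketch, not the paper's method: it empirically estimates the expectations of several random variables from i.i.d. samples and measures estimate quality with a Haussler/Pollard-style relative distance d_nu(r, s) = |r - s| / (nu + r + s). The Bernoulli family, the sample size, and the choice nu = 0.1 are all assumptions made for illustration; the paper's results bound how many samples suffice so that every such estimate is accurate simultaneously.

```python
import random

def d_nu(r, s, nu):
    # A relative distance in the style attributed to Haussler and Pollard
    # (exact notation assumed for illustration): small when r and s are
    # close relative to their magnitudes plus the smoothing term nu.
    return abs(r - s) / (nu + r + s)

def empirical_means(variables, m, rng):
    # Estimate E[X] for each [0,1]-valued random variable by averaging
    # m i.i.d. draws. Uniform-convergence bounds of the kind in the
    # abstract control how large m must be so that ALL estimates are
    # close in d_nu at once, not just each one individually.
    return [sum(draw(rng) for _ in range(m)) / m for draw in variables]

# Hypothetical family: Bernoulli random variables with known means,
# so the estimation error can be checked directly.
probs = [0.1, 0.5, 0.9]
variables = [lambda rng, p=p: 1.0 if rng.random() < p else 0.0 for p in probs]

rng = random.Random(0)
estimates = empirical_means(variables, 10000, rng)
errors = [d_nu(est, p, nu=0.1) for est, p in zip(estimates, probs)]
```

With 10,000 draws each estimate lands close to its true mean, so every d_nu error is small simultaneously; the paper's contribution is a tight characterization of how few samples suffice for such uniform guarantees.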
Source Title: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Appears in Collections: Staff Publications