Title: Continuous Naive Bayesian classifications
Citation: Vega, V.B., Bressan, S. (2003). Continuous Naive Bayesian classifications. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 2911: 279-289. ScholarBank@NUS Repository.
Abstract: The most common model of machine learning algorithms involves two life stages, namely the learning stage and the application stage. The cost of human expertise makes it difficult to label large data sets for training machine learning algorithms. In this paper, we propose to challenge this strict dichotomy in the life cycle while addressing the issue of labeling data. We discuss a learning paradigm called Continuous Learning. After an initial training phase based on human-labeled data, a Continuously Learning algorithm iteratively retrains itself on the results of its own previous application stage, without the benefit of any external feedback. The intuitive motivation and idea of this paradigm are explained, along with how it differs from other learning models. Finally, an empirical evaluation of Continuous Learning applied to the Naive Bayesian Classifier, for the classification of newsgroup articles from a well-known benchmark, is presented. © Springer-Verlag Berlin Heidelberg 2003.
Source Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Appears in Collections: Staff Publications
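The Continuous Learning loop described in the abstract — train on a human-labeled seed, then repeatedly relabel unlabeled data with the current model and retrain on those self-assigned labels, with no external feedback — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny multinomial Naive Bayes classifier, the function names, the toy documents, and the fixed round count are all assumptions made for the example.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model from (tokens, label) pairs."""
    class_counts = Counter()                 # docs per class (for the prior)
    word_counts = defaultdict(Counter)       # word frequencies per class
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Return the most probable class label under the trained model."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)   # log prior
        total_words = sum(word_counts[label].values())
        for w in tokens:
            # Laplace-smoothed log likelihood of each word given the class
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

def continuous_learn(labeled, unlabeled, rounds=3):
    """Continuous Learning sketch: after initial training on human-labeled
    data, iteratively retrain on the model's own predictions, without any
    external feedback (hypothetical fixed number of rounds)."""
    model = train_nb(labeled)
    for _ in range(rounds):
        # Application stage: label the unlabeled documents ourselves.
        predicted = [(toks, predict_nb(model, toks)) for toks in unlabeled]
        # Learning stage: retrain on the seed plus our own labels.
        model = train_nb(list(labeled) + predicted)
    return model

# Toy usage: two seed documents, then self-training on unlabeled text.
labeled = [(["ball", "goal", "match"], "sport"),
           (["vote", "law", "election"], "politics")]
unlabeled = [["goal", "ball"], ["election", "vote"]]
model = continuous_learn(labeled, unlabeled)
print(predict_nb(model, ["ball"]))
```

The key difference from supervised retraining is visible in `continuous_learn`: the labels used after the first round come from `predict_nb` itself, never from an oracle.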