|Title:||Pruned neural networks for regression|
|Authors:||Setiono, R.; Leow, W.K.|
|Source:||Setiono, R., Leow, W.K. (2000). Pruned neural networks for regression. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 1886 LNAI: 500-509. ScholarBank@NUS Repository.|
|Abstract:||Neural networks have been widely used as a tool for regression. They are capable of approximating any function, and they do not require any assumption about the distribution of the data. The most commonly used architectures for regression are feedforward neural networks with one or more hidden layers. In this paper, we present a network pruning algorithm which determines the number of units in the input and hidden layers of the networks. We compare the performance of the pruned networks to four regression methods, namely linear regression (LR), Naive Bayes (NB), k-nearest-neighbor (kNN), and the decision tree predictor M5. On the 32 publicly available data sets tested, the neural network method outperforms NB and kNN when prediction errors are computed as root mean squared errors. Under this metric, it also performs as well as LR and M5. Using the mean absolute error as the metric, the neural network method outperforms all four other regression methods. © Springer-Verlag Berlin Heidelberg 2000.|
|Source Title:||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Appears in Collections:||Staff Publications|
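The abstract describes pruning units from the hidden layer of a feedforward regression network. As a rough illustration of that idea (not the authors' actual algorithm, which is detailed in the paper itself), the sketch below trains a one-hidden-layer network on a toy regression task and then removes hidden units whose outgoing weight magnitudes are small, comparing RMSE before and after. All names, the architecture, and the pruning threshold are illustrative assumptions.

```python
# Illustrative sketch only: simple magnitude-based hidden-unit pruning for a
# one-hidden-layer regression network. This is NOT the pruning algorithm from
# the paper; it only demonstrates the general idea of removing hidden units.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus a little noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

H = 12                                   # hidden units before pruning
W1 = rng.normal(0, 1, size=(1, H))       # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(0, 1, size=H)            # hidden -> output weights
b2 = 0.0

def forward(X, W1, b1, W2, b2):
    hidden = np.tanh(X @ W1 + b1)        # hidden-layer activations
    return hidden, hidden @ W2 + b2      # activations and network output

# Plain batch gradient descent on the mean squared error.
lr = 0.05
for _ in range(2000):
    hidden, pred = forward(X, W1, b1, W2, b2)
    grad_out = 2 * (pred - y) / len(y)              # dLoss/dPred
    gW2 = hidden.T @ grad_out
    gb2 = grad_out.sum()
    back = np.outer(grad_out, W2) * (1 - hidden**2)  # backprop through tanh
    gW1 = X.T @ back
    gb1 = back.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def rmse(pred):
    return float(np.sqrt(np.mean((pred - y) ** 2)))

_, pred_full = forward(X, W1, b1, W2, b2)

# Prune: drop hidden units whose outgoing weight magnitude is small
# (0.1 is an arbitrary illustrative threshold).
keep = np.abs(W2) > 0.1
W1p, b1p, W2p = W1[:, keep], b1[keep], W2[keep]
_, pred_pruned = forward(X, W1p, b1p, W2p, b2)

print(f"hidden units: {H} -> {int(keep.sum())}")
print(f"RMSE full:   {rmse(pred_full):.4f}")
print(f"RMSE pruned: {rmse(pred_pruned):.4f}")
```

A pruned network that keeps its error close to the full network's is the desired outcome; the paper goes further by also pruning input units, which performs implicit feature selection.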