Title: Convergence analysis of convex incremental neural networks
Authors: Chen, L.; Pung, H.K.
Keywords: Convergence rate; Feedforward neural networks; Generalization performance; Universal approximation
Issue Date: 2008
Citation: Chen, L., Pung, H.K. (2008). Convergence analysis of convex incremental neural networks. Annals of Mathematics and Artificial Intelligence 52 (1): 67-80. ScholarBank@NUS Repository. https://doi.org/10.1007/s10472-008-9097-2
Abstract: Recently, a convex incremental algorithm (CI-ELM) was proposed in Huang and Chen (Neurocomputing 70:3056-3062, 2007); it randomly chooses hidden neurons and then analytically determines the output weights connecting the hidden layer and the output layer. Although the hidden neurons are generated randomly, the network constructed by CI-ELM still satisfies the universal approximation property. This random approximation theory breaks through the limitation of most conventional theories by eliminating the need to tune hidden neurons. However, because of this randomness, some neurons contribute little to decreasing the residual error, which eventually increases the complexity and computational cost of the network; as a result, CI-ELM cannot give a precise convergence rate. Based on the results of Lee et al. (IEEE Trans Inf Theory 42(6):2118-2132, 1996), we first show the convergence rate of a maximum CI-ELM, and then systematically analyze the convergence rate of an enhanced CI-ELM. Unlike CI-ELM, these two algorithms choose their hidden neurons according to a maximum or optimality principle while keeping the same structure as CI-ELM. The proof process also demonstrates that our algorithms achieve smaller residual errors than CI-ELM. Because the proposed networks remove these "useless" neurons, they improve the efficiency of neural networks. Experimental results on benchmark regression problems support our conclusions. © 2008 Springer Science+Business Media B.V.
Source Title: Annals of Mathematics and Artificial Intelligence
URI: http://scholarbank.nus.edu.sg/handle/10635/39806
ISSN: 1012-2443
DOI: 10.1007/s10472-008-9097-2
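The abstract describes CI-ELM's incremental step: each new randomly generated hidden neuron g_n is blended into the current network output f_{n-1} by a convex combination f_n = (1 - β_n) f_{n-1} + β_n g_n, with β_n determined analytically to minimize the new residual. A minimal NumPy sketch of that update on a toy regression problem follows; the sigmoid activation, the 1-D target, and all variable names are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression target sampled on a grid (illustrative data only).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])

f = np.zeros_like(y)  # current network output f_{n-1}, starts at zero
errors = []           # residual RMSE after each added neuron
for n in range(50):
    # Randomly generated sigmoid hidden neuron: weights and bias are
    # drawn at random and never tuned, as in the random-neuron setting.
    w = rng.uniform(-1.0, 1.0, size=X.shape[1])
    b = rng.uniform(-1.0, 1.0)
    g = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # neuron output on the samples

    # Convex update f_n = (1 - beta) f_{n-1} + beta * g, where beta is the
    # analytic minimizer of ||e_{n-1} - beta * (g - f_{n-1})||^2.
    d = g - f
    denom = d @ d
    if denom < 1e-12:
        continue  # degenerate neuron; contributes nothing, skip it
    beta = (y - f) @ d / denom
    f = (1.0 - beta) * f + beta * g
    errors.append(np.sqrt(np.mean((y - f) ** 2)))

print(f"final RMSE after {len(errors)} neurons: {errors[-1]:.4f}")
```

Because β_n is the least-squares minimizer at each step, the residual norm is monotonically non-increasing; the abstract's point is that a purely random g_n may shrink it only negligibly, which is what the maximum/optimality neuron-selection rules are meant to avoid.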
Appears in Collections: Staff Publications