Title: Feedforward neural networks without orthonormalization
Authors: Chen, L. 
Pung, H.K. 
Long, F. 
Keywords: Approximation
Extreme learning machine (ELM)
Feedforward neural networks
Generalization performance
Kernel function
Orthonormal transformation
Issue Date: 2007
Citation: Chen, L., Pung, H.K., Long, F. (2007). Feedforward neural networks without orthonormalization. ICEIS 2007 - 9th International Conference on Enterprise Information Systems, Proceedings AIDSS: 420-423. ScholarBank@NUS Repository.
Abstract: Feedforward neural networks have attracted considerable attention in many fields, mainly due to their approximation capability. Recently, an effective noniterative training technique was proposed by Kaminski and Strumillo (Kaminski and Strumillo, 1997), in which kernel hidden neurons are transformed into an orthonormal set of neurons using Gram-Schmidt orthonormalization. After this transformation, network weights already calculated need not be recomputed, so orthonormal neural networks can reduce computing time. In this paper, we show that neural networks without the orthonormal transformation are equivalent to orthonormal neural networks, and hence conclude that such an orthonormalization transformation is unnecessary for neural networks. Moreover, we extend such orthonormal neural networks to networks with additive neurons. Experimental results on several benchmark regression applications further verify our conclusion.
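The equivalence claimed in the abstract can be illustrated numerically: least-squares output weights fitted directly on the hidden-layer outputs produce the same network outputs as weights fitted after Gram-Schmidt orthonormalization, because both are orthogonal projections onto the same column space. The sketch below assumes a Gaussian kernel hidden layer with illustrative parameters (not the paper's exact setup) and uses QR decomposition as a numerically stable stand-in for classical Gram-Schmidt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: approximate y = sin(x) from 200 samples
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel()

# Hypothetical Gaussian kernel hidden layer with M neurons
M = 20
centers = rng.uniform(-3, 3, size=(1, M))
H = np.exp(-(x - centers) ** 2)          # hidden-layer output matrix, N x M

# (1) Direct least-squares output weights, no orthonormalization
w_direct = np.linalg.lstsq(H, y, rcond=None)[0]
y_direct = H @ w_direct

# (2) Orthonormalize the hidden neurons first (QR = Gram-Schmidt here);
#     output weights are then simple projections of the target
Q, R = np.linalg.qr(H)                   # columns of Q are orthonormal
w_ortho = Q.T @ y
y_ortho = Q @ w_ortho

# Both networks produce identical outputs: span(Q) == span(H)
print(np.allclose(y_direct, y_ortho))    # True
```

Since orthonormalization changes only the basis of the hidden-layer span, not the span itself, the fitted outputs coincide; this is the sense in which the transformation adds no modelling power.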
Source Title: ICEIS 2007 - 9th International Conference on Enterprise Information Systems, Proceedings
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.