Please use this identifier to cite or link to this item:
Title: Incremental extreme learning machine with fully complex hidden nodes
Authors: Huang, G.-B.; Li, M.-B.; Chen, L.; Siew, C.-K.
Keywords: Complex activation function
Issue Date: 2008
Citation: Huang, G.-B., Li, M.-B., Chen, L., Siew, C.-K. (2008). Incremental extreme learning machine with fully complex hidden nodes. Neurocomputing 71 (4-6): 576-583. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2007.07.025
Abstract: Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks 17(4) (2006) 879-892] recently proposed an incremental extreme learning machine (I-ELM), which adds randomly generated hidden nodes one at a time and analytically determines the output weights. Although the hidden nodes are generated randomly, the network constructed by I-ELM remains a universal approximator. This paper extends I-ELM from the real domain to the complex domain. We show that, as long as the hidden-layer activation function is complex continuous discriminatory or complex bounded nonlinear piecewise continuous, I-ELM can still approximate any target function in the complex domain. The universal approximation capability of I-ELM in the complex domain is further verified on two function approximation problems and one channel equalization problem. © 2007 Elsevier B.V. All rights reserved.
Source Title: Neurocomputing
URI: http://scholarbank.nus.edu.sg/handle/10635/39831
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2007.07.025
Appears in Collections: Staff Publications
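The abstract's core mechanism — adding one random hidden node at a time and computing its output weight analytically as the least-squares projection of the current residual onto the node's activations — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the `sinh` target, the `tanh` activation, and the node count are all illustrative choices, and the complex inner product `<e, h> / <h, h>` is the standard scalar least-squares solution used for the new node's output weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complex-valued target on random complex inputs (illustrative choice).
X = rng.standard_normal(200) + 1j * rng.standard_normal(200)
y = np.sinh(0.5 * X)  # target function to approximate

def complex_tanh(z):
    # A fully complex activation function (illustrative stand-in for the
    # complex continuous discriminatory functions discussed in the paper).
    return np.tanh(z)

e = y.copy()            # residual error; starts as the target itself
f = np.zeros_like(y)    # current network output
mse0 = np.mean(np.abs(e) ** 2)

for n in range(50):     # incrementally add up to 50 hidden nodes
    # Random complex input weight and bias for the new node.
    w = rng.standard_normal() + 1j * rng.standard_normal()
    b = rng.standard_normal() + 1j * rng.standard_normal()
    h = complex_tanh(w * X + b)  # the new node's activations

    # Analytically determined output weight: the scalar beta minimizing
    # ||e - beta * h||^2, i.e. <h, e> / <h, h> with the conjugate inner product.
    beta = np.vdot(h, e) / np.vdot(h, h)

    f += beta * h        # grow the network output
    e = y - f            # update the residual

mse = np.mean(np.abs(e) ** 2)
print(mse0, mse)
```

Because each `beta` is the optimal projection coefficient, the residual norm is non-increasing as nodes are added, which is the property the paper's universal-approximation argument builds on.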
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.