Title: Convex incremental extreme learning machine
Authors: Huang, G.-B.; Chen, L.
Keywords: Generalized feedforward networks; Incremental extreme learning machine; Random hidden nodes
Issue Date: 2007
Citation: Huang, G.-B., Chen, L. (2007). Convex incremental extreme learning machine. Neurocomputing 70 (16-18): 3056-3062. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2007.02.009
Abstract: Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879-892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes with a convex optimization method whenever a new hidden node is randomly added. Furthermore, we show that, given a type of piecewise continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs f_n(x) = Σ_{i=1}^{n} β_i G(x, a_i, b_i) can work as universal approximators with adjustable hidden node parameters, then, from a function approximation point of view, the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned. © 2007 Elsevier B.V. All rights reserved.
Source Title: Neurocomputing
URI: http://scholarbank.nus.edu.sg/handle/10635/39163
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2007.02.009
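The two update rules contrasted in the abstract can be sketched as follows. This is a minimal illustration under assumed conditions, not the paper's reference implementation: the toy data, the sigmoid node form, and all names (`i_elm`, `ci_elm`, etc.) are this sketch's own choices. The I-ELM step fixes all existing weights and sets only the new node's output weight to β_n = ⟨e, h_n⟩ / ‖h_n‖²; the convex step takes f_n = (1 − β_n) f_{n−1} + β_n g_n, which implicitly rescales every existing output weight by (1 − β_n) while the new node receives β_n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): N samples, d features.
N, d = 200, 3
X = rng.uniform(-1.0, 1.0, size=(N, d))
y = np.sin(X.sum(axis=1))  # illustrative target, not from the paper

def random_sigmoid_node(rng, d):
    """Randomly generated additive hidden node G(x, a, b) = sigmoid(a.x + b)."""
    a = rng.uniform(-1.0, 1.0, size=d)
    b = rng.uniform(-1.0, 1.0)
    return a, b

def node_output(X, a, b):
    return 1.0 / (1.0 + np.exp(-(X @ a + b)))

def i_elm(X, y, n_nodes, rng):
    """I-ELM sketch: existing output weights are frozen; only the new
    node's weight is computed, beta_n = <e, h_n> / ||h_n||^2."""
    f = np.zeros_like(y)  # current network output f_{n-1}
    for _ in range(n_nodes):
        a, b = random_sigmoid_node(rng, X.shape[1])
        h = node_output(X, a, b)
        e = y - f                      # residual error e_{n-1}
        beta = (e @ h) / (h @ h)
        f = f + beta * h
    return np.linalg.norm(y - f)

def ci_elm(X, y, n_nodes, rng):
    """Convex-step sketch: f_n = (1 - beta_n) f_{n-1} + beta_n g_n,
    i.e. all existing output weights are rescaled by (1 - beta_n).
    beta_n minimizes ||e_{n-1} - beta_n (g_n - f_{n-1})||."""
    f = np.zeros_like(y)
    for _ in range(n_nodes):
        a, b = random_sigmoid_node(rng, X.shape[1])
        h = node_output(X, a, b)
        v = h - f                      # update direction g_n - f_{n-1}
        e = y - f
        beta = (e @ v) / (v @ v)
        f = (1.0 - beta) * f + beta * h
    return np.linalg.norm(y - f)

err_i = i_elm(X, y, 20, np.random.default_rng(1))
err_ci = ci_elm(X, y, 20, np.random.default_rng(1))
print("I-ELM residual:", err_i)
print("convex-step residual:", err_ci)
```

Because each β_n is the one-dimensional least-squares optimum for its own search direction, neither rule can increase the training residual at any step; the paper's result is that the convex recalculation achieves a faster convergence rate while keeping the same per-step cost and simplicity.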
Appears in Collections: Staff Publications