Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.neucom.2007.10.008
DC Field: Value
dc.title: Enhanced random search based incremental extreme learning machine
dc.contributor.author: Huang, G.-B.
dc.contributor.author: Chen, L.
dc.date.accessioned: 2013-07-04T08:26:31Z
dc.date.available: 2013-07-04T08:26:31Z
dc.date.issued: 2008
dc.identifier.citation: Huang, G.-B., Chen, L. (2008). Enhanced random search based incremental extreme learning machine. Neurocomputing 71 (16-18): 3460-3468. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2007.10.008
dc.identifier.issn: 09252312
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/41394
dc.description.abstract: Recently, an incremental algorithm referred to as the incremental extreme learning machine (I-ELM) was proposed by Huang et al. [G.-B. Huang, L. Chen, C.-K. Siew, Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Trans. Neural Networks 17(4) (2006) 879-892], which randomly generates hidden nodes and then analytically determines the output weights. Huang et al. proved in theory that, although additive or RBF hidden nodes are generated randomly, the network constructed by I-ELM can work as a universal approximator. In our recent study, we found that some of the hidden nodes in such networks may play a very minor role in the network output and thus may needlessly increase the network complexity. To avoid this issue and to obtain a more compact network architecture, this paper proposes an enhanced method for I-ELM (referred to as EI-ELM). At each learning step, several hidden nodes are randomly generated, and among them the hidden node leading to the largest decrease in residual error is added to the existing network; the output weight of the network is calculated in the same simple way as in the original I-ELM. Generally speaking, the proposed enhanced I-ELM works for a wide range of piecewise continuous hidden nodes. © 2007 Elsevier B.V. All rights reserved.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/j.neucom.2007.10.008
dc.source: Scopus
dc.subject: Convergence rate
dc.subject: Incremental extreme learning machine
dc.subject: Random hidden nodes
dc.subject: Random search
dc.type: Conference Paper
dc.contributor.department: COMPUTATIONAL SCIENCE
dc.description.doi: 10.1016/j.neucom.2007.10.008
dc.description.sourcetitle: Neurocomputing
dc.description.volume: 71
dc.description.issue: 16-18
dc.description.page: 3460-3468
dc.description.coden: NRCGE
dc.identifier.isiut: 000260066100047
Appears in Collections:Staff Publications
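The abstract above describes the EI-ELM step concretely: at each iteration, generate several random candidate hidden nodes, compute each candidate's output weight analytically with the original I-ELM rule (beta = <e, h> / <h, h>, where e is the current residual and h is the candidate's output vector), and keep only the candidate that most reduces the residual error. Below is a minimal NumPy sketch of this idea, assuming sigmoid additive hidden nodes; the function names, parameter defaults, and weight ranges are illustrative choices, not taken from the paper.

```python
import numpy as np

def ei_elm(X, y, max_nodes=50, k=10, rng=None):
    """Sketch of enhanced I-ELM (EI-ELM) with sigmoid additive hidden nodes.

    At each step, k candidate nodes are drawn at random; the one giving the
    largest decrease in residual error is kept (k=1 reduces to plain I-ELM).
    """
    rng = rng if isinstance(rng, np.random.Generator) else np.random.default_rng(rng)
    n, d = X.shape
    e = y.astype(float).copy()              # residual starts as the target itself
    nodes = []                              # kept nodes as (w, b, beta) triples
    for _ in range(max_nodes):
        best = None
        for _ in range(k):                  # k random candidates per learning step
            w = rng.uniform(-1.0, 1.0, d)   # random input weights (illustrative range)
            b = rng.uniform(-1.0, 1.0)      # random bias
            h = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid node output vector
            beta = (e @ h) / (h @ h)        # analytic output weight (I-ELM rule)
            r = e - beta * h                # residual if this candidate were added
            if best is None or r @ r < best[0]:
                best = (r @ r, w, b, beta, r)
        _, w, b, beta, e = best             # keep the candidate with smallest residual
        nodes.append((w, b, beta))
    return nodes, e

def predict(nodes, X):
    """Evaluate the constructed single-hidden-layer network on X."""
    out = np.zeros(len(X))
    for w, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ w + b)))
    return out
```

Because beta is chosen to minimize the residual for each candidate, the residual error is non-increasing in the number of nodes, and larger k tends to yield a more compact network for the same error, which is the compactness benefit the abstract claims for EI-ELM over I-ELM.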

Files in This Item:
There are no files associated with this item.

Scopus citations: 782 (checked on Oct 4, 2022)
Web of Science citations: 664 (checked on Sep 27, 2022)
Page views: 306 (checked on Sep 22, 2022)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.