|Title:||Training neural networks for classification using growth probability-based evolution|
|Citation:||Ang, J.H., Tan, K.C., Al-Mamun, A. (2008-10). Training neural networks for classification using growth probability-based evolution. Neurocomputing 71 (16-18) : 3493-3508. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2007.10.011|
|Abstract:||In this paper, a novel evolutionary algorithm (EA) based on a newly formulated parameter, the growth probability (Pg), is used to evolve near-optimal weights and the number of hidden neurons in neural networks (NNs). Training NNs with growth probability-based evolution (NN-GP) initializes networks with a single hidden neuron and allows them to grow until a suitable size is reached. Growth is not restricted to one hidden neuron at a time, since the optimal number of hidden neurons may be several more than the current network holds; if that solution lies far away in the search space, the network must add several hidden neurons at once. The growth amount follows a Gaussian distribution, which provides a way to escape local optima. A self-adaptive version (NN-SAGP), which evolves the growth probability in parallel with the NNs at each generation, is also proposed. The evolved networks are applied to widely used real-world benchmark problems. Simulation results show that the proposed approach is effective for evolving NNs with good classification accuracy and low complexity. © 2007 Elsevier B.V. All rights reserved.|
|Appears in Collections:||Staff Publications|
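The growth step described in the abstract — start from one hidden neuron and, with probability Pg, grow by a Gaussian-distributed number of neurons — can be sketched in Python. This is a minimal illustration assuming details the record does not specify (the function name `grow_hidden_neurons`, the standard deviation `sigma`, and the cap `max_hidden` are all hypothetical), not the authors' actual implementation:

```python
import random

def grow_hidden_neurons(n_hidden, p_g, sigma=2.0, max_hidden=20):
    """Hypothetical sketch of one growth step.

    With probability p_g the network grows; the growth amount is drawn
    from a Gaussian, so the network can add several hidden neurons at
    once rather than one at a time, helping it escape local optima.
    """
    if random.random() < p_g:
        # At least one neuron is added; the magnitude is Gaussian.
        growth = max(1, round(abs(random.gauss(0.0, sigma))))
        n_hidden = min(n_hidden + growth, max_hidden)
    return n_hidden

# Networks are initialized with a single hidden neuron and grown
# across generations until a suitable size is reached.
n = 1
for generation in range(30):
    n = grow_hidden_neurons(n, p_g=0.3)
```

A self-adaptive variant in the spirit of NN-SAGP would additionally perturb `p_g` itself each generation and let selection act on it alongside the network weights.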