Please use this identifier to cite or link to this item: https://doi.org/10.1109/TNN.2003.820670
Title: Parallel nonlinear optimization techniques for training neural networks
Authors: Phua, P.K.H.; Ming, D.
Keywords: Backpropagation (BP); Neural networks; Parallel optimization techniques; Quasi-Newton (QN) methods; Training algorithms
Issue Date: 2003
Citation: Phua, P.K.H., Ming, D. (2003). Parallel nonlinear optimization techniques for training neural networks. IEEE Transactions on Neural Networks, 14(6), 1460-1468. ScholarBank@NUS Repository. https://doi.org/10.1109/TNN.2003.820670
Abstract: In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested on a set of nine benchmark problems. Computational results show that they outperform other existing methods evaluated on the same set of test problems.
Source Title: IEEE Transactions on Neural Networks
URI: http://scholarbank.nus.edu.sg/handle/10635/42406
ISSN: 1045-9227
DOI: 10.1109/TNN.2003.820670
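The full text is not deposited in this record, but the abstract outlines the iteration structure clearly enough for a rough illustration. Below is a minimal, hypothetical Python sketch of one such iteration, assuming self-scaling Broyden-family updates (with a parameter phi selecting the QN variant) as the "representative class of QN methods" and Armijo backtracking as the inexact line search; the paper's actual direction-generation scheme, scaling factor, and line-search rule may differ, and all function names and parameter values here are illustrative.

```python
import numpy as np

def armijo_line_search(f, x, d, g, alpha0=1.0, beta=0.5, c=1e-4, max_iter=30):
    """Inexact line search: backtrack until the Armijo sufficient-decrease condition holds."""
    alpha, fx, slope = alpha0, f(x), c * g.dot(d)
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + alpha * slope:
            break
        alpha *= beta
    return alpha

def ssqn_inverse_update(H, s, y, phi):
    """Self-scaling Broyden-family update of the inverse Hessian approximation H.
    phi = 0 gives a scaled DFP update, phi = 1 a scaled BFGS update."""
    sy = s.dot(y)
    if sy <= 1e-12:                      # skip update if the curvature condition fails
        return H
    Hy = H.dot(y)
    yHy = y.dot(Hy)
    tau = sy / yHy                       # Oren-Luenberger scaling factor (one common choice)
    v = s / sy - Hy / yHy
    H_dfp = H - np.outer(Hy, Hy) / yHy
    return tau * (H_dfp + phi * yHy * np.outer(v, v)) + np.outer(s, s) / sy

def parallel_ssqn_step(f, grad, x, Hs, phis):
    """One iteration: build one search direction per QN variant, line-search each
    (these searches are independent, so they could run on parallel workers), keep the best."""
    g = grad(x)
    candidates = []
    for H in Hs:
        d = -H.dot(g)                    # QN search direction for this variant
        alpha = armijo_line_search(f, x, d, g)
        x_new = x + alpha * d
        candidates.append((f(x_new), x_new))
    _, x_best = min(candidates, key=lambda t: t[0])
    s, y = x_best - x, grad(x_best) - g
    Hs = [ssqn_inverse_update(H, s, y, phi) for H, phi in zip(Hs, phis)]
    return x_best, Hs

# Usage sketch on a toy quadratic (a stand-in for a network training loss):
n = 5
A = np.diag(np.arange(1.0, n + 1))
f = lambda x: 0.5 * x.dot(A).dot(x)
grad = lambda x: A.dot(x)
x = np.ones(n)
Hs = [np.eye(n) for _ in range(3)]
phis = [0.0, 0.5, 1.0]                   # DFP, a mid-family member, BFGS (illustrative choice)
for _ in range(50):
    x, Hs = parallel_ssqn_step(f, grad, x, Hs, phis)
print(f(x))                              # approaches 0
```

In a training setting, f and grad would be the network loss and its gradient with respect to the flattened weight vector; the candidate line searches are embarrassingly parallel, which is the source of the speedup the abstract claims.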
Appears in Collections: Staff Publications