Please use this identifier to cite or link to this item: https://doi.org/10.1109/TNN.2003.820670
Title: Parallel nonlinear optimization techniques for training neural networks
Authors: Phua, P.K.H. 
Ming, D.
Keywords: Backpropagation (BP)
Neural networks
Parallel optimization techniques
Quasi-Newton (QN) methods
Training algorithms
Issue Date: 2003
Source: Phua, P.K.H., Ming, D. (2003). Parallel nonlinear optimization techniques for training neural networks. IEEE Transactions on Neural Networks 14(6): 1460-1468. ScholarBank@NUS Repository. https://doi.org/10.1109/TNN.2003.820670
Abstract: In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed by using the self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested over a set of nine benchmark problems. Computational results show that the proposed algorithms outperform existing methods evaluated over the same set of test problems.
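
As a rough, hypothetical sketch of the scheme the abstract describes (not the authors' exact SSQN algorithm), the Python snippet below generates one quasi-Newton search direction per candidate inverse-Hessian approximation, runs an inexact (Armijo backtracking) line search along each, and keeps the best resulting point. Every name, constant, and the toy quadratic loss standing in for a network's training error are illustrative assumptions.

```python
import numpy as np

def backtracking(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4, max_iter=30):
    """Inexact (Armijo) line search: shrink alpha until sufficient decrease."""
    fx = f(x)
    slope = c * grad(x).dot(d)   # expected decrease per unit step (negative)
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + alpha * slope:
            break
        alpha *= rho
    return alpha

def parallel_qn_step(f, grad, x, inv_hessians):
    """One iteration of the parallel idea: one search direction per
    inverse-Hessian approximation, an inexact line search along each,
    and the candidate point with the lowest loss wins."""
    g = grad(x)
    candidates = []
    for H in inv_hessians:       # in practice each search runs in parallel
        d = -H @ g               # quasi-Newton search direction
        alpha = backtracking(f, grad, x, d)
        candidates.append(x + alpha * d)
    return min(candidates, key=f)

# Toy usage: a quadratic stands in for a network's training loss.
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([3.0, -2.0])
scalings = [np.eye(2), 0.5 * np.eye(2)]   # two differently scaled approximations
for _ in range(5):
    x = parallel_qn_step(f, grad, x, scalings)
print(x)   # converges toward the minimizer at the origin
```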
Source Title: IEEE Transactions on Neural Networks
URI: http://scholarbank.nus.edu.sg/handle/10635/42406
ISSN: 1045-9227
DOI: 10.1109/TNN.2003.820670
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 37 (checked on Dec 13, 2017)
Web of Science™ Citations: 25 (checked on Dec 13, 2017)

Page view(s): 39 (checked on Dec 9, 2017)

