Title: Parallel nonlinear optimization techniques for training neural networks
Authors: Phua, P.K.H. 
Ming, D.
Keywords: Backpropagation (BP)
Neural networks
Parallel optimization techniques
Quasi-Newton (QN) methods
Training algorithms
Issue Date: 2003
Source: Phua, P.K.H., Ming, D. (2003). Parallel nonlinear optimization techniques for training neural networks. IEEE Transactions on Neural Networks 14 (6) : 1460-1468. ScholarBank@NUS Repository.
Abstract: In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated, each selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested on a set of nine benchmark problems. Computational results show that the proposed algorithms outperform existing methods evaluated on the same set of test problems.
Source Title: IEEE Transactions on Neural Networks
ISSN: 1045-9227
DOI: 10.1109/TNN.2003.820670
Appears in Collections:Staff Publications
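The abstract above outlines the core iteration: generate several QN search directions in parallel, run an inexact line search along each, and keep the best resulting point. Below is a minimal NumPy sketch of that scheme under stated assumptions; it is an illustration, not the authors' implementation. The function names, the Armijo constants, the two-direction setup (plain BFGS plus self-scaling BFGS), and the toy quadratic loss in the usage example are all hypothetical choices.

import numpy as np

def armijo_line_search(f, x, d, g, alpha=1.0, beta=0.5, c=1e-4):
    """Inexact (Armijo backtracking) line search along direction d."""
    fx = f(x)
    while f(x + alpha * d) > fx + c * alpha * g.dot(d):
        alpha *= beta
        if alpha < 1e-12:          # bail out on a hopeless direction
            break
    return alpha

def bfgs_update(H, s, y, self_scaling=False):
    """Inverse-Hessian BFGS update; optionally self-scaled first.

    The self-scaling variant multiplies H by the Oren-Luenberger
    factor s'y / (y'Hy) before applying the standard BFGS update.
    """
    sy = s.dot(y)
    if sy <= 1e-12:                # skip update if curvature condition fails
        return H
    if self_scaling:
        H = (sy / y.dot(H).dot(y)) * H
    rho = 1.0 / sy
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def parallel_qn_step(f, grad, x, Hs):
    """One training iteration: try several QN directions, keep the best point.

    Hs holds one inverse-Hessian approximation per candidate direction,
    e.g. one maintained with plain BFGS and one with self-scaling BFGS.
    """
    g = grad(x)
    trials = []
    for H in Hs:
        d = -H @ g                          # quasi-Newton search direction
        alpha = armijo_line_search(f, x, d, g)
        trials.append((f(x + alpha * d), x + alpha * d))
    _, x_new = min(trials, key=lambda t: t[0])
    s, y = x_new - x, grad(x_new) - g       # shared step / gradient change
    Hs = [bfgs_update(H, s, y, self_scaling=(i == 1))
          for i, H in enumerate(Hs)]
    return x_new, Hs

# Example: two parallel directions on a toy quadratic loss.
f = lambda w: 0.5 * w.dot(w)
grad = lambda w: w
w, Hs = np.ones(3), [np.eye(3), np.eye(3)]
for _ in range(5):
    w, Hs = parallel_qn_step(f, grad, w, Hs)

The per-direction line searches are independent of one another, so in a real implementation each candidate direction can be evaluated on a separate processor; only the selection of the best point and the curvature pair (s, y) need to be shared, which is where the parallelism in the title comes from.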

Files in This Item:
There are no files associated with this item.


checked on Mar 7, 2018


checked on Feb 5, 2018

Page view(s)

checked on Mar 11, 2018

Google ScholarTM


