Please use this identifier to cite or link to this item: http://scholarbank.nus.edu.sg/handle/10635/15819
Title: Training issues and learning algorithms for feedforward and recurrent neural networks
Authors: TEOH EU JIN
Keywords: neural networks, feedforward, recurrent, computational intelligence
Issue Date: 14-May-2009
Source: TEOH EU JIN (2009-05-14). Training issues and learning algorithms for feedforward and recurrent neural networks. ScholarBank@NUS Repository.
Abstract: This work makes a two-fold contribution: first, towards the design and learning of feedforward neural networks based on the singular value decomposition (SVD), and second, towards the use of linear-threshold (LT) neurons in recurrent neural network designs. The first part of this thesis centers largely on feedforward neural architectures, adopting a theoretical stance before taking on a more application-oriented perspective. Specifically, the SVD is used both as an operator for estimating the appropriate number of hidden-layer neurons and as a component of a learning algorithm, which is implemented first within an evolutionary algorithm and second through a layered training algorithm. The second part of this dissertation examines the use of LT neurons in recurrent neural networks, addressing both theoretical and application-oriented aspects, particularly an analog associative memory scheme (together with a dynamical analysis) and a combinatorial optimization problem, namely the Traveling Salesman Problem.
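
To illustrate the first contribution, the following is a minimal sketch (in Python with NumPy) of how the SVD of a hidden-layer activation matrix can suggest an appropriate number of hidden neurons. The toy data, network sizes, and the relative rank threshold are illustrative assumptions, not the thesis's exact algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: N samples, d inputs, an over-sized hidden layer of H neurons.
# All sizes here are assumptions for illustration only.
N, d, H = 200, 8, 32
X = rng.standard_normal((N, d))
W = rng.standard_normal((d, H))
A = np.tanh(X @ W)            # hidden-layer activation matrix (N x H)

# The singular values of A reveal its effective rank: directions with
# negligible singular values contribute little, so the corresponding
# hidden neurons are candidates for pruning.
s = np.linalg.svd(A, compute_uv=False)
tol = s[0] * 1e-2             # assumed relative threshold, not from the thesis
effective_rank = int(np.sum(s > tol))
print(f"suggested hidden size: {effective_rank} (of {H})")

The relative threshold is the main design choice here: a looser tolerance prunes more aggressively, while a tighter one retains near-redundant neurons.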
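For the second contribution, below is a minimal sketch of a recurrent network of linear-threshold (LT) neurons, whose activation is max(0, x). The additive continuous-time dynamics, the toy weight matrix, and the external input are illustrative assumptions, not the associative-memory design analyzed in the thesis.

import numpy as np

def lt(x):
    # Linear-threshold (rectified) activation: unbounded above, zero below.
    return np.maximum(0.0, x)

rng = np.random.default_rng(1)
n = 10
W = 0.1 * rng.standard_normal((n, n))   # recurrent weights (toy, kept small for stability)
h = rng.standard_normal(n)              # external input (toy)
x = np.zeros(n)

# Forward-Euler approximation of the additive dynamics
#   dx/dt = -x + W * lt(x) + h
dt = 0.05
for _ in range(500):
    x += dt * (-x + W @ lt(x) + h)
print("steady-state activity:", np.round(lt(x), 3))

Because LT neurons are unbounded above, the scaling of W matters: with small recurrent gains the dynamics settle to a fixed point, which is the regime associative-memory schemes typically exploit.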
URI: http://scholarbank.nus.edu.sg/handle/10635/15819
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
File: PhD Thesis - TEOH EU JIN.pdf
Size: 2.52 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None



