Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/181955
DC Field                          Value
dc.title                          EFFECTIVE LEARNING IN MAX-MIN NEURAL NETWORKS
dc.contributor.author             TEOW LOO NIN
dc.date.accessioned               2020-10-29T06:32:11Z
dc.date.available                 2020-10-29T06:32:11Z
dc.date.issued                    1997
dc.identifier.citation            TEOW LOO NIN (1997). EFFECTIVE LEARNING IN MAX-MIN NEURAL NETWORKS. ScholarBank@NUS Repository.
dc.identifier.uri                 https://scholarbank.nus.edu.sg/handle/10635/181955
dc.description.abstract           Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, some ad hoc learning methods for such max-min neural networks have been proposed in the literature. In this thesis, we propose a mathematically sound learning method based on Fourier convergence analysis of side-derivatives, from which we derive a gradient descent technique for max-min error functions. This method is then applied to two models: a feedforward fuzzy-neural network and a recurrent max-min neural network. We show how a "typical" fuzzy-neural network model employing max-min activation functions can be trained to perform function approximation; its performance was found to be better than that of a conventional feedforward neural network. We also propose a novel recurrent max-min neural network model, which is trained to perform grammatical inference as an application example. Comparisons are made between this model and recurrent neural networks that use conventional sigmoidal activation functions; such recurrent sigmoidal networks are known to be difficult to train and to generalize poorly on long strings. The comparisons show that our model not only performs better in terms of learning speed and generalization, but also yields a final weight configuration from which a deterministic finite automaton (DFA) can be extracted in a straightforward manner. In essence, we demonstrate that our proposed gradient descent technique does allow max-min neural networks to learn effectively.
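As a rough illustration of the side-derivative idea summarized in the abstract, the following Python sketch (not taken from the thesis; the single max-of-mins unit, the function names, and the squared-error loss are assumptions made for illustration) shows how one-sided derivatives of max and min select a single "winning" weight to update, which is what makes gradient descent on a non-differentiable max-min error function workable:

    import numpy as np

    def maxmin_forward(w, x):
        # Fuzzy max-min composition for one output unit:
        # y = max_j min(w_j, x_j), with w, x in [0, 1]^n.
        return np.minimum(w, x).max()

    def maxmin_side_derivative(w, x):
        # One-sided derivative of y with respect to w.
        # max and min are not differentiable at ties, so we take a
        # side-derivative: the first arg-max term wins the max, and
        # d min(w_j, x_j) / d w_j = 1 when w_j <= x_j, else 0.
        m = np.minimum(w, x)
        j = int(np.argmax(m))
        g = np.zeros_like(w)
        g[j] = 1.0 if w[j] <= x[j] else 0.0
        return g

    def train_step(w, x, t, lr=0.1):
        # One gradient-descent step on the squared error (y - t)^2,
        # keeping the weights inside the fuzzy unit interval.
        y = maxmin_forward(w, x)
        g = maxmin_side_derivative(w, x)
        return np.clip(w - lr * 2.0 * (y - t) * g, 0.0, 1.0)

    # Example: nudge the unit's output toward a target of 0.8.
    w = np.array([0.2, 0.5, 0.4])
    x = np.array([0.7, 0.9, 0.3])
    for _ in range(50):
        w = train_step(w, x, t=0.8)
    print(maxmin_forward(w, x))   # approaches 0.8

Because the side-derivative vanishes everywhere except at the winning term, each step updates at most one weight per unit in this sketch.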
dc.source                         CCK BATCHLOAD 20201023
dc.type                           Thesis
dc.contributor.department         INFORMATION SYSTEMS & COMPUTER SCIENCE
dc.contributor.supervisor         LOE KIA FOCK
dc.description.degree             Master's
dc.description.degreeconferred    MASTER OF SCIENCE
Appears in Collections: Master's Theses (Restricted)

Files in This Item:
File             Description    Size       Format       Access Settings
B20839790.PDF                   1.33 MB    Adobe PDF    RESTRICTED


