Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/116689
Title: An effective learning method for max-min neural networks
Authors: Teow, L.-N. 
Loe, K.-F. 
Issue Date: 1997
Citation: Teow, L.-N., Loe, K.-F. (1997). An effective learning method for max-min neural networks. IJCAI International Joint Conference on Artificial Intelligence 2 : 1134-1139. ScholarBank@NUS Repository.
Abstract: Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, we propose a mathematically sound learning method based on Fourier convergence analysis of side-derivatives, from which we derive a gradient descent technique for max-min error functions. This method is applied to a "typical" fuzzy-neural network model employing max-min activation functions. We show how this network can be trained to perform function approximation; its performance was found to be better than that of a conventional feedforward neural network.
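Illustrative note (not from the item record): the sketch below shows, in plain NumPy, how a gradient descent step might use side-derivatives of a single max-min unit y = max_j min(w_j, x_j), with the gradient routed only to the weight that is active in the max and the min. The function names, the single-unit setup, and the toy data are assumptions for illustration; they are not the fuzzy-neural network model or training procedure described in the paper.

    # Minimal sketch, assuming one max-min unit and a squared-error objective.
    import numpy as np

    def maxmin_forward(w, x):
        """Compute y = max_j min(w_j, x_j) and the index attaining the max."""
        mins = np.minimum(w, x)      # element-wise min(w_j, x_j)
        j = int(np.argmax(mins))     # index attaining the outer max
        return mins[j], j

    def maxmin_grad_w(w, x, j):
        """Side-derivative of y with respect to w: 1 at the active index
        if w_j (rather than x_j) determined the min there, else 0."""
        g = np.zeros_like(w)
        if w[j] <= x[j]:             # the min picked w_j, so dy/dw_j = 1
            g[j] = 1.0
        return g

    # One toy gradient-descent step (hypothetical values):
    w = np.array([0.2, 0.8, 0.5])
    x = np.array([0.6, 0.4, 0.9])
    target = 0.7
    y, j = maxmin_forward(w, x)
    grad = 2.0 * (y - target) * maxmin_grad_w(w, x, j)
    w -= 0.1 * grad                  # descend along the side-derivative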
Source Title: IJCAI International Joint Conference on Artificial Intelligence
URI: http://scholarbank.nus.edu.sg/handle/10635/116689
ISSN: 10450823
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
