Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/81395
DC Field | Value
dc.title: Delta rule and learning for min-max neural networks
dc.contributor.author: Zhang, Xinghu
dc.contributor.author: Hang, Chang-Chieh
dc.contributor.author: Tan, Shaohua
dc.contributor.author: Wang, Pei-Zhuang
dc.date.accessioned: 2014-10-07T03:07:51Z
dc.date.available: 2014-10-07T03:07:51Z
dc.date.issued: 1994
dc.identifier.citation: Zhang, Xinghu, Hang, Chang-Chieh, Tan, Shaohua, Wang, Pei-Zhuang (1994). Delta rule and learning for min-max neural networks. IEEE International Conference on Neural Networks - Conference Proceedings 1: 38-43. ScholarBank@NUS Repository.
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/81395
dc.description.abstract: There has been considerable work on (V, Λ)-neural networks (see references). However, because of the difficulty of mathematically analyzing (V, Λ)-functions, most previous works choose bounded-plus (+) and multiply (*) as the operations for V and Λ. The (V, Λ) neural network with operators (+, *) is much easier to handle than one with other operators, e.g. min-max operators, because it differs only slightly from the back-propagation neural network. In this paper, we choose min and max as the operations for V and Λ. Because functions involving min and max operations are hard to analyze, the (V, Λ) neural network with operators (min, max) is much more difficult to deal with. In Section 1 of this paper, we first discuss the differentiation of (V, Λ)-functions and show that if f1(x), f2(x), ..., fn(x) are continuously differentiable on the real line R, then any function h(x) generated from f1(x), f2(x), ..., fn(x) through finitely many (V, Λ) operations is continuously differentiable almost everywhere in R. This result guarantees that the Delta Rule given in Section 2 is well-founded and effective. In Section 3 we implement a simple example showing that the Delta Rule given in Section 2 is capable of training (V, Λ) neural networks.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: ELECTRICAL ENGINEERING
dc.contributor.department: INSTITUTE OF SYSTEMS SCIENCE
dc.description.sourcetitle: IEEE International Conference on Neural Networks - Conference Proceedings
dc.description.volume: 1
dc.description.page: 38-43
dc.description.coden: 176
dc.identifier.isiut: NOT_IN_WOS
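The delta rule summarized in the abstract relies on min/max compositions being differentiable almost everywhere: the derivative with respect to a weight is 1 where that weight is the "active" argument of the min/max chain and 0 elsewhere. The following is an illustrative sketch of that idea, assuming a single max-min neuron of the form y = max_j min(w_j, x_j); the function names, learning rate eta, and toy values are ours, not the paper's.

```python
# Hedged sketch of a delta rule for a max-min (V = max, Λ = min) neuron.
# Since min and max are differentiable almost everywhere, dy/dw_j is 1
# exactly when w_j is the active (selected) argument, and 0 otherwise.

def forward(w, x):
    """Max-min neuron output: max over j of min(w_j, x_j)."""
    return max(min(wj, xj) for wj, xj in zip(w, x))

def delta_step(w, x, t, eta=0.1):
    """One delta-rule update on w, minimizing 0.5 * (y - t)^2 in place."""
    y = forward(w, x)
    # index whose min-term attains the outer max
    j = max(range(len(w)), key=lambda k: min(w[k], x[k]))
    # dy/dw_j = 1 only where w_j (not x_j) realizes min(w_j, x_j)
    if w[j] <= x[j]:
        w[j] -= eta * (y - t)
    return y

# Toy run with an attainable target (illustrative values only)
w = [0.2, 0.9, 0.5]
x = [0.8, 0.3, 0.6]
for _ in range(200):
    delta_step(w, x, t=0.45)
```

Note that the update only fires when the loss can actually be reduced through a weight: if the active min-term is capped by the input x_j rather than the weight w_j, the almost-everywhere derivative is 0 and the weight is left unchanged.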
Appears in Collections: Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.