Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/111251
dc.title: FastProp: A selective training algorithm for fast error propagation
dc.contributor.author: Wong, F.S.
dc.date.accessioned: 2014-11-27T09:46:14Z
dc.date.available: 2014-11-27T09:46:14Z
dc.date.issued: 1991
dc.identifier.citation: Wong, F.S. (1991). FastProp: A selective training algorithm for fast error propagation : 2038-2043. ScholarBank@NUS Repository.
dc.identifier.isbn: 0780302273
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/111251
dc.description.abstract: An improved backpropagation algorithm, called FastProp, for training a feedforward neural network is described. Its unique feature is selective training based on the instantaneous causal relationship between the input and output signals during the training process. This causal relationship is computed from the error backpropagated to the input layer. The accumulated errors, referred to as accumulated error indices (AEIs), are used to rank the input signals according to their correlation with the output signals. An entire set of time-series data can be clustered into several situations based on the current input signal with the highest AEI, and neurons can be activated according to the current situation. Experimental results showed that the selective training algorithm achieves a significant reduction in training time compared with traditional backpropagation.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: INSTITUTE OF SYSTEMS SCIENCE
dc.description.page: 2038-2043
dc.identifier.isiut: NOT_IN_WOS
Appears in Collections:Staff Publications
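The abstract's core idea — accumulating the error backpropagated to the input layer and using those accumulated error indices (AEIs) to rank inputs — can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the network sizes, learning rate, and the exact AEI accumulation rule (here, the sum of absolute backpropagated errors per input) are assumptions, since the abstract does not give the formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward net: n_in inputs -> n_hidden hidden units -> 1 output.
n_in, n_hidden = 4, 8
W1 = rng.normal(size=(n_in, n_hidden)) * 0.5
W2 = rng.normal(size=(n_hidden, 1)) * 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: the target depends mainly on input 0.
X = rng.normal(size=(200, n_in))
y = sigmoid(3.0 * X[:, :1])

aei = np.zeros(n_in)  # accumulated error index, one per input signal
lr = 0.5
for epoch in range(20):
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    err = out - y                             # output-layer error
    # Backpropagate the error through both layers down to the inputs.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    err_at_input = d_h @ W1.T                 # error reaching each input
    aei += np.abs(err_at_input).sum(axis=0)   # accumulate per-input error
    # Ordinary gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_h) / len(X)

# Rank the input signals by their accumulated error index, highest first;
# in FastProp the top-ranked input is then used to cluster the data into
# situations and activate neurons selectively.
ranking = np.argsort(-aei)
print(ranking)
```

Under this sketch, inputs whose weights carry more of the backpropagated error accumulate larger AEIs over training, giving the ranking that the paper uses to drive its selective activation.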

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.