Title: Neural-network feature selector
Authors: Setiono, R. 
Liu, H. 
Keywords: Backpropagation
Cross entropy
Feature selection
Feedforward neural network
Network pruning
Penalty term
Issue Date: 1997
Citation: Setiono, R., & Liu, H. (1997). Neural-network feature selector. IEEE Transactions on Neural Networks, 8(3): 654-662. ScholarBank@NUS Repository.
Abstract: Feature selection is an integral part of most learning algorithms. Because real-world data often contain irrelevant and redundant attributes, selecting only the relevant attributes can be expected to yield higher predictive accuracy from a machine learning method. In this paper, we propose the use of a three-layer feedforward neural network to select those input attributes that are most useful for discriminating classes in a given set of input patterns. A network pruning algorithm is the foundation of the proposed method. By adding a penalty term to the error function of the network, redundant network connections can be distinguished from relevant ones by their small weights once training has been completed. A simple criterion for removing an attribute, based on the accuracy rate of the network, is developed. The network is retrained after each attribute removal, and the selection process is repeated until no attribute meets the criterion for removal. Our experimental results suggest that the proposed method works very well on a wide variety of classification problems. © 1997 IEEE.
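The selection loop described in the abstract, i.e., train with a weight penalty, tentatively remove an attribute, retrain, and keep the removal only if accuracy does not degrade, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a plain quadratic weight penalty as a stand-in for the paper's penalty term, a simple sigmoid network trained by gradient descent (backpropagation), and a hypothetical accuracy tolerance `tol` as the removal criterion.

```python
import numpy as np

def train_net(X, y, n_hidden=4, penalty=1e-3, lr=0.5, epochs=500, seed=0):
    """Train a one-hidden-layer sigmoid network with cross-entropy loss
    plus a quadratic weight penalty (a stand-in for the paper's penalty
    term, which drives redundant connections toward small weights)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1)
        p = sig(H @ W2).ravel()
        # gradient of mean cross-entropy w.r.t. the output pre-activation
        g_out = (p - y)[:, None] / n
        gW2 = H.T @ g_out + penalty * W2
        g_hid = (g_out @ W2.T) * H * (1.0 - H)
        gW1 = X.T @ g_hid + penalty * W1
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def accuracy(X, y, W1, W2):
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    p = sig(sig(X @ W1) @ W2).ravel()
    return np.mean((p > 0.5) == y)

def select_features(X, y, tol=0.02):
    """Backward elimination in the spirit of the paper: at each round,
    drop the attribute whose removal costs the least accuracy, retrain,
    and stop when any further removal costs more than `tol` accuracy."""
    active = list(range(X.shape[1]))
    W1, W2 = train_net(X[:, active], y)
    base = accuracy(X[:, active], y, W1, W2)
    while len(active) > 1:
        best_i, best_acc = None, -1.0
        for i in range(len(active)):
            cand = active[:i] + active[i + 1:]
            w1, w2 = train_net(X[:, cand], y)
            acc = accuracy(X[:, cand], y, w1, w2)
            if acc > best_acc:
                best_i, best_acc = i, acc
        if base - best_acc > tol:
            break  # every remaining attribute meets no removal criterion
        active.pop(best_i)
        base = best_acc
    return active

# Synthetic example: feature 0 determines the class; features 1-2 are noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
kept = select_features(X, y)
```

On data like this, the loop typically discards the noise attributes and retains the discriminating one, since removing it would drop accuracy well past the tolerance.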
Source Title: IEEE Transactions on Neural Networks
ISSN: 1045-9227
DOI: 10.1109/72.572104
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.