Title: Discrete variable generation for improved neural network classification
Authors: Setiono, R. 
Seret, A.
Keywords: axis-parallel rules
network pruning
oblique rules
rule extraction
Issue Date: 2012
Citation: Setiono, R., Seret, A. (2012). Discrete variable generation for improved neural network classification. Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI 1 : 230-237. ScholarBank@NUS Repository.
Abstract: Neural networks are widely used for classification because they achieve good predictive accuracy. When the class labels are determined by complex interactions among the input variables, neural networks can be expected to provide better predictions than methods that test the values of one variable at a time, such as univariate decision tree classifiers. On the other hand, when class membership is determined by no interaction, or only a simple interaction, between variables, a neural network may overfit the data, representing the input-to-output relationship with a function that is more complex than it needs to be. In this paper, we propose adding discretized values of the continuous variables in the data as inputs when training the neural networks. Whether the discretized values or the original continuous values of a variable are useful is determined by pruning. With only the relevant inputs left in the pruned networks, we are able to extract classification rules from these networks that are accurate, concise and interpretable. © 2012 IEEE.
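The core preprocessing step described in the abstract, augmenting the training inputs with discretized copies of each continuous variable so that pruning can later decide which representation to keep, can be sketched as follows. This is a minimal illustration only: the function name, the use of quantile-based (equal-frequency) binning, and the number of bins are assumptions, not the authors' actual discretization procedure.

```python
import numpy as np

def augment_with_discretized(X, n_bins=4):
    """Append a discretized copy of each continuous column of X as an
    extra input column (hypothetical helper; the paper's discretization
    scheme may differ). Uses quantile-based cut points so each bin
    holds roughly the same number of samples."""
    X = np.asarray(X, dtype=float)
    extra = []
    for j in range(X.shape[1]):
        col = X[:, j]
        # Interior quantile cut points -> integer bin index 0..n_bins-1
        edges = np.quantile(col, np.linspace(0, 1, n_bins + 1)[1:-1])
        extra.append(np.digitize(col, edges))
    # Network is then trained on both representations; pruning removes
    # whichever inputs (continuous or discretized) are not needed.
    return np.hstack([X, np.column_stack(extra)])
```

A network trained on the augmented matrix sees both the raw value and its bin index for every continuous feature; after input pruning, a variable that survives only in discretized form yields axis-parallel rule conditions, while surviving continuous inputs can contribute oblique conditions.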
Source Title: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
ISBN: 9780769549156
ISSN: 1082-3409
DOI: 10.1109/ICTAI.2012.39
Appears in Collections: Staff Publications
