Title: Discrete variable generation for improved neural network classification
Authors: Setiono, R.
Keywords: axis parallel rules
Citation: Setiono, R., Seret, A. (2012). Discrete variable generation for improved neural network classification. Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI 1: 230-237. ScholarBank@NUS Repository. https://doi.org/10.1109/ICTAI.2012.39
Abstract: Neural networks are widely used for classification as they achieve good predictive accuracy. When the class labels are determined by complex interactions of the input variables, neural networks can be expected to provide better predictions than methods that test the values of one variable at a time, such as univariate decision tree classifiers. On the other hand, when class membership is determined by no interaction, or only a simple one, between variables, the neural network may overfit the data, representing the input-to-output relationship by a function that is more complex than it should be. In this paper, we propose adding discretized values of the continuous variables in the data as input when training the neural networks. Whether the discretized values or the original continuous values of the variables are useful is determined by pruning. With only the relevant inputs left in the pruned networks, we are able to extract classification rules from these networks that are accurate, concise and interpretable. © 2012 IEEE.
Source Title: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
Appears in Collections: Staff Publications
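The input-augmentation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: the function name, the equal-frequency binning scheme, and the bin count are all assumptions made for the example; the paper itself does not specify them here, and its pruning step is only indicated by a comment.

```python
import numpy as np

def augment_with_discretized(X, n_bins=4):
    """Append a discretized copy of each continuous column as an extra input.

    Hypothetical helper: equal-frequency (quantile) binning with n_bins bins
    is an assumed discretization scheme chosen for illustration only.
    """
    X = np.asarray(X, dtype=float)
    discretized = np.empty_like(X)
    for j in range(X.shape[1]):
        # Interior bin edges at empirical quantiles -> equal-frequency bins.
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        # np.digitize maps each value to a bin index in {0, ..., n_bins-1}.
        discretized[:, j] = np.digitize(X[:, j], edges)
    # The network is trained on both the original and the discretized values;
    # per the abstract, pruning then decides which of the two forms of each
    # variable is actually relevant for classification.
    return np.hstack([X, discretized])

# Usage: 100 samples, 3 continuous features -> 6 network inputs.
X = np.random.RandomState(0).rand(100, 3)
X_aug = augment_with_discretized(X)
```

After pruning, inputs that survive as discretized variables support the kind of axis-parallel rule conditions (e.g. "x1 falls in bin 2") that the keywords refer to.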
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.