Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/27479
Title: Feature Selection and Model Selection for Supervised Learning Algorithms
Authors: YANG JIANBO
Keywords: feature selection, model selection, SVM, SVR, MLP, ISVMP
Issue Date: 11-Apr-2011
Citation: YANG JIANBO (2011-04-11). Feature Selection and Model Selection for Supervised Learning Algorithms. ScholarBank@NUS Repository.
Abstract: This thesis is concerned with feature selection and model selection in supervised learning. Specifically, three feature selection methods and one model selection method are proposed. The first is a wrapper-based feature selection method for the multilayer perceptron (MLP) neural network. It measures the importance of a feature by the sensitivity of the posterior probability with respect to that feature over the whole feature space. Experiments show that this method performs at least as well as, if not better than, the benchmark methods. The second is a wrapper-based feature selection method for the support vector regressor (SVR). Here, the importance of a feature is measured by aggregating, over the entire feature space, the difference between the output conditional density functions produced by SVR with and without that feature. Two approximations of this criterion are proposed, and promising experimental results are obtained. The third is a filter-based feature selection method. It uses a mutual-information-based criterion to measure the importance of a feature within a backward elimination framework. Unlike other mutual-information-based methods, the proposed criterion evaluates each feature while taking all features into account. Numerical experiments show that the proposed method generally outperforms existing mutual-information methods and handles data sets with interacting features effectively. The model selection method tunes the regularization parameter of the support vector machine (SVM). The regularization parameter obtained by the proposed method is guaranteed to be the global optimum of widely used non-smooth validation functions. The method relies on the solution path of the SVM over a range of the regularization parameter; once the solution path is available, the additional computation required is minimal.
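A minimal sketch of the backward-elimination filter idea described in the abstract, assuming a Python/scikit-learn setting. It uses scikit-learn's per-feature mutual_info_classif as a stand-in score rather than the thesis's criterion, and the function name backward_mi_selection and the synthetic data are purely illustrative:

    # Illustrative sketch only: greedy backward elimination driven by a
    # mutual-information score. scikit-learn's per-feature estimator is used
    # here as a stand-in; it is NOT the thesis's criterion, which accounts
    # for all features jointly.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif

    def backward_mi_selection(X, y, n_keep):
        """Remove the lowest-scoring feature until n_keep features remain."""
        selected = list(range(X.shape[1]))
        while len(selected) > n_keep:
            # Re-estimate the scores on the current subset each round, then
            # eliminate the feature with the smallest estimated MI.
            scores = mutual_info_classif(X[:, selected], y, random_state=0)
            selected.remove(selected[int(np.argmin(scores))])
        return selected

    # Hypothetical usage on synthetic data.
    X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                               random_state=0)
    print(backward_mi_selection(X, y, n_keep=4))

In the thesis's criterion, by contrast, the score assigned to a feature is computed with the remaining features taken into account, rather than being estimated one feature at a time.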
URI: http://scholarbank.nus.edu.sg/handle/10635/27479
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
YangJB.pdf (1.18 MB, Adobe PDF, Open Access)