Title: Convergence of the extended Lagrangian support vector machine
Authors: Yang, X.W. 
Hao, Z.F.
Liang, Y.C.
Shu, L.
Liu, G.R. 
Han, X. 
Keywords: Decomposition algorithm
Quadratic programming
Support vector machine
Issue Date: 2003
Citation: Yang, X.W., Hao, Z.F., Liang, Y.C., Shu, L., Liu, G.R., Han, X. (2003). Convergence of the extended Lagrangian support vector machine. International Conference on Machine Learning and Cybernetics 5 : 3146-3149. ScholarBank@NUS Repository.
Abstract: The Lagrangian support vector machine (LSVM) cannot solve large problems for nonlinear kernel classifiers. In order to extend the LSVM to very large problems, an extended Lagrangian support vector machine (ELSVM) for classification, based on the LSVM and SVMlight, has been presented by the authors. The idea behind the ELSVM is to divide a large quadratic programming problem into a series of small sub-problems and to solve each of them via the LSVM. Since the LSVM solves small and medium problems very fast for nonlinear kernel classifiers, the ELSVM can handle large problems very efficiently. Numerical experiments on different types of problems have been conducted to demonstrate the high efficiency of the ELSVM. In this paper, the convergence of the ELSVM is proved theoretically, placing the algorithm on a firm foundation.
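The LSVM solver that the ELSVM applies to each sub-problem is Mangasarian and Musicant's simple fixed-point iteration on the dual quadratic program. A minimal NumPy sketch of that iteration for a linear kernel is given below; the function name `lsvm`, the toy data, and the parameter choices (nu, the step factor alpha = 1.9/nu, which satisfies the 0 < alpha < 2/nu convergence condition) are illustrative assumptions, not the authors' exact implementation, and the ELSVM's working-set decomposition around it is omitted.

```python
import numpy as np

def lsvm(A, d, nu=1.0, alpha=None, tol=1e-6, max_iter=1000):
    """Illustrative Lagrangian SVM iteration (linear kernel).

    A : (m, n) data matrix, one sample per row.
    d : (m,) labels in {+1, -1}.
    Returns the separating hyperplane (w, gamma) for sign(x @ w - gamma).
    """
    m, _ = A.shape
    e = np.ones(m)
    D = np.diag(d)
    H = D @ np.hstack([A, -e[:, None]])      # H = D [A, -e]
    Q = np.eye(m) / nu + H @ H.T             # dual matrix: I/nu + H H'
    Qinv = np.linalg.inv(Q)
    if alpha is None:
        alpha = 1.9 / nu                     # convergence requires 0 < alpha < 2/nu
    u = Qinv @ e                             # starting point
    for _ in range(max_iter):
        # fixed-point update: u <- Q^{-1} (e + ((Q u - e) - alpha u)_+)
        z = (Q @ u - e) - alpha * u
        u_new = Qinv @ (e + np.maximum(z, 0.0))
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    w = A.T @ (d * u)                        # recover primal hyperplane
    gamma = -e @ (d * u)
    return w, gamma
```

Because each iteration only needs one fixed m-by-m matrix inverse and matrix-vector products, the update is very cheap for small m, which is exactly why the ELSVM can afford to call it repeatedly on small working-set sub-problems.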
Source Title: International Conference on Machine Learning and Cybernetics
ISBN: 0780378652
Appears in Collections:Staff Publications

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.