Please use this identifier to cite or link to this item: https://doi.org/10.1016/j.neucom.2006.05.007
Title: Developing parallel sequential minimal optimization for fast training support vector machine
Authors: Cao, L.J.
Keerthi, S.S. 
Ong, C.J. 
Uvaraj, P.
Fu, X.J.
Lee, H.P.
Keywords: Message passing interface (MPI)
Parallel algorithm
Sequential minimal optimization (SMO)
Support vector machine (SVM)
Issue Date: Dec-2006
Citation: Cao, L.J., Keerthi, S.S., Ong, C.J., Uvaraj, P., Fu, X.J., Lee, H.P. (2006-12). Developing parallel sequential minimal optimization for fast training support vector machine. Neurocomputing 70 (1-3) : 93-104. ScholarBank@NUS Repository. https://doi.org/10.1016/j.neucom.2006.05.007
Abstract: A parallel version of sequential minimal optimization (SMO) is developed in this paper for fast training of support vector machines (SVM). SMO is one of the most popular algorithms for training SVM, but it still requires a large amount of computation time for solving large-size problems. The parallel SMO is developed using the message passing interface (MPI). Unlike the sequential SMO, which handles all the training data points on a single CPU processor, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each dealing with one of the partitioned data sets. Experiments show a great speedup on the adult data set, the MNIST data set and the IDEVAL data set when many processors are used, and satisfactory results on the Web data set. This work is very useful for research where a machine with multiple CPU processors is available. © 2006 Elsevier B.V. All rights reserved.
Source Title: Neurocomputing
URI: http://scholarbank.nus.edu.sg/handle/10635/59892
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2006.05.007
Appears in Collections: Staff Publications
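
The abstract describes a data-parallel pattern: each CPU processor works on its own partition of the training set, and partial per-processor results are combined through MPI collectives. The sketch below is not the authors' code; it only illustrates that partition-and-reduce pattern in plain C with MPI. The array f (standing in for SMO's per-sample error cache), the partition size, and all variable names are illustrative assumptions.

/*
 * Illustrative sketch of the MPI pattern suggested by the abstract:
 * each process holds one partition of the training data, computes a
 * local quantity, and a single collective reduction merges the partial
 * results. Here the local quantity is the smallest value (and its
 * global index) in a toy array f standing in for SMO's error cache.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process owns one partition (chunk) of the toy error cache. */
    const int local_n = 1000;
    double *f = malloc(local_n * sizeof(double));
    srand(rank + 1);
    for (int i = 0; i < local_n; ++i)
        f[i] = (double)rand() / RAND_MAX;

    /* Local minimum together with its global index (MINLOC pattern). */
    struct { double val; int idx; } local_min, global_min;
    local_min.val = f[0];
    local_min.idx = rank * local_n;              /* global index of f[0] */
    for (int i = 1; i < local_n; ++i) {
        if (f[i] < local_min.val) {
            local_min.val = f[i];
            local_min.idx = rank * local_n + i;
        }
    }

    /* One collective merges all per-processor results. */
    MPI_Allreduce(&local_min, &global_min, 1, MPI_DOUBLE_INT, MPI_MINLOC,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("global minimum %f at index %d (over %d processes)\n",
               global_min.val, global_min.idx, size);

    free(f);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -np N, each of the N processes reduces its local result and the collective returns the global one, mirroring at a high level how per-processor partial computations would be merged in a parallel SMO.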
