Please use this identifier to cite or link to this item: https://doi.org/10.1109/TKDE.2018.2866097
Title: Efficient Multi-Class Probabilistic SVMs on GPUs
Authors: Zeyi Wen 
Jiashuai Shi
Bingsheng He 
Jian Chen
Yawen Chen
Keywords: Graphics Processing Units
Machine Learning
Multi-class probabilistic SVMs
Issue Date: 2018
Publisher: IEEE Computer Society
Citation: Zeyi Wen, Jiashuai Shi, Bingsheng He, Jian Chen, Yawen Chen (2018). Efficient Multi-Class Probabilistic SVMs on GPUs. IEEE Transactions on Knowledge and Data Engineering: 1693-1706. ScholarBank@NUS Repository. https://doi.org/10.1109/TKDE.2018.2866097
Abstract: Recently, many researchers have been working on accelerating traditional machine learning algorithms (besides deep learning) using high-performance hardware such as Graphics Processing Units (GPUs). The recent success of machine learning stems not only from more effective algorithms, but also from more efficient systems and implementations. In this paper, we propose a novel and efficient solution to multi-class SVMs with probabilistic output (MP-SVMs) accelerated by GPUs. MP-SVMs are an important technique for many pattern recognition applications. However, MP-SVMs are very time-consuming to use, because building an MP-SVM classifier requires training many binary SVMs and then estimating probabilities by combining the results of all the binary SVMs. GPUs have much higher computation capability than CPUs and are potentially excellent hardware for accelerating MP-SVMs. Still, two key challenges for efficient GPU acceleration of MP-SVMs remain: (i) many kernel values are repeatedly computed as a binary SVM classifier is trained iteratively, resulting in repeated accesses to high-latency GPU memory; (ii) performing training or probability estimation in a highly parallel way requires a memory footprint much larger than the GPU memory. To overcome these challenges, we propose a solution called GMP-SVM which exploits two-level (i.e., binary SVM level and MP-SVM level) optimization for training MP-SVMs and high parallelism for estimating probability. GMP-SVM reduces high-latency memory accesses and memory consumption through batch processing, kernel value reusing and sharing, and support vector sharing. Experimental results show that GMP-SVM outperforms the GPU baseline by two to five times, and LibSVM with OpenMP by an order of magnitude. Also, GMP-SVM produces the same SVM classifier as LibSVM.
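The abstract notes that an MP-SVM classifier combines the outputs of many pairwise binary SVMs into class probabilities. As a minimal illustration of that combination step only (not of the paper's GPU kernels or of GMP-SVM itself), the Python sketch below implements the pairwise coupling scheme in the style used by LibSVM, assuming the pairwise estimates r[i, j] ~ P(class i | class i or j, x) have already been obtained, e.g., via Platt scaling of each binary SVM's decision values. The function name and NumPy usage are illustrative, not taken from the paper.

    # Minimal sketch (assumption, not from the paper): combine pairwise
    # binary-SVM probabilities into multi-class probabilities via pairwise
    # coupling (Wu, Lin and Weng, 2004), the scheme LibSVM also uses.
    import numpy as np

    def pairwise_coupling(r, max_iter=100, eps=1e-6):
        # r[i, j] ~ P(class i | class i or j, x); assumes r[j, i] = 1 - r[i, j].
        k = r.shape[0]
        Q = np.zeros((k, k))
        for t in range(k):
            Q[t, t] = sum(r[j, t] ** 2 for j in range(k) if j != t)
            for j in range(k):
                if j != t:
                    Q[t, j] = -r[j, t] * r[t, j]
        p = np.full(k, 1.0 / k)                 # start from a uniform distribution
        for _ in range(max_iter):
            Qp = Q @ p
            pQp = p @ Qp
            if np.max(np.abs(Qp - pQp)) < eps:  # stationarity condition reached
                break
            for t in range(k):                  # coordinate-wise update of p[t]
                diff = (-Qp[t] + pQp) / Q[t, t]
                p[t] += diff
                p /= 1.0 + diff                 # keep sum(p) == 1
                Qp = (Qp + diff * Q[:, t]) / (1.0 + diff)
                pQp = p @ Qp
        return p

    # Example: 3 classes, pairwise estimates from the 3 binary SVMs.
    r = np.array([[0.0, 0.9, 0.6],
                  [0.1, 0.0, 0.3],
                  [0.4, 0.7, 0.0]])
    print(pairwise_coupling(r))  # class probabilities, summing to 1

In this illustration the dominant cost in practice lies in training the binary SVMs and computing their kernel values, which is the part the paper accelerates on GPUs; the coupling step above is cheap by comparison.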
Source Title: IEEE Transactions on Knowledge and Data Engineering
URI: https://scholarbank.nus.edu.sg/handle/10635/173893
ISSN: 1041-4347
DOI: 10.1109/TKDE.2018.2866097
Appears in Collections: Staff Publications
Elements

Files in This Item:
File: tkde18-pgpusvm.pdf
Size: 1.44 MB
Format: Adobe PDF
Access Settings: Open
Version: Post-print



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.