Please use this identifier to cite or link to this item: https://doi.org/10.1109/TNNLS.2019.2944562
Title: Adaptive Kernel Value Caching for SVM Training
Authors: Li, Qinbin
Wen, Zeyi 
He, Bingsheng 
Keywords: Science & Technology
Technology
Computer Science, Artificial Intelligence
Computer Science, Hardware & Architecture
Computer Science, Theory & Methods
Engineering, Electrical & Electronic
Computer Science
Engineering
Training
Support vector machines
Kernel
Libraries
Learning systems
Adaptive systems
Runtime
Caching
efficiency
kernel values
support vector machines (SVMs)
Issue Date: 1-Jul-2020
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation: Li, Qinbin, Wen, Zeyi, He, Bingsheng (2020-07-01). Adaptive Kernel Value Caching for SVM Training. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 31 (7) : 2376-2386. ScholarBank@NUS Repository. https://doi.org/10.1109/TNNLS.2019.2944562
Abstract: Support vector machines (SVMs) can solve structured multioutput learning problems such as multilabel classification, multiclass classification, and vector regression. SVM training is expensive, especially for large and high-dimensional data sets. The bottleneck of SVM training often lies in the kernel value computation. In many real-world problems, the same kernel values are used in many iterations during the training, which makes the caching of kernel values potentially useful. The majority of the existing studies simply adopt the least recently used (LRU) replacement strategy for caching kernel values. However, as we analyze in this article, the LRU strategy generally achieves a high hit ratio near the final stage of the training but does not work well over the whole training process. Therefore, we propose a new caching strategy called EFU (less frequently used), which replaces the less frequently used kernel values and thereby enhances LFU (least frequently used). Our experimental results show that EFU often has a 20% higher hit ratio than LRU in training with the Gaussian kernel. To further optimize the strategy, we propose a caching strategy called hybrid caching for SVM training (HCST), which has a novel mechanism to automatically adapt to the better caching strategy in different stages of the training. We have integrated the caching strategy into ThunderSVM, a recent SVM library on many-core processors. Our experiments show that HCST adaptively achieves high hit ratios with little runtime overhead across different problems, including multilabel classification, multiclass classification, and regression. Compared with other existing caching strategies, HCST achieves 20% more reduction in training time on average.
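The abstract contrasts recency-based eviction (LRU) with frequency-based eviction (EFU) and an adaptive hybrid (HCST). As a rough illustration of the two baseline eviction policies only, the Python sketch below caches kernel-matrix rows keyed by training-instance index; the class names, method signatures, and eviction details are assumptions made for this sketch and are not ThunderSVM's actual implementation or the paper's exact algorithm.

```python
# Illustrative sketch only -- not ThunderSVM's code or the paper's exact algorithm.
# Both caches store rows of the kernel matrix, keyed by training-instance index.
from collections import OrderedDict


class LRUKernelCache:
    """Evicts the least recently used kernel row when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = OrderedDict()          # index -> kernel row

    def get(self, idx):
        if idx in self.rows:
            self.rows.move_to_end(idx)     # mark as most recently used
            return self.rows[idx]
        return None                        # cache miss

    def put(self, idx, row):
        if idx in self.rows:
            self.rows.move_to_end(idx)
        elif len(self.rows) >= self.capacity:
            self.rows.popitem(last=False)  # evict least recently used row
        self.rows[idx] = row


class EFUKernelCache:
    """Frequency-based eviction in the spirit of EFU: the row with the
    smallest access count is replaced when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.rows = {}                     # index -> kernel row
        self.freq = {}                     # index -> access count

    def get(self, idx):
        if idx in self.rows:
            self.freq[idx] += 1
            return self.rows[idx]
        return None                        # cache miss

    def put(self, idx, row):
        if idx not in self.rows and len(self.rows) >= self.capacity:
            victim = min(self.rows, key=lambda k: self.freq[k])
            del self.rows[victim]          # evict least frequently used row
            del self.freq[victim]
        self.rows[idx] = row
        self.freq[idx] = self.freq.get(idx, 0) + 1


if __name__ == "__main__":
    cache = EFUKernelCache(capacity=2)
    cache.put(0, [1.0, 0.2, 0.3])
    cache.put(1, [0.2, 1.0, 0.5])
    cache.get(0)                           # bump the access count of row 0
    cache.put(2, [0.3, 0.5, 1.0])          # evicts row 1 (lowest access count)
    assert cache.get(1) is None and cache.get(0) is not None
```

HCST, per the abstract, adds a mechanism that adaptively switches to the better-suited strategy in different stages of training; that switching rule is defined in the paper and is not reproduced in this sketch.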
Source Title: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
URI: https://scholarbank.nus.edu.sg/handle/10635/215372
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2019.2944562
Appears in Collections: Staff Publications
Elements

Files in This Item:
File: 1911.03011v1.pdf | Size: 1.41 MB | Format: Adobe PDF | Access Setting: Open | Version: Post-print

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.