Please use this identifier to cite or link to this item: https://doi.org/10.1109/TC.2020.3001033
Title: Accelerating Generative Neural Networks on Unmodified Deep Learning Processors-A Software Approach
Authors: Xu, Dawen
Liu, Cheng
Wang, Ying
Tu, Kaijie
He, Bingsheng 
Zhang, Lei
Keywords: Science & Technology
Technology
Computer Science, Hardware & Architecture
Engineering, Electrical & Electronic
Computer Science
Engineering
Deconvolution
Program processors
Neural networks
Convolution
Computer architecture
Hardware
Acceleration
Generative neural network
deconvolution accelerator
split deconvolution
Issue Date: 8-Jan-2020
Publisher: IEEE COMPUTER SOC
Citation: Xu, Dawen, Liu, Cheng, Wang, Ying, Tu, Kaijie, He, Bingsheng, Zhang, Lei (2020-01-08). Accelerating Generative Neural Networks on Unmodified Deep Learning Processors-A Software Approach. IEEE TRANSACTIONS ON COMPUTERS 69 (8) : 1172-1184. ScholarBank@NUS Repository. https://doi.org/10.1109/TC.2020.3001033
Abstract: Generative neural networks are a new category of neural networks widely used in applications such as content generation, unsupervised learning, segmentation, and pose estimation. They typically involve massive compute-intensive deconvolution operations that cannot be mapped directly onto conventional neural network processors. Prior works mainly investigated specialized hardware architectures, making intensive modifications to existing deep learning processors to accelerate deconvolution alongside convolution. In contrast, this article proposes a novel software-based deconvolution implementation that enables fast and efficient deconvolution execution on existing, unmodified deep learning processors. The proposed method reorganizes the deconvolution computation so that the processors can treat it as standard convolution, by splitting each original deconvolution filter into multiple smaller filters. Compared to prior acceleration schemes, the implemented scheme achieves a 2.4x to 4.3x performance speedup and reduces energy consumption by 27.7 to 54.5 percent on a set of realistic benchmarks. In addition, the deconvolution computing approach has also been applied to off-the-shelf commodity deep learning processors, where it again exhibits significant speedup over prior deconvolution implementations.
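The filter-splitting idea described in the abstract can be illustrated with a small sketch. This is not the authors' implementation, but a 1-D NumPy analogue under assumed settings: a stride-s transposed convolution (deconvolution) is decomposed into s standard convolutions over sub-filters taken from the original filter, and their outputs are interleaved, so a convolution-only accelerator never has to process a zero-inserted input.

```python
import numpy as np

def transposed_conv1d(x, w, stride=2):
    """Reference deconvolution: zero-insertion upsampling + full convolution."""
    up = np.zeros(stride * (len(x) - 1) + 1)
    up[::stride] = x                      # insert stride-1 zeros between inputs
    return np.convolve(up, w)

def split_transposed_conv1d(x, w, stride=2):
    """Same result via filter splitting: run `stride` standard convolutions
    with small sub-filters and interleave their outputs."""
    y = np.zeros(stride * (len(x) - 1) + len(w))
    for p in range(stride):
        sub = w[p::stride]                # sub-filter p (every stride-th tap)
        if sub.size:                      # skip empty sub-filters (len(w) < stride)
            y[p::stride] = np.convolve(x, sub)
    return y

x = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 1.0, 1.0])
assert np.allclose(transposed_conv1d(x, w), split_transposed_conv1d(x, w))
```

The split version performs only dense convolutions on the original (non-upsampled) input, which matches the paper's goal of reusing an unmodified convolution engine; the 2-D case in the article interleaves stride x stride sub-outputs analogously.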
Source Title: IEEE TRANSACTIONS ON COMPUTERS
URI: https://scholarbank.nus.edu.sg/handle/10635/215371
ISSN: 0018-9340 (print)
1557-9956 (electronic)
DOI: 10.1109/TC.2020.3001033
Appears in Collections: Staff Publications; Elements

Files in This Item:
File: 1907.01773v3.pdf (4.44 MB, Adobe PDF)
Access Settings: OPEN
Version: Post-print

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.