Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/167567
Title: FIXED AND FLOATING POINT PRECISION OPTIMIZED APPROXIMATION ON EMBEDDED AND PARALLEL ARCHITECTURES
Authors: HO NHUT MINH
Keywords: Approximate computing, half precision, GPU, floating point, fixed point, neural networks
Issue Date: 1-Nov-2019
Citation: HO NHUT MINH (2019-11-01). FIXED AND FLOATING POINT PRECISION OPTIMIZED APPROXIMATION ON EMBEDDED AND PARALLEL ARCHITECTURES. ScholarBank@NUS Repository.
Abstract: The recent emergence of approximable programs has sparked general interest in using low-precision number formats to exploit the trade-off between energy and tolerable accuracy. This thesis focuses on utilizing these number formats on emerging hardware platforms. For embedded platforms, we propose a set of methods for precisely allocating a bitwidth to each variable in the program given an accuracy threshold. For customized hardware designed to support fixed-point precision in energy-hungry machine learning applications, we propose a method to adaptively allocate bit-level precision according to hardware-design constraints such as area budget and memory bandwidth. For GPU architectures, we focus on the use of half precision and tensor cores in CUDA programs. We propose a solution for efficiently mixing half precision with other floating-point data types, and a framework that utilizes low-precision tensor cores to run general CUDA programs.
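To make the GPU part of the abstract concrete, below is a minimal illustrative sketch (not code from the thesis) of mixing half precision with single precision in CUDA: the input array is stored in half precision to reduce memory traffic, while the arithmetic and output stay in single precision. The kernel name axpy_half and all setup values are hypothetical; host-side __half conversions assume a recent CUDA toolkit.

// Illustrative sketch of mixed half/single precision in CUDA.
// Hypothetical example, not taken from the thesis.
#include <cuda_fp16.h>
#include <cstdio>

// y = a*x + y, with x stored in half precision to save bandwidth
// and the multiply-add carried out in single precision for accuracy.
__global__ void axpy_half(int n, float a, const __half* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Widen the half-precision input to float before computing.
        y[i] = a * __half2float(x[i]) + y[i];
    }
}

int main() {
    const int n = 1024;
    __half* x; float* y;
    cudaMallocManaged(&x, n * sizeof(__half));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) {
        x[i] = __float2half(1.0f);  // host-side conversion to half
        y[i] = 2.0f;
    }

    axpy_half<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 5.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}

The design choice this sketch illustrates is the one the abstract names: low-precision storage for bandwidth and energy savings, with wider precision retained where it matters for accuracy.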
URI: https://scholarbank.nus.edu.sg/handle/10635/167567
Appears in Collections: Ph.D Theses (Open)

Files in This Item:
  File: HoNM.pdf
  Size: 4.57 MB
  Format: Adobe PDF
  Access Settings: Open
  Version: None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.