Please use this identifier to cite or link to this item: https://doi.org/10.14529/jsfi170206
Title: | Beating floating point at its own game: Posit arithmetic
Authors: | Gustafson, J.L.; Yonemoto, I.
Keywords: | Computer hardware; Deep learning; Energy efficiency; Hardware; IEEE Standards; Linear algebra; Neural networks; Nitrogen compounds; Sodium compounds; Computer arithmetic; Energy efficient computing; Floating points; Linpack; Posits; Unum computing; Digital arithmetic
Issue Date: | 2017
Publisher: | Publishing center of the South Ural State University
Citation: | Gustafson, J.L.; Yonemoto, I. (2017). Beating floating point at its own game: Posit arithmetic. Supercomputing Frontiers and Innovations 4 (2): 71-86. ScholarBank@NUS Repository. https://doi.org/10.14529/jsfi170206
Rights: | Attribution 4.0 International
Abstract: | A new data type called a posit is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and "Not-a-Number" (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality. A comprehensive series of benchmarks compares floats and posits for decimals of accuracy produced for a set precision. Low precision posits provide a better solution than "approximate computing" methods that try to tolerate decreased answer quality. High precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game. © The Authors 2016.
Source Title: | Supercomputing Frontiers and Innovations
URI: | https://scholarbank.nus.edu.sg/handle/10635/183865
ISSN: | 2409-6008
DOI: | 10.14529/jsfi170206
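To make the posit data type in the abstract concrete, the following is a minimal decoding sketch in Python. It is not the authors' implementation; it assumes the standard posit layout described in the paper (sign bit, run-length-encoded regime, up to es exponent bits, then fraction), and the function name decode_posit and the default parameters n=16, es=1 are chosen here purely for illustration.

```python
def decode_posit(bits: int, n: int = 16, es: int = 1) -> float:
    """Convert an n-bit posit bit pattern (given as an integer) to a float.

    Illustrative sketch only: n is the word size, es the exponent-field size.
    """
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0                         # the single zero pattern
    if bits == 1 << (n - 1):               # 1 followed by all zeros
        return float("inf")                # the one exception pattern
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0.0:
        bits = (-bits) & mask              # negative posits: two's complement
    rem = bits & ((1 << (n - 1)) - 1)      # the n-1 bits after the sign
    # Regime: a run of identical bits, terminated by the opposite bit.
    lead = (rem >> (n - 2)) & 1
    run = 0
    while run < n - 1 and ((rem >> (n - 2 - run)) & 1) == lead:
        run += 1
    k = run - 1 if lead else -run          # regime value
    rest_len = max(n - 1 - run - 1, 0)     # bits left after regime + terminator
    rest = rem & ((1 << rest_len) - 1)
    # Exponent: up to es bits; truncated (missing) bits count as zero.
    e_bits = min(es, rest_len)
    e = (rest >> (rest_len - e_bits)) << (es - e_bits)
    # Fraction: the remaining bits, with a hidden leading 1.
    f_len = rest_len - e_bits
    frac = rest & ((1 << f_len) - 1)
    significand = 1.0 + (frac / (1 << f_len) if f_len else 0.0)
    # useed = 2**(2**es), so useed**k * 2**e == 2**(k * 2**es + e).
    return sign * 2.0 ** (k * (1 << es) + e) * significand


# Example values for a 16-bit posit with es = 1 (useed = 4):
print(decode_posit(0b0100000000000000))    # 1.0
print(decode_posit(0b0000000000000001))    # minpos = 4**-14 = 2**-28
print(decode_posit(0b0111111111111111))    # maxpos = 4**14  = 2**28
```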
Appears in Collections: | Staff Publications; Elements
Files in This Item:
File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
10_14529_jsfi170206.pdf | | 3.13 MB | Adobe PDF | OPEN | None