Please use this identifier to cite or link to this item: https://doi.org/10.14529/jsfi170206
DC Field | Value
dc.title | Beating floating point at its own game: Posit arithmetic
dc.contributor.author | Gustafson, J.L.
dc.contributor.author | Yonemoto, I.
dc.date.accessioned | 2020-11-23T08:52:07Z
dc.date.available | 2020-11-23T08:52:07Z
dc.date.issued | 2017
dc.identifier.citation | Gustafson, J.L., Yonemoto, I. (2017). Beating floating point at its own game: Posit arithmetic. Supercomputing Frontiers and Innovations, 4(2): 71-86. ScholarBank@NUS Repository. https://doi.org/10.14529/jsfi170206
dc.identifier.issn | 2409-6008
dc.identifier.uri | https://scholarbank.nus.edu.sg/handle/10635/183865
dc.description.abstract | A new data type called a posit is designed as a direct drop-in replacement for IEEE Standard 754 floating-point numbers (floats). Unlike earlier forms of universal number (unum) arithmetic, posits do not require interval arithmetic or variable-size operands; like floats, they round if an answer is inexact. However, they provide compelling advantages over floats, including larger dynamic range, higher accuracy, better closure, bitwise identical results across systems, simpler hardware, and simpler exception handling. Posits never overflow to infinity or underflow to zero, and "Not-a-Number" (NaN) indicates an action instead of a bit pattern. A posit processing unit takes less circuitry than an IEEE float FPU. With lower power use and a smaller silicon footprint, the posit operations per second (POPS) supported by a chip can be significantly higher than the FLOPS using similar hardware resources. GPU accelerators and Deep Learning processors, in particular, can do more per watt and per dollar with posits, yet deliver superior answer quality. A comprehensive series of benchmarks compares floats and posits for decimals of accuracy produced for a set precision. Low-precision posits provide a better solution than "approximate computing" methods that try to tolerate decreased answer quality. High-precision posits provide more correct decimals than floats of the same size; in some cases, a 32-bit posit may safely replace a 64-bit float. In other words, posits beat floats at their own game. © The Authors 2016.
dc.publisher | Publishing center of the South Ural State University
dc.rights | Attribution 4.0 International
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/
dc.source | Unpaywall 20201031
dc.subject | Computer hardware
dc.subject | Deep learning
dc.subject | Energy efficiency
dc.subject | Hardware
dc.subject | IEEE Standards
dc.subject | Linear algebra
dc.subject | Neural networks
dc.subject | Nitrogen compounds
dc.subject | Sodium compounds
dc.subject | Computer arithmetic
dc.subject | Energy efficient computing
dc.subject | Floating points
dc.subject | Linpack
dc.subject | Posits
dc.subject | Unum computing
dc.subject | Digital arithmetic
dc.type | Article
dc.contributor.department | DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi | 10.14529/jsfi170206
dc.description.sourcetitle | Supercomputing Frontiers and Innovations
dc.description.volume | 4
dc.description.issue | 2
dc.description.page | 71-86
dc.published.state | published
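
To give a concrete feel for the posit format summarized in the abstract above, the following is a minimal, illustrative decoder sketch. It is not code from the paper or the repository; it assumes the posit layout the article describes (a sign bit, a run-length-encoded regime, es exponent bits, and a fraction with a hidden leading 1), and the function name decode_posit and its default parameters are hypothetical choices made for this example.

```python
# Illustrative decoder for the posit format described in the article.
# A sketch for exposition only, not the authors' implementation.

def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit posit bit pattern (given as an unsigned int) into a float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0                      # all zeros encode the value zero
    if bits == 1 << (n - 1):            # 1 followed by zeros: the single exception value
        return float("inf")             # (called +-infinity in the 2017 paper, NaR later)
    sign = -1.0 if (bits >> (n - 1)) & 1 else 1.0
    if sign < 0:
        bits = (-bits) & mask           # negative posits decode via two's complement
    rem, nbits = bits & ((1 << (n - 1)) - 1), n - 1   # drop the sign bit
    # Regime: a run of identical bits; a run of r ones gives k = r-1, r zeros gives k = -r
    lead = (rem >> (nbits - 1)) & 1
    i, run = nbits - 1, 0
    while i >= 0 and ((rem >> i) & 1) == lead:
        run += 1
        i -= 1
    k = run - 1 if lead else -run
    i -= 1                              # skip the terminating (opposite) regime bit
    # Exponent: up to es bits; bits lost off the end of the word are treated as zero
    exp, got = 0, 0
    while got < es and i >= 0:
        exp = (exp << 1) | ((rem >> i) & 1)
        got += 1
        i -= 1
    exp <<= es - got
    # Fraction: whatever bits remain, with a hidden leading 1
    frac_bits = i + 1
    frac = rem & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    fraction = 1.0 + frac / (1 << frac_bits) if frac_bits > 0 else 1.0
    useed = 2 ** (2 ** es)              # regime scale factor: useed = 2^(2^es)
    return sign * (useed ** k) * (2.0 ** exp) * fraction

# Sanity checks for 8-bit posits with es = 0 (useed = 2):
print(decode_posit(0b01000000))   # 1.0
print(decode_posit(0b01100000))   # 2.0
print(decode_posit(0b01111111))   # 64.0, the largest 8-bit es=0 posit
print(decode_posit(0b11000000))   # -1.0
```

The sketch mirrors two points the abstract makes: there is no overflow to infinity or underflow to zero (the largest and smallest patterns remain finite, nonzero values), and only one bit pattern (1 followed by zeros) is reserved as an exception value rather than a block of NaN encodings.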
Appears in Collections: Staff Publications, Elements

Files in This Item:
File | Description | Size | Format | Access Settings | Version
10_14529_jsfi170206.pdf | | 3.13 MB | Adobe PDF | OPEN | None
This item is licensed under a Creative Commons Attribution 4.0 International License.