Please use this identifier to cite or link to this item: https://doi.org/10.1016/S0167-739X(00)00056-X
DC Field: Value
dc.title: Load-balancing data prefetching techniques
dc.contributor.author: Chi, C.-H.
dc.contributor.author: Yuan, J.-L.
dc.date.accessioned: 2013-07-04T07:29:34Z
dc.date.available: 2013-07-04T07:29:34Z
dc.date.issued: 2001
dc.identifier.citation: Chi, C.-H., Yuan, J.-L. (2001). Load-balancing data prefetching techniques. Future Generation Computer Systems 17 (6) : 733-744. ScholarBank@NUS Repository. https://doi.org/10.1016/S0167-739X(00)00056-X
dc.identifier.issn: 0167739X
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/38906
dc.description.abstract: Despite the success of hybrid data address and value prediction in increasing the accuracy and coverage of data prefetching, memory access latency is still found to be an important bottleneck to system performance. Careful study shows that about half of the cache misses are actually due to data references whose access pattern can be predicted accurately. Furthermore, the overall cache effectiveness is bounded by the behavior of unpredictable data references in the cache. In this paper, we propose a set of four load-balancing techniques to address this memory latency problem. The first two mechanisms, sequential unification and aggressive lookahead, are mainly used to reduce the chance of partial hits and the abortion of accurate prefetch requests. The latter two mechanisms, default prefetching and cache partitioning, are used to optimize the cache performance of unpredictable references. The resulting cache, called the LBD (load-balancing data) cache, is found to have superior performance over a wide range of applications. Simulation of the LBD cache with RPT prefetching (reference prediction table - one of the most cited selective data prefetch schemes, proposed by Chen and Baer) on SPEC95 showed that a significant reduction in data reference latency, ranging from about 20% to over 90% and averaging 55.89%, can be obtained. This is compared against the performance of prefetch-on-miss and RPT, with average latency reductions of only 17.37% and 26.05%, respectively. © 2001 Elsevier Science B.V.
dc.description.uri: http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1016/S0167-739X(00)00056-X
dc.source: Scopus
dc.subject: Cache
dc.subject: Load-balancing
dc.subject: Memory
dc.subject: Prefetch buffer
dc.subject: Prefetching
dc.type: Article
dc.contributor.department: COMPUTER SCIENCE
dc.description.doi: 10.1016/S0167-739X(00)00056-X
dc.description.sourcetitle: Future Generation Computer Systems
dc.description.volume: 17
dc.description.issue: 6
dc.description.page: 733-744
dc.description.coden: FGCSE
dc.identifier.isiut: 000167907900008
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

SCOPUS™ Citations: 1 (checked on May 5, 2021)
Page view(s): 112 (checked on May 5, 2021)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.