Title: Design considerations of high performance data cache with prefetching
Authors: Chi, C.-H.; Yuan, J.-L.
Source: Chi, C.-H., Yuan, J.-L. (1999). Design considerations of high performance data cache with prefetching. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 1685 LNCS: 1243-1250. ScholarBank@NUS Repository.
Abstract: In this paper, we propose a set of four load-balancing techniques to address the memory latency problem of on-chip cache. The first two mechanisms, the sequential unification and the aggressive lookahead mechanisms, are mainly used to reduce the chance of partial hits and the aborting of accurate prefetch requests. The latter two mechanisms, the default prefetching and the cache partitioning mechanisms, are used to optimize the cache performance of unpredictable references. The resulting cache, called the LBD (Load-Balancing Data) cache, is found to have superior performance over a wide range of applications. Simulation of the LBD cache with RPT prefetching (Reference Prediction Table, one of the most cited selective data prefetch schemes [2,3]) on SPEC95 showed that a significant reduction in data reference latency, ranging from about 20% to over 90% with an average of 55.89%, can be obtained. This compares against the performance of prefetch-on-miss and RPT, with average latency reductions of only 17.37% and 26.05%, respectively. © Springer-Verlag Berlin Heidelberg 1999.
Source Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Appears in Collections: Staff Publications
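For readers unfamiliar with the baseline scheme named in the abstract, the following is a minimal sketch of a Reference Prediction Table (RPT) stride prefetcher in the style of Chen and Baer. It is not the paper's LBD cache; the class names, the simplified four-state machine, and the single-stride-ahead prefetch distance are illustrative assumptions, not details taken from this paper.

```python
# Illustrative sketch of an RPT (Reference Prediction Table) stride
# prefetcher. Each table entry is tagged by the PC of a load/store and
# tracks the last address, the observed stride, and a confidence state.

INITIAL, TRANSIENT, STEADY, NO_PRED = range(4)  # simplified state machine

class RPTEntry:
    def __init__(self, addr):
        self.prev_addr = addr   # last address referenced by this PC
        self.stride = 0         # last observed stride
        self.state = INITIAL

class RPT:
    def __init__(self):
        self.table = {}         # keyed by instruction PC
        self.prefetches = []    # addresses issued for prefetch

    def access(self, pc, addr):
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = RPTEntry(addr)  # first sighting: allocate
            return
        new_stride = addr - entry.prev_addr
        if new_stride == entry.stride:
            # Prediction correct: promote toward STEADY; once STEADY,
            # prefetch one stride ahead of the current reference.
            entry.state = STEADY if entry.state != NO_PRED else TRANSIENT
            if entry.state == STEADY:
                self.prefetches.append(addr + entry.stride)
        else:
            # Prediction wrong: demote; relearn the stride unless STEADY.
            if entry.state == STEADY:
                entry.state = INITIAL
            else:
                entry.stride = new_stride
                entry.state = NO_PRED if entry.state == NO_PRED else TRANSIENT
        entry.prev_addr = addr

rpt = RPT()
for a in (100, 108, 116, 124):  # one PC streaming with stride 8
    rpt.access(0x40, a)
print(rpt.prefetches)           # → [124, 132]
```

This is "selective" prefetching in the sense the abstract uses: only references whose PC exhibits a confirmed stride trigger prefetches, which is the behavior the paper's load-balancing mechanisms build on and compare against.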