Please use this identifier to cite or link to this item:
https://doi.org/10.1109/IPDPS.2011.52
DC Field | Value
---|---
dc.title | Automated architecture-aware mapping of streaming applications onto GPUs
dc.contributor.author | Hagiescu, A.
dc.contributor.author | Huynh, H.P.
dc.contributor.author | Wong, W.-F.
dc.contributor.author | Goh, R.S.M.
dc.date.accessioned | 2013-07-04T08:24:27Z
dc.date.available | 2013-07-04T08:24:27Z
dc.date.issued | 2011
dc.identifier.citation | Hagiescu, A., Huynh, H.P., Wong, W.-F., Goh, R.S.M. (2011). Automated architecture-aware mapping of streaming applications onto GPUs. Proceedings - 25th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2011: 467-478. ScholarBank@NUS Repository. https://doi.org/10.1109/IPDPS.2011.52
dc.identifier.isbn | 9780769543857
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/41307
dc.description.abstract | Graphics Processing Units (GPUs) are made up of many streaming multiprocessors, each consisting of processing cores that interleave the execution of a large number of threads. Groups of threads - called warps and wavefronts in nVidia and AMD literature, respectively - are selected by the hardware scheduler and executed in lockstep on the available cores. If the threads in such a group access the slow off-chip global memory, the entire group is stalled and another group is scheduled instead. The utilization of a given multiprocessor remains high only if there is a sufficient number of alternative thread groups to select from. Many parallel general-purpose applications have been efficiently mapped to GPUs. Unfortunately, many stream processing applications exhibit unfavorable data movement patterns and a low computation-to-communication ratio that may lead to poor performance. In this paper, we describe an automated compilation flow that maps most stream processing applications onto GPUs by taking into consideration two important architectural features of nVidia GPUs, namely interleaved execution and the small amount of shared memory available in each streaming multiprocessor. In particular, we show that by using a small number of compute threads, so that the memory footprint is reduced, we can achieve high utilization of the GPU cores. Our scheme goes against the conventional wisdom of GPU programming, which is to use a large number of homogeneous threads. Instead, it uses a mix of compute and memory access threads, together with a carefully crafted schedule that exploits the parallelism in the streaming application while maximizing the effectiveness of the GPU's unique memory hierarchy. We have implemented our scheme in the compiler of the StreamIt programming language, and our results show a significant speedup compared to state-of-the-art solutions. © 2011 IEEE.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/IPDPS.2011.52
dc.source | Scopus
dc.subject | GPU
dc.subject | stream processing
dc.subject | StreamIt
dc.type | Conference Paper
dc.contributor.department | COMPUTER SCIENCE
dc.description.doi | 10.1109/IPDPS.2011.52
dc.description.sourcetitle | Proceedings - 25th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2011
dc.description.page | 467-478
dc.identifier.isiut | NOT_IN_WOS
Appears in Collections: Staff Publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.