|Title:||Optimizing the parallel computation of linear recurrences using compact matrix representations|
|Keywords:||Programmable graphics hardware|
|Citation:||Nistor, A., Chin, W.-N., Tan, T.-S., Tapus, N. (2009). Optimizing the parallel computation of linear recurrences using compact matrix representations. Journal of Parallel and Distributed Computing 69 (4) : 373-381. ScholarBank@NUS Repository. https://doi.org/10.1016/j.jpdc.2009.01.004|
|Abstract:||This paper presents a novel method for optimizing the parallel computation of linear recurrences. Our method reduces the resource requirements for both memory and computation. A unique feature of our technique is its formulation of linear recurrences as matrix computations before exploiting their mathematical properties for more compact representations. Based on a general notion of closure for matrix multiplication, we present two classes of matrices that have compact representations: permutation matrices and matrices whose elements are linearly related to each other. To validate the proposed method, we experiment with solving recurrences whose matrices have compact representations, using CUDA on an NVIDIA GeForce 8800 GTX GPU. Our technique enables the computation of larger recurrences in parallel and provides speedups of up to eleven times over the unoptimized parallel computation, while memory usage can be as much as nine times lower. Our results confirm a promising approach for the adoption of more advanced parallelization techniques. © 2009 Elsevier Inc. All rights reserved.|
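To illustrate the idea the abstract names, the sketch below shows how a first-order linear recurrence can be rewritten as a product of 2x2 matrices, which makes the computation associative and therefore parallelizable. This is a minimal illustration of the general formulation, not the paper's CUDA implementation; the function names and the example coefficients are our own. Note that each matrix here has the fixed shape [[a, b], [0, 1]], so only the pair (a, b) needs to be stored, a simple instance of the compact representations the paper exploits.

```python
import numpy as np

def recurrence_direct(a, b, x0):
    # Sequentially evaluate x[n+1] = a[n] * x[n] + b[n].
    x = x0
    for an, bn in zip(a, b):
        x = an * x + bn
    return x

def recurrence_as_matrices(a, b, x0):
    # The same recurrence written as a product of 2x2 matrices:
    #   [x[n+1]]   [a[n]  b[n]] [x[n]]
    #   [  1   ] = [ 0     1  ] [ 1  ]
    # Matrix multiplication is associative, so the product can be
    # reduced in any grouping (e.g. by a parallel tree reduction),
    # which is what makes the parallel formulation possible.
    M = np.eye(2)
    for an, bn in zip(a, b):
        M = np.array([[an, bn], [0.0, 1.0]]) @ M
    return (M @ np.array([x0, 1.0]))[0]

# Both formulations agree on a small example.
a = [2.0, 0.5, 3.0, 1.5]
b = [1.0, -2.0, 0.5, 4.0]
assert abs(recurrence_direct(a, b, 1.0)
           - recurrence_as_matrices(a, b, 1.0)) < 1e-9
```

In a parallel setting, each thread would multiply a contiguous slice of these matrices and a tree reduction would combine the partial products; storing each matrix as the pair (a, b) instead of four elements is what cuts the memory footprint.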
|Source Title:||Journal of Parallel and Distributed Computing|
|Appears in Collections:||Staff Publications|