2009.8: A Note on Auto-tuning GEMM for GPUs
Yinan Li, Jack Dongarra and Stanimire Tomov (2009). A Note on Auto-tuning GEMM for GPUs.
The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM [13, 11]. However, the current best GEMM performance, e.g. of up to 375 GFlop/s in single precision and of up to 75 GFlop/s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even reverse engineering to understand undocumented details of the architecture that have been of key importance in the development. In this paper, we describe some GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, existing ideas. Auto-tuning, as we show in this paper, is a very practical solution: in addition to easy portability, we can often get substantial speedups even on current GPUs (e.g. up to 27% in certain cases for both single and double precision GEMMs on the GTX 280).
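The core idea of auto-tuning mentioned in the abstract can be illustrated with a minimal sketch: generate kernel variants over a parameter space (here, just a blocking/tile size), benchmark each on the target hardware, and keep the fastest. This is an illustrative Python sketch only, not the paper's CUDA GEMM tuner; the function names (`blocked_matmul`, `autotune`) and the candidate tile sizes are hypothetical.

```python
# Hypothetical auto-tuning sketch: time a blocked matrix multiply for
# several candidate tile sizes and keep the fastest one. The paper's
# auto-tuner searches a much richer space of CUDA GEMM kernel variants;
# this only demonstrates the generate-benchmark-select loop.
import time


def blocked_matmul(A, B, n, tile):
    """Naive blocked n x n matrix multiply over row-major lists of lists."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                # Multiply one tile of A against one tile of B,
                # accumulating into the corresponding tile of C.
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C


def autotune(n=64, candidates=(8, 16, 32, 64)):
    """Benchmark each candidate tile size and return the fastest."""
    A = [[1.0] * n for _ in range(n)]
    B = [[1.0] * n for _ in range(n)]
    best_tile, best_time = None, float("inf")
    for tile in candidates:
        start = time.perf_counter()
        blocked_matmul(A, B, n, tile)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_tile, best_time = tile, elapsed
    return best_tile


if __name__ == "__main__":
    print("best tile size:", autotune())
```

The same loop structure carries over to GPU kernels: the parameter space grows (thread-block shapes, register blocking, prefetch depth), but the selection step stays a straightforward empirical search, which is what makes the approach portable across hardware generations.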
Item Type: MIMS Preprint
Additional Information: Appears also as Technical Report UT-CS-09-635, Department of Computer Science, University of Tennessee, Knoxville, TN, USA, May 2009, and as LAPACK Working Note 212
Uncontrolled Keywords: Auto-tuning, matrix multiply, dense linear algebra, GPUs
Subjects: MSC 2000 > 65 Numerical analysis; MSC 2000 > 68 Computer science
Deposited By: Ms Lucy van Russelt
Deposited On: 15 January 2009