A Note on Auto-tuning GEMM for GPUs

Li, Yinan and Dongarra, Jack and Tomov, Stanimire (2009) A Note on Auto-tuning GEMM for GPUs. [MIMS Preprint]



The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM [13, 11]. However, the current best GEMM performance, e.g. of up to 375 GFlop/s in single precision and of up to 75 GFlop/s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development involves extensive GPU knowledge and even reverse engineering to understand some undocumented details of the architecture that have been of key importance in the development [12]. In this paper, we describe some GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, the existing ideas. Auto-tuning, as we show in this paper, is a very practical solution where, in addition to easy portability, we can often get substantial speedups even on current GPUs (e.g. up to 27% in certain cases for both single and double precision GEMMs on the GTX 280).
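The core auto-tuning idea described in the abstract — generate candidate kernel configurations, benchmark each on the target hardware, and keep the fastest — can be illustrated with a minimal sketch. This is not the paper's actual GPU tuner; it is a hypothetical CPU-side example where the tunable parameter is the tile (block) size of a blocked GEMM, standing in for the thread-block and tiling parameters a GPU tuner would sweep:

```python
# Minimal sketch of empirical auto-tuning: sweep a tunable parameter
# (here, the tile size of a blocked GEMM) and keep the fastest variant.
# Hypothetical illustration only; not the tuner from the paper.
import time


def blocked_gemm(A, B, n, bs):
    """Multiply two n x n matrices (lists of lists of floats) using
    square tiling with block size bs, the tunable parameter."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        Brow = B[k]
                        Crow = C[i]
                        for j in range(jj, min(jj + bs, n)):
                            Crow[j] += a * Brow[j]
    return C


def autotune(A, B, n, candidates=(8, 16, 32, 64)):
    """Empirically pick the fastest block size for this problem size
    by timing each candidate configuration once."""
    best_bs, best_t = None, float("inf")
    for bs in candidates:
        t0 = time.perf_counter()
        blocked_gemm(A, B, n, bs)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best_bs, best_t = bs, elapsed
    return best_bs
```

A real GPU tuner would sweep a much larger space (thread-block shape, tile sizes in shared memory, register blocking, prefetching) and benchmark generated kernels on the device, but the select-by-measurement loop is the same.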

Item Type: MIMS Preprint
Additional Information: Appears also as Technical Report UT-CS-09-635, Department of Computer Science, University of Tennessee, Knoxville, TN, USA, May 2009, and as LAPACK Working Note 212
Uncontrolled Keywords: Auto-tuning, matrix multiply, dense linear algebra, GPUs
Subjects: MSC 2010, the AMS's Mathematics Subject Classification > 65 Numerical analysis
MSC 2010, the AMS's Mathematics Subject Classification > 68 Computer science
Depositing User: Ms Lucy van Russelt
Date Deposited: 15 Jan 2009
Last Modified: 20 Oct 2017 14:12
URI: http://eprints.maths.manchester.ac.uk/id/eprint/1214
