R – GPU Programming for All with ‘gpuR’
INTRODUCTION
GPUs (Graphics Processing Units) have become much more popular in recent years for computationally intensive calculations. Despite these gains, the use of this hardware has been very limited in the R programming language. Although possible, the prospect of programming in either OpenCL or CUDA is daunting for many programmers unaccustomed to working with such low-level interfaces. Creating bindings to R's high-level programming that abstract away the complex GPU code would make GPU computing far more accessible to R users. This is the core idea behind the gpuR package. There are three novel aspects of gpuR:
- Applicable on ‘ALL’ GPUs
- Abstracts away CUDA/OpenCL code so it can easily be incorporated into existing R algorithms
- Separates copy/compute functions to allow objects to persist on GPU
BROAD APPLICATION:
The ‘gpuR’ package was created to bring the power of GPU computing to any R user with a GPU device. Although there are a handful of packages that provide some GPU capability (e.g. gputools, cudaBayesreg, HiPLARM, HiPLARb, and gmatrix), all are strictly limited to NVIDIA GPUs. A backend based upon OpenCL instead allows all users to benefit from GPU hardware. The ‘gpuR’ package therefore utilizes the ViennaCL linear algebra library, which contains auto-tuned OpenCL kernels (among others) that can be leveraged for GPUs. The headers have been conveniently repackaged in the RViennaCL package. ViennaCL also allows for a CUDA backend for those with NVIDIA GPUs who may see further improved performance (contained within the companion gpuRcuda package, not yet formally released).
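Before casting any objects, a user can confirm which devices gpuR can see. Below is a minimal sketch assuming gpuR's device-query helpers (detectGPUs and gpuInfo); the printed details will naturally vary by machine.

    library(gpuR)

    # How many GPUs are visible in the current OpenCL context?
    detectGPUs()

    # Name, memory, and compute units of the current device
    gpuInfo()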
ABSTRACT AWAY GPU CODE:
The gpuR package uses the S4 object oriented system to provide explicit classes and methods that allow the user to simply cast their matrix or vector and continue programming in R as normal. For example:
    library(gpuR)

    ORDER = 1024

    A = matrix(rnorm(ORDER^2), nrow=ORDER)
    B = matrix(rnorm(ORDER^2), nrow=ORDER)
    gpuA = gpuMatrix(A, type="double")
    gpuB = gpuMatrix(B, type="double")

    C = A %*% B
    gpuC = gpuA %*% gpuB

    all(C == gpuC[])
    [1] TRUE
The gpuMatrix object points to a matrix in RAM which is then computed by the GPU when a desired function is called. This avoids R's copy-on-modify habit of duplicating an object's memory. For example:
    library(pryr)

    # Initially points to same object
    x = matrix(rnorm(16), 4)
    y = x

    address(x)
    [1] "0x16177f28"

    address(y)
    [1] "0x16177f28"

    # But once you modify the second object it creates a copy
    y[1,1] = 0

    address(x)
    [1] "0x16177f28"

    address(y)
    [1] "0x15fbb1d8"
In contrast, the same syntax for a gpuMatrix will modify the original object in-place without any copy.
    library(pryr)
    library(gpuR)

    # Initially points to same object
    x = gpuMatrix(rnorm(16), 4, 4)
    y = x

    x@address
    [1] <pointer: 0x6baa040>

    y@address
    [1] <pointer: 0x6baa040>

    # Modification affects both objects without copy
    y[1,1] = 0

    x@address
    [1] <pointer: 0x6baa040>

    y@address
    [1] <pointer: 0x6baa040>
Each new variable assigned to this object will only copy the pointer, thereby making the program more memory efficient. However, the gpuMatrix class does still require allocating GPU memory and copying data to the device for each function call. The most commonly used methods have been overloaded, such as %*%, +, -, *, /, crossprod, tcrossprod, and trig functions among others. In this way, an R user can create these objects and leverage GPU resources without needing to learn a host of new functions that would break existing algorithms, as the sketch below shows.
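To make the overloading concrete, here is a minimal sketch assuming a working OpenCL device; it uses only methods named above.

    library(gpuR)

    A = gpuMatrix(matrix(rnorm(64), 8, 8), type = "double")
    B = gpuMatrix(matrix(rnorm(64), 8, 8), type = "double")

    # Overloaded operators dispatch to GPU kernels
    C1 = A %*% B          # matrix multiplication
    C2 = A + B            # element-wise addition
    C3 = crossprod(A, B)  # t(A) %*% B without explicitly transposing
    C4 = sin(A)           # element-wise trig

    # Bracket extraction copies a result back to an ordinary R matrix
    head(C1[], 2)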
DISTINCT COPY/COMPUTE FUNCTIONALITY:
For the gpuMatrix and gpuVector classes there are companion vclMatrix and vclVector classes that point to objects that persist in the GPU RAM. In this way, the user explicitly decides when data needs to be moved back to the host. By avoiding unnecessary data transfers between host and device, performance can improve significantly. For example:
    vclA = vclMatrix(rnorm(10000), nrow = 100)
    vclB = vclMatrix(rnorm(10000), nrow = 100)
    vclC = vclMatrix(rnorm(10000), nrow = 100)

    # GEMM
    vclD = vclA %*% vclB

    # Element-wise addition
    vclD = vclD + vclC
In this code, the three initial matrices already exist in the GPU memory, so no data transfer takes place in the GEMM call. Furthermore, the returned ‘vclD’ matrix also remains in GPU memory, so the element-wise addition call likewise happens directly on the GPU with no data transfers. It is also worth noting that the user can still modify elements, rows, or columns with the exact same syntax as a normal R matrix.
    vclD[1,1] = 42
    vclD[,2] = rep(12, 100)
    vclD[3,] = rep(23, 100)
These operations simply copy the new elements to the GPU and modify the object in-place within the GPU memory. The ‘vclD’ object is never copied back to the host unless the user requests it.
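When the result is finally needed on the host, the copy is made explicitly. A minimal sketch, assuming the same bracket extraction shown for gpuMatrix above also applies to vclMatrix:

    # Explicitly copy the full matrix back to host RAM
    D = vclD[]

    # D is now an ordinary R matrix; vclD itself stays on the GPU
    dim(D)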
BENCHMARKS:
With all of that in mind, how does gpuR perform? Here are some general benchmarks of the popular GEMM operation. I currently have access to only a single NVIDIA GeForce GTX 970 for these simulations. Users should expect to see differences with higher-performance GPUs (e.g. AMD FirePro, NVIDIA Tesla, etc.). Speedup relative to the CPU will also vary depending upon user hardware.
(1) DEFAULT DGEMM VS BASE R
R is known to support only two numeric types (integer and double). As such, Figure 1 shows the fold speedup achieved by using the gpuMatrix and vclMatrix classes. Since R is already known not to be the fastest language, an implementation with an OpenBLAS backend is included as well for reference, using a 4-core Intel i5-2500 CPU @ 3.30GHz. As can be seen, there is a dramatic speedup from just using OpenBLAS or the gpuMatrix class (the two are essentially equivalent). Of interest is the impact of the host-device-host transfer time that is typical in many GPU implementations. This cost is eliminated by using the vclMatrix class, which continues to scale with matrix size.
Figure 1 – Fold speedup achieved using OpenBLAS (CPU) as well as the gpuMatrix/vclMatrix (GPU) classes provided in gpuR.
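The full benchmark harness is not reproduced here, but a minimal sketch of how such a timing comparison might be run follows; the matrix order and the use of system.time are illustrative assumptions, not the exact settings behind the figures.

    library(gpuR)

    ORDER = 2048  # illustrative size
    A = matrix(rnorm(ORDER^2), nrow = ORDER)
    B = matrix(rnorm(ORDER^2), nrow = ORDER)

    # CPU: whichever BLAS R is linked against (e.g. OpenBLAS)
    system.time(A %*% B)

    # gpuMatrix: each call pays the host-device-host transfer
    gpuA = gpuMatrix(A, type = "double")
    gpuB = gpuMatrix(B, type = "double")
    system.time(gpuA %*% gpuB)

    # vclMatrix: data already resides in GPU memory
    vclA = vclMatrix(A, type = "double")
    vclB = vclMatrix(B, type = "double")
    system.time(vclA %*% vclB)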
(2) SGEMM VS BASE R
In many GPU benchmarks, float operations are often measured as well. As noted above, R does not provide a float type by default. One way to get around this is to use the RcppArmadillo package and explicitly cast R objects to float types. The Armadillo library will also default to using the BLAS backend provided (i.e. OpenBLAS). Figure 2 shows the impact of using float data types. OpenBLAS continues to provide a noticeable speedup, but gpuMatrix begins to outperform it once the matrix order exceeds 1500. The vclMatrix class continues to demonstrate the value of retaining objects in GPU memory and avoiding memory transfers.
Figure 2 – Float type GEMM comparisons. Fold speedup achieved using OpenBLAS (via RcppArmadillo) as well as the gpuMatrix/vclMatrix (GPU) classes provided in gpuR.
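For readers curious what the float casting might look like, here is a hedged sketch (not the code used for the figures) assuming RcppArmadillo's converters for arma::fmat; the helper name sgemm_float is hypothetical.

    library(Rcpp)

    # Hypothetical helper: cast doubles down to 32-bit floats,
    # multiply via Armadillo (dispatching to the linked BLAS),
    # then convert back to double for R
    cppFunction(depends = "RcppArmadillo", '
    NumericMatrix sgemm_float(NumericMatrix A_, NumericMatrix B_) {
        arma::fmat A = Rcpp::as<arma::fmat>(A_);
        arma::fmat B = Rcpp::as<arma::fmat>(B_);
        arma::fmat C = A * B;
        return Rcpp::wrap(arma::conv_to<arma::mat>::from(C));
    }')

    A = matrix(rnorm(4), 2)
    B = matrix(rnorm(4), 2)
    sgemm_float(A, B)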
To give an additional view on the performance achieved by gpuMatrix and vclMatrix, Figure 3 compares them directly against the OpenBLAS performance. The gpuMatrix class reaches a ~2-3 fold speedup over OpenBLAS, whereas vclMatrix scales to over 100 fold speedup! It is curious why the performance with vclMatrix is so much faster, given that the two classes differ only in host-device-host transfers. Further optimization of gpuMatrix will need to be explored (fresh eyes are welcome), accepting the limitations of the bus transfer speed. Performance will certainly improve with improved hardware capabilities such as NVIDIA's NVLink.
Figure 3 – Fold speedup achieved over OpenBLAS (via RcppArmadillo) in float type GEMM comparisons vs the gpuMatrix/vclMatrix (GPU) classes provided in gpuR.
CONCLUSION
The gpuR package has been created to bring GPU computing to as many R users as possible. The intention is to use gpuR to more easily supplement current and future algorithms that could benefit from GPU acceleration. The gpuR package is currently available on CRAN. The development version can be found on my GitHub account, in addition to existing issues and wiki pages (assisting primarily with installation). Future developments include solvers (e.g. QR, SVD, Cholesky, etc.), scaling across multiple GPUs, ‘sparse’ class objects, and custom OpenCL kernels.
As noted above, this package is intended to be used with a multitude of hardware and operating systems (it has been tested on Windows, Mac, and multiple Linux flavors). I only have access to a limited set of hardware (I can't access every GPU, let alone the most expensive). As such, the development of gpuR depends upon the R user community. Volunteers who possess different hardware are always welcome and encouraged to submit issues regarding any discovered bugs. I have begun a Gitter account for users to report on successful usage with alternate hardware. Suggestions and general conversation about gpuR are welcome.
Source: http://www.parallelr.com/r-gpu-programming-for-all-with-gpur/