Voting and Shuffling to Optimize Atomic Operations
Some years ago I started work on my first CUDA implementation of the Multiparticle Collision Dynamics (MPC) algorithm, a particle-in-cell code used to simulate hydrodynamic interactions between solvents and solutes. As part of this algorithm, a number of particle parameters are summed to calculate certain cell parameters. This was in the days of the Tesla GPU architecture (such as GT200 GPUs, Compute Capability 1.x), which had poor atomic operation performance. A linked-list approach I developed worked well on Tesla and Fermi as an alternative to atomic adds but performed poorly on Kepler GPUs. However, atomic operations are much faster on the Kepler and Maxwell architectures, so it makes sense to use atomic adds.
These types of summations are not limited to MPC or particle-in-cell codes, but, to some extent, occur whenever data elements are aggregated by key. For data elements sorted and combined by key with a large number of possible values, pre-combining elements with the same key at warp level can lead to a significant speed-up. In this post, I will describe algorithms for speeding up your summations (or similar aggregations) for problems with a large number of keys where there is a reasonable correlation between the thread index and the key. This is usually the case for elements that are at least partially sorted. Unfortunately, this argument works in both directions: these algorithms are not for you if your number of keys is small or your distribution of keys is random. To clarify: by a “large” number of keys I mean more than could be handled if all bins were put into shared memory.
Note that this technique is related to a previously posted technique called warp-aggregated atomics by Andrey Adinetz, and also to the post Fast Histograms Using Shared Atomics on Maxwell by Nikolay Sakharnykh. The main difference here is that we are aggregating many groups, each designated by a key (to compute a histogram, for example). So you could consider this technique “warp-aggregated atomic reduction by key”.
Double Precision Atomics and Warp Divergence
To achieve sufficient numerical precision, the natively provided single-precision atomicAdd is often inadequate. Unfortunately, using an atomicCAS loop to implement double-precision atomic operations (as suggested in the CUDA C Programming Guide) introduces warp divergence, especially when the order of the data elements correlates with their keys. However, there is a way to remove this warp divergence (and a number of atomic operations): pre-combine all the elements in each warp that share the same key and use only one atomic operation per key and warp.
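For reference, the atomicCAS loop for double precision follows the pattern given in the CUDA C Programming Guide; a minimal sketch (the function name is mine, since newer GPUs provide a native double-precision atomicAdd) looks like this. Every thread that loses the compare-and-swap race has to retry, which is where the divergence and the extra atomic traffic come from.

```cuda
// Sketch of the atomicCAS-based double-precision add, following the pattern
// in the CUDA C Programming Guide (needed on GPUs without native support).
__device__ double atomicAddDouble(double *address, double val)
{
    unsigned long long int *address_as_ull = (unsigned long long int *)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val + __longlong_as_double(assumed)));
        // retry as long as another thread modified the value in the meantime
    } while (assumed != old);
    return __longlong_as_double(old);
}
```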
Why Work With Warps?
Using a “per warp” approach has two practical benefits. First, threads in a warp can communicate efficiently using warp vote and __shfl (shuffle) operations. Second, there is no need for explicit synchronization, because the threads of a warp work synchronously. Applying the concepts shown later at the thread-block level is significantly slower because of the slower communication through shared memory and the necessary synchronization.
Finding Groups of Keys Within a Warp
Various algorithms can be used to find the elements in a warp that share the same key (“peers”). A simple way is to loop over all different keys, but there are also hash-value based approaches using shared memory and probably many others. The performance and sometimes also the suitability of each algorithm depends on the type and distribution of the keys and the architecture of the target GPU.
The code shown in Example 1 should work with keys of all types on Kepler and later GPUs (Compute Capability 3.0 and later). It takes the key of lane 0 as a reference, distributes it using a __shfl() operation to all other lanes, and uses a __ballot() operation to determine which lanes share the same key. These are removed from the pool of lanes, and the procedure is repeated with the key of the first remaining lane until all keys in the warp are checked. For each warp, this loop obviously has as many iterations as there are different keys. The function returns a bit pattern for each thread with the bits of its peer elements set. You can see a step-by-step example of how this works starting on slide 9 of my GTC 2015 talk.
```cuda
template<typename G>
__device__ __inline__ uint get_peers(G key)
{
    uint peers = 0;
    bool is_peer;

    // in the beginning, all lanes are available
    uint unclaimed = 0xffffffff;

    do {
        // fetch key of first unclaimed lane and compare with this key
        is_peer = (key == __shfl(key, __ffs(unclaimed) - 1));

        // determine which lanes had a match
        peers = __ballot(is_peer);

        // remove lanes with matching keys from the pool
        unclaimed ^= peers;

        // quit if we had a match
    } while (!is_peer);

    return peers;
}
```
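To get a feeling for what get_peers() returns, a hypothetical test kernel (not part of the original post) can print each lane's peer mask for a hand-picked key pattern:

```cuda
#include <cstdio>

// Hypothetical test kernel: launch with a single warp (32 threads) and one
// key per lane, then inspect the printed peer masks.
__global__ void show_peers(const int *keys)
{
    int key = keys[threadIdx.x];
    unsigned int peers = get_peers(key);
    printf("lane %2d key %3d peers 0x%08x\n", threadIdx.x, key, peers);
}
```

For example, with the keys {7, 7, 4, 7} on lanes 0–3 and a fourth, common key on the remaining lanes, lanes 0, 1 and 3 should all report the mask 0x0000000b, lane 2 the mask 0x00000004, and lanes 4–31 the mask 0xfffffff0.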
Pre-Combining Peers
Ideally, the peer elements found can be added or otherwise combined in parallel in log2(max_n_peer) iterations if we use parallel tree-like reductions on all groups of peers. But with a number of interleaved trees traversed in parallel and only represented by bit patterns, it is not obvious which element should be added. There are at least two solutions to this problem:
- make it obvious by switching the positions of all elements back and forth for the calculation; or
- interpret the bit pattern to determine which element to add next.
The latter approach had a slight edge in performance in my implementation.
Note that Mark Harris taught us a long time ago to start reductions with the “far away” elements to avoid bank conflicts. This is still true when using shared memory, but with the __shfl() operations used here, we do not have bank conflicts, so we can start with our neighboring peers. (See the post “Faster Reductions on Kepler using SHFL”.)
Parallel Reductions
Let’s start with some observations on the usual parallel reductions. Of the two elements combined in each operation, the one with the higher index is consumed and is therefore obsolete afterwards. At each iteration, every second remaining element is “reduced away”. In other words, in iteration i (0 always being the first here), all elements with an index not divisible by 2^(i+1) are “reduced away”, which is equivalent to saying that in iteration i, elements are reduced away if the least-significant set bit of their relative index is 2^i.
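For comparison, this is what the familiar shuffle-based warp sum looks like (a minimal sketch using the pre-CUDA-9 intrinsics that match the rest of this post). With offsets 1, 2, 4, …, the lanes whose least-significant set bit is 2^i drop out of the remaining work after iteration i, and lane 0 ends up with the total.

```cuda
// Minimal warp-sum sketch: after iteration i, every lane whose
// least-significant set bit is 2^i is no longer needed; lane 0 holds the sum.
__device__ float warp_sum(float x)
{
    for (int offset = 1; offset < 32; offset <<= 1)
        x += __shfl_down(x, offset);
    return x;   // valid in lane 0
}
```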
Replacing the thread's lane index with a relative index among the thread's peers, we can perform the parallel reductions over the peer groups as follows.
- Calculate each thread’s relative index within its peer group.
- Ignore all peer elements in the warp with a lane index lower than or equal to this thread's lane index.
- If the least-significant set bit of the thread's relative index is not 2^i (with i being the iteration), and there are peers with higher relative indices left, acquire and use the data from the next-highest remaining peer.
- If the least-significant set bit of the thread's relative index is 2^i, remove the bit for this thread's lane from the bit set of remaining peers.
- Continue until there are no more remaining peers in the warp.
The last step may sound inefficient, but no thread in this warp can continue before the last thread is done. Furthermore, exiting early would require a more complex loop exit condition, so there is no gain from a more efficient-looking implementation.
You can see an actual implementation in Example 2 and a step-by-step walkthrough starting on slide 19 of my GTC 2015 presentation.
```cuda
template <typename F>
__device__ __inline__ F reduce_peers(uint peers, F &x)
{
    int lane = TX & 31;   // TX is assumed to be the thread's index within the block (threadIdx.x)

    // find the peer with the lowest lane index
    int first = __ffs(peers) - 1;

    // calculate own relative position among peers
    int rel_pos = __popc(peers << (32 - lane));

    // ignore peers with lower (or same) lane index
    peers &= (0xfffffffe << lane);

    while (__any(peers)) {
        // find next-highest remaining peer
        int next = __ffs(peers);

        // __shfl() only works if both threads participate, so we always do
        F t = __shfl(x, next - 1);

        // only add if there was anything to add
        if (next) x += t;

        // all lanes with their least significant index bit set are done
        uint done = rel_pos & 1;

        // remove all peers that are already done
        peers &= ~__ballot(done);

        // abuse relative position as iteration counter
        rel_pos >>= 1;
    }

    // distribute final result to all peers (optional)
    F res = __shfl(x, first);
    return res;
}
```
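To see how the pieces fit together, here is a hypothetical usage sketch (not from the original post): get_peers() groups the lanes, reduce_peers() pre-combines their values, and only the lowest lane of each group issues the atomic add. atomicAddDouble() stands for the atomicCAS-based helper sketched earlier, or the native double-precision atomicAdd on newer GPUs.

```cuda
// Hypothetical kernel: sum values into bins by key with one atomic add per
// key and warp. Assumes n is a multiple of 32 so that every warp is fully
// active, because get_peers() polls all 32 lanes.
__global__ void sum_by_key(const int *keys, const double *values,
                           double *bins, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;

    int key = keys[idx];
    double val = values[idx];

    unsigned int peers = get_peers(key);    // lanes sharing this key
    double sum = reduce_peers(peers, val);  // group total, available in every peer

    int lane = threadIdx.x & 31;
    if (lane == __ffs(peers) - 1)           // only the group leader...
        atomicAddDouble(&bins[key], sum);   // ...touches global memory
}
```

Compared to one atomic add per element, at most one thread per warp now competes for a given bin, which greatly reduces the divergence inside the CAS loop for partially sorted keys.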
Performance Tests
To test the performance I set up an example with a simulation box of size 100x100x100 cells with 10 particles per cell and the polynomial cell index being the key element. This leads to one million different keys and is therefore unsuitable for direct binning in shared memory. I compared my algorithm (“warp reduction” in Figure 1) against an unoptimized atomic implementation and a binning algorithm using shared memory atomics with a hashing function to calculate bin indices per block on the fly.
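For orientation, the “polynomial” (flattened three-dimensional) cell index used as the key in this setup can be computed along the following lines; this is my own sketch with the box and cell sizes of the test case hard-coded, not the original code.

```cuda
// Hypothetical key calculation for the 100x100x100 test box with unit cells:
// the flattened cell index yields one million distinct keys.
__device__ int cell_key(float3 pos)   // pos assumed to lie in [0, 100)^3
{
    int cx = (int)pos.x;
    int cy = (int)pos.y;
    int cz = (int)pos.z;
    return (cz * 100 + cy) * 100 + cx;
}
```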
For comparison I used three different distributions of keys:
- completely random;
- completely ordered by key;
- ordered, then shifted with a 50% probability to the next cell in each direction. This is a test pattern resembling a common distribution during the MPC algorithm.
Tests were performed using single- and double-precision values. For single-precision numbers, the simple atomic implementation is always either faster or at least not significantly slower than the other implementations. This is also true for double-precision numbers with randomly distributed keys, where any attempt to reduce work occasionally results in huge overhead. For these reasons, single-precision numbers and completely randomly distributed keys are excluded from the comparison. Also excluded is the time used to calculate the peer bit masks, because it varies from an insignificant few percent (shared-memory hashing for positive integer keys on Maxwell) to about the same order of magnitude as a reduction step (the external loop described above for arbitrary key types, also on Maxwell). But even in the latter case we can achieve a net gain if it is possible to reuse the bit patterns at least once.
Conclusion
For data elements sorted and combined by key with a large number of possible values, pre-combining same-key elements at warp level can lead to a significant speed-up. The actual gain depends on many factors like GPU architecture, number and type of data elements and keys, and re-usability. The changes in code are minor, so you should try it if you have a task fitting the criteria.
For related techniques, be sure to check out the posts Warp-Aggregated Atomics by Andrey Adinetz, and Fast Histograms Using Shared Atomics on Maxwell by Nikolay Sakharnykh.