Reposted from: Introduction to MPI - Part II (YouTube)

Buffering 

Suppose we have

if (rank == 0)
    MPI_Send(sendbuf, ..., 1, ...);
if (rank == 1)
    MPI_Recv(recvbuf, ..., 0, ...);

These are blocking communications, which means they will not return until the arguments to the functions can be safely modified by subsequent statements in the program.

Assume that process 1 is not ready to receive. Then there are 3 possibilities for process 0:

(1) stops and waits until process 1 is ready to receive

(2) copies the message at sendbuf into a system buffer (can be on process 0, process 1 or somewhere else) and returns from MPI_Send

(3) fails

As long as buffer space is available, (2) is a reasonable alternative.

An MPI implementation is permitted to copy the message to be sent into internal storage, but it is not required to do so.

What if not enough space is available?

>> In applications communicating large amounts of data, there may not be enough memory left in system buffers.

>> Until the receive starts, there is no place to store the message being sent.

>> In practice, option (1) results in serialized execution.

A programmer should not assume that the system provides adequate buffering.

Now consider a program executing:

Process 0                  Process 1
MPI_Send to process 1      MPI_Send to process 0
MPI_Recv from process 1    MPI_Recv from process 0

Such a program may work in many cases, but it is certain to fail once the messages are large enough that the system cannot buffer them.
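The exchange above can be sketched in C. This is an illustrative program, not code from the lecture; the message length N and the two-process assumption are arbitrary choices. For small N it usually works because the implementation buffers the message; for large N both MPI_Send calls can block forever.

```c
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define N (1 << 20)   /* message size chosen large enough to exceed system buffering */

int main(int argc, char *argv[]) {
    int rank, other;
    double *sendbuf = malloc(N * sizeof(double));
    double *recvbuf = malloc(N * sizeof(double));

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;   /* assumes exactly two processes, ranks 0 and 1 */

    /* Both processes send first: each MPI_Send may block waiting for a
       receive that is never posted -- deadlock once buffering runs out. */
    MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    free(sendbuf);
    free(recvbuf);
    return 0;
}
```

Run with two processes (e.g. `mpirun -np 2`); whether it hangs depends on the implementation's internal buffering, which is exactly why such programs are unsafe.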

There are some possible solutions:

>> Ordered send and receive - make sure each receive is matched with a send in execution order across processes.

>> The above matched pairing can be difficult to arrange in complex applications. An alternative is MPI_Sendrecv, which performs both the send and the receive such that no deadlock occurs even if no buffering is available.
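A minimal sketch of the MPI_Sendrecv alternative (the buffer size of 100 and the two-process setup are illustrative assumptions, not part of the original notes):

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, other;
    double sendbuf[100], recvbuf[100];   /* sizes arbitrary for illustration */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;   /* assumes exactly two processes */

    /* The send and receive halves are scheduled together by MPI, so the
       exchange completes without relying on system buffering. */
    MPI_Sendrecv(sendbuf, 100, MPI_DOUBLE, other, 0,
                 recvbuf, 100, MPI_DOUBLE, other, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
```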

>> Buffered sends. With MPI_Bsend, MPI allows the programmer to provide a buffer into which the data can be placed until it is delivered (or at least left in the buffer).
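A sketch of a buffered send, assuming a single 100-double message (the sizes are illustrative). The attached buffer must include MPI_BSEND_OVERHEAD bytes per message on top of the payload:

```c
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, bufsize;
    double data[100];
    char *buffer;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Reserve space for one 100-double message plus per-message overhead. */
    bufsize = 100 * sizeof(double) + MPI_BSEND_OVERHEAD;
    buffer = malloc(bufsize);
    MPI_Buffer_attach(buffer, bufsize);

    if (rank == 0)
        /* Returns as soon as the message is copied into the attached buffer,
           regardless of whether the receive has been posted. */
        MPI_Bsend(data, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(data, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Detach blocks until all buffered messages have been delivered. */
    MPI_Buffer_detach(&buffer, &bufsize);
    free(buffer);

    MPI_Finalize();
    return 0;
}
```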

>> Nonblocking communication. The communication is initiated, the program proceeds while it is ongoing, and a later check confirms it has completed. IMPORTANT: the data must not be modified until the communication has completed.

Safe programs

>> A program is safe if it will produce correct results even if the system provides no buffering.

>> Need safe programs for portability.

>> Most programmers expect the system to provide some buffering, hence many unsafe MPI programs are around.

>> Write safe programs by ordering matched sends and receives, using MPI_Sendrecv, allocating your own buffers, or using nonblocking operations.
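The first of these techniques, ordering matched pairs, can be sketched with an even/odd split. This is an illustrative pattern, not from the lecture; it assumes an even number of processes and pairs neighbors with `rank ^ 1`:

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[]) {
    int rank, partner;
    double buf[100];   /* size arbitrary for illustration */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    partner = rank ^ 1;   /* pairs (0,1), (2,3), ...; assumes even process count */

    if (rank % 2 == 0) {
        /* Even ranks send first, then receive. */
        MPI_Send(buf, 100, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 100, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        /* Odd ranks receive first, then send, so every send meets a
           posted receive and no buffering is required. */
        MPI_Recv(buf, 100, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(buf, 100, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```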

Nonblocking communications

>> Nonblocking communications are useful for overlapping communication with computation, and for ensuring safe programs.

>> A nonblocking operation requests that the MPI library perform the operation (when it can).

>> Nonblocking operations do not wait for any communication events to complete.

>> Nonblocking send and receive calls return almost immediately.

>> A send (receive) buffer can safely be modified only after the send (receive) has completed.

>> "Wait" routines let the program know when a nonblocking operation is done.

Example - Communication between processes in ring topology

>> With blocking communications it is not possible to write simple code that accomplishes this data exchange.

>> For example, if every process calls MPI_Send first, the program gets stuck: no process ever reaches the matching MPI_Recv that would accept the data.

>> Nonblocking communication avoids this problem.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int numtasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status stats[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* left and right neighbors in the ring, with wraparound */
    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = numtasks - 1;
    if (rank == numtasks - 1) next = 0;

    /* post nonblocking receives for both neighbors, then nonblocking sends */
    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);

    /* wait until all four communications have completed */
    MPI_Waitall(4, reqs, stats);

    printf("Task %d communicated with tasks %d & %d\n", rank, prev, next);

    MPI_Finalize();
    return 0;
}

Summary for Nonblocking Communications

>> A nonblocking send can be posted whether or not a matching receive has been posted.

>> A send is complete when the data has been copied out of the send buffer.

>> A nonblocking send can be matched with a blocking receive, and vice versa.

>> Communications are initiated by the sender.

>> A communication will generally have lower overhead if the receive buffer is already posted when the sender initiates the communication.
