Notes of Principles of Parallel Programming - TODO
0.1 Topic
Notes on Lin C., Snyder L. Principles of Parallel Programming. Beijing: China Machine Press, 2008.
(1) Parallel Computer Architecture - done 2015/5/24
(2) Parallel Abstraction - done 2015/5/28
(3) Scalable Algorithm Techniques - done 2015/5/30
(4) PP Languages: Java(Thread), MPI(local view), ZPL(global view)
0.2 Audience
Novice PP programmers who want to gain fundamental PP concepts
0.3 Related Topics
Computer Architecture, Sequential Algorithms,
PP Programming Languages
--------------------------------------------------------------------
- ### 1 introduction
real world cases:
house construction, manufacturing pipeline, call center
ILP(Instruction Level Parallelism)
(a+b) * (c+d)
Parallel Computing V.S. Distributed Computing
the goal of PC is to provide performance, either in terms of
processor power or memory that a single processor cannot provide;
the goal of DC is to provide convenience, including availability,
reliability, and physical distribution.
Concurrency V.S. Parallelism
CONCURRENCY is widely used in the OS and DB communities to describe
executions that are LOGICALLY simultaneous;
PARALLELISM is typically used by the architecture and supercomputing
communities to describe executions that PHYSICALLY execute simultaneously.
In either case, the codes that execute simultaneously exhibit unknown
timing characteristics.
iterative sum/pair-wise summation
parallel prefix sum
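Both summation patterns can be sketched sequentially; in a real parallel run, every pair-combination within a level is independent and can execute simultaneously. A minimal Python sketch (function names are mine, not from the book):

```python
# Sequential sketches of the two summation patterns mentioned above.
# In a real parallel setting, each level of the tree runs concurrently.

def pairwise_sum(values):
    """Tree (pair-wise) summation: combine adjacent pairs, log2(n) levels."""
    vals = list(values)
    while len(vals) > 1:
        if len(vals) % 2 == 1:          # odd length: pad so pairs line up
            vals.append(0)
        # each pair below is independent, so one level can run in parallel
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0] if vals else 0

def prefix_sum(values):
    """Inclusive parallel prefix (scan): out[i] = values[0] + ... + values[i]."""
    out = list(values)
    n = len(out)
    step = 1
    while step < n:
        # Hillis-Steele-style scan: each iteration doubles the reach
        out = [out[i] + (out[i - step] if i >= step else 0) for i in range(n)]
        step *= 2
    return out

print(pairwise_sum([1, 2, 3, 4, 5]))   # 15
print(prefix_sum([1, 2, 3, 4]))        # [1, 3, 6, 10]
```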
Parallelism using multiple instruction streams: thread
multithreaded solutions to counting the number of 3s in an array
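The book develops this example in Java; the following is a Python threading sketch of the version where each thread keeps a private count and only the final merge takes a lock (all names are mine, not the book's):

```python
import threading

def count3s_parallel(array, num_threads=4):
    """Count occurrences of 3 using one thread per chunk of the array."""
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        local = sum(1 for x in chunk if x == 3)  # private count: no contention
        with lock:                               # one short critical section per thread
            total += local

    size = max(1, (len(array) + num_threads - 1) // num_threads)
    threads = [threading.Thread(target=worker, args=(array[i:i + size],))
               for i in range(0, len(array), size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

print(count3s_parallel([3, 1, 3, 3, 2, 3]))  # 4
```

Accumulating into the shared total inside the loop (instead of once per thread) would be correct but slow: the lock becomes a point of contention, one of the performance pitfalls the book uses this example to illustrate.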
characteristics of good parallel programs:
(1) correct
(2) good performance
(3) scalable to large numbers of processors
(4) portable across a wide variety of parallel platforms
- ###2 parallel computers
6 parallel computers
(1) Chip multiprocessors *
Intel Core Duo
AMD Dual Core Opteron
(2) Symmetric Multiprocessor Architecture
Sun Fire E25K
(3) Heterogeneous Chip Design
Cell
(4) Clusters
(5) Supercomputers
BlueGene/L
sequential computer abstraction
Random Access Machine(RAM) model, i.e. the von Neumann Model
abstract a sequential computer as a device with an instruction
execution unit and an unbounded memory.
2 abstract models of parallel computers:
(1) PRAM: parallel random access machine model
the PRAM consists of an unspecified number of instruction execution units,
connected to a single unbounded shared memory that contains both
programs and data.
(2) CTA: candidate type architecture
the CTA consists of P standard sequential computers(processors, or processing elements),
connected by an interconnection network(communication network);
it separates 2 types of memory references: inexpensive local references
and expensive non-local references;
Locality Rule:
Fast programs tend to maximize the number of local memory references, and
minimize the number of non-local memory references.
3 major communication(memory reference) mechanisms:
(1) shared memory
a natural extension of the flat memory of sequential computers.
(2) one-sided communication
a relaxation of the shared memory concept: it supports a single shared address space,
and all threads can reference all memory locations, but it doesn't attempt to keep
the memory coherent.
(3) message passing
memory references are used to access local memory,
message passing is used to access non-local memory.
- ### 3 reasoning about parallel performance
thread: thread-based/shared memory parallel programming
process: message passing/non-shared memory parallel programming
latency: the amount of TIME it takes to complete a given unit of work
throughput: the amount of WORK that can be completed per unit time
## source of performance loss
(1) overhead
communication
synchronization
computation
memory
(2) non-parallelizable computation
Amdahl's Law: portions of a computation that are sequential will,
as parallelism is applied, dominate the execution time.
(3) idle processors
idle time is often a consequence of synchronization and communication
load imbalance: uneven distribution of work to processors
memory-bound computation: bandwidth, latency
(4) contention for resources
spin lock, false sharing
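Amdahl's Law above can be illustrated with a quick computation. The formula speedup = 1/(s + (1-s)/P), with s the sequential fraction, is standard; the helper name is mine:

```python
def amdahl_speedup(seq_fraction, p):
    """Upper bound on speedup with p processors when seq_fraction of the
    work cannot be parallelized (Amdahl's Law)."""
    return 1.0 / (seq_fraction + (1.0 - seq_fraction) / p)

print(round(amdahl_speedup(0.10, 10), 2))     # 5.26: well below the ideal 10x
# as p grows, the 10% sequential part caps speedup near 10x:
print(round(amdahl_speedup(0.10, 10**6), 2))  # ~10.0
```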
## parallel structure
(1) dependences
an ordering relationship between two computations
(2) granularity
the frequency of interactions among threads or processes
(3) locality
temporal locality: memory references are clustered in TIME
spatial locality: memory references are clustered by ADDRESS
## performance trade-off
sequential computation: 90/10 rule
communication V.S. Computation
Memory V.S. Parallelism
Overhead V.S. Parallelism
## measuring performance
(1) execution time/latency
(2) speedup/efficiency
(3) superlinear speedup
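The usual definitions can be sketched directly (helper names are mine): speedup = T_seq/T_par, efficiency = speedup/P; efficiency above 1.0 is superlinear speedup, often caused by cache effects when the per-processor working set shrinks.

```python
def speedup(t_seq, t_par):
    """Speedup: sequential execution time divided by parallel execution time."""
    return t_seq / t_par

def efficiency(t_seq, t_par, p):
    """Efficiency: speedup per processor; 1.0 is the usual ideal,
    > 1.0 indicates superlinear speedup (e.g. better cache behavior)."""
    return speedup(t_seq, t_par) / p

print(speedup(100.0, 25.0))        # 4.0
print(efficiency(100.0, 25.0, 8))  # 0.5
```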
## scalable performance *
is difficult to achieve
- ### 4 first step toward parallel programming
## data and task parallelism
(1) data parallel computation
parallelism is applied by performing the SAME operation to different items of data at the same time
(2) task parallel computation
parallelism is applied by performing DISTINCT computations/tasks at the same time
an example: the job of preparing a banquet/dinner
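The distinction can be sketched with Python's `concurrent.futures` (all names are mine): the data-parallel part maps one operation over many items, while the task-parallel part runs distinct computations at the same time, like the banquet example.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):            # one operation, applied to many data items
    return x * x

def set_table():          # distinct tasks, performed at the same time
    return "table set"

def cook_soup():
    return "soup cooked"

with ThreadPoolExecutor() as pool:
    # data parallelism: the SAME operation on different items simultaneously
    squares = list(pool.map(square, [1, 2, 3, 4]))
    # task parallelism: DISTINCT computations run concurrently
    f1, f2 = pool.submit(set_table), pool.submit(cook_soup)
    tasks = [f1.result(), f2.result()]

print(squares)  # [1, 4, 9, 16]
print(tasks)    # ['table set', 'soup cooked']
```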
## Peril-L Notation
see handwritten notes
## formulating parallelism
(1) fixed parallelism
k processors, a k-way parallel algorithm
drawback: with 2k processors, the program cannot gain any improvement
(2) unlimited parallelism
spawn a thread for each data element:
// background: count the number of 3s in array[n]
int _count_ = 0;
forall (i in (0..n-1)) // n is the array size
{
    _count_ = +/ (array[i] == 3 ? 1 : 0);
}
drawback: the overhead of setting up all the threads is O(n/P) per processor,
where P is the number of processors, and P << n.
(3) scalable parallelism
formulate a set of substantial subproblems; natural units of the solution are assigned to each subproblem; each subproblem is solved as independently as possible.
implications:
substantial: sufficient local work to cover parallel overheads
natural unit: computation is not always smoothly partitionable
independently: reduces parallel communication overheads
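A common way to obtain substantial, natural units is static block decomposition of the data; a minimal sketch (function name is mine):

```python
def block_partition(n, p):
    """Split n items into p contiguous chunks of nearly equal size,
    giving each process a substantial, natural unit of work."""
    base, extra = divmod(n, p)
    bounds = []
    start = 0
    for i in range(p):
        size = base + (1 if i < extra else 0)  # first `extra` chunks get one more
        bounds.append((start, start + size))
        start += size
    return bounds

print(block_partition(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

Unlike the one-thread-per-element formulation, this yields exactly P units whose size grows with n, so the local work covers the parallel overheads.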
- ### 5 scalable algorithmic techniques
focus on data parallel computations
## ideal parallel computation
composed of large blocks of independent computation with no interactions among blocks.
principle:
Parallel programs are more scalable when they emphasize blocks of computation
that minimize inter-thread dependences; typically, the larger the block the better.
## Schwartz's algorithm
goal: +-reduce
condition: P is number of processors, n is number of values
2 approaches:
(1) use n/2-way logical concurrency - unlimited parallelism
(2) each processor handles n/P items locally, then combines using a P-leaf tree - better
notation: _total_ = +/ _data_;
where _total_ is a global number, _data_ is a global array
the compiler emits code that uses Schwartz's local/global approach.
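That local/global structure can be sketched sequentially in Python (function name is mine): each of the P processors first reduces its own n/P items, then the P partial results are combined in a log2(P)-depth tree.

```python
def schwartz_reduce(data, p):
    """Sum `data` the Schwartz way: substantial local sums first,
    then a tree combine over the P partial results."""
    size = max(1, (len(data) + p - 1) // p)
    # local phase: each "processor" sums its own n/P items
    partials = [sum(data[i:i + size]) for i in range(0, len(data), size)]
    # global phase: log2(P)-depth tree combine of the partial sums
    while len(partials) > 1:
        if len(partials) % 2 == 1:
            partials.append(0)
        partials = [partials[i] + partials[i + 1]
                    for i in range(0, len(partials), 2)]
    return partials[0] if partials else 0

print(schwartz_reduce(list(range(1, 101)), 8))  # 5050
```

Compared with the n/2-leaf tree of unlimited parallelism, the tree here has only P leaves, so almost all work is cheap local computation rather than communication.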
## reduce and scan abstractions
generalized reduce and scan functions
## assign work to processes statically
## assign work to processes dynamically
## trees