Reading the Spark source: network (1)
In version 1.6 Spark replaces Akka with a Netty-based implementation of the cluster-wide RPC framework. Netty's memory management and NIO support should noticeably improve the cluster's network transfer performance. To make sense of this code I found two books, "Netty in Action" and "Netty权威指南" (The Definitive Guide to Netty), and read them alongside the Spark source, so I learned Netty while working through Spark's Netty-related code. This part of the source mixes in a lot of Netty internals, so it is still fairly heavy reading.
The buffer module
ManagedBuffer provides an immutable view over a chunk of data; its concrete subclasses back that data with a file segment, a Netty ByteBuf, or an NIO ByteBuffer:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;

    public abstract class ManagedBuffer {
      /** Number of bytes of the data. */
      public abstract long size();

      /**
       * Exposes this buffer's data as an NIO ByteBuffer. Changing the position and limit of the
       * returned ByteBuffer should not affect the content of this buffer.
       */
      // TODO: Deprecate this, usage may require expensive memory mapping or allocation.
      public abstract ByteBuffer nioByteBuffer() throws IOException;

      /**
       * Exposes this buffer's data as an InputStream. The underlying implementation does not
       * necessarily check for the length of bytes read, so the caller is responsible for making
       * sure it does not go over the limit.
       */
      public abstract InputStream createInputStream() throws IOException;

      /** Increment the reference count by one if applicable. */
      public abstract ManagedBuffer retain();

      /**
       * If applicable, decrement the reference count by one and deallocate the buffer if the
       * reference count reaches zero.
       */
      public abstract ManagedBuffer release();

      /** Convert the buffer into a Netty object, used to write the data out. */
      public abstract Object convertToNetty() throws IOException;
    }
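The retain()/release() pair defines a reference-counting contract borrowed from Netty. As a minimal sketch of that contract (this class is hypothetical and not part of Spark; it only illustrates the semantics a counting implementation would follow):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of the retain()/release() contract:
// the buffer starts with one reference and is "deallocated" once
// the count drops to zero.
class RefCountedBuffer {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private ByteBuffer buf;

    RefCountedBuffer(int capacity) {
        this.buf = ByteBuffer.allocate(capacity);
    }

    RefCountedBuffer retain() {
        refCount.incrementAndGet();
        return this;
    }

    RefCountedBuffer release() {
        if (refCount.decrementAndGet() == 0) {
            buf = null; // make the storage eligible for GC
        }
        return this;
    }

    boolean isDeallocated() {
        return buf == null;
    }

    int refCount() {
        return refCount.get();
    }
}
```

FileSegmentManagedBuffer below implements retain() and release() as no-ops because the file on disk, not a reference count, owns the data; NettyManagedBuffer delegates both calls to the ByteBuf's real counter.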
FileSegmentManagedBuffer exposes a segment of a file, identified by an offset and a length, as a buffer. Note the two paths in nioByteBuffer(): segments smaller than conf.memoryMapBytes() are copied into a heap buffer, because memory mapping has a high fixed overhead, while larger segments are memory-mapped:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public final class FileSegmentManagedBuffer extends ManagedBuffer {
      private final TransportConf conf;
      private final File file;
      private final long offset;
      private final long length;

      public FileSegmentManagedBuffer(TransportConf conf, File file, long offset, long length) {
        this.conf = conf;
        this.file = file;
        this.offset = offset;
        this.length = length;
      }

      @Override
      public long size() {
        return length;
      }

      @Override
      public ByteBuffer nioByteBuffer() throws IOException {
        FileChannel channel = null;
        try {
          channel = new RandomAccessFile(file, "r").getChannel();
          // Just copy the buffer if it's sufficiently small, as memory mapping has a high overhead.
          if (length < conf.memoryMapBytes()) {
            ByteBuffer buf = ByteBuffer.allocate((int) length);
            channel.position(offset);
            while (buf.remaining() != 0) {
              if (channel.read(buf) == -1) {
                throw new IOException(String.format("Reached EOF before filling buffer\n"
                  + "offset=%s\nfile=%s\nbuf.remaining=%s",
                  offset, file.getAbsoluteFile(), buf.remaining()));
              }
            }
            buf.flip();
            return buf;
          } else {
            return channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
          }
        } catch (IOException e) {
          try {
            if (channel != null) {
              long size = channel.size();
              throw new IOException("Error in reading " + this + " (actual file length " + size + ")", e);
            }
          } catch (IOException ignored) {
            // ignore
          }
          throw new IOException("Error in opening " + this, e);
        } finally {
          JavaUtils.closeQuietly(channel);
        }
      }

      @Override
      public InputStream createInputStream() throws IOException {
        FileInputStream is = null;
        try {
          is = new FileInputStream(file);
          ByteStreams.skipFully(is, offset);
          return new LimitedInputStream(is, length);
        } catch (IOException e) {
          try {
            if (is != null) {
              long size = file.length();
              throw new IOException("Error in reading " + this + " (actual file length " + size + ")", e);
            }
          } catch (IOException ignored) {
            // ignore
          } finally {
            JavaUtils.closeQuietly(is);
          }
          throw new IOException("Error in opening " + this, e);
        } catch (RuntimeException e) {
          JavaUtils.closeQuietly(is);
          throw e;
        }
      }

      @Override
      public ManagedBuffer retain() {
        return this;
      }

      @Override
      public ManagedBuffer release() {
        return this;
      }

      @Override
      public Object convertToNetty() throws IOException {
        if (conf.lazyFileDescriptor()) {
          return new LazyFileRegion(file, offset, length);
        } else {
          FileChannel fileChannel = new FileInputStream(file).getChannel();
          return new DefaultFileRegion(fileChannel, offset, length);
        }
      }

      public File getFile() { return file; }

      public long getOffset() { return offset; }

      public long getLength() { return length; }

      @Override
      public String toString() {
        return Objects.toStringHelper(this)
          .add("file", file)
          .add("offset", offset)
          .add("length", length)
          .toString();
      }
    }
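The copy-vs-mmap decision at the heart of nioByteBuffer() can be sketched with plain NIO. The 64 KB threshold below is an arbitrary stand-in for conf.memoryMapBytes(), which is configurable in Spark, and FileSegmentReader is a hypothetical helper, not a Spark class:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;

class FileSegmentReader {
    // Stand-in for conf.memoryMapBytes(); the real threshold is configurable.
    static final long MEMORY_MAP_THRESHOLD = 64 * 1024;

    // Read [offset, offset + length) of a file, copying small segments into
    // a heap buffer and memory-mapping large ones, mirroring the structure
    // of FileSegmentManagedBuffer.nioByteBuffer().
    static ByteBuffer read(Path path, long offset, long length) throws IOException {
        try (FileChannel channel = new RandomAccessFile(path.toFile(), "r").getChannel()) {
            if (length < MEMORY_MAP_THRESHOLD) {
                ByteBuffer buf = ByteBuffer.allocate((int) length);
                channel.position(offset);
                while (buf.remaining() != 0) {
                    if (channel.read(buf) == -1) {
                        throw new IOException("Reached EOF before filling buffer");
                    }
                }
                buf.flip();
                return buf;
            } else {
                // A mapping remains valid even after the channel is closed.
                return channel.map(FileChannel.MapMode.READ_ONLY, offset, length);
            }
        }
    }
}
```

The trade-off: mapping avoids a copy and kernel-to-user transfer for large segments, but setting up a mapping costs enough that a straight read wins for small ones.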
NettyManagedBuffer wraps a Netty ByteBuf and delegates reference counting to it:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufInputStream;

    public final class NettyManagedBuffer extends ManagedBuffer {
      private final ByteBuf buf;

      public NettyManagedBuffer(ByteBuf buf) {
        this.buf = buf;
      }

      @Override
      public long size() {
        return buf.readableBytes();
      }

      @Override
      public ByteBuffer nioByteBuffer() throws IOException {
        return buf.nioBuffer();
      }

      @Override
      public InputStream createInputStream() throws IOException {
        return new ByteBufInputStream(buf);
      }

      @Override
      public ManagedBuffer retain() {
        buf.retain();
        return this;
      }

      @Override
      public ManagedBuffer release() {
        buf.release();
        return this;
      }

      @Override
      public Object convertToNetty() throws IOException {
        return buf.duplicate();
      }

      @Override
      public String toString() {
        return Objects.toStringHelper(this)
          .add("buf", buf)
          .toString();
      }
    }
NioManagedBuffer wraps a plain NIO ByteBuffer; since such buffers are reclaimed by the garbage collector, retain() and release() are no-ops:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.ByteBuffer;
    import io.netty.buffer.ByteBufInputStream;
    import io.netty.buffer.Unpooled;

    public final class NioManagedBuffer extends ManagedBuffer {
      private final ByteBuffer buf;

      public NioManagedBuffer(ByteBuffer buf) {
        this.buf = buf;
      }

      @Override
      public long size() {
        return buf.remaining();
      }

      @Override
      public ByteBuffer nioByteBuffer() throws IOException {
        return buf.duplicate();
      }

      @Override
      public InputStream createInputStream() throws IOException {
        return new ByteBufInputStream(Unpooled.wrappedBuffer(buf));
      }

      @Override
      public ManagedBuffer retain() {
        return this;
      }

      @Override
      public ManagedBuffer release() {
        return this;
      }

      @Override
      public Object convertToNetty() throws IOException {
        return Unpooled.wrappedBuffer(buf);
      }

      @Override
      public String toString() {
        return Objects.toStringHelper(this)
          .add("buf", buf)
          .toString();
      }
    }
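NioManagedBuffer.nioByteBuffer() returns buf.duplicate() rather than buf itself: a duplicate shares the underlying bytes but has its own independent position and limit, which is exactly the guarantee the ManagedBuffer contract asks for. A quick stdlib demonstration (DuplicateDemo is a hypothetical name used only for this illustration):

```java
import java.nio.ByteBuffer;

class DuplicateDemo {
    // A duplicate has its own cursor: moving it leaves the original alone.
    static boolean positionsIndependent() {
        ByteBuffer original = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        ByteBuffer dup = original.duplicate();
        dup.position(3);                 // move the duplicate's cursor
        return original.position() == 0; // original cursor is untouched
    }

    // But the bytes themselves are shared, not copied.
    static boolean contentShared() {
        ByteBuffer original = ByteBuffer.wrap(new byte[] {1, 2, 3, 4});
        ByteBuffer dup = original.duplicate();
        dup.put(0, (byte) 9);            // write through the duplicate
        return original.get(0) == 9;     // visible through the original
    }
}
```

This is why handing out a duplicate is cheap (no data copy) while still protecting the wrapped buffer's position and limit from callers.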