Source code: https://gitee.com/a1234567891/koalas-rpc

koalas-rpc is an enterprise-grade, production-ready RPC framework designed for tens of billions of page views per day, with high availability and horizontal scalability. In theory its concurrency is limited only by server bandwidth. The client speaks the Thrift protocol, while the server supports both Netty and Thrift's TThreadedSelectorServer half-sync/half-async threading model. It offers dynamic scaling, service registration and deregistration, dynamic weighting, availability configuration, page-level traffic statistics, and trace propagation, and integrates natively with CAT for dashboard-style metrics. The goal is to keep providing a reliable RPC solution for individuals and small to mid-sized companies. The Netty server startup code is pasted below:

@Override
public void run() {
    try {
        // Prefer the native epoll transport when the kernel supports it
        if (Epoll.isAvailable()) {
            bossGroup = new EpollEventLoopGroup(serverPublisher.bossThreadCount == 0 ?
                    AbstractKoalsServerPublisher.DEFAULT_EVENT_LOOP_THREADS : serverPublisher.bossThreadCount);
            workerGroup = new EpollEventLoopGroup(serverPublisher.workThreadCount == 0 ?
                    AbstractKoalsServerPublisher.DEFAULT_EVENT_LOOP_THREADS * 2 : serverPublisher.workThreadCount);
        } else {
            bossGroup = new NioEventLoopGroup(serverPublisher.bossThreadCount == 0 ?
                    AbstractKoalsServerPublisher.DEFAULT_EVENT_LOOP_THREADS : serverPublisher.bossThreadCount);
            workerGroup = new NioEventLoopGroup(serverPublisher.workThreadCount == 0 ?
                    AbstractKoalsServerPublisher.DEFAULT_EVENT_LOOP_THREADS * 2 : serverPublisher.workThreadCount);
        }
        // Dedicated business thread pool, so request handling never blocks the IO threads
        executorService = KoalasThreadedSelectorWorkerExcutorUtil.getWorkerExecutorWithQueue(
                serverPublisher.koalasThreadCount == 0 ? AbstractKoalsServerPublisher.DEFAULT_KOALAS_THREADS : serverPublisher.koalasThreadCount,
                serverPublisher.koalasThreadCount == 0 ? AbstractKoalsServerPublisher.DEFAULT_KOALAS_THREADS : serverPublisher.koalasThreadCount,
                serverPublisher.workQueue,
                new KoalasDefaultThreadFactory(serverPublisher.serviceInterface.getName()));

        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup, workerGroup)
                .channel(workerGroup instanceof EpollEventLoopGroup ? EpollServerSocketChannel.class : NioServerSocketChannel.class)
                .handler(new LoggingHandler(LogLevel.INFO))
                .childHandler(new NettyServerInitiator(serverPublisher, executorService))
                .option(ChannelOption.SO_BACKLOG, 1024)
                .option(ChannelOption.SO_REUSEADDR, true)
                .option(ChannelOption.SO_KEEPALIVE, true);
        Channel ch = b.bind(serverPublisher.port).sync().channel();

        // Deregister from ZooKeeper and release resources when the JVM shuts down
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                logger.info("Shutdown by Runtime");
                if (zookeeperServer != null) {
                    zookeeperServer.destroy();
                }
                logger.info("wait for service over 3000ms");
                try {
                    Thread.sleep(3000);
                } catch (Exception e) {
                }
                if (executorService != null) {
                    executorService.shutdown();
                }
                if (bossGroup != null) bossGroup.shutdownGracefully();
                if (workerGroup != null) workerGroup.shutdownGracefully();
            }
        });

        // Register this instance in ZooKeeper if a registry path is configured
        if (StringUtils.isNotEmpty(serverPublisher.zkpath)) {
            ZookServerConfig zookServerConfig = new ZookServerConfig(
                    serverPublisher.zkpath, serverPublisher.serviceInterface.getName(), serverPublisher.env,
                    serverPublisher.port, serverPublisher.weight, "netty");
            zookeeperServer = new ZookeeperServer(zookServerConfig);
            zookeeperServer.init();
        }
    } catch (Exception e) {
        logger.error("NettyServer start failed!", e);
        if (bossGroup != null) bossGroup.shutdownGracefully();
        if (workerGroup != null) workerGroup.shutdownGracefully();
    }
    logger.info("netty server init success server={}", serverPublisher);
}

First the NIO server is started. Epoll.isAvailable() asks the kernel whether epoll is supported; if so EpollEventLoopGroup is used, otherwise it falls back to the NioEventLoopGroup multiplexing implementation. A user-defined thread pool is then declared. Readers may wonder: Netty already provides accept (boss) threads and IO (worker) threads, so why declare a separate pool? The reason is that if the business logic running on the IO threads is complex and time-consuming, it blocks Netty's IO processing and hurts throughput. This is exactly the Reactor design philosophy: business work must not interfere with the accept and IO read/write threads. NettyServerInitiator is where the actual processing handlers are wired up.
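As a generic illustration of that Reactor idea (this is not koalas code; the handler name and business method below are made up), an inbound handler can hand each request to a business pool and write the reply back asynchronously once it is done:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.util.concurrent.ExecutorService;

// Hypothetical handler: the IO thread only hands the message off and returns immediately.
public class OffloadingHandler extends ChannelInboundHandlerAdapter {

    private final ExecutorService businessPool;

    public OffloadingHandler(ExecutorService businessPool) {
        this.businessPool = businessPool;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Slow business logic runs on the dedicated pool, never on the Netty IO thread
        businessPool.execute(() -> {
            Object response = handleBusiness(msg); // placeholder for real processing
            ctx.writeAndFlush(response);           // safe from any thread: Netty queues the write back onto the IO thread
        });
    }

    private Object handleBusiness(Object msg) {
        return msg; // echo, just to keep the sketch self-contained
    }
}

Back in the publisher, the shutdown hook registered inside run() deserves a closer look; it is repeated below.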

Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        logger.info("Shutdown by Runtime");
        if (zookeeperServer != null) {
            zookeeperServer.destroy();
        }
        logger.info("wait for service over 3000ms");
        try {
            Thread.sleep(3000);
        } catch (Exception e) {
        }
        if (executorService != null) {
            executorService.shutdown();
        }
        if (bossGroup != null) bossGroup.shutdownGracefully();
        if (workerGroup != null) workerGroup.shutdownGracefully();
    }
});
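ZookeeperServer.destroy() itself is not shown in the post. Conceptually it removes this instance's node from the registry so that clients stop selecting it. A rough standalone sketch with the plain ZooKeeper client (connection string and node path are made-up assumptions):

import org.apache.zookeeper.ZooKeeper;

public class RegistryCleanupSketch {
    public static void main(String[] args) throws Exception {
        // Assumed connection string and instance path, purely illustrative
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> { });
        String instancePath = "/koalas/DemoService/server/10.0.0.1:8080";
        // version -1 means: delete regardless of the node's current version
        zk.delete(instancePath, -1);
        zk.close();
    }
}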

The shutdown hook is registered manually: when the service goes down it must proactively remove its registration node before releasing the thread pool and event loops. Next, let's look at the handler wiring in the channel initializer:

package netty.initializer;

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import netty.hanlder.KoalasDecoder;
import netty.hanlder.KoalasEncoder;
import netty.hanlder.KoalasHandler;
import org.apache.thrift.TProcessor;
import server.config.AbstractKoalsServerPublisher;

import java.util.concurrent.ExecutorService;

/**
 * Copyright (C) 2018
 * All rights reserved
 * User: yulong.zhang
 * Date: 2018年11月23日11:13:33
 */
public class NettyServerInitiator extends ChannelInitializer<SocketChannel> {

    private ExecutorService executorService;
    private AbstractKoalsServerPublisher serverPublisher;

    public NettyServerInitiator(AbstractKoalsServerPublisher serverPublisher, ExecutorService executorService) {
        this.serverPublisher = serverPublisher;
        this.executorService = executorService;
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast("decoder", new KoalasDecoder());
        ch.pipeline().addLast("encoder", new KoalasEncoder());
        ch.pipeline().addLast("handler", new KoalasHandler(serverPublisher, executorService));
    }
}
The decoder is responsible for splitting frames out of the inbound byte stream, the encoder for framing the responses, and the handler holds the real business logic. All business processing runs in the dedicated thread pool, and the result is returned to the client asynchronously through the ChannelHandlerContext. This is what truly realizes the Reactor model.
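The post never shows KoalasEncoder, and whether the 4-byte length prefix is written by the framed transport or by the encoder is not visible here. As a generic illustration of the outbound half of such a pipeline, a Netty length-prefix encoder could look like this (a hypothetical sketch, not the koalas implementation):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;
import java.io.ByteArrayOutputStream;

// Hypothetical length-prefix encoder: writes "4-byte big-endian length + payload".
public class LengthPrefixEncoder extends MessageToByteEncoder<ByteArrayOutputStream> {

    @Override
    protected void encode(ChannelHandlerContext ctx, ByteArrayOutputStream msg, ByteBuf out) {
        byte[] payload = msg.toByteArray();
        out.writeInt(payload.length); // ByteBuf writes ints big-endian, matching decodeFrameSize below
        out.writeBytes(payload);
    }
}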
Now let's look at the frame decoding:
package netty.hanlder;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import server.KoalasServerPublisher;

import java.util.List;

/**
 * Copyright (C) 2018
 * All rights reserved
 * User: yulong.zhang
 * Date: 2018年11月23日11:13:33
 */
public class KoalasDecoder extends ByteToMessageDecoder {

    private final static Logger logger = LoggerFactory.getLogger(KoalasDecoder.class);

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        try {
            // Need at least the 4-byte length prefix before anything can be decoded
            if (in.readableBytes() < 4) {
                return;
            }
            in.markReaderIndex();
            byte[] b = new byte[4];
            in.readBytes(b);
            int length = decodeFrameSize(b);
            if (in.readableBytes() < length) {
                // The body has not fully arrived yet; reset the readerIndex and wait for more data
                in.resetReaderIndex();
                return;
            }
            // A complete frame (prefix + body) is available: slice it out and pass it downstream
            in.resetReaderIndex();
            ByteBuf frame = in.readRetainedSlice(4 + length);
            in.resetReaderIndex();
            in.skipBytes(4 + length);
            out.add(frame);
        } catch (Exception e) {
            logger.error("decode error", e);
        }
    }

    public static final int decodeFrameSize(byte[] buf) {
        return (buf[0] & 255) << 24 | (buf[1] & 255) << 16 | (buf[2] & 255) << 8 | buf[3] & 255;
    }
}

The first four bytes carry the length of the message body; the decoder waits until that many bytes are available and only then reads the complete frame from the stream. decodeFrameSize converts those four bytes into an int (for example, the prefix 00 00 04 D2 decodes to 1234). The KoalasHandler logic is fairly involved, so we will only look at the core part. First the byte stream is parsed with Thrift to obtain the transports:

ByteArrayInputStream inputStream = new ByteArrayInputStream(b);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
TIOStreamTransport tioStreamTransportInput = new TIOStreamTransport(inputStream);
TIOStreamTransport tioStreamTransportOutput = new TIOStreamTransport(outputStream);
TKoalasFramedTransport inTransport = new TKoalasFramedTransport(tioStreamTransportInput, 2048000);
inTransport.setReadMaxLength_(maxLength);
TKoalasFramedTransport outTransport = new TKoalasFramedTransport(tioStreamTransportOutput, 2048000, ifUserProtocol);

Finally the request is handed off to the thread pool for execution, releasing the current IO thread to take on the next task:

try {
    executorService.execute(new NettyRunable(ctx, in, out, outputStream, localTprocessor,
            b, privateKey, publicKey, className, methodName, koalasTrace, cat));
} catch (RejectedExecutionException e) {
    logger.error(e.getMessage() + ErrorType.THREAD + ",className:" + className, e);
    handlerException(b, ctx, e, ErrorType.THREAD, privateKey, publicKey, thriftNative);
}
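The construction of the business pool itself (getWorkerExecutorWithQueue, called at the top of run()) is not shown in the post. Presumably it builds a bounded ThreadPoolExecutor whose rejection policy throws RejectedExecutionException once the queue fills up; a sketch of that idea is below (pool sizes, the queue type, and the class name are assumptions):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BoundedPoolSketch {

    // Hypothetical stand-in for KoalasThreadedSelectorWorkerExcutorUtil.getWorkerExecutorWithQueue
    public static ExecutorService newBoundedPool(int threads, int queueSize) {
        return new ThreadPoolExecutor(
                threads, threads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueSize),
                // AbortPolicy throws RejectedExecutionException when the queue is full,
                // which the handler turns into an error response for the client
                new ThreadPoolExecutor.AbortPolicy());
    }
}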
When the thread pool is exhausted, RejectedExecutionException is caught and an error is returned to the client. Because the server's processing capacity is finite, this acts as a deliberate self-protection mechanism against avalanche (cascading failure). If the server throws RejectedExecutionException in large numbers, a single machine can no longer satisfy the request volume and you need to scale out horizontally and rebalance the load. The user's actual business logic runs inside the Runnable; let's see what it really does:
try {
    tprocessor.process(in, out);
    ctx.writeAndFlush(outputStream);
    if (transaction != null && cat)
        transaction.setStatus(Transaction.SUCCESS);
} catch (Exception e) {
    if (transaction != null && cat)
        transaction.setStatus(e);
    logger.error(e.getMessage() + ErrorType.APPLICATION + ",className:" + className, e);
    handlerException(this.b, ctx, e, ErrorType.APPLICATION, privateKey, publicKey, thriftNative);
}
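Both this catch branch and the earlier rejection branch call handlerException, which the post does not show. Conceptually it serializes a Thrift EXCEPTION message back through the output transport so the client call fails fast instead of timing out. A rough illustration with the standard Thrift API (method name, sequence id, and transport are placeholders; the real koalas error path may differ):

import org.apache.thrift.TApplicationException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TMessage;
import org.apache.thrift.protocol.TMessageType;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.transport.TTransport;

public final class ExceptionReplySketch {

    // Writes a Thrift application-level exception for the given call onto the output transport.
    public static void writeError(TTransport outTransport, String methodName, int seqId, String message)
            throws Exception {
        TProtocol oprot = new TBinaryProtocol(outTransport);
        TApplicationException error =
                new TApplicationException(TApplicationException.INTERNAL_ERROR, message);
        oprot.writeMessageBegin(new TMessage(methodName, TMessageType.EXCEPTION, seqId));
        error.write(oprot);
        oprot.writeMessageEnd();
        oprot.getTransport().flush();
    }
}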

The business logic is executed through Thrift's processor (tprocessor.process), and the result is returned to the client via ctx.writeAndFlush(outputStream). The catch block handles the case where an exception occurs and sends an error result back to the client. That completes the Netty server implementation. The Thrift server-side parsing will be covered in the next post; there are many details in it that are best read alongside the source code. If you have questions, feel free to join QQ group 825199617 to discuss; more source-code walkthroughs of Spring, Spring MVC, AOP, the JDK, and more are waiting for you there!
