netty Getting Started--reference
reference from: http://docs.jboss.org/netty/3.1/guide/html/start.html
- 1.1. Before Getting Started
- 1.2. Writing a Discard Server
- 1.3. Looking into the Received Data
- 1.4. Writing an Echo Server
- 1.5. Writing a Time Server
- 1.6. Writing a Time Client
- 1.7. Dealing with a Stream-based Transport
- 1.8. Speaking in POJO instead of ChannelBuffer
- 1.9. Shutting Down Your Application
- 1.10. Summary
This chapter tours the core constructs of Netty with simple examples so that you can get started quickly. By the end of this chapter, you will be able to write a client and a server on top of Netty right away.
If you prefer a top-down approach to learning, you might want to start from Chapter 2, Architectural Overview and come back here.
1.1. Before Getting Started
The minimum requirements to run the examples introduced in this chapter are only two: the latest version of Netty and JDK 1.5 or above. The latest version of Netty is available on the project download page. To download the right version of the JDK, please refer to your preferred JDK vendor's web site.
Is that all? To tell the truth, you should find these two are just enough to implement almost any type of protocol. Otherwise, please feel free to contact the Netty project community and let us know what's missing.
Last but not least, please refer to the API reference whenever you want to know more about the classes introduced here. All class names in this document are linked to the online API reference for your convenience. Also, please don't hesitate to contact the Netty project community and let us know if there is any incorrect information, errors in grammar, typos, or if you have a good idea to improve the documentation.
1.2. Writing a Discard Server
The most simplistic protocol in the world is not 'Hello, World!' but DISCARD. It's a protocol which discards any received data without any response.
To implement the DISCARD protocol, the only thing you need to do is to ignore all received data. Let us start straight from the handler implementation, which handles I/O events generated by Netty.
package org.jboss.netty.example.discard;

import org.jboss.netty.channel.*;

@ChannelPipelineCoverage("all")
public class DiscardServerHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        // Discard the received data silently.
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        // Close the connection when an exception is raised.
        e.getCause().printStackTrace();
        Channel ch = e.getChannel();
        ch.close();
    }
}
DiscardServerHandler extends SimpleChannelHandler, which is an implementation of ChannelHandler. SimpleChannelHandler provides various event handler methods that you can override; for now it is enough to extend it rather than implement the handler interfaces yourself.
The ChannelPipelineCoverage annotation tells whether a single instance of the annotated handler can be shared by more than one Channel (and its associated ChannelPipeline). DiscardServerHandler does not manage any stateful information, so it is annotated with the value "all".
We override the messageReceived event handler method. It is called with a MessageEvent, which contains the received data, whenever new data arrives from a client. In this example, we ignore the received data to implement the DISCARD protocol.
The exceptionCaught event handler method is called with an ExceptionEvent when an exception is raised by Netty due to an I/O error or by a handler implementation. In most cases, the caught exception should be logged and its associated Channel should be closed here, although the best behaviour depends on your application; you might want to send a response message with an error code, for example.
So far so good. We have implemented the first half of the DISCARD server. What's left now is to write the main method which starts the server with the DiscardServerHandler.
package org.jboss.netty.example.discard;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class DiscardServer {

    public static void main(String[] args) throws Exception {
        ChannelFactory factory =
            new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        ServerBootstrap bootstrap = new ServerBootstrap(factory);

        DiscardServerHandler handler = new DiscardServerHandler();
        ChannelPipeline pipeline = bootstrap.getPipeline();
        pipeline.addLast("handler", handler);

        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);

        bootstrap.bind(new InetSocketAddress(8080));
    }
}
ChannelFactory is a factory which creates and manages Channels and their related resources. It processes all I/O requests and performs I/O to generate ChannelEvents. In this example, NioServerSocketChannelFactory is used because we are writing a server-side, NIO-based application; it takes two thread pools, one for the boss threads which accept incoming connections and one for the worker threads which perform the actual I/O.
ServerBootstrap is a helper class that sets up a server. You can configure the server using a Channel directly, but that is a tedious process and you do not need to do it in most cases.
Here, we add the DiscardServerHandler to the default ChannelPipeline of the bootstrap. Whenever the server accepts a new connection, a new ChannelPipeline is created for the accepted Channel, and the handlers added here are added to it; it works like a shallow copy, so all accepted Channels share the same DiscardServerHandler instance.
You can also set the parameters which are specific to the Channel implementation. We are writing a TCP/IP server, so we are allowed to set socket options such as tcpNoDelay and keepAlive. Please note the "child." prefix: it means the options will be applied to the accepted Channels instead of the ServerSocketChannel that accepts connections. To set an option on the ServerSocketChannel itself, you would write, for example:
bootstrap.setOption("reuseAddress", true);
We are ready to go now. What's left is to bind to the port and to start the server. Here, we bind to port 8080 of all network interfaces on the machine. You can call the bind method as many times as you want, with different bind addresses.
Congratulations! You've just finished your first server on top of Netty.
1.3. Looking into the Received Data
Now that we have written our first server, we need to test if it really works. The easiest way to test it is to use the telnet command. For example, you could enter "telnet localhost 8080" in the command line and type something.
However, can we say that the server is working fine? We cannot really know that because it is a discard server. You will not get any response at all. To prove it is really working, let us modify the server to print what it has received.
We already know that MessageEvent is generated whenever data is received and the messageReceived handler method will be invoked. Let us put some code into the messageReceived method of the DiscardServerHandler:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    while (buf.readable()) {
        System.out.println((char) buf.readByte());
    }
}
It is safe to assume that the message type in socket transports is always ChannelBuffer. ChannelBuffer is the fundamental data structure used to store a sequence of bytes in Netty. It is similar to NIO ByteBuffer, but easier to use and more flexible; for example, Netty lets you create a composite ChannelBuffer which combines several buffers, reducing the number of unnecessary memory copies.
Although it resembles NIO ByteBuffer a lot, it is highly recommended to refer to the API reference. Reading the ChannelBuffer documentation will greatly ease the learning curve of Netty.
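As a quick illustration of that difference (a standalone sketch, not part of the guide; the class name and package are made up for this example), a ChannelBuffer keeps separate reader and writer indexes, so you can read back what you wrote without any flip():

package org.jboss.netty.example.discard;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

// Hypothetical demo class: shows the separate reader/writer indexes of ChannelBuffer.
public class ChannelBufferIndexDemo {
    public static void main(String[] args) {
        ChannelBuffer buf = ChannelBuffers.buffer(8); // fixed-capacity 8-byte buffer
        buf.writeInt(42);                             // advances the writer index by 4
        System.out.println(buf.readableBytes());      // prints 4 - no flip() required
        System.out.println(buf.readInt());            // prints 42 and advances the reader index
        System.out.println(buf.readable());           // prints false - everything has been read
    }
}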
If you run the telnet command again, you will see the server print what it has received.
The full source code of the discard server is located in the org.jboss.netty.example.discard package of the distribution.
1.4. Writing an Echo Server
So far, we have been consuming data without responding at all. A server, however, is usually supposed to respond to a request. Let us learn how to write a response message to a client by implementing the ECHO protocol, where any received data is sent back.
The only difference from the discard server we have implemented in the previous sections is that it sends the received data back instead of printing the received data out to the console. Therefore, it is enough again to modify the messageReceived method:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    Channel ch = e.getChannel();
    ch.write(e.getMessage());
}
A ChannelEvent object has a reference to its associated Channel. Here, the returned Channel represents the connection which received the MessageEvent. We get the Channel and call its write method to write the received message back to the remote peer.
If you run the telnet command again, you will see the server send back whatever you have sent to it.
The full source code of the echo server is located in the org.jboss.netty.example.echo package of the distribution.
1.5. Writing a Time Server
The protocol to implement in this section is the TIME protocol. It is different from the previous examples in that it sends a message containing a 32-bit integer, without receiving any request, and closes the connection once the message is sent. In this example, you will learn how to construct and send a message, and how to close the connection on completion.
Because we are going to ignore any received data and send a message as soon as a connection is established, we cannot use the messageReceived method this time. Instead, we should override the channelConnected method. The following is the implementation:
package org.jboss.netty.example.time;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.*;

@ChannelPipelineCoverage("all")
public class TimeServerHandler extends SimpleChannelHandler {

    @Override
    public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
        Channel ch = e.getChannel();

        // Allocate a 4-byte buffer and write the current UNIX time in seconds into it.
        ChannelBuffer time = ChannelBuffers.buffer(4);
        time.writeInt((int) (System.currentTimeMillis() / 1000));

        ChannelFuture f = ch.write(time);

        // Close the connection once the write operation completes.
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                Channel ch = future.getChannel();
                ch.close();
            }
        });
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
As explained, the channelConnected method is invoked when a connection is established, so this is the place to write the 32-bit integer that represents the current time in seconds.
To send a new message, we need to allocate a new buffer which will contain the message. We are going to write a 32-bit integer, and therefore we need a ChannelBuffer whose capacity is 4 bytes. The ChannelBuffers helper class is used to allocate the buffer; besides the buffer method, it provides many other convenience methods related to ChannelBuffer, so please refer to the API reference for details. It is also a good idea to use a static import for ChannelBuffers:
import static org.jboss.netty.buffer.ChannelBuffers.*;
As usual, we write the constructed message. But wait, where's the flip? Didn't we used to call ByteBuffer.flip() before sending a message in NIO? A ChannelBuffer does not have such a method, because it has two pointers: one for read operations and one for write operations. The writer index increases when you write something to a ChannelBuffer while the reader index does not change; the two indexes mark where the message content starts and ends. In contrast, an NIO buffer does not provide a clean way to figure out where the message content starts and ends without calling the flip method, and you can get into trouble when you forget to flip, because nothing (or incorrect data) will be sent.
Another point to note is that the write method returns a ChannelFuture. A ChannelFuture represents an I/O operation which has not yet occurred; because all operations in Netty are asynchronous, the requested operation might not have been performed yet when write returns. If you called close right after write, the connection could be closed even before the message is sent.
Therefore, you need to call the close method only after the ChannelFuture returned by the write method notifies you that the write operation has completed.
How do we get notified when the write request is finished then? It is as simple as adding a ChannelFutureListener to the returned ChannelFuture. Here, we create a new anonymous ChannelFutureListener which closes the Channel when the write operation completes. Alternatively, you could simplify the code using a pre-defined listener:
f.addListener(ChannelFutureListener.CLOSE);
1.6. Writing a Time Client
Unlike the DISCARD and ECHO servers, we need a client for the TIME protocol, because a human cannot translate 32-bit binary data into a date on a calendar. In this section, we discuss how to make sure the server works correctly and learn how to write a client with Netty.
The biggest and only difference between a server and a client in Netty is that a different Bootstrap and ChannelFactory are required. Please take a look at the following code:
package org.jboss.netty.example.time;

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);

        ChannelFactory factory =
            new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(),
                Executors.newCachedThreadPool());

        ClientBootstrap bootstrap = new ClientBootstrap(factory);

        TimeClientHandler handler = new TimeClientHandler();
        bootstrap.getPipeline().addLast("handler", handler);

        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);

        bootstrap.connect(new InetSocketAddress(host, port));
    }
}
NioClientSocketChannelFactory, instead of NioServerSocketChannelFactory, is used to create a client-side Channel.
ClientBootstrap is the client-side counterpart of ServerBootstrap.
Please note that there is no "child." prefix for the options this time. A client-side SocketChannel does not have a parent, so the options apply directly to the Channel created by the connection attempt.
Finally, we call the connect method instead of the bind method.
As you can see, it is not really different from the server-side startup. What about the ChannelHandler implementation? It should receive a 32-bit integer from the server, translate it into a human-readable format, print the translated time, and close the connection:
package org.jboss.netty.example.time;

import java.util.Date;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.*;

@ChannelPipelineCoverage("all")
public class TimeClientHandler extends SimpleChannelHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        long currentTimeMillis = buf.readInt() * 1000L;
        System.out.println(new Date(currentTimeMillis));
        e.getChannel().close();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
It looks very simple and does not look any different from the server-side example. However, this handler will sometimes refuse to work, raising an IndexOutOfBoundsException. We discuss why this happens in the next section.
1.7. Dealing with a Stream-based Transport
1.7.1. One Small Caveat of Socket Buffer
In a stream-based transport such as TCP/IP, received data is stored into a socket receive buffer. Unfortunately, the buffer of a stream-based transport is not a queue of packets but a queue of bytes. It means, even if you sent two messages as two independent packets, an operating system will not treat them as two messages but as just a bunch of bytes. Therefore, there is no guarantee that what you read is exactly what your remote peer wrote. For example, let us assume that the TCP/IP stack of an operating system has received three packets:
+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+
Because of this general property of a stream-based protocol, there is a high chance of reading them in the following fragmented form in your application:
+----+-------+---+---+
| AB | CDEFG | H | I |
+----+-------+---+---+
Therefore, a receiving part, regardless of whether it is server-side or client-side, should defragment the received data into one or more meaningful frames that can be easily understood by the application logic. In the case of the example above, the received data should be framed like the following:
+-----+-----+-----+
| ABC | DEF | GHI |
+-----+-----+-----+
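To make the cumulate-and-reframe idea concrete before looking at the Netty solution, here is a small, Netty-independent sketch (the class name and the particular arrival pattern are invented for illustration):

import java.io.ByteArrayOutputStream;

// Hypothetical illustration: the sender wrote "ABC", "DEF", "GHI", but the receiver
// observed the bytes split as "AB", "CDEFG", "H", "I". Cumulating the bytes and cutting
// them back into fixed 3-byte frames restores the original messages.
public class ReframingDemo {
    public static void main(String[] args) {
        String[] fragmentsAsReceived = { "AB", "CDEFG", "H", "I" };
        ByteArrayOutputStream cumulation = new ByteArrayOutputStream();
        for (String fragment : fragmentsAsReceived) {
            byte[] bytes = fragment.getBytes();
            cumulation.write(bytes, 0, bytes.length);      // cumulate whatever arrived
            while (cumulation.size() >= 3) {               // enough bytes for a whole frame?
                byte[] all = cumulation.toByteArray();
                System.out.println(new String(all, 0, 3)); // prints ABC, DEF, GHI in order
                cumulation.reset();                        // keep only the leftover bytes
                cumulation.write(all, 3, all.length - 3);
            }
        }
    }
}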
1.7.2. The First Solution
Now let us get back to the TIME client example. We have the same problem here. A 32-bit integer is a very small amount of data, and it is not likely to be fragmented often. However, the problem is that it can be fragmented, and the possibility of fragmentation will increase as the traffic increases.
The simplest solution is to create an internal cumulative buffer and wait until all 4 bytes are received into the internal buffer. The following is the modified TimeClientHandler implementation that fixes the problem:
package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

import java.util.Date;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.*;

@ChannelPipelineCoverage("one")
public class TimeClientHandler extends SimpleChannelHandler {

    private final ChannelBuffer buf = dynamicBuffer();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        ChannelBuffer m = (ChannelBuffer) e.getMessage();
        buf.writeBytes(m);

        if (buf.readableBytes() >= 4) {
            long currentTimeMillis = buf.readInt() * 1000L;
            System.out.println(new Date(currentTimeMillis));
            e.getChannel().close();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
This time, "one" is used as the value of the ChannelPipelineCoverage annotation, because the new TimeClientHandler maintains an internal buffer and therefore cannot serve more than one Channel.
A dynamic buffer is a ChannelBuffer which increases its capacity on demand. It is very useful when you don't know the length of the message in advance.
First, all received data is cumulated into buf.
Then, the handler checks whether buf has enough data, 4 bytes in this example, and proceeds to the actual business logic. Otherwise, Netty will call the messageReceived method again when more data arrives, and eventually all 4 bytes will be cumulated.
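For instance, the following standalone sketch (class name invented for illustration) shows how a dynamic buffer cumulates fragments until a whole 4-byte message is readable:

package org.jboss.netty.example.time;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.buffer.ChannelBuffers;

// Hypothetical demo: a dynamic buffer grows on demand while fragments are cumulated.
public class DynamicBufferDemo {
    public static void main(String[] args) {
        ChannelBuffer buf = ChannelBuffers.dynamicBuffer();
        buf.writeBytes(new byte[] { 0x00, 0x01 });     // first fragment: only 2 bytes so far
        System.out.println(buf.readableBytes() >= 4);  // false - not enough data yet
        buf.writeBytes(new byte[] { 0x02, 0x03 });     // second fragment: 2 more bytes
        System.out.println(buf.readableBytes() >= 4);  // true - a whole message is available
        System.out.println(buf.readInt());             // 0x00010203 == 66051
    }
}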
There's another place that needs a fix. Do you remember that we added a TimeClientHandler instance to the default ChannelPipeline of the ClientBootstrap? It means the same TimeClientHandler instance is going to handle multiple Channels, and consequently the data will be corrupted. To create a new TimeClientHandler instance per Channel, we have to implement a ChannelPipelineFactory:
package org.jboss.netty.example.time;

import org.jboss.netty.channel.*;

public class TimeClientPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("handler", new TimeClientHandler());
        return pipeline;
    }
}
Now let us replace the following lines of TimeClient:
TimeClientHandler handler = new TimeClientHandler();
bootstrap.getPipeline().addLast("handler", handler);
with the following:
bootstrap.setPipelineFactory(new TimeClientPipelineFactory());
It might look somewhat complicated at first glance, and it is true that we don't need to introduce TimeClientPipelineFactory in this particular case, because TimeClient creates only one connection.
However, as your application gets more and more complex, you will almost always end up writing a ChannelPipelineFactory, which gives you much more flexibility in pipeline configuration.
1.7.3. The Second Solution
Although the first solution has resolved the problem with the TIME client, the modified handler does not look that clean. Imagine a more complicated protocol which is composed of multiple fields such as a variable length field. Your ChannelHandler implementation will become unmaintainable very quickly.
As you may have noticed, you can add more than one ChannelHandler to a ChannelPipeline, and therefore, you can split one monolithic ChannelHandler into multiple modular ones to reduce the complexity of your application. For example, you could split TimeClientHandler into two handlers:
- TimeDecoder, which deals with the fragmentation issue, and
- the initial simple version of TimeClientHandler.
Fortunately, Netty provides an extensible class which helps you write the first one out of the box:
package org.jboss.netty.example.time;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.frame.FrameDecoder;

public class TimeDecoder extends FrameDecoder {

    @Override
    protected Object decode(
            ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {

        if (buffer.readableBytes() < 4) {
            return null;
        }

        return buffer.readBytes(4);
    }
}
There's no ChannelPipelineCoverage annotation this time, because FrameDecoder is already annotated with "one".
FrameDecoder calls the decode method with an internally maintained cumulative buffer whenever new data is received.
If null is returned, it means there is not enough data yet. FrameDecoder will call decode again when a sufficient amount of data has arrived.
If a non-null value is returned, it means the decode method has decoded a message successfully. FrameDecoder will discard the read part of its internal cumulative buffer. Please remember that you don't need to decode multiple messages yourself; FrameDecoder will keep calling the decode method until it returns null.
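With the decoder split out, the client's pipeline factory would add the decoder in front of the original handler. This is a sketch of how the earlier TimeClientPipelineFactory could be revised (the handler order shown here is an assumption based on the description above):

package org.jboss.netty.example.time;

import org.jboss.netty.channel.*;

// Sketch: the decoder comes first, so TimeClientHandler only ever sees whole 4-byte messages.
public class TimeClientPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new TimeDecoder());
        pipeline.addLast("handler", new TimeClientHandler());
        return pipeline;
    }
}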
If you are an adventurous person, you might want to try the ReplayingDecoder which simplifies the decoder even more. You will need to consult the API reference for more information though.
package org.jboss.netty.example.time;

public class TimeDecoder extends ReplayingDecoder<VoidEnum> {

    @Override
    protected Object decode(
            ChannelHandlerContext ctx, Channel channel,
            ChannelBuffer buffer, VoidEnum state) {
        return buffer.readBytes(4);
    }
}
Additionally, Netty provides out-of-the-box decoders which enable you to implement most protocols very easily and help you avoid ending up with a monolithic, unmaintainable handler implementation. Please refer to the following packages for more detailed examples:
- org.jboss.netty.example.factorial for a binary protocol, and
- org.jboss.netty.example.telnet for a text line-based protocol.
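To give a flavour of those out-of-the-box decoders, here is a hedged sketch of a pipeline for a line-based text protocol, built only from handlers that ship with Netty; the class name is invented, and the business-logic handler is commented out because it is something you would write yourself:

package org.jboss.netty.example.telnet;

import org.jboss.netty.channel.*;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;

// Sketch: incoming bytes are split on line endings and converted to String,
// so the last handler works with plain String messages instead of ChannelBuffers.
public class TextLinePipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());
        // pipeline.addLast("handler", new YourTextLineHandler()); // your own business logic
        return pipeline;
    }
}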
1.8. Speaking in POJO instead of ChannelBuffer
All the examples we have reviewed so far used a ChannelBuffer as the primary data structure of a protocol message. In this section, we will improve the TIME protocol client and server example to use a POJO instead of a ChannelBuffer.
The advantage of using a POJO in your ChannelHandler is obvious: your handler becomes more maintainable and reusable by separating the code which extracts information from a ChannelBuffer out of the handler. In the TIME client and server examples, we read only one 32-bit integer, so using ChannelBuffer directly is not a major issue. However, you will find the separation necessary as you implement a real-world protocol.
First, let us define a new type called UnixTime.
package org.jboss.netty.example.time;

import java.util.Date;

public class UnixTime {

    private final int value;

    public UnixTime(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    @Override
    public String toString() {
        return new Date(value * 1000L).toString();
    }
}
We can now revise the TimeDecoder to return a UnixTime instead of a ChannelBuffer.
@Override
protected Object decode(
        ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) {

    if (buffer.readableBytes() < 4) {
        return null;
    }

    return new UnixTime(buffer.readInt());
}
FrameDecoder and ReplayingDecoder allow you to return an object of any type. If they were restricted to returning only a ChannelBuffer, we would have to insert another ChannelHandler which transforms the ChannelBuffer into a UnixTime.
With the updated decoder, the TimeClientHandler does not use ChannelBuffer anymore:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    UnixTime m = (UnixTime) e.getMessage();
    System.out.println(m);
    e.getChannel().close();
}
Much simpler and more elegant, right? The same technique can be applied on the server side. Let us update the TimeServerHandler first this time:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    UnixTime time = new UnixTime((int) (System.currentTimeMillis() / 1000));
    ChannelFuture f = e.getChannel().write(time);
    f.addListener(ChannelFutureListener.CLOSE);
}
Now, the only missing piece is the ChannelHandler which translates a UnixTime back into a ChannelBuffer. It's much simpler than writing a decoder because there's no need to deal with packet fragmentation and assembly when encoding a message.
package org.jboss.netty.example.time;

import static org.jboss.netty.buffer.ChannelBuffers.*;

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.*;

@ChannelPipelineCoverage("all")
public class TimeEncoder extends SimpleChannelHandler {

    @Override
    public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) {
        UnixTime time = (UnixTime) e.getMessage();

        ChannelBuffer buf = buffer(4);
        buf.writeInt(time.getValue());

        Channels.write(ctx, e.getFuture(), buf);
    }
}
The ChannelPipelineCoverage value of an encoder is usually "all", because this encoder is stateless. Actually, most encoders are stateless.
An encoder overrides the writeRequested method to intercept a write request. Please note that the MessageEvent parameter here is the same type that was used in messageReceived, but it is interpreted differently. A ChannelEvent can be either an upstream or a downstream event, depending on the direction in which the event flows: a MessageEvent is an upstream event when it carries received data, and a downstream event when it is triggered by a write operation such as Channel.write.
Once done with transforming a POJO into a ChannelBuffer, you should forward the new buffer to the previous ChannelDownstreamHandler in the ChannelPipeline. Channels provides various helper methods which generate and send a ChannelEvent; here, Channels.write(...) creates a new MessageEvent and sends it downstream. On the other hand, it is a good idea to use a static import for Channels, so that you can simply call write(ctx, e.getFuture(), buf):
import static org.jboss.netty.channel.Channels.*;
The last task left is to insert a TimeEncoder into the ChannelPipeline on the server side, and it is left as a trivial exercise.
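For reference, one possible shape of that exercise is sketched below, assuming the server is switched to a ChannelPipelineFactory just like the client (the class name is an assumption; the guide leaves the wiring to you):

package org.jboss.netty.example.time;

import org.jboss.netty.channel.*;

// Sketch: the encoder converts the UnixTime written by TimeServerHandler into a
// ChannelBuffer on its way downstream, so the handler never touches raw bytes.
public class TimeServerPipelineFactory implements ChannelPipelineFactory {

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("encoder", new TimeEncoder());
        pipeline.addLast("handler", new TimeServerHandler());
        return pipeline;
    }
}

The server's ServerBootstrap would then call bootstrap.setPipelineFactory(new TimeServerPipelineFactory()); instead of populating the default pipeline directly.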
1.9. Shutting Down Your Application
If you ran the TimeClient, you must have noticed that the application doesn't exit but just keeps running, doing nothing. Looking at the full stack trace, you will also find that a couple of I/O threads are running. To shut down the I/O threads and let the application exit gracefully, you need to release the resources allocated by the ChannelFactory.
The shutdown process of a typical network application is composed of the following three steps:
- Close all server sockets if there are any,
- Close all non-server sockets (i.e. client sockets and accepted sockets) if there are any, and
- Release all resources used by the ChannelFactory.
To apply the three steps above to the TimeClient, TimeClient.main() could shut itself down gracefully by closing the only client connection and releasing all resources used by the ChannelFactory:
package org.jboss.netty.example.time;

public class TimeClient {

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ClientBootstrap bootstrap = ...;
        ...
        // Connect and wait until the connection attempt finishes.
        ChannelFuture future = bootstrap.connect(...);
        future.awaitUninterruptibly();
        if (!future.isSuccess()) {
            future.getCause().printStackTrace();
        }
        // Wait until the connection is closed (or the connection attempt has failed).
        future.getChannel().getCloseFuture().awaitUninterruptibly();
        // Shut down the thread pools and release all other resources.
        factory.releaseExternalResources();
    }
}
The connect method of ClientBootstrap returns a ChannelFuture which notifies you when the connection attempt succeeds or fails. It also has a reference to the Channel which is associated with the connection attempt.
We wait for the returned ChannelFuture to determine whether the connection attempt was successful or not.
If it failed, we print the cause of the failure to know why. The getCause() method of the future returns the cause of the failure if the connection attempt was neither successful nor cancelled.
Now that the connection attempt is over, we need to wait until the connection is closed, by waiting for the closeFuture of the Channel. Every Channel has its own closeFuture, so you are notified and can perform a certain action on closure. Even if the connection attempt failed, the closeFuture will be notified, because the Channel is closed automatically when the connection attempt fails.
All connections have been closed at this point. The only task left is to release the resources being used by the ChannelFactory. It is as simple as calling its releaseExternalResources() method; all resources, including the NIO Selectors and thread pools, will be shut down and terminated automatically.
Shutting down a client was pretty easy, but how about shutting down a server? You need to unbind from the port and close all open accepted connections. To do this, you need a data structure that keeps track of the list of active connections, and it's not a trivial task. Fortunately, there is a solution, ChannelGroup.
ChannelGroup is a special extension of the Java collections API which represents a set of open Channels. If a Channel is added to a ChannelGroup and the added Channel is closed, the closed Channel is removed from its ChannelGroup automatically. You can also perform an operation on all Channels in the same group. For instance, you can close all Channels in a ChannelGroup when you shut down your server.
To keep track of open sockets, you need to modify the TimeServerHandler to add a new open Channel to the global ChannelGroup, TimeServer.allChannels:
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) {
    TimeServer.allChannels.add(e.getChannel());
}
Yes, ChannelGroup is thread-safe.
Now that the list of all active Channels is maintained automatically, shutting down a server is as easy as shutting down a client:
package org.jboss.netty.example.time;

public class TimeServer {

    static final ChannelGroup allChannels = new DefaultChannelGroup("time-server");

    public static void main(String[] args) throws Exception {
        ...
        ChannelFactory factory = ...;
        ServerBootstrap bootstrap = ...;
        ...
        Channel channel = bootstrap.bind(...);
        allChannels.add(channel);
        waitForShutdownCommand();
        ChannelGroupFuture future = allChannels.close();
        future.awaitUninterruptibly();
        factory.releaseExternalResources();
    }
}
The DefaultChannelGroup constructor requires the name of the group as a parameter. The group name is solely used to distinguish one group from the others.
Any type of Channel can be added to a ChannelGroup. Here, the bound Channel which accepts incoming connections is added, so that closing the group also unbinds the server socket and stops it from accepting new connections.
waitForShutdownCommand() is an imaginary method that waits for a shutdown signal. It could, for example, wait for a message from a privileged client or for a JVM shutdown hook.
You can perform the same operation on all Channels in the same ChannelGroup. In this case, we close all of them, which means the bound server-side Channel will be unbound and all accepted connections will be closed asynchronously. To be notified when all connections have been closed successfully, close() returns a ChannelGroupFuture, which plays a role similar to ChannelFuture.
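waitForShutdownCommand() above is deliberately left imaginary. As one purely illustrative interpretation (not part of the example code; class and behaviour are assumptions), the method could simply block until an operator presses Enter on the console:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ShutdownSupport {

    // Purely illustrative stand-in for waitForShutdownCommand():
    // block the calling thread until a line is read from standard input.
    public static void waitForShutdownCommand() throws IOException {
        System.out.println("Press Enter to shut down the server.");
        new BufferedReader(new InputStreamReader(System.in)).readLine();
    }
}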
1.10. Summary
In this chapter, we had a quick tour of Netty with a demonstration of how to write a fully working network application on top of it. Further questions will be covered in the upcoming chapters and in revised versions of this chapter. Please also note that the community is always waiting for your questions and ideas to help you and to keep improving Netty based on your feedback.