Tomcat 6 is configured with the following connector:

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
		connectionTimeout="20000" URIEncoding="UTF-8" useBodyEncodingForURI="true"
		enableLookups="false" redirectPort="8443" />

Because the protocol is set to Http11NioProtocol, when StandardService starts the connectors with the code below, each connector's protocol handler is the Http11NioProtocol implementation.

synchronized (connectors) {
    for (int i = 0; i < connectors.length; i++) {
        try {
            ((Lifecycle) connectors[i]).start();
        } catch (Exception e) {
            log.error(sm.getString("standardService.connector.startFailed", connectors[i]), e);
        }
    }
}

The Connector calls the start() method of org.apache.coyote.http11.Http11NioProtocol, which in turn calls start() on org.apache.tomcat.util.net.NioEndpoint.
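
The chain is plain delegation: the Connector delegates to its protocol handler, which delegates to the endpoint it owns. A minimal, self-contained sketch of that wiring (stand-in types, not the real Tomcat classes):

public class StartDelegationSketch {

    interface Endpoint {
        void start() throws Exception;   // bind the socket, start pollers and acceptors
    }

    static class ProtocolHandlerLike {   // plays the role of Http11NioProtocol
        private final Endpoint ep;
        ProtocolHandlerLike(Endpoint ep) { this.ep = ep; }
        void start() throws Exception { ep.start(); }
    }

    static class ConnectorLike {         // plays the role of Connector
        private final ProtocolHandlerLike protocolHandler;
        ConnectorLike(ProtocolHandlerLike p) { this.protocolHandler = p; }
        void start() throws Exception { protocolHandler.start(); }
    }
}

NioEndpoint's start() method itself is as follows: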

public void start()throws Exception {
        // Initialize socket if not done before
        if (!initialized) {
            init();
        }
        if (!running) {
            running = true;
            paused = false;

            // Create worker collection
            if (getUseExecutor()) {
                if ( executor == null ) {
                    TaskQueue taskqueue = new TaskQueue();
                    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-",this);

                   /*
                    corePoolSize the number of threads to keep in the pool, even if they are idle, unless allowCoreThreadTimeOut is set
                    maximumPoolSize the maximum number of threads to allow in the pool
                    keepAliveTime when the number of threads is greater than the core, this is the maximum time that excess idle threads will wait for new tasks before terminating.
                    unit the time unit for the keepAliveTime argument
                    workQueue the queue to use for holding tasks before they are executed. This queue will hold only the Runnable tasks submitted by the execute method.
                    threadFactory the factory to use when the executor creates a new thread
                    */
                    executor = new ThreadPoolExecutor(
                    		getMinSpareThreads(),
                    		getMaxThreads(),
                    		60,
                    		TimeUnit.SECONDS,
                    		taskqueue,
                    		tf);
                    taskqueue.setParent( (ThreadPoolExecutor) executor, this);
                }
            } else if ( executor == null ) {//avoid two thread pools being created
                workers = new WorkerStack(maxThreads,this);
            }

            // Poller threads: the Acceptor hands each accepted client socket over to a Poller,
            // which registers it (via a PollerEvent) for the READ event. Once READ is ready,
            // the Poller dispatches the work to the worker thread pool; NioChannel objects
            // themselves are cached in a ConcurrentLinkedQueue<NioChannel>.
            // The number of poller threads is configurable (pollerThreadCount).

            // Start poller threads
            pollers = new Poller[getPollerThreadCount()];
            for (int i=0; i<pollers.length; i++) {
                pollers[i] = new Poller(this);
                Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-"+i);
                pollerThread.setPriority(threadPriority);
                pollerThread.setDaemon(true);
                pollerThread.start();
            }

            // Start acceptor threads
            for (int i = 0; i < acceptorThreadCount; i++) {
                Thread acceptorThread = new Thread(new Acceptor(this), getName() + "-Acceptor-" + i);
                acceptorThread.setPriority(threadPriority);
                acceptorThread.setDaemon(daemon);
                acceptorThread.start();
            }
        }
    }

In this case one Acceptor thread and four Poller threads are started (acceptorThreadCount defaults to 1; the number of Poller threads depends on pollerThreadCount).
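
If you need to tune these thread counts, the NIO connector accepts acceptorThreadCount and pollerThreadCount attributes on the Connector element (verify against the documentation for your exact Tomcat version); a hedged example based on the configuration above:

<Connector port="80" protocol="org.apache.coyote.http11.Http11NioProtocol"
		connectionTimeout="20000" URIEncoding="UTF-8" useBodyEncodingForURI="true"
		enableLookups="false" redirectPort="8443"
		acceptorThreadCount="1" pollerThreadCount="4" maxThreads="200" />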

Note: to make the code easier to read, I reorganized the NioEndpoint class by extracting each of its inner classes into its own public class. Since those inner classes rely on some of NioEndpoint's fields, the current NioEndpoint instance (this) has to be passed to each extracted class when it is constructed.

Let's look at how the Acceptor thread's run() method accepts incoming connections:

                // Accept the next incoming connection from the server socket
                SocketChannel socket = endpoint.serverSock.accept();  // this is clientSocket
                // Hand this socket off to an appropriate processor
                //TODO FIXME - this is currently a blocking call, meaning we will be blocking
                //further accepts until there is a thread available.
                if ( endpoint.running && (!endpoint.paused) && socket != null ) {
                    //processSocket(socket);
                    if (!endpoint.setSocketOptions(socket)) {
                        try {
                            socket.socket().close();
                            socket.close();
                        } catch (IOException ix) {
//                            if (log.isDebugEnabled())
//                                log.debug("", ix);
                        }
                    }
                }

The key call in this thread is setSocketOptions(); its source is as follows:

 public boolean setSocketOptions(SocketChannel socket) {
        // Process the connection
        try {
            //disable blocking, APR style, we are gonna be polling it
            socket.configureBlocking(false);
            Socket sock = socket.socket();
            socketProperties.setProperties(sock);

            NioChannel channel = nioChannels.poll();
            if ( channel == null ) {
                // SSL setup
                if (sslContext != null) {
                    SSLEngine engine = createSSLEngine();
                    int appbufsize = engine.getSession().getApplicationBufferSize();
                    NioBufferHandler bufhandler = new NioBufferHandler(Math.max(appbufsize,socketProperties.getAppReadBufSize()),
                                                                       Math.max(appbufsize,socketProperties.getAppWriteBufSize()),
                                                                       socketProperties.getDirectBuffer());
                    channel = new SecureNioChannel(socket, engine, bufhandler, selectorPool);
                } else {
                    // normal tcp setup
                    NioBufferHandler bufhandler = new NioBufferHandler(socketProperties.getAppReadBufSize(),
                                                                       socketProperties.getAppWriteBufSize(),
                                                                       socketProperties.getDirectBuffer());

                    channel = new NioChannel(socket, bufhandler);
                }
            } else {
                channel.setIOChannel(socket);
                if ( channel instanceof SecureNioChannel ) {
                    SSLEngine engine = createSSLEngine();
                    ((SecureNioChannel)channel).reset(engine);
                } else {
                    channel.reset();
                }
            }
            getPoller0().register(channel);  // registration is spread round-robin across the Poller threads (4 in this run)
        } catch (Throwable t) {
            try {
                log.error("",t);
            }catch ( Throwable tt){}
            // Tell to close the socket
            return false;
        }
        return true;
    }

The getPoller0() method distributes the new channels evenly across the Poller threads in a round-robin fashion.
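
A minimal, self-contained sketch of that round-robin selection, assuming an AtomicInteger rotation counter (the field names only approximate the Tomcat 6 source):

import java.util.concurrent.atomic.AtomicInteger;

// Sketch of how each new channel is assigned to "the next" poller in turn,
// spreading registrations evenly across the poller threads.
public class PollerRotationSketch {

    interface Poller { }  // stand-in for NioEndpoint.Poller

    private final Poller[] pollers;
    private final AtomicInteger pollerRotater = new AtomicInteger(0);

    public PollerRotationSketch(Poller[] pollers) {
        this.pollers = pollers;
    }

    public Poller getPoller0() {
        int idx = Math.abs(pollerRotater.incrementAndGet()) % pollers.length;
        return pollers[idx];
    }
}

The Poller's register() method is shown below: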

public void register(final NioChannel socket) {
		socket.setPoller(this);

		KeyAttachment key = endpoint.keyCache.poll();
		final KeyAttachment ka = key != null ? key : new KeyAttachment();
		ka.reset(this, socket, endpoint.getSocketProperties().getSoTimeout());
		ka.interestOps(SelectionKey.OP_READ);// this is what OP_REGISTER turns  into.

		PollerEvent r = endpoint.eventCache.poll();

		// reuse a cached PollerEvent if one is available, otherwise create a new one
		if (r == null)
			r = new PollerEvent(socket, ka, endpoint.OP_REGISTER, endpoint);
		else
			r.reset(socket, ka, endpoint.OP_REGISTER);

		addEvent(r);
	}

Finally, addEvent() is called to add this PollerEvent to the following field of the Poller class:

    protected ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
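
A hedged sketch of addEvent() itself: the event is queued and the selector is woken up so that a poller thread blocked in select() notices the new event promptly (the wakeup-counter bookkeeping in the real source is slightly more involved):

import java.nio.channels.Selector;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

public class AddEventSketch {

    private final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
    private final AtomicLong wakeupCounter = new AtomicLong(0);
    private final Selector selector;

    public AddEventSketch(Selector selector) {
        this.selector = selector;
    }

    public void addEvent(Runnable event) {
        events.offer(event);                         // hand the event to the poller thread
        if (wakeupCounter.incrementAndGet() == 0) {  // poller may be blocked in select()
            selector.wakeup();                       // interrupt select() so events() runs soon
        }
    }
}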

The Poller's run() method contains the following line:

hasEvents = (hasEvents | events());

It calls the events() method, shown here:

public boolean events() {
        boolean result = false;
            Runnable r = null;
            result = (events.size() > 0);
            while ( (r = (Runnable)events.poll()) != null ) {
                try {
                    r.run();
                    if ( r instanceof PollerEvent ) {
                        ((PollerEvent)r).reset();
                        endpoint.eventCache.offer((PollerEvent)r);
                    }
                } catch ( Throwable x ) {
//                    log.error("",x);
                }
            }
        return result;
    }

If the events queue contains PollerEvent objects, each one's run() method is invoked; the object is then reset and returned to eventCache for reuse. Let's look at PollerEvent's run() method:

    protected NioChannel socket;
    protected int interestOps;   // the interest-ops set
    protected KeyAttachment key;

    public void run() {
        if ( interestOps == endpoint.OP_REGISTER ) {  // this is a register event
            try {
                socket.getIOChannel().register(socket.getPoller().getSelector(), SelectionKey.OP_READ, key);
            } catch (Exception x) {
//                log.error("", x);
            }
        } else {
            final SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
            try {
                boolean cancel = false;
                if (key != null) {
                    final KeyAttachment att = (KeyAttachment) key.attachment();
                    if ( att!=null ) {
                        //handle callback flag
                        if (att.getComet() && (interestOps & endpoint.OP_CALLBACK) == endpoint.OP_CALLBACK ) {
                            att.setCometNotify(true);
                        } else {
                            att.setCometNotify(false);
                        }
                        interestOps = (interestOps & (~endpoint.OP_CALLBACK));//remove the callback flag
                        att.access();//to prevent timeout
                        //we are registering the key to start with, reset the fairness counter.
                        int ops = key.interestOps() | interestOps;
                        att.interestOps(ops);
                        key.interestOps(ops);
                        att.setCometOps(ops);
                    } else {
                        cancel = true;
                    }
                } else {
                    cancel = true;
                }
                if ( cancel )
                	socket.getPoller().cancelledKey(key,SocketStatus.ERROR,false);
            }catch (CancelledKeyException ckx) {
                try {
                    socket.getPoller().cancelledKey(key,SocketStatus.DISCONNECT,true);
                }catch (Exception ignore) {}
            }
        }//end if
    }//run

Its main job is to register the NioChannel's interest ops with the Poller's Selector. With that done, we can return to the Poller's run() method to see what happens when a registered channel becomes ready.
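
For readers less familiar with java.nio, the pattern the Poller implements is the standard Selector loop. Below is a minimal, self-contained illustration of that pattern (not Tomcat code — Tomcat runs accept() on dedicated blocking Acceptor threads instead of registering OP_ACCEPT, but the multiplexing idea is the same):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexes many connections: channels are registered for OP_READ,
// and select() reports only the channels that are actually ready.
public class SelectorLoopDemo {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                            // block until something is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                 // the Acceptor's job in Tomcat
                    SocketChannel client = server.accept();
                    if (client != null) {
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    }
                } else if (key.isReadable()) {            // what the Poller dispatches in Tomcat
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    if (client.read(buf) < 0) {           // peer closed the connection
                        key.cancel();
                        client.close();
                    }
                }
            }
        }
    }
}

With that pattern in mind, here is the key-dispatch fragment of Poller's run():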

                 Iterator iterator = keyCount > 0 ? selector.selectedKeys().iterator() : null;
                // Walk through the collection of ready keys and dispatch
                // any active event.
                while (iterator != null && iterator.hasNext()) {
                    SelectionKey sk = (SelectionKey) iterator.next();
                    KeyAttachment attachment = (KeyAttachment)sk.attachment();
                    // Attachment may be null if another thread has called
                    // cancelledKey()
                    if (attachment == null) {
                        iterator.remove();
                    } else {
                        attachment.access();
                        iterator.remove();
                        processKey(sk, attachment);
                    }
                }//while

When a registered event fires, the while loop above calls processKey() to handle it:

if ( close ) {
                cancelledKey(sk, SocketStatus.STOP, false);
            } else if ( sk.isValid() && attachment != null ) {
                attachment.access();//make sure we don't time out valid sockets
                sk.attach(attachment);//cant remember why this is here
                NioChannel channel = attachment.getChannel();
                if (sk.isReadable() || sk.isWritable() ) {
                    if ( attachment.getSendfileData() != null ) {
                        processSendfile(sk,attachment,true, false);
                    } else if ( attachment.getComet() ) {
                        //check if thread is available
                        if ( endpoint.isWorkerAvailable() ) {
                            //set interest ops to 0 so we don't get multiple
                            //invokations for both read and write on separate threads
                            reg(sk, attachment, 0);
                            //read goes before write
                            if (sk.isReadable()) {
                                //read notification
                                if (!endpoint.processSocket(channel, SocketStatus.OPEN))
                                	endpoint.processSocket(channel, SocketStatus.DISCONNECT);
                            } else {
                                //future placement of a WRITE notif
                                if (!endpoint.processSocket(channel, SocketStatus.OPEN))
                                	endpoint.processSocket(channel, SocketStatus.DISCONNECT);
                            }
                        } else {
                            result = false;
                        }
                    } else {
                        //later on, improve latch behavior
                        if ( endpoint.isWorkerAvailable() ) {
                            unreg(sk, attachment,sk.readyOps());
                            boolean close = (!endpoint.processSocket(channel));
                            if (close) {
                                cancelledKey(sk,SocketStatus.DISCONNECT,false);
                            }
                        } else {
                            result = false;
                        }
                    }
                }
            } else {
                //invalid key
                cancelledKey(sk, SocketStatus.ERROR,false);
            }

processKey() in turn calls NioEndpoint's processSocket() method:

 public boolean processSocket(NioChannel socket, SocketStatus status) {
        return processSocket(socket,status,true);
    }

    public boolean processSocket(NioChannel socket, SocketStatus status, boolean dispatch) {
        try {
            KeyAttachment attachment = (KeyAttachment)socket.getAttachment(false);
            attachment.setCometNotify(false); //will get reset upon next reg
            if (executor == null) {
                getWorkerThread().assign(socket, status);
            } else {
                SocketProcessor sc = processorCache.poll();
                if (sc == null ){
                	sc = new SocketProcessor(socket,status,this);
                }else{
                	sc.reset(socket,status);
                }
                if ( dispatch ) executor.execute(sc);
                else sc.run();
            }
        } catch (Throwable t) {
            // This means we got an OOM or similar creating a thread, or that
            // the pool and its queue are full
            log.error(sm.getString("endpoint.process.fail"), t);
            return false;
        }
        return true;
    }

A SocketProcessor is taken from processorCache and, after a reset, reused; if the cache is empty a new one is created. It is then either submitted to the executor or run directly. The SocketProcessor's run() method contains the following code:

boolean closed = (status == null)
        ? (nioEndpoint.getHandler().process(socket) == Handler.SocketState.CLOSED)
        : (nioEndpoint.getHandler().event(socket, status) == Handler.SocketState.CLOSED);

It fetches the Http11ConnectionHandler and calls its process() method to handle the socket:

public SocketState process(NioChannel socket) {
        Http11NioProcessor processor = null;
        try {
            processor = connections.remove(socket);

            if (processor == null) {
                processor = recycledProcessors.poll();
            }
            if (processor == null) {
                processor = createProcessor();
            }

            if (processor instanceof ActionHook) {
                ((ActionHook) processor).action(ActionCode.ACTION_START, null);
            }

            if (proto.ep.isSSLEnabled() && (proto.sslImplementation != null)) {
                if (socket instanceof SecureNioChannel) {
                    SecureNioChannel ch = (SecureNioChannel)socket;
                    processor.setSslSupport(proto.sslImplementation.getSSLSupport(ch.getSslEngine().getSession()));
                }else processor.setSslSupport(null);
            } else {
                processor.setSslSupport(null);
            }

            SocketState state = processor.process(socket);
            if (state == SocketState.LONG) {
                // In the middle of processing a request/response. Keep the
                // socket associated with the processor.
                connections.put(socket, processor);
                socket.getPoller().add(socket);
            } else if (state == SocketState.OPEN) {
                // In keep-alive but between requests. OK to recycle
                // processor. Continue to poll for the next request.
                release(socket, processor);
                socket.getPoller().add(socket);
            } else {
                // Connection closed. OK to recycle the processor.
                release(socket, processor);
            }
            return state;

        } catch (Exception e) {
             e.printStackTrace();
        }
        release(socket, processor);
        return SocketState.CLOSED;
    }
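
Note how release() pushes the Http11NioProcessor back into recycledProcessors so a later connection can reuse it instead of allocating a new one. A hedged, self-contained sketch of that recycling pattern (stand-in types; the real release() lives in Http11ConnectionHandler):

import java.util.concurrent.ConcurrentLinkedQueue;

public class ProcessorRecyclingSketch {

    static class Processor {
        void recycle() { /* clear per-request state: buffers, counters, ... */ }
    }

    private final ConcurrentLinkedQueue<Processor> recycledProcessors =
            new ConcurrentLinkedQueue<Processor>();

    // Return a finished processor to the pool.
    public void release(Processor processor) {
        if (processor == null) return;
        processor.recycle();
        recycledProcessors.offer(processor);
    }

    // Reuse a pooled processor if available, otherwise create a new one
    // (mirrors the poll()/createProcessor() logic in process() above).
    public Processor acquire() {
        Processor p = recycledProcessors.poll();
        return (p != null) ? p : new Processor();
    }
}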
