Storm Source Code Analysis - Bolt (backtype.storm.task)
The key method of a Bolt is execute, which contains the actual tuple-processing logic: it emits new tuples through OutputCollector.emit and calls ack or fail on the tuple it has processed. The IBolt interface is shown below, followed by a minimal implementation sketch.
/**
* An IBolt represents a component that takes tuples as input and produces tuples
* as output. An IBolt can do everything from filtering to joining to functions
* to aggregations. It does not have to process a tuple immediately and may
* hold onto tuples to process later.
*
* <p>A bolt's lifecycle is as follows:</p>
*
* <p>IBolt object created on client machine. The IBolt is serialized into the topology
* (using Java serialization) and submitted to the master machine of the cluster (Nimbus).
* Nimbus then launches workers which deserialize the object, call prepare on it, and then
* start processing tuples.</p>
*
 * <p>If you want to parameterize an IBolt, you should set the parameters through its
* constructor and save the parameterization state as instance variables (which will
* then get serialized and shipped to every task executing this bolt across the cluster).</p>
*
* <p>When defining bolts in Java, you should use the IRichBolt interface which adds
* necessary methods for using the Java TopologyBuilder API.</p>
*/
public interface IBolt extends Serializable {
/**
* Called when a task for this component is initialized within a worker on the cluster.
* It provides the bolt with the environment in which the bolt executes.
*
* <p>This includes the:</p>
*
* @param stormConf The Storm configuration for this bolt. This is the configuration provided to the topology merged in with cluster configuration on this machine.
* @param context This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
* @param collector The collector is used to emit tuples from this bolt. Tuples can be emitted at any time, including the prepare and cleanup methods. The collector is thread-safe and should be saved as an instance variable of this bolt object.
*/
void prepare(Map stormConf, TopologyContext context, OutputCollector collector);

/**
* Process a single tuple of input. The Tuple object contains metadata on it
* about which component/stream/task it came from. The values of the Tuple can
* be accessed using Tuple#getValue. The IBolt does not have to process the Tuple
* immediately. It is perfectly fine to hang onto a tuple and process it later
* (for instance, to do an aggregation or join).
*
* <p>Tuples should be emitted using the OutputCollector provided through the prepare method.
* It is required that all input tuples are acked or failed at some point using the OutputCollector.
* Otherwise, Storm will be unable to determine when tuples coming off the spouts
* have been completed.</p>
*
* <p>For the common case of acking an input tuple at the end of the execute method,
* see IBasicBolt which automates this.</p>
*
* @param input The input tuple to be processed.
*/
void execute(Tuple input);

/**
 * Called when an IBolt is going to be shutdown. There is no guarantee that cleanup
* will be called, because the supervisor kill -9's worker processes on the cluster.
*
* <p>The one context where cleanup is guaranteed to be called is when a topology
* is killed when running Storm in local mode.</p>
*/
void cleanup();
}
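To make the lifecycle concrete, here is a minimal sketch of an IBolt implementation. It is only illustrative: the class name PassThroughBolt and the behavior (forwarding the first field of each input tuple) are hypothetical, not taken from the Storm source. The collector handed in through prepare is saved as an instance variable, and execute emits a tuple anchored to the input before acking it.

import java.util.Map;

import backtype.storm.task.IBolt;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

// Hypothetical example bolt: forwards the first field of each input tuple.
public class PassThroughBolt implements IBolt {
    private OutputCollector _collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        // Save the collector; it is thread-safe and meant to be kept as an instance variable.
        _collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        // Emit a new tuple anchored to the input, then ack the input so Storm can
        // track completion of the tuple tree rooted at the spout.
        _collector.emit(Utils.DEFAULT_STREAM_ID, input, new Values(input.getValue(0)));
        _collector.ack(input);
    }

    @Override
    public void cleanup() {
        // Not guaranteed to run: the supervisor may kill -9 the worker process.
    }
}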
First, OutputCollector. Its main methods are emit and emitDirect:
List<Integer> emit(String streamId, Tuple anchor, List<Object> tuple)
emit takes 3 parameters: the streamId to emit to, the anchors (the source tuples), and the tuple (a list of values).
If no streamId is given, the tuple is emitted to the default stream, Utils.DEFAULT_STREAM_ID.
If anchors is empty, the emitted tuple is unanchored.
It has 1 return value: the list of task ids the tuple was ultimately sent to.
Compared with emit in SpoutOutputCollector, the parameters differ: there is no message-id, but anchors are added.
In a Bolt, the ack and fail methods live on IOutputCollector; inside execute, once a tuple from the upstream component has been processed and any new tuples emitted, ack or fail is called on it.
In a Spout, by contrast, ack and fail live on ISpout and are invoked when the spout receives an ack or fail for a tuple it emitted.
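As a hedged illustration of these points, the following execute method continues the hypothetical PassThroughBolt sketched above (it assumes _collector was stored in prepare and that java.util.Collections, java.util.List, backtype.storm.tuple.Values and backtype.storm.utils.Utils are imported). It shows an anchored emit whose return value is the list of target task ids, an unanchored emit via an empty anchors collection, and the ack/fail calls; the stream name "words" is hypothetical.

@Override
public void execute(Tuple input) {
    try {
        // Anchored emit to a (hypothetical) "words" stream; the return value is the
        // list of task ids the new tuple was routed to.
        List<Integer> taskIds = _collector.emit("words", input, new Values(input.getString(0)));

        // An empty anchors collection produces an unanchored tuple, here on the default stream.
        _collector.emit(Utils.DEFAULT_STREAM_ID, Collections.<Tuple>emptyList(),
                new Values(input.getString(0)));

        // Every input tuple must eventually be acked or failed.
        _collector.ack(input);
    } catch (Exception e) {
        _collector.reportError(e);  // surface the error to Storm
        _collector.fail(input);     // let the spout replay the tuple
    }
}

OutputCollector itself, shown next, simply delegates every call to an underlying IOutputCollector.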
/**
* This output collector exposes the API for emitting tuples from an IRichBolt.
* This is the core API for emitting tuples. For a simpler API, and a more restricted
* form of stream processing, see IBasicBolt and BasicOutputCollector.
*/
public class OutputCollector implements IOutputCollector {
private IOutputCollector _delegate;

public OutputCollector(IOutputCollector delegate) {
_delegate = delegate;
}

/**
* Emits a new tuple to a specific stream with a single anchor. The emitted values must be
* immutable.
*
* @param streamId the stream to emit to
* @param anchor the tuple to anchor to
* @param tuple the new output tuple from this bolt
* @return the list of task ids that this new tuple was sent to
*/
public List<Integer> emit(String streamId, Tuple anchor, List<Object> tuple) {
return emit(streamId, Arrays.asList(anchor), tuple);
}

/**
* Emits a tuple directly to the specified task id on the specified stream.
* If the target bolt does not subscribe to this bolt using a direct grouping,
* the tuple will not be sent. If the specified output stream is not declared
* as direct, or the target bolt subscribes with a non-direct grouping,
* an error will occur at runtime. The emitted values must be
* immutable.
*
* @param taskId the taskId to send the new tuple to
* @param streamId the stream to send the tuple on. It must be declared as a direct stream in the topology definition.
* @param anchor the tuple to anchor to
* @param tuple the new output tuple from this bolt
*/
public void emitDirect(int taskId, String streamId, Tuple anchor, List<Object> tuple) {
emitDirect(taskId, streamId, Arrays.asList(anchor), tuple);
}

@Override
public List<Integer> emit(String streamId, Collection<Tuple> anchors, List<Object> tuple) {
return _delegate.emit(streamId, anchors, tuple);
}

@Override
public void emitDirect(int taskId, String streamId, Collection<Tuple> anchors, List<Object> tuple) {
_delegate.emitDirect(taskId, streamId, anchors, tuple);
}

@Override
public void ack(Tuple input) {
_delegate.ack(input);
}

@Override
public void fail(Tuple input) {
_delegate.fail(input);
}

@Override
public void reportError(Throwable error) {
_delegate.reportError(error);
}
}
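For emitDirect to actually deliver anything, the Javadoc above requires the output stream to be declared as direct and the consumer to subscribe with a direct grouping. The following is a speculative sketch of that wiring using an IRichBolt; all component, stream, and class names here (DirectSenderBolt, "receiver", "direct-stream") are hypothetical.

import java.util.List;
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class DirectSenderBolt extends BaseRichBolt {
    private OutputCollector _collector;
    private List<Integer> _receiverTasks;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        _collector = collector;
        // Task ids of the downstream component that subscribes with directGrouping.
        _receiverTasks = context.getComponentTasks("receiver");
    }

    @Override
    public void execute(Tuple input) {
        // Pick a concrete task id and send the tuple straight to it on the direct stream.
        int idx = (input.getString(0).hashCode() & Integer.MAX_VALUE) % _receiverTasks.size();
        _collector.emitDirect(_receiverTasks.get(idx), "direct-stream", input, new Values(input.getString(0)));
        _collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // The boolean argument marks the stream as a direct stream.
        declarer.declareStream("direct-stream", true, new Fields("word"));
    }
}

On the topology side, the receiving bolt would be attached with a direct grouping on that stream, roughly builder.setBolt("receiver", new ReceiverBolt(), 4).directGrouping("sender", "direct-stream"); otherwise the directly emitted tuples are not delivered.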