Storm Trident filters and functions
Goal: filter the messages read from Kafka, append a specified field, and print the result.
SentenceSpout:
package Trident;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

/**
 * Spout that reads messages from Kafka and emits them as tuples.
 * @author BFD-593
 */
public class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private ConsumerConnector consumer;
    private ConsumerIterator<String, String> it;

    @Override
    public void open(Map map, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        Properties props = new Properties();
        // ZooKeeper connection
        props.put("zookeeper.connect", "192.168.170.185:2181");
        // consumer group
        props.put("group.id", "testgroup");
        // ZooKeeper session timeout
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "smallest");
        // serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ConsumerConfig config = new ConsumerConfig(props);
        this.consumer = Consumer.createJavaConsumerConnector(config);

        // Create the message stream once here: calling createMessageStreams
        // repeatedly on the same connector (as in nextTuple) would fail.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("helloworld", new Integer(1));
        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get("helloworld").get(0);
        this.it = stream.iterator();
    }

    @Override
    public void nextTuple() {
        // hasNext() blocks until a message arrives because consumer.timeout.ms is not set
        if (it.hasNext()) {
            // pad the message with " 1 2" so that there are always at least three tokens
            String string = it.next().message() + " 1" + " 2";
            String[] parts = string.split(" ");
            String name = parts[0];
            String value = parts[1];
            String value2 = parts[2];
            this.collector.emit(new Values(name, value, value2));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("name", "sentence", "sentence2"));
    }
}
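To make the field mapping concrete: the spout pads every Kafka message with " 1 2" before splitting on spaces, so the first three tokens become name, sentence and sentence2. A small illustration (the message texts are made up):

Kafka message "a b"   -> padded to "a b 1 2"   -> emitted tuple (name="a", sentence="b", sentence2="1")
Kafka message "hello" -> padded to "hello 1 2" -> emitted tuple (name="hello", sentence="1", sentence2="2")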
FunctionBolt:
package Trident;

import org.apache.storm.trident.operation.BaseFunction;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Values;

/**
 * Trident function: appends a "gender" field to the data emitted by the spout.
 * The emitted value is appended to the input tuple; it does not replace the original tuple.
 * @author BFD-593
 */
public class FunctionBolt extends BaseFunction {
    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        String str = tuple.getStringByField("name");
        if ("a".equals(str)) {
            collector.emit(new Values("男"));
        } else {
            collector.emit(new Values("女"));
        }
    }
}
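For reference, this is how the function is attached to the stream in the topology below (stream stands for the Trident stream returned by newStream); the emitted "gender" value is appended to the input tuple rather than replacing it:

stream.each(new Fields("name"), new FunctionBolt(), new Fields("gender"));
// input tuple  : [name="a", sentence="b", sentence2="1"]
// output tuple : [name="a", sentence="b", sentence2="1", gender="男"]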
MyFilter:
package Trident;

import java.util.Map;

import org.apache.storm.trident.operation.BaseFilter;
import org.apache.storm.trident.operation.TridentOperationContext;
import org.apache.storm.trident.tuple.TridentTuple;

/**
 * Trident filter: drops the tuples whose "name" field is "a" AND whose "sentence" field is "b";
 * every other tuple is kept.
 * @author BFD-593
 */
public class MyFilter extends BaseFilter {
    private TridentOperationContext context;

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
        super.prepare(conf, context);
        this.context = context;
    }

    @Override
    public boolean isKeep(TridentTuple tuple) {
        String name = tuple.getStringByField("name");
        String value = tuple.getStringByField("sentence");
        // keep unless name == "a" and sentence == "b" (De Morgan's law)
        return (!"a".equals(name)) || (!"b".equals(value));
    }
}
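A quick check of the isKeep condition with illustrative field values:

// name="a", sentence="b" -> isKeep == false  (dropped)
// name="a", sentence="x" -> isKeep == true   (kept)
// name="x", sentence="b" -> isKeep == true   (kept)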
PrintFilter:
package Trident;

import java.util.Iterator;
import java.util.Map;

import org.apache.storm.trident.operation.BaseFilter;
import org.apache.storm.trident.operation.TridentOperationContext;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;

/**
 * A filter used only for printing: it prints every field of the tuple together with its value
 * and keeps all tuples.
 * @author BFD-593
 */
public class PrintFilter extends BaseFilter {
    private TridentOperationContext context = null;

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
        super.prepare(conf, context);
        this.context = context;
    }

    @Override
    public boolean isKeep(TridentTuple tuple) {
        Fields fields = tuple.getFields();
        Iterator<String> iterator = fields.iterator();
        StringBuilder str = new StringBuilder();
        while (iterator.hasNext()) {
            String next = iterator.next();
            Object value = tuple.getValueByField(next);
            str.append(next).append(":").append(value).append(",");
        }
        System.out.println(str);
        return true;
    }
}
TopologyTrident:
package Trident;

import org.apache.kafka.common.utils.Utils;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.tuple.Fields;

/**
 * Topology wiring the Trident filter and function operations together.
 * @author BFD-593
 */
public class TopologyTrident {
    public static void main(String[] args) {
        SentenceSpout spout = new SentenceSpout();
        TridentTopology topology = new TridentTopology();

        topology.newStream("spout", spout)
                // append a "gender" field computed from "name"
                .each(new Fields("name"), new FunctionBolt(), new Fields("gender"))
                // drop tuples where name == "a" and sentence == "b"
                .each(new Fields("name", "sentence"), new MyFilter())
                // print every field of the surviving tuples
                .each(new Fields("name", "sentence", "sentence2", "gender"), new PrintFilter());

        Config conf = new Config();
        LocalCluster clu = new LocalCluster();
        clu.submitTopology("mytopology", conf, topology.build());

        Utils.sleep(100000000);
        clu.killTopology("mytopology");
        clu.shutdown();
    }
}
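Putting the pieces together, here is a sketch of what two example messages would produce; the message texts are made up and the console format follows the PrintFilter above:

Kafka message "hello world" -> tuple (name="hello", sentence="world", sentence2="1"), gender="女"
    -> kept by MyFilter -> console: name:hello,sentence:world,sentence2:1,gender:女,
Kafka message "a b"         -> tuple (name="a", sentence="b", sentence2="1"), gender="男"
    -> dropped by MyFilter -> nothing printed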