Goal:

A spout emits data read from Kafka, a bolt counts the occurrences of each word, and the running totals are kept up to date in MongoDB.

The spout's nextTuple method effectively sits inside a while loop; each record it sends downstream triggers one call to the bolt's execute method.

Spouts emit data; bolts process it.

MongoUtil: a MongoDB helper class

package storm;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MongoUtil {
    private MongoUtil(){}
    private static MongoClient mongo;
    private static DB db;
    private static DBCollection collection;
    static {
        mongo = new MongoClient("192.168.170.185", 27017);
        db = mongo.getDB("mySpout");
        collection = db.getCollection("myBolt");
    }
    public static Long getCount(){
        return collection.count(new BasicDBObject("_id", 1L));
    }
    public static void insert(String substring){
        DBObject obj = new BasicDBObject();
        obj.put("_id", 1);
        obj.put("bolt", substring);
        collection.insert(obj);
    }
    public static void update(String substring){
        DBObject obj = new BasicDBObject();
        obj.put("_id", 1);
        DBObject obj2 = collection.findOne(obj);
        obj2.put("bolt", substring);
        collection.update(obj, obj2);
    }
}
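Since the document always has _id 1, the getCount()/insert()/update() dance above could be collapsed into a single upsert. A minimal sketch using the same legacy DBCollection API (the four-argument update with upsert=true inserts when no document matches; the upsert method name is my own):

    // Hypothetical helper: one round trip instead of count-then-insert-or-update.
    public static void upsert(String substring){
        DBObject query = new BasicDBObject("_id", 1);
        DBObject doc = new BasicDBObject("_id", 1).append("bolt", substring);
        collection.update(query, doc, true /* upsert */, false /* multi */);
    }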

SentenceSpout: the spout that reads data from Kafka and emits it into the topology.

package storm;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class SentenceSpout extends BaseRichSpout{
    private SpoutOutputCollector collector;
    private ConsumerConnector consumer;
    private Map conf;

    @Override
    public void open(Map map, TopologyContext context, SpoutOutputCollector collector) {
        // Do initialization in open() rather than in the constructor: spouts are
        // serialized out to the workers, so building connections earlier can fail.
        this.conf = map;
        this.collector = collector;
        Properties props = new Properties();

        // ZooKeeper connection
        props.put("zookeeper.connect", "192.168.170.185:2181");

        // Consumer group
        props.put("group.id", "testgroup");

        // ZooKeeper timeouts and offset handling
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "smallest");

        // Serializer class
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        ConsumerConfig config = new ConsumerConfig(props);
        this.consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    @Override
    public void nextTuple() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("helloworld", new Integer(1));

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());
        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get("helloworld").get(0);
        ConsumerIterator<String, String> it = stream.iterator();

        // hasNext() blocks until a message arrives, so this loop never exits;
        // every Kafka message becomes a one-field tuple.
        while (it.hasNext()){
            this.collector.emit(new Values(it.next().message()));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }

}
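One caveat with the code above: createMessageStreams may only be called once per ConsumerConnector, and Storm calls nextTuple repeatedly from the executor thread, so in practice the stream setup belongs in open() and nextTuple should stay short. A hedged sketch of that refactoring (the it field is my own addition):

    // Sketch: build the Kafka iterator once in open(), then emit at most one
    // message per nextTuple() call so the executor thread stays responsive.
    private ConsumerIterator<String, String> it;   // assumed new field

    // ... at the end of open(), after creating the consumer:
    //     Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
    //     topicCountMap.put("helloworld", 1);
    //     this.it = consumer.createMessageStreams(topicCountMap,
    //                     new StringDecoder(new VerifiableProperties()),
    //                     new StringDecoder(new VerifiableProperties()))
    //             .get("helloworld").get(0).iterator();

    @Override
    public void nextTuple() {
        // Note: hasNext() still blocks while the topic is idle unless
        // consumer.timeout.ms is configured on the consumer.
        if (it.hasNext()) {
            this.collector.emit(new Values(it.next().message()));
        }
    }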
SplitSentenceBolt: the bolt that splits sentences into words.

package storm;

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class SplitSentenceBolt extends BaseRichBolt{
    private OutputCollector collector;
    private Map stormConf;

    @Override
    public void prepare(Map map, TopologyContext context, OutputCollector collector) {
        this.stormConf = map;
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        String str = tuple.getStringByField("sentence");
        String[] split = str.split(" ");
        for(String word : split){
            this.collector.emit(new Values(word));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }

}
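A note on reliability: the spout above emits unanchored tuples, so Storm does no ack tracking and the missing collector.ack() calls here are harmless. If tuples were anchored, a simpler way to get acking right is to extend BaseBasicBolt, which anchors emits and acks each input tuple automatically. A hedged sketch of the same split logic (class name is my own; imports from org.apache.storm.topology):

    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class SplitSentenceBasicBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            for (String word : tuple.getStringByField("sentence").split(" ")) {
                collector.emit(new Values(word));   // acked automatically when execute returns
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }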

WordCountBolt: the bolt that counts words.

package storm;

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountBolt extends BaseRichBolt{
    private Map boltconf;
    private OutputCollector collector;
    private HashMap<String,Long> counts = null;

    @Override
    public void prepare(Map map, TopologyContext context, OutputCollector collector) {
        this.boltconf = map;
        this.collector = collector;
        this.counts = new HashMap<String,Long>();
    }

    @Override
    public void execute(Tuple tuple) {
        String word = tuple.getStringByField("word");
        this.counts.put(word, this.counts.containsKey(word) ? this.counts.get(word)+1 : 1L);
        this.collector.emit(new Values(word, counts.get(word)));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word","count"));
    }

}

ReportBolt: the bolt that prints the aggregated counts and writes them to MongoDB.

package storm;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class ReportBolt extends BaseRichBolt{
    private HashMap<String,Long> counts = null;
    private Map boltconf;
    private StringBuffer buf = null;

    @Override
    public void prepare(Map arg0, TopologyContext arg1, OutputCollector arg2) {
        this.boltconf = arg0;
        this.counts = new HashMap<String,Long>();
        this.buf = new StringBuffer();
    }

    @Override
    public void execute(Tuple tuple) {
        String word = tuple.getStringByField("word");
        Long count = tuple.getLongByField("count");
        this.counts.put(word, count);
        System.out.println("------ word counts ------");
        List<String> keys = new ArrayList<String>();
        keys.addAll(this.counts.keySet());

        buf.append("{");
        for(String key : keys){
            buf.append(key).append(":").append(this.counts.get(key)).append(",");
            System.out.println(key + " : " + this.counts.get(key));
        }
        System.out.println("-------------------------");
        buf.append("}");
        // Drop the trailing comma that now sits just before the closing brace.
        String substring = buf.delete(buf.length()-2, buf.length()-1).toString();

        long docs = MongoUtil.getCount();
        if(docs <= 0){
            MongoUtil.insert(substring);
        }else{
            MongoUtil.update(substring);
        }
        buf = buf.delete(0, buf.length());
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer arg0) {
        // Terminal bolt: no output fields to declare.
    }

    /* @Override
    public Map<String, Object> getComponentConfiguration() {
        HashMap<String, Object> hashMap = new HashMap<String, Object>();
        hashMap.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 10);
        return hashMap;
    }*/
}
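The commented-out getComponentConfiguration above points at a common refinement: rather than rebuilding the JSON string and hitting MongoDB on every single tuple, enable tick tuples and flush once per interval. A hedged sketch of how execute could tell tick tuples apart (Constants is org.apache.storm.Constants; the isTickTuple helper is my own):

    // Sketch: with TOPOLOGY_TICK_TUPLE_FREQ_SECS set to 10 as above, Storm sends
    // this bolt one system tuple every 10 seconds on the tick stream.
    private boolean isTickTuple(Tuple tuple) {
        return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
                && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
    }

    // execute(Tuple) would then branch:
    //     if (isTickTuple(tuple)) { build the {word:count,...} string and write to Mongo }
    //     else { this.counts.put(tuple.getStringByField("word"), tuple.getLongByField("count")); }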

WordCountTopology: the topology that wires the Storm components together.

package storm;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WordCountTopology {
    private static final String SENTENCE_SPOUT_ID = "sentence-spout";
    private static final String SPLIT_BOLT_ID = "split-bolt";
    private static final String COUNT_BOLT_ID = "count-bolt";
    private static final String REPORT_BOLT_ID = "report-bolt";
    private static final String TOPOLOGY_NAME = "word-count-topology";

    public static void main(String[] args) throws Exception {

        //-- Instantiate the spout and bolts
        SentenceSpout spout = new SentenceSpout();
        SplitSentenceBolt splitBolt = new SplitSentenceBolt();
        WordCountBolt countBolt = new WordCountBolt();
        ReportBolt reportBolt = new ReportBolt();

        //-- Create the TopologyBuilder
        TopologyBuilder builder = new TopologyBuilder();

        //-- Register SentenceSpout
        builder.setSpout(SENTENCE_SPOUT_ID, spout);
        //-- Register SplitSentenceBolt, subscribed to SentenceSpout's tuples.
        //   shuffleGrouping distributes tuples randomly and evenly across the
        //   SplitSentenceBolt instances.
        builder.setBolt(SPLIT_BOLT_ID, splitBolt).shuffleGrouping(SENTENCE_SPOUT_ID);
        //-- Register WordCountBolt, subscribed to SplitSentenceBolt's tuples.
        //   fieldsGrouping routes tuples with the same "word" value to the same
        //   WordCountBolt instance, which is what makes its in-memory counts correct.
        builder.setBolt(COUNT_BOLT_ID, countBolt).fieldsGrouping(SPLIT_BOLT_ID, new Fields("word"));
        //-- Register ReportBolt, subscribed to WordCountBolt's tuples.
        //   globalGrouping routes every tuple to the single ReportBolt instance.
        builder.setBolt(REPORT_BOLT_ID, reportBolt).globalGrouping(COUNT_BOLT_ID);

        //-- Create the configuration object
        Config conf = new Config();

        //-- LocalCluster simulates a complete Storm cluster in the local JVM.
        //   Local mode is a convenient way to develop and test, avoiding repeated
        //   deployments to a distributed cluster and allowing breakpoint debugging.
        LocalCluster cluster = new LocalCluster();

        //-- Submit the topology to the cluster
        cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());

        //-- Let the topology run (the sleep below is effectively "run forever";
        //   shorten it to stop sooner), then kill it and shut the cluster down.
        Thread.sleep(300000000);
        cluster.killTopology(TOPOLOGY_NAME);
        cluster.shutdown();
    }
}
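LocalCluster only simulates a cluster inside the current JVM. To deploy the same topology to a real Storm cluster, the usual pattern is StormSubmitter, switching on a command-line argument; a minimal sketch (the worker count is illustrative):

    // Sketch: inside main(), after building the topology.
    if (args != null && args.length > 0) {
        conf.setNumWorkers(2);   // illustrative parallelism
        org.apache.storm.StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    } else {
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology(TOPOLOGY_NAME, conf, builder.createTopology());
    }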
