Big Data Learning: Kafka + Storm + HDFS Integration
Requirements
Kafka, Storm, and HDFS are a common framework combination for streaming data. The task: implement the pipeline in code so that Kafka receives random sentences and hands them off to Storm; the Storm cluster counts how many times each word occurs (word count); and the counts are written to HDFS.

1 pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>bigdata</groupId>
<artifactId>homework</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<!--<scope>provided</scope>-->
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka-client</artifactId>
<!--<scope>provided</scope>-->
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-hdfs</artifactId>
<version>1.0.2</version>
<exclusions>
<exclusion>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.12</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.1</version>
<configuration>
<createDependencyReducedPom>true</createDependencyReducedPom>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>storm.StormTopologyDriver</mainClass>
</transformer>
</transformers>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
</project>
2 PullWords.java
package kafka;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.*;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Kafka consumer that pulls sentences for the word count.
 */
public class PullWords {

    private KafkaConsumer<String, String> consumer;
    private AtomicBoolean isAutoCommit;
    // Kafka topic
    private final static String TOPIC = "wordCount";

    public PullWords() {
        isAutoCommit = new AtomicBoolean(false); // manual commit by default
        Properties props = new Properties();
        // Kafka broker addresses (port 9092) -- not the ZooKeeper ensemble on 2181
        props.put("bootstrap.servers", "mini1:9092,mini2:9092,mini3:9092");
        props.put("group.id", "wordCount"); // consumers in the same group share the subscribed topic's partitions
        if (isAutoCommit.get()) {
            props.put("enable.auto.commit", "true");      // commit offsets automatically
            props.put("auto.commit.interval.ms", "1000"); // auto-commit interval
        } else {
            props.put("enable.auto.commit", "false");
        }
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList(TOPIC));
    }

    public void subscribe(String... topic) {
        consumer.subscribe(Arrays.asList(topic));
    }

    public ConsumerRecords<String, String> pull() {
        ConsumerRecords<String, String> records = consumer.poll(100);
        consumer.commitSync();
        return records;
    }

    // Block until at least one record arrives, then commit and return the batch.
    public ConsumerRecords<String, String> pullOneOrMore() {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(10);
            if (records != null && !records.isEmpty()) {
                consumer.commitSync();
                return records;
            }
        }
    }

    public void close() {
        consumer.close();
    }
}
3 PushWords.java
package kafka;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Properties;
import java.util.concurrent.Future;

/**
 * Kafka producer that pushes random sentences.
 *
 * @Author hongzw@citycloud.com.cn
 * @Date 2019-02-16 19:08
 */
public class PushWords {

    private Producer<String, String> producer;
    // Kafka topic
    private final static String TOPIC = "words";

    public PushWords() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "storm01:9092,storm02:9092,storm03:9092");
        props.put("acks", "all");  // wait for all in-sync replicas to acknowledge
        props.put("retries", 0);   // do not retry failed requests
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    // Send a sentence to the Kafka cluster; send() is asynchronous.
    public Future<RecordMetadata> push(String key, String words) {
        return producer.send(new ProducerRecord<>(TOPIC, key, words));
    }

    public void close() {
        producer.close();
    }
}
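The post never shows the code that actually drives PushWords. As a hedged sketch, a sentence source like the one below (the class name and the sample sentences are our own illustration, not part of the original project) could be polled in a loop and handed to push():

```java
import java.util.Random;

public class SentenceSource {
    // A fixed pool of sample sentences; any source of text would do.
    private static final String[] SENTENCES = {
        "the cow jumped over the moon",
        "an apple a day keeps the doctor away",
        "four score and seven years ago",
        "snow white and the seven dwarfs",
        "i am at two with nature"
    };
    private static final Random RANDOM = new Random();

    // Pick one sentence at random on each call.
    public static String nextSentence() {
        return SENTENCES[RANDOM.nextInt(SENTENCES.length)];
    }
}
```

It would be used as something like `pushWords.push(null, SentenceSource.nextSentence());` inside a loop, followed by `pushWords.close()`.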
4 WordCount.java
package storm;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCount extends BaseBasicBolt {

    Map<String, Integer> wordCountMap = new HashMap<>();

    @Override
    public void execute(Tuple tuple, BasicOutputCollector basicOutputCollector) {
        String word = tuple.getValueByField("word").toString();
        Integer count = Integer.valueOf(tuple.getValueByField("count").toString());
        Integer current = wordCountMap.get(word);
        if (current == null) {
            wordCountMap.put(word, count);
        } else {
            wordCountMap.put(word, current + count);
        }
        // Once the map holds more than 20 distinct words, flush the counts to the hdfsBolt.
        if (wordCountMap.size() > 20) {
            List<Object> list = new ArrayList<>();
            for (Map.Entry<String, Integer> entry : wordCountMap.entrySet()) {
                list.add(entry.getKey() + ":" + entry.getValue());
            }
            wordCountMap.clear();
            if (list.size() > 0) {
                basicOutputCollector.emit(new Values(list));
            }
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("total"));
    }
}
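Stripped of the Storm plumbing, the bolt's accumulation rule is an ordinary map update: add the incoming count to the word's running total. A standalone sketch (TallyDemo and tally are illustrative names, not part of the topology):

```java
import java.util.HashMap;
import java.util.Map;

public class TallyDemo {
    // Same rule as WordCount.execute: add the incoming count to the running total.
    static void tally(Map<String, Integer> map, String word, int count) {
        Integer current = map.get(word);
        map.put(word, current == null ? count : current + count);
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        for (String w : "the cat and the dog and the cat".split(" ")) {
            tally(counts, w, 1);
        }
        System.out.println(counts.get("the")); // 3
        System.out.println(counts.get("cat")); // 2
    }
}
```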
5 WordCountSplit.java
package storm;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountSplit extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        // The Kafka spout emits the message payload in the "value" field.
        String[] words = tuple.getStringByField("value").split(" ");
        for (String word : words) {
            collector.emit(new Values(word, 1));
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word", "count"));
    }
}
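What the split bolt does to a single Kafka message can be sketched without Storm at all (SplitDemo is an illustrative name; the ",1" suffix stands in for the ("word", 1) tuple the bolt emits):

```java
import java.util.ArrayList;
import java.util.List;

public class SplitDemo {
    // Same transformation as WordCountSplit.execute: one ("word", 1) pair per token.
    static List<String> splitToPairs(String sentence) {
        List<String> pairs = new ArrayList<>();
        for (String word : sentence.split(" ")) {
            pairs.add(word + ",1");
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(splitToPairs("hello storm hello kafka"));
        // [hello,1, storm,1, hello,1, kafka,1]
    }
}
```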
6 StormTopologyDriver.java
package storm;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class StormTopologyDriver {

    public static void main(String[] args) throws InvalidTopologyException, AuthorizationException, AlreadyAliveException {
        TopologyBuilder topologyBuilder = new TopologyBuilder();

        // Kafka spout: first argument is the broker list (port 9092, not ZooKeeper's 2181), second is the topic.
        KafkaSpoutConfig.Builder builder = new KafkaSpoutConfig.Builder("mini1:9092", "wordCount");
        builder.setProp("group.id", "wordCount");
        builder.setProp("enable.auto.commit", "true");
        builder.setProp("auto.commit.interval.ms", "1000");
        builder.setProp("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        builder.setProp("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        topologyBuilder.setSpout("kafkaSpout", new KafkaSpout<>(builder.build()));

        topologyBuilder.setBolt("wordCountSplit", new WordCountSplit()).shuffleGrouping("kafkaSpout");
        topologyBuilder.setBolt("wordCount", new WordCount()).shuffleGrouping("wordCountSplit");

        // Persist the results to HDFS.
        // Field delimiter for each output record
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter(",");
        // Sync to HDFS every 5 tuples
        SyncPolicy syncPolicy = new CountSyncPolicy(5);
        // Rotate to a new file once the current one reaches 1 MB
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(1.0f, FileSizeRotationPolicy.Units.MB);
        // Output directory on HDFS
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/storm");
        HdfsBolt hdfsBolt = new HdfsBolt().withFsUrl("hdfs://mini1:9000").withFileNameFormat(fileNameFormat)
                .withRecordFormat(format).withRotationPolicy(rotationPolicy).withSyncPolicy(syncPolicy);
        topologyBuilder.setBolt("hdfsBolt", hdfsBolt).shuffleGrouping("wordCount");

        Config config = new Config();
        config.setNumWorkers(2);
        // Local mode:
        // LocalCluster localCluster = new LocalCluster();
        // localCluster.submitTopology("countWords", config, topologyBuilder.createTopology());
        // Cluster mode:
        StormSubmitter.submitTopology("countWords", config, topologyBuilder.createTopology());
    }
}
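Note that WordCount emits the whole list of "word:count" strings as a single "total" field, so each record the HdfsBolt writes is that list's toString(), not one word per line. A small sketch of the resulting record shape (our approximation of what DelimitedRecordFormat does for a one-field tuple, not storm-hdfs code):

```java
import java.util.Arrays;
import java.util.List;

public class HdfsRecordPreview {
    // For a single-field tuple, the delimited format reduces to the field's
    // toString() plus the record delimiter ("\n" by default).
    static String formatRecord(List<Object> totalField) {
        return totalField.toString() + "\n";
    }

    public static void main(String[] args) {
        List<Object> total = Arrays.asList("storm:4", "kafka:2", "hdfs:1");
        System.out.print(formatRecord(total));
        // [storm:4, kafka:2, hdfs:1]
    }
}
```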