Spark Streaming notes
Study notes from a Spark Streaming project

Why Flume + Kafka?
Incoming data has peaks and troughs. If peak traffic flows straight from Flume into Spark/Storm, the real-time processing layer can easily fall behind and buckle under the load. Putting Kafka between Flume and Spark adds a message buffer queue: Spark pulls data from Kafka at its own pace, so Kafka absorbs the spikes.
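The buffering idea can be sketched (purely as an illustration, not Kafka code) with a bounded in-memory queue: the producer blocks when the buffer is full instead of overwhelming the consumer, and the consumer drains at its own pace. The class name and sizes here are made up for the demo.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BufferDemo {
    // A bounded queue standing in for the Kafka buffer: producers block
    // (back-pressure) when it is full instead of flooding the consumer.
    static final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(1000);

    public static void main(String[] args) {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                try {
                    buffer.put("event_" + i); // blocks while the buffer is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                try {
                    buffer.take(); // drains at its own pace
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        producer.start();
        consumer.start();
        try {
            producer.join();
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("remaining=" + buffer.size()); // remaining=0
    }
}
```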

Flume architecture:
Reference: http://flume.apache.org/releases/content/1.9.0/FlumeUserGuide.html
Start an agent:
bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
Create example.conf:
# example.conf: A single-node Flume configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Open another terminal to test:

$ telnet localhost 44444
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Hello world! <ENTER>
OK
Flume will output:

12/06/19 15:32:19 INFO source.NetcatSource: Source starting
12/06/19 15:32:19 INFO source.NetcatSource: Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
12/06/19 15:32:34 INFO sink.LoggerSink: Event: { headers:{} body: 48 65 6C 6C 6F 20 77 6F 72 6C 64 21 0D    Hello world!. }
Kafka architecture:
producer: publishes messages
consumer: reads messages
broker: the server that buffers and relays messages
topic: the category a message is published to
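As a toy illustration of how these four pieces fit together (this is a deliberately simplified in-memory sketch, not how Kafka is implemented): a broker keeps an append-only log per topic, producers append to it, and each consumer reads from its own offset.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy in-memory "broker": topics map to append-only logs. Real Kafka adds
// partitions, replication, and persistence on top of this basic idea.
public class ToyBroker {
    private final Map<String, List<String>> topics = new HashMap<>();

    // producer side: append a message to the topic's log
    public void produce(String topic, String message) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(message);
    }

    // consumer side: read the message at a given offset, or null if none yet
    public String consume(String topic, int offset) {
        List<String> log = topics.getOrDefault(topic, new ArrayList<>());
        return offset < log.size() ? log.get(offset) : null;
    }

    public static void main(String[] args) {
        ToyBroker broker = new ToyBroker();
        broker.produce("halo", "hello");
        broker.produce("halo", "world");
        int offset = 0; // each consumer tracks its own position in the log
        System.out.println(broker.consume("halo", offset++)); // hello
        System.out.println(broker.consume("halo", offset++)); // world
    }
}
```

Because messages stay in the log after being read, multiple consumers can each read the same topic independently — which is exactly what `--from-beginning` relies on below.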

Installation:
Download -> extract -> edit the configuration
Add environment variables:

$ vim ~/.bash_profile
……
export ZK_HOME=/home/centos/develop/zookeeper
export PATH=$ZK_HOME/bin:$PATH
export KAFKA_HOME=/home/centos/develop/kafka
export PATH=$KAFKA_HOME/bin:$PATH
Start ZooKeeper:
zkServer.sh start
Check ZooKeeper status:
zkServer.sh status

$ vim config/server.properties
# settings that need to be changed:
broker.id=1
listeners=PLAINTEXT://:9092
log.dirs=/home/centos/app/kafka-logs
Start Kafka in the background:
nohup kafka-server-start.sh $KAFKA_HOME/config/server.properties &
Create a topic:
kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 1 --partitions 1 --topic halo
-- Note: 2181 is the ZooKeeper port
List topics:
kafka-topics.sh --list --zookeeper node1:2181
Produce messages to topic halo:
kafka-console-producer.sh --broker-list node1:9092 --topic halo
-- Note: 9092 is the Kafka broker port
Consume messages from topic halo:
kafka-console-consumer.sh --zookeeper node1:2181 --topic halo --from-beginning
Setting up a multi-broker cluster
Copy server.properties:

> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties

Edit them (broker.id must be unique per broker; the base server.properties above already uses broker.id=1):

config/server-1.properties:
broker.id=2
listeners=PLAINTEXT://:9093
log.dirs=/home/centos/app/kafka-logs-1

config/server-2.properties:
broker.id=3
listeners=PLAINTEXT://:9094
log.dirs=/home/centos/app/kafka-logs-2

Now start the extra brokers in the background:

> nohup kafka-server-start.sh $KAFKA_HOME/config/server-1.properties &
...
> nohup kafka-server-start.sh $KAFKA_HOME/config/server-2.properties &
...
Now create a topic with three replicas:

> bin/kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 3 --partitions 1 --topic replicated-halo

Now let's look at the topic's details:

> bin/kafka-topics.sh --describe --zookeeper node1:2181 --topic replicated-halo
Topic:replicated-halo   PartitionCount:1   ReplicationFactor:3   Configs:
    Topic: replicated-halo  Partition: 0  Leader: 2  Replicas: 2,1,0  Isr: 2,1,0
- "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
- "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
- "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
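Put differently, the ISR is just the replica list filtered down to the brokers that are alive and caught up with the leader. A small sketch (the broker ids mirror the sample describe output above; the method name is made up for the demo):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class IsrDemo {
    // ISR = replicas filtered to brokers that are alive and caught up,
    // preserving the replica list's order.
    public static List<Integer> isr(List<Integer> replicas, Set<Integer> aliveAndCaughtUp) {
        return replicas.stream()
                .filter(aliveAndCaughtUp::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> replicas = Arrays.asList(2, 1, 0); // Replicas: 2,1,0
        // suppose broker 0 has died or fallen behind the leader:
        Set<Integer> aliveAndCaughtUp = new HashSet<>(Arrays.asList(1, 2));
        System.out.println("Isr: " + isr(replicas, aliveAndCaughtUp)); // Isr: [2, 1]
    }
}
```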
(Tip: jps -m shows each Java process with its arguments.)
A Kafka producer example:

package com.lin.spark.kafka;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.util.Properties;

/**
 * Created by Administrator on 2019/6/1.
 */
public class KafkaProducer extends Thread {
    private String topic;
    private Producer<Integer, String> producer;

    public KafkaProducer(String topic) {
        this.topic = topic;
        Properties properties = new Properties();
        properties.put("metadata.broker.list", KafkaProperities.BROKER_LIST);
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("request.required.acks", "1");
        producer = new Producer<Integer, String>(new ProducerConfig(properties));
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            String message = "message_" + messageNo;
            producer.send(new KeyedMessage<Integer, String>(topic, message));
            System.out.println("Send:" + message);
            messageNo++;
            try {
                Thread.sleep(2000); // send one message every 2 seconds
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // quick test
    public static void main(String[] args) {
        KafkaProducer producer = new KafkaProducer("halo");
        producer.start(); // start() runs the loop on its own thread
    }
}
Verify by consuming the data:
> kafka-console-consumer.sh --zookeeper node1:2181 --topic halo --from-beginning

The corresponding consumer code:

package com.lin.spark.kafka;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

/**
 * Created by Administrator on 2019/6/2.
 */
public class KafkaConsumer extends Thread {
    private String topic;

    public KafkaConsumer(String topic) {
        this.topic = topic;
    }

    private ConsumerConnector createConnector() {
        Properties properties = new Properties();
        properties.put("zookeeper.connect", KafkaProperities.ZK);
        properties.put("group.id", KafkaProperities.GROUP_ID);
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(properties));
    }

    @Override
    public void run() {
        ConsumerConnector consumer = createConnector();
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, 1); // one stream (thread) for this topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams = consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> kafkaStream = streams.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> iterator = kafkaStream.iterator();
        while (iterator.hasNext()) {
            String result = new String(iterator.next().message());
            System.out.println("result:" + result);
        }
    }

    public static void main(String[] args) {
        KafkaConsumer kafkaConsumer = new KafkaConsumer("halo");
        kafkaConsumer.start();
    }
}
A simple Kafka + Spark Streaming integration example:
Start Kafka and produce some data:
> kafka-console-producer.sh --broker-list 172.16.182.97:9092 --topic halo
Version 1: hard-coded parameters:
package com.lin.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaStreaming {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkStreamingKafkaWordCount").setMaster("local[4]")
    val ssc = new StreamingContext(conf, Seconds(5))
    val topicMap = "halo".split(":").map((_, 1)).toMap // topic -> number of consumer threads
    val zkQuorum = "hadoop:2181"
    val group = "consumer-group"
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    lines.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
Version 2: parameters from the command line:

package com.lin.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaStreaming {
  def main(args: Array[String]): Unit = {
    if (args.length != 4) {
      System.err.println("Usage: KafkaStreaming <zkQuorum> <group> <topics> <numThreads>")
      System.exit(1)
    }
    // example args: hadoop:2181 consumer-group halo,hello_topic 2
    val Array(zkQuorum, group, topics, numThreads) = args
    val conf = new SparkConf().setAppName("SparkStreamingKafkaWordCount").setMaster("local[4]")
    val ssc = new StreamingContext(conf, Seconds(5))
    val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap
    val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)
    lines.print()
    ssc.start()
    ssc.awaitTermination()
  }
}