A small Spark + Kafka example
(1) Download the Kafka jar packages.
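With Maven, the integration jars can be downloaded from Maven Central and dropped into Spark's jars directory on each node. A sketch (the jar file names are assumptions for Spark 2.1.0 / Scala 2.11; kafka-clients 0.10.0.1 is the version that module is believed to pull in):
cp spark-streaming-kafka-0-10_2.11-2.1.0.jar /usr/local/development/spark-2.0/jars/
cp kafka-clients-0.10.0.1.jar /usr/local/development/spark-2.0/jars/
With the jars in place, the streaming job looks like this: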
package com.sparkstreaming
import org.apache.spark.SparkConf
import org.apache.spark.streaming.Seconds
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.kafka.common.serialization.StringDeserializer
object SparkStreamKaflaWordCount {
  def main(args: Array[String]): Unit = {
    // Create the StreamingContext; the batch interval was elided in the original, 5 seconds is assumed here
    val conf = new SparkConf()
      .setMaster("spark://192.168.177.120:7077")
      .setAppName("SparkStreamKaflaWordCount Demo")
    val ssc = new StreamingContext(conf, Seconds(5))
    // Topic(s) to subscribe to
    val topic = Array("test")
    // Consumer group this job belongs to
    val group = "con-consumer-group"
    // Consumer configuration
    val kafkaParam = Map[String, Object](
      // brokers used to bootstrap the connection to the cluster
      "bootstrap.servers" -> "192.168.177.120:9092,anotherhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      // identifies which consumer group this consumer belongs to
      "group.id" -> group,
      // where to start when there is no initial offset, or the current offset
      // no longer exists on the server; "latest" resets to the newest offset
      "auto.offset.reset" -> "latest",
      // if true, the consumer's offsets are committed automatically in the background
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    // Create a direct DStream; each element of the stream is a ConsumerRecord
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](topic, kafkaParam))
    stream.map(s => (s.key(), s.value())).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
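Despite its name, the job above only prints each record's (key, value) pair. A minimal word count over the same stream could look like the following sketch (same DStream API; not part of the original post):
    // Per-batch word count over the message values (sketch)
    stream.map(_.value())          // take the message payload
      .flatMap(_.split(" "))       // split into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)          // count occurrences within the batch
      .print()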
Next, configure a ZooKeeper node (zoo1.cfg). The numeric values were elided in the original; the stock sample settings are assumed here:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/zhangxs/datainfo/developmentData/zookeeper/zkdata1
# the port at which the clients will connect
clientPort=2181
server.1=zhangxs:2888:3888
zkServer.sh start zoo1.cfg
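To confirm the node came up, the same script can report its state (mirroring the start invocation above):
zkServer.sh status zoo1.cfg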
Start the Kafka broker: 【bin/kafka-server-start.sh config/server.properties】
[root@zhangxs kafka_2.]# bin/kafka-server-start.sh config/server.properties
INFO KafkaConfig values:
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	authorizer.class.name =
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	broker.id.generation.enable = true
	broker.rack = null
	compression.type = producer
	controlled.shutdown.enable = true
	create.topic.policy.class.name = null
	delete.topic.enable = false
	host.name =
	inter.broker.listener.name = null
	...
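The config dump shows auto.create.topics.enable = true, so producing to test will create the topic automatically. To create it explicitly instead (assuming ZooKeeper on its default port 2181):
bin/kafka-topics.sh --create --zookeeper 192.168.177.120:2181 --replication-factor 1 --partitions 1 --topic test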
Start a console producer (the broker port, elided in the original, is 9092 per the consumer configuration above):
[root@zhangxs kafka_2.]# bin/kafka-console-producer.sh --broker-list 192.168.177.120:9092 --topic test
Submit the Spark application:
./spark-submit --class com.sparkstreaming.SparkStreamKaflaWordCount /usr/local/development/spark-2.0/jars/streamkafkademo.jar
Then type a message into the producer console:
zhang xing sheng
The driver log shows the batch being processed:
INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task on executor, hostname: 192.168.177.120
INFO storage.BlockManagerInfo: Added broadcast_99_piece0 in memory on 192.168.177.120 (size: 1913.0 B, free: 366.3 MB)
INFO scheduler.TaskSetManager: Finished task 0.0 in stage 99.0 on 192.168.177.120
INFO scheduler.TaskSchedulerImpl: Removed TaskSet 99.0, whose tasks have all completed, from pool
INFO scheduler.DAGScheduler: ResultStage (print at SparkStreamKaflaWordCount.scala) finished in 0.019 s
INFO scheduler.DAGScheduler: Job finished: print at SparkStreamKaflaWordCount.scala, took 0.023450 s
-------------------------------------------
Time: ms
-------------------------------------------
(null,zhang xing sheng)
The key is null because the console producer sends values without keys by default.
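To publish keyed messages, the console producer can parse a key from each input line (parse.key and key.separator are standard console-producer properties):
bin/kafka-console-producer.sh --broker-list 192.168.177.120:9092 --topic test --property parse.key=true --property key.separator=:
Typing mykey:zhang xing sheng would then appear as (mykey,zhang xing sheng) in the batch output.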
The Maven dependency used for Spark Streaming:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
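spark-streaming alone does not include the Kafka integration classes used above; a companion dependency would likely be needed as well (version assumed to match Spark 2.1.0 / Scala 2.11):
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.0</version>
</dependency>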
(2) A class-not-found error is thrown when submitting the Spark application:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/common/serialization/StringDeserializer
at com.sparkstreaming.SparkStreamKaflaWordCount$.main(SparkStreamKaflaWordCount.scala:)
at com.sparkstreaming.SparkStreamKaflaWordCount.main(SparkStreamKaflaWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
------------------------------------------------------------------------
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/kafka010/KafkaUtils$
at com.sparkstreaming.SparkStreamKaflaWordCount$.main(SparkStreamKaflaWordCount.scala:)
at com.sparkstreaming.SparkStreamKaflaWordCount.main(SparkStreamKaflaWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
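Both stack traces mean the Kafka classes are missing from the driver and executor classpath at runtime. One fix is to let spark-submit resolve them (the coordinates below are an assumption for Spark 2.1.0 / Scala 2.11):
./spark-submit --class com.sparkstreaming.SparkStreamKaflaWordCount --packages org.apache.spark:spark-streaming-kafka-0-10_2.11:2.1.0 /usr/local/development/spark-2.0/jars/streamkafkademo.jar
Alternatively, copy the spark-streaming-kafka-0-10 and kafka-clients jars into the jars directory of the Spark installation on every node, as in step (1).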