A small Spark + Kafka example
(1) Download the Kafka jar package, then write the Spark Streaming consumer:
package com.sparkstreaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.kafka.common.serialization.StringDeserializer

object SparkStreamKaflaWordCount {
  def main(args: Array[String]): Unit = {
    // create the StreamingContext
    val conf = new SparkConf().setMaster("spark://192.168.177.120:7077")
      .setAppName("SparkStreamKaflaWordCount Demo")
    // batch interval (example value: 5 seconds)
    val ssc = new StreamingContext(conf, Seconds(5))
    // topic(s) to subscribe to
    // var topic = Map("test" -> 1)
    val topic = Array("test")
    // consumer group
    val group = "con-consumer-group"
    // consumer configuration
    val kafkaParam = Map(
      // brokers used for the initial connection to the cluster
      "bootstrap.servers" -> "192.168.177.120:9092,anotherhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      // identifies which consumer group this consumer belongs to
      "group.id" -> group,
      // used when there is no initial offset, or the current offset no longer exists on the server;
      // "latest" resets the offset to the latest available one
      "auto.offset.reset" -> "latest",
      // if true, the consumer's offsets are committed automatically in the background
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    // create the DStream that returns the received input data
    val stream = KafkaUtils.createDirectStream[String, String](ssc, PreferConsistent, Subscribe[String, String](topic, kafkaParam))
    // each element of the stream is a ConsumerRecord
    stream.map(s => (s.key(), s.value())).print()
    ssc.start()
    ssc.awaitTermination()
  }
}
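The job above only prints (key, value) pairs. Because enable.auto.commit is set to false, nothing in it ever commits the consumed offsets either. Below is a minimal sketch, not part of the original code, of what a fuller processing loop might look like: it counts words in the record values and commits the offsets back to Kafka after each batch. It reuses the stream defined above and would sit alongside (or instead of) the print call, before ssc.start():

import org.apache.spark.streaming.kafka010.{CanCommitOffsets, HasOffsetRanges}

stream.foreachRDD { rdd =>
  // the Kafka offset ranges covered by this batch
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // a simple word count over the record values of this batch
  rdd.flatMap(record => record.value().split(" "))
    .map(word => (word, 1))
    .reduceByKey(_ + _)
    .collect()
    .foreach(println)
  // commit the offsets asynchronously once the batch has been handled
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}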
(2) Configure and start ZooKeeper. zoo1.cfg follows the stock sample configuration:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/home/zhangxs/datainfo/developmentData/zookeeper/zkdata1
# the port at which the clients will connect
clientPort=2181
server.1=zhangxs:2888:3888
zkServer.sh start zoo1.cfg
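As an optional check, zkServer.sh also has a status command that reports whether the instance is up:

zkServer.sh status zoo1.cfg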
(3) Start the Kafka broker:
【bin/kafka-server-start.sh config/server.properties】
[root@zhangxs kafka_2.]# bin/kafka-server-start.sh config/server.properties
INFO KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = null
    advertised.port = null
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    broker.id.generation.enable = true
    broker.rack = null
    compression.type = producer
    controlled.shutdown.enable = true
    create.topic.policy.class.name = null
    delete.topic.enable = false
    inter.broker.listener.name = null
    ...
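auto.create.topics.enable is true in the broker config above, so the test topic will be created automatically the first time it is used. If you prefer to create it explicitly, the usual command for this Kafka generation looks like the following (assuming ZooKeeper is reachable at the clientPort configured above):

bin/kafka-topics.sh --create --zookeeper 192.168.177.120:2181 --replication-factor 1 --partitions 1 --topic test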
(4) Start a console producer on the test topic:
[root@zhangxs kafka_2.]# bin/kafka-console-producer.sh --broker-list 192.168.177.120:9092 --topic test
(5) Submit the Spark application to the cluster:
./spark-submit --class com.sparkstreaming.SparkStreamKaflaWordCount /usr/local/development/spark-2.0/jars/streamkafkademo.jar
(6) Type a message into the producer console:
zhang xing sheng
The driver log then shows the batch being scheduled and the record being printed:
INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Launching task on executor, hostname: 192.168.177.120
INFO storage.BlockManagerInfo: Added broadcast_99_piece0 in memory on 192.168.177.120 (size: 1913.0 B, free: 366.3 MB)
INFO scheduler.TaskSetManager: Finished task 0.0 in stage 99.0 on 192.168.177.120
INFO scheduler.TaskSchedulerImpl: Removed TaskSet 99.0, whose tasks have all completed, from pool
INFO scheduler.DAGScheduler: ResultStage (print at SparkStreamKaflaWordCount.scala) finished in 0.019 s
INFO scheduler.DAGScheduler: Job finished: print at SparkStreamKaflaWordCount.scala, took 0.023450 s
-------------------------------------------
Time: ... ms
-------------------------------------------
(null,zhang xing sheng)
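The key in the output is null because kafka-console-producer sends unkeyed messages by default. If you want a non-null key, the producer can be told to parse one out of each input line; this is an optional variation, with ':' chosen arbitrarily as the separator:

bin/kafka-console-producer.sh --broker-list 192.168.177.120:9092 --topic test --property parse.key=true --property key.separator=:

Typing user1:zhang xing sheng would then print (user1,zhang xing sheng) on the Spark side.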
Problems encountered

(1) The project is missing the Spark Streaming dependency; add it to the pom:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
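KafkaUtils and the location/consumer strategies used in the code live in a separate artifact, so the pom most likely also needs the Kafka 0.10 integration module (the version here is assumed to match the spark-streaming version above):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.1.0</version>
</dependency>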

(2) When submitting the Spark application, a class-not-found error is thrown:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/common/serialization/StringDeserializer
at com.sparkstreaming.SparkStreamKaflaWordCount$.main(SparkStreamKaflaWordCount.scala:)
at com.sparkstreaming.SparkStreamKaflaWordCount.main(SparkStreamKaflaWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
------------------------------------------------------------------------
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/streaming/kafka010/KafkaUtils$
at com.sparkstreaming.SparkStreamKaflaWordCount$.main(SparkStreamKaflaWordCount.scala:)
at com.sparkstreaming.SparkStreamKaflaWordCount.main(SparkStreamKaflaWordCount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
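Both NoClassDefFoundError cases mean the Kafka client classes and the spark-streaming-kafka-0-10 classes are not on the driver/executor classpath at runtime. A common fix is either to copy those jars into Spark's jars directory or to pass them at submit time with --jars; a sketch of the latter, with placeholder paths that should point at the jars from your own build:

./spark-submit --class com.sparkstreaming.SparkStreamKaflaWordCount \
  --jars /path/to/spark-streaming-kafka-0-10_2.11-2.1.0.jar,/path/to/kafka-clients.jar \
  /usr/local/development/spark-2.0/jars/streamkafkademo.jar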