A Kafka 0.11 + Spark Streaming (Scala 2.11) example
"""
Counts words in UTF8 encoded, '\n' delimited text received from the network every second.
Usage: kafka_wordcount.py <zk> <topic>
To run this on your local machine, you need to setup Kafka and create a producer first, see
http://kafka.apache.org/documentation.html#quickstart
and then run the example
`$ bin/spark-submit --jars \
external/kafka-assembly/target/scala-*/spark-streaming-kafka-assembly-*.jar \
examples/src/main/python/streaming/kafka_wordcount.py \
localhost:2181 test`
"""
from __future__ import print_function

import sys

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: kafka_wordcount.py <zk> <topic>", file=sys.stderr)
        exit(-1)

    sc = SparkContext(appName="PythonStreamingKafkaWordCount")
    ssc = StreamingContext(sc, 1)

    zkQuorum, topic = sys.argv[1:]
    kvs = KafkaUtils.createStream(ssc, zkQuorum, "spark-streaming-consumer", {topic: 1})
    lines = kvs.map(lambda x: x[1])
    counts = lines.flatMap(lambda line: line.split(" ")) \
        .map(lambda word: (word, 1)) \
        .reduceByKey(lambda a, b: a + b)
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()
/spark-kafka/spark-2.1.1-bin-hadoop2.6# ./bin/spark-submit --jars ~/spark-streaming-kafka-0-8-assembly_2.11-2.2.0.jar examples/src/main/python/streaming/kafka_wordcount.py localhost:2181 test
Here, spark-streaming-kafka-0-8-assembly_2.11-2.2.0.jar can be downloaded from http://search.maven.org/#search%7Cga%7C1%7Cspark-streaming-kafka-0-8-assembly
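Alternatively (an equivalent route, assuming the machine can reach Maven Central), spark-submit can resolve the integration package itself instead of taking a hand-downloaded jar via --jars; matching the package version to the installed Spark (2.1.1 here) is the usual choice:

./bin/spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.1 \
  examples/src/main/python/streaming/kafka_wordcount.py localhost:2181 test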
Kafka version 0.11 is used; its quickstart follows:
1.3 Quick Start
This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. Since Kafka console scripts are different for Unix-based and Windows platforms, on Windows platforms use bin\windows\ instead of bin/, and change the script extension to .bat.
Step 1: Download the code
Download the 0.11.0.0 release and un-tar it.
> tar -xzf kafka_2.11-0.11.0.0.tgz
> cd kafka_2.11-0.11.0.0
Step 2: Start the server
Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
Now start the Kafka server:
> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
Step 3: Create a topic
Let's create a topic named "test" with a single partition and only one replica:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
We can now see that topic if we run the list topic command:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
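For example (a minimal sketch, assuming the stock config/server.properties that ships with the release), auto-creation is controlled by a single broker setting, and auto-created topics pick up the broker defaults:

# config/server.properties
auto.create.topics.enable=true
# defaults applied to auto-created topics
num.partitions=1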
Step 4: Send some messages
Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
Step 5: Start a consumer
Kafka also has a command line consumer that will dump out messages to standard output.
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
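At this point the console producer and the Spark job can run against the same test topic: lines typed into the producer show up as word counts in the Spark driver's console. As an aside, the same kafka-0-8 integration also exposes a receiver-less direct stream that reads from the brokers instead of ZooKeeper; a minimal sketch, reusing the ssc from the example above (the broker address is an assumption matching the quickstart's defaults):

from pyspark.streaming.kafka import KafkaUtils

# Direct approach: pass Kafka broker addresses rather than a ZooKeeper quorum.
directKvs = KafkaUtils.createDirectStream(
    ssc, ["test"], {"metadata.broker.list": "localhost:9092"})
lines = directKvs.map(lambda x: x[1])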