Using a Flume data source in Spark Streaming
There are two ways to do this. In the first, the Spark Streaming job brings up a listener (a Flume receiver on one of the workers) and Flume pushes data to it; in the second, Spark Streaming polls Flume on its batch schedule and pulls the data.
At first I thought only the first method existed, but the damn problem is that the node where the receiver comes up is unpredictable, so every time I restarted the streaming job I had to go edit the Flume sinks — a huge pain. Only later did I discover the second method. Fine, here is the code for both; they barely differ. (Code adapted from the official GitHub examples.)
Method 1: listen on a port (Flume pushes):
package org.apache.spark.examples.streaming

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam

/**
 * Produces a count of events received from Flume.
 *
 * This should be used in conjunction with an AvroSink in Flume. It will start
 * an Avro server at the requested host:port address and listen for requests.
 * Your Flume AvroSink should be pointed to this address.
 *
 * Usage: FlumeEventCount <host> <port>
 *   <host> is the host the Flume receiver will be started on - a receiver
 *          creates a server and listens for flume events.
 *   <port> is the port the Flume receiver will listen on.
 *
 * To run this example:
 *   `$ bin/run-example org.apache.spark.examples.streaming.FlumeEventCount <host> <port>`
 */
object FlumeEventCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(host, IntParam(port)) = args
    val batchInterval = Milliseconds(2000)

    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount")
    val ssc = new StreamingContext(sparkConf, batchInterval)

    // Create a flume stream
    val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)

    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
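For push mode, the Flume agent's avro sink has to point at whatever host:port the receiver comes up on. A minimal agent config sketch, assuming an agent named a1, an exec source tailing a log file, and made-up hostname/port values — none of these come from the post, adjust them to your setup:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Assumed source for illustration: tail a log file
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

# The avro sink must target the <host> <port> passed to FlumeEventCount
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = spark-receiver-host
a1.sinks.k1.port = 9999
a1.sinks.k1.channel = c1

The hostname/port lines are exactly what has to be re-edited whenever the receiver lands on a different node after a restart — the pain point described at the top.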
Method 2: actively poll Flume and pull the data:
package org.apache.spark.examples.streaming

import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam

/**
 * Produces a count of events received from Flume.
 *
 * This should be used in conjunction with the Spark Sink running in a Flume agent. See
 * the Spark Streaming programming guide for more details.
 *
 * Usage: FlumePollingEventCount <host> <port>
 *   `host` is the host on which the Spark Sink is running.
 *   `port` is the port at which the Spark Sink is listening.
 *
 * To run this example:
 *   `$ bin/run-example org.apache.spark.examples.streaming.FlumePollingEventCount [host] [port]`
 */
object FlumePollingEventCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumePollingEventCount <host> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(host, IntParam(port)) = args
    val batchInterval = Milliseconds(2000)

    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumePollingEventCount")
    val ssc = new StreamingContext(sparkConf, batchInterval)

    // Create a flume stream that polls the Spark Sink running in a Flume agent
    val stream = FlumeUtils.createPollingStream(ssc, host, port)

    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
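Pull mode reverses the roles: the Flume agent runs Spark's own sink, org.apache.spark.streaming.flume.sink.SparkSink (the spark-streaming-flume-sink jar and its Scala dependency must be on the Flume agent's classpath), buffers events in its channel, and waits for Spark to connect and pull. A minimal sink config sketch, assuming an agent a1 with an existing channel c1 and a made-up port:

a1.sinks = spark
a1.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.spark.hostname = flume-agent-host
a1.sinks.spark.port = 9988
a1.sinks.spark.channel = c1

Because this address lives on the Flume side and never moves, restarting the streaming job no longer means touching the Flume config.

FlumeUtils.createPollingStream also has an overload that takes a Seq[InetSocketAddress] plus a storage level — which is why the example above imports java.net.InetSocketAddress — so a single stream can poll several agents. A sketch with assumed hostnames:

// Poll the Spark Sinks of several Flume agents from a single stream
// (hostnames and port here are placeholders, not from the post)
val addresses = Seq(
  new InetSocketAddress("flume-host-1", 9988),
  new InetSocketAddress("flume-host-2", 9988))
val stream = FlumeUtils.createPollingStream(
  ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)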
修改cobbler的默认密码: 用 openssl 生成一串密码后加入到 cobbler 的配置文件(/etc/cobbler/settings)里,替换 default_password_crypt ...