Using Flume as a Data Source in Spark Streaming
There are two ways to do it: in the first, Spark Streaming starts a receiver that listens on a port and Flume pushes data to it; in the second, Spark Streaming polls Flume on a time-based schedule and pulls the data itself.
At first I thought only the first approach existed, but the problem is that the node the receiver comes up on is unpredictable, so every time I restarted the streaming job I had to go back and edit the Flume sinks, which was a real pain. Only later did I discover the second approach. Below is the code for both; the differences are actually small. (Code taken from the official Spark examples on GitHub.)
First approach: listen on a port and let Flume push events:
package org.apache.spark.examples.streaming

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam

/**
 * Produces a count of events received from Flume.
 *
 * This should be used in conjunction with an AvroSink in Flume. It will start
 * an Avro server at the requested host:port address and listen for requests.
 * Your Flume AvroSink should be pointed to this address.
 *
 * Usage: FlumeEventCount <host> <port>
 *   <host> is the host the Flume receiver will be started on - a receiver
 *          creates a server and listens for flume events.
 *   <port> is the port the Flume receiver will listen on.
 *
 * To run this example:
 *   `$ bin/run-example org.apache.spark.examples.streaming.FlumeEventCount <host> <port>`
 */
object FlumeEventCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(host, IntParam(port)) = args

    val batchInterval = Milliseconds(2000)

    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount")
    val ssc = new StreamingContext(sparkConf, batchInterval)

    // Create a flume stream
    val stream = FlumeUtils.createStream(ssc, host, port, StorageLevel.MEMORY_ONLY_SER_2)

    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
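On the Flume side of this push-based setup, the agent needs an Avro sink pointed at the host:port you pass to FlumeEventCount. Here is a minimal sketch of such an agent configuration; the agent name, the netcat source, the channel name, and the host/port values are placeholders of mine, not anything the example mandates:

# Flume agent config (sketch) for the push-based approach.
agent1.sources = netcatSrc
agent1.channels = memCh
agent1.sinks = avroSink

# Example source: read lines from a netcat port (placeholder; use your real source)
agent1.sources.netcatSrc.type = netcat
agent1.sources.netcatSrc.bind = 0.0.0.0
agent1.sources.netcatSrc.port = 44444
agent1.sources.netcatSrc.channels = memCh

# In-memory channel buffering events between source and sink
agent1.channels.memCh.type = memory
agent1.channels.memCh.capacity = 10000

# Avro sink that pushes events to the Spark Streaming receiver;
# hostname/port must match the <host> <port> given to FlumeEventCount,
# i.e. the node where the receiver actually starts.
agent1.sinks.avroSink.type = avro
agent1.sinks.avroSink.hostname = spark-receiver-host
agent1.sinks.avroSink.port = 41414
agent1.sinks.avroSink.channel = memCh

This is exactly where the restart problem mentioned above comes from: if the receiver lands on a different node, the hostname in this sink has to be changed.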
Second approach: poll Flume and pull the data actively:
package org.apache.spark.examples.streaming

import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
import org.apache.spark.util.IntParam

/**
 * Produces a count of events received from Flume.
 *
 * This should be used in conjunction with the Spark Sink running in a Flume agent. See
 * the Spark Streaming programming guide for more details.
 *
 * Usage: FlumePollingEventCount <host> <port>
 *   `host` is the host on which the Spark Sink is running.
 *   `port` is the port at which the Spark Sink is listening.
 *
 * To run this example:
 *   `$ bin/run-example org.apache.spark.examples.streaming.FlumePollingEventCount [host] [port]`
 */
object FlumePollingEventCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumePollingEventCount <host> <port>")
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(host, IntParam(port)) = args

    val batchInterval = Milliseconds(2000)

    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumePollingEventCount")
    val ssc = new StreamingContext(sparkConf, batchInterval)

    // Create a flume stream that polls the Spark Sink running in a Flume agent
    val stream = FlumeUtils.createPollingStream(ssc, host, port)

    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
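For the polling approach, the Flume agent runs a custom Spark sink instead, and the Spark Streaming job connects to it. As far as I know this requires the spark-streaming-flume-sink jar (plus scala-library and commons-lang3) on the Flume agent's classpath. A rough agent config sketch, again with made-up agent/source/channel names and host/port values:

# Flume agent config (sketch) for the pull-based approach.
agent1.sources = logSrc
agent1.channels = memCh
agent1.sinks = sparkSink

# Example source: tail an application log (placeholder; use your real source)
agent1.sources.logSrc.type = exec
agent1.sources.logSrc.command = tail -F /var/log/app.log
agent1.sources.logSrc.channels = memCh

# In-memory channel buffering events until Spark Streaming polls them
agent1.channels.memCh.type = memory
agent1.channels.memCh.capacity = 100000

# Custom sink that holds events for FlumeUtils.createPollingStream;
# hostname/port refer to the Flume agent itself, so they stay fixed
# no matter where the Spark receiver restarts.
agent1.sinks.sparkSink.type = org.apache.spark.streaming.flume.sink.SparkSink
agent1.sinks.sparkSink.hostname = flume-agent-host
agent1.sinks.sparkSink.port = 41414
agent1.sinks.sparkSink.channel = memCh

Because the listening address now belongs to the Flume agent rather than to whichever node Spark happens to start the receiver on, restarting the streaming job no longer requires touching the Flume configuration.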