Integrating Spark Streaming with Flume: push mode

In the push-based approach, Flume pushes events through an avro sink to a receiver that Spark Streaming starts via FlumeUtils.createStream, so the Spark application must be running and listening on the configured host/port before the Flume agent starts sending.

package cn.my.sparkStream

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
object SparkFlumePush {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }
    // LogLevel is a helper in this package (not shown here) that reduces Spark's log verbosity
    LogLevel.setStreamingLogLevels()
    val Array(host, port) = args
    val batchInterval = Milliseconds(2000)
    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, batchInterval)
    // Create a flume stream (push mode: Spark hosts an avro receiver that Flume pushes to)
    val stream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_ONLY_SER_2)
    // Print out the count of events received from this server in each batch
    stream.count().map(cnt => "Received " + cnt + " flume events.").print()
    // Each record is a SparkFlumeEvent; the event body carries the actual message payload
    stream.flatMap(t => new String(t.event.getBody.array()).split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()
    ssc.start()
    ssc.awaitTermination()
  }
}
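For the push approach to work, the Flume agent needs an avro sink pointed at the <host> and <port> passed to SparkFlumePush above. The sketch below is an assumed agent configuration for illustration only: the agent name a1, the netcat source, and the memory channel are placeholders, and mini1:8888 is just an example address.

    # Hypothetical Flume agent for push mode: the avro sink pushes events to the Spark receiver
    a1.sources  = r1
    a1.channels = c1
    a1.sinks    = k1

    a1.sources.r1.type = netcat        # placeholder source, handy for manual testing
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    a1.sources.r1.channels = c1

    a1.channels.c1.type = memory

    a1.sinks.k1.type = avro            # push mode uses a plain avro sink
    a1.sinks.k1.hostname = mini1       # <host> argument given to SparkFlumePush
    a1.sinks.k1.port = 8888            # <port> argument given to SparkFlumePush
    a1.sinks.k1.channel = c1

Integrating Spark Streaming with Flume: pull mode (polling)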

package cn.my.sparkStream

import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.flume._
object SparkFlumePull {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println("Usage: FlumeEventCount <host> <port>")
      System.exit(1)
    }
    LogLevel.setStreamingLogLevels()
    val Array(host, port) = args
    val batchInterval = Milliseconds(2000)
    // Create the context and set the batch size
    val sparkConf = new SparkConf().setAppName("FlumeEventCount").setMaster("local[2]")
    val ssc = new StreamingContext(sparkConf, batchInterval)
    // Pull mode: Spark polls a Flume agent that runs the custom SparkSink, instead of
    // hosting a receiver itself (push mode would use FlumeUtils.createStream as above).
    // createPollingStream accepts a single host/port or, as below, a list of sink
    // addresses when more than one Flume sink is available:
    //   def createPollingStream(
    //       ssc: StreamingContext,
    //       addresses: Seq[InetSocketAddress],
    //       storageLevel: StorageLevel): ReceiverInputDStream[SparkFlumeEvent]
    val flumeSinkList = Array(new InetSocketAddress("mini1", 8888))
    val flumeStream = FlumeUtils.createPollingStream(ssc, flumeSinkList, StorageLevel.MEMORY_ONLY_2)
    // Print out the count of events received from this server in each batch
    flumeStream.count().map(cnt => "Received " + cnt + " flume events.").print()
    // Each record is a SparkFlumeEvent; the event body carries the actual message payload
    flumeStream.flatMap(t => new String(t.event.getBody.array()).split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()
    ssc.start()
    ssc.awaitTermination()
  }
}
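The pull approach needs a matching Flume agent that runs Spark's custom sink instead of a plain avro sink. Below is an assumed agent configuration for illustration; the agent name, netcat source, and memory channel are placeholders, while the hostname and port must match the InetSocketAddress used in SparkFlumePull.

    # Hypothetical Flume agent for pull mode: events are buffered in the SparkSink
    # and SparkFlumePull polls them with FlumeUtils.createPollingStream
    a1.sources  = r1
    a1.channels = c1
    a1.sinks    = spark

    a1.sources.r1.type = netcat        # placeholder source
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    a1.sources.r1.channels = c1

    a1.channels.c1.type = memory

    a1.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
    a1.sinks.spark.hostname = mini1    # must match the InetSocketAddress in SparkFlumePull
    a1.sinks.spark.port = 8888
    a1.sinks.spark.channel = c1

Note that the SparkSink class is not part of Flume itself; it ships in the separate spark-streaming-flume-sink artifact, which (together with a matching scala-library jar) has to be on the Flume agent's classpath. The integration guide linked below lists the exact jars for each Spark version.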
 

http://spark.apache.org/docs/1.6.3/streaming-flume-integration.html
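On the application side, both examples assume the spark-streaming-flume artifact is on the classpath. A minimal build.sbt sketch, assuming Spark 1.6.3 to match the guide above (adjust the version and scope to your cluster):

    // build.sbt (sketch): Spark Streaming + Flume integration dependencies
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-streaming"       % "1.6.3" % "provided",
      "org.apache.spark" %% "spark-streaming-flume" % "1.6.3"
    )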
