Overview

  • The overall architecture of the project: Nginx web-server access logs are collected by Flume and fanned out to both HDFS (for offline analysis) and Spark Streaming (for the real-time hot-search statistics). (architecture diagram omitted)

  • Regarding the Spark Streaming part:
    1. Flume to Spark Streaming: for simplicity, the push-based approach is used. It can lose data, but it is simple.
    2. Spark Streaming's micro-batch architecture fits this real-time hot-search application quite well.
    3. Spark Streaming implements the real-time hot search on top of a sliding window. The batch interval is still to be decided (around 1 min), the window length is also to be decided (3 to N batch intervals), and the slide interval equals the batch interval; see the interval sketch below.
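  • A minimal sketch of how these intervals relate (the concrete durations below are placeholders, since the real values are still undecided):
    • import org.apache.spark.streaming.{Duration, Minutes}

      // Placeholder values only -- the real intervals are still to be decided.
      val batchInterval: Duration = Minutes(1)     // micro-batch interval, roughly 1 min
      val windowLength: Duration  = Minutes(3)     // window = 3~N * batch interval
      val slideInterval: Duration = batchInterval  // slide = batch interval
      // later: counts.reduceByKeyAndWindow(_ + _, _ - _, windowLength, slideInterval)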

Step 1: Flume Configuration

  • On the Flume side, the previous configuration is extended to multiple channels + multiple sinks, i.e., sinking to both HDFS and Spark Streaming. For the Hadoop-side configuration, see Nginx+Flume+Hadoop日志分析,Ngram+AutoComplete.
  • For the Flume + Spark Streaming integration, the push-based approach is used for now. It is simple, but its fault tolerance is poor.

Flume multi-channel & multi-sink

  • First, Flume supports multiple channels + multiple sinks.
  • Concretely:
    1. Add the extra channel and sink under channels and sinks:

      • clusterLogAgent.sinks = HDFS sink2
        clusterLogAgent.channels = ch1 ch2
    2. Choose the channel selector:
      • As discussed in the earlier Flume source-code reading post, the replicating selector is used here, i.e., the source's events are copied into every channel.
    3. The multi-sink setup should still be configured on the Hadoop cluster side, as one Avro sink plus one HDFS sink; putting it on the web server side would, in theory, waste extra network bandwidth. A hedged configuration sketch follows this list.
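  • A rough sketch of the cluster-side fan-out agent, continuing the clusterLogAgent fragment above. The source name, bind address, and HDFS path are placeholder assumptions; the Avro sink target reuses the 10.3.242.99:4545 address from the test below:
    • # sketch only -- source name, bind address and HDFS path are assumptions
      clusterLogAgent.sources  = avroSrc
      clusterLogAgent.channels = ch1 ch2
      clusterLogAgent.sinks    = HDFS sink2

      # replicating selector: copy every event into both channels
      clusterLogAgent.sources.avroSrc.type = avro
      clusterLogAgent.sources.avroSrc.bind = 0.0.0.0
      clusterLogAgent.sources.avroSrc.port = 4141
      clusterLogAgent.sources.avroSrc.channels = ch1 ch2
      clusterLogAgent.sources.avroSrc.selector.type = replicating

      clusterLogAgent.channels.ch1.type = memory
      clusterLogAgent.channels.ch2.type = memory

      # sink 1: write the raw events to HDFS
      clusterLogAgent.sinks.HDFS.type = hdfs
      clusterLogAgent.sinks.HDFS.channel = ch1
      clusterLogAgent.sinks.HDFS.hdfs.path = /flume/events

      # sink 2: avro sink pushing events to the Spark Streaming receiver
      clusterLogAgent.sinks.sink2.type = avro
      clusterLogAgent.sinks.sink2.channel = ch2
      clusterLogAgent.sinks.sink2.hostname = 10.3.242.99
      clusterLogAgent.sinks.sink2.port = 4545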
  • Tested with FlumeEventCount.scala and everything works, as shown below:
    • On the web server side, run bin/flume-ng agent -n WebAccLo -c conf -f conf/flume-avro.conf
    • On the Spark side, run ./bin/spark-submit --class com.wttttt.spark.FlumeEventCount --master yarn --deploy-mode client --driver-memory 1g --executor-memory 1g --executor-cores 2 /home/hhh/RealTimeLog.jar 10.3.242.99 4545 30000
-------------------------------------------
Time: 1495098360000 ms
-------------------------------------------
Received 10 flume events.
17/05/18 17:06:00 INFO JobScheduler: Finished job streaming job 1495098360000 ms.0 from job set of time 1495098360000 ms
17/05/18 17:06:00 INFO JobScheduler: Total delay: 0.298 s for time 1495098360000 ms (execution: 0.216 s)
17/05/18 17:06:00 INFO ReceivedBlockTracker: Deleting batches:
17/05/18 17:06:00 INFO InputInfoTracker: remove old batch metadata:
17/05/18 17:06:13 INFO BlockManagerInfo: Added input-0-1495098373000 in memory on host99:42342 (size: 319.0 B, free: 366.3 MB)
17/05/18 17:06:17 INFO BlockManagerInfo: Added input-0-1495098377200 in memory on host99:42342 (size: 620.0 B, free: 366.3 MB)
17/05/18 17:06:20 INFO BlockManagerInfo: Added input-0-1495098380400 in memory on host99:42342 (size: 620.0 B, free: 366.3 MB)
17/05/18 17:06:30 INFO JobScheduler: Added jobs for time 1495098390000 ms
17/05/18 17:06:30 INFO JobScheduler: Starting job streaming job 1495098390000 ms.0 from job set of time 1495098390000 ms
17/05/18 17:06:30 INFO SparkContext: Starting job: print at FlumeEventCount.scala:30
17/05/18 17:06:30 INFO DAGScheduler: Registering RDD 14 (union at DStream.scala:605)
17/05/18 17:06:30 INFO DAGScheduler: Got job 4 (print at FlumeEventCount.scala:30) with 1 output partitions
17/05/18 17:06:30 INFO DAGScheduler: Final stage: ResultStage 8 (print at FlumeEventCount.scala:30)
17/05/18 17:06:30 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 7)
17/05/18 17:06:30 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 7)
17/05/18 17:06:30 INFO DAGScheduler: Submitting ShuffleMapStage 7 (UnionRDD[14] at union at DStream.scala:605), which has no missing parents
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_8 stored as values in memory (estimated size 3.3 KB, free 399.5 MB)
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_8_piece0 stored as bytes in memory (estimated size 2.0 KB, free 399.5 MB)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on 10.3.242.99:36107 (size: 2.0 KB, free: 399.6 MB)
17/05/18 17:06:30 INFO SparkContext: Created broadcast 8 from broadcast at DAGScheduler.scala:996
17/05/18 17:06:30 INFO DAGScheduler: Submitting 4 missing tasks from ShuffleMapStage 7 (UnionRDD[14] at union at DStream.scala:605)
17/05/18 17:06:30 INFO YarnScheduler: Adding task set 7.0 with 4 tasks
17/05/18 17:06:30 INFO TaskSetManager: Starting task 0.0 in stage 7.0 (TID 88, host99, executor 1, partition 0, NODE_LOCAL, 7290 bytes)
17/05/18 17:06:30 INFO TaskSetManager: Starting task 3.0 in stage 7.0 (TID 89, host101, executor 2, partition 3, PROCESS_LOCAL, 7470 bytes)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on host99:42342 (size: 2.0 KB, free: 366.3 MB)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_8_piece0 in memory on host101:45692 (size: 2.0 KB, free: 366.3 MB)
17/05/18 17:06:30 INFO TaskSetManager: Starting task 1.0 in stage 7.0 (TID 90, host99, executor 1, partition 1, NODE_LOCAL, 7290 bytes)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 0.0 in stage 7.0 (TID 88) in 22 ms on host99 (executor 1) (1/4)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 3.0 in stage 7.0 (TID 89) in 27 ms on host101 (executor 2) (2/4)
17/05/18 17:06:30 INFO TaskSetManager: Starting task 2.0 in stage 7.0 (TID 91, host99, executor 1, partition 2, NODE_LOCAL, 7290 bytes)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 1.0 in stage 7.0 (TID 90) in 12 ms on host99 (executor 1) (3/4)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 2.0 in stage 7.0 (TID 91) in 11 ms on host99 (executor 1) (4/4)
17/05/18 17:06:30 INFO YarnScheduler: Removed TaskSet 7.0, whose tasks have all completed, from pool
17/05/18 17:06:30 INFO DAGScheduler: ShuffleMapStage 7 (union at DStream.scala:605) finished in 0.045 s
17/05/18 17:06:30 INFO DAGScheduler: looking for newly runnable stages
17/05/18 17:06:30 INFO DAGScheduler: running: Set(ResultStage 2)
17/05/18 17:06:30 INFO DAGScheduler: waiting: Set(ResultStage 8)
17/05/18 17:06:30 INFO DAGScheduler: failed: Set()
17/05/18 17:06:30 INFO DAGScheduler: Submitting ResultStage 8 (MapPartitionsRDD[17] at map at FlumeEventCount.scala:30), which has no missing parents
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_9 stored as values in memory (estimated size 3.8 KB, free 399.5 MB)
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_9_piece0 stored as bytes in memory (estimated size 2.1 KB, free 399.5 MB)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on 10.3.242.99:36107 (size: 2.1 KB, free: 399.6 MB)
17/05/18 17:06:30 INFO SparkContext: Created broadcast 9 from broadcast at DAGScheduler.scala:996
17/05/18 17:06:30 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 8 (MapPartitionsRDD[17] at map at FlumeEventCount.scala:30)
17/05/18 17:06:30 INFO YarnScheduler: Adding task set 8.0 with 1 tasks
17/05/18 17:06:30 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 92, host101, executor 2, partition 0, NODE_LOCAL, 7069 bytes)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_9_piece0 in memory on host101:45692 (size: 2.1 KB, free: 366.3 MB)
17/05/18 17:06:30 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to 10.3.242.101:41672
17/05/18 17:06:30 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 2 is 164 bytes
17/05/18 17:06:30 INFO TaskSetManager: Finished task 0.0 in stage 8.0 (TID 92) in 26 ms on host101 (executor 2) (1/1)
17/05/18 17:06:30 INFO YarnScheduler: Removed TaskSet 8.0, whose tasks have all completed, from pool
17/05/18 17:06:30 INFO DAGScheduler: ResultStage 8 (print at FlumeEventCount.scala:30) finished in 0.027 s
17/05/18 17:06:30 INFO DAGScheduler: Job 4 finished: print at FlumeEventCount.scala:30, took 0.089621 s
17/05/18 17:06:30 INFO SparkContext: Starting job: print at FlumeEventCount.scala:30
17/05/18 17:06:30 INFO DAGScheduler: Got job 5 (print at FlumeEventCount.scala:30) with 3 output partitions
17/05/18 17:06:30 INFO DAGScheduler: Final stage: ResultStage 10 (print at FlumeEventCount.scala:30)
17/05/18 17:06:30 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 9)
17/05/18 17:06:30 INFO DAGScheduler: Missing parents: List()
17/05/18 17:06:30 INFO DAGScheduler: Submitting ResultStage 10 (MapPartitionsRDD[17] at map at FlumeEventCount.scala:30), which has no missing parents
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_10 stored as values in memory (estimated size 3.8 KB, free 399.5 MB)
17/05/18 17:06:30 INFO MemoryStore: Block broadcast_10_piece0 stored as bytes in memory (estimated size 2.1 KB, free 399.5 MB)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on 10.3.242.99:36107 (size: 2.1 KB, free: 399.6 MB)
17/05/18 17:06:30 INFO SparkContext: Created broadcast 10 from broadcast at DAGScheduler.scala:996
17/05/18 17:06:30 INFO DAGScheduler: Submitting 3 missing tasks from ResultStage 10 (MapPartitionsRDD[17] at map at FlumeEventCount.scala:30)
17/05/18 17:06:30 INFO YarnScheduler: Adding task set 10.0 with 3 tasks
17/05/18 17:06:30 INFO TaskSetManager: Starting task 0.0 in stage 10.0 (TID 93, host99, executor 1, partition 1, PROCESS_LOCAL, 7069 bytes)
17/05/18 17:06:30 INFO TaskSetManager: Starting task 1.0 in stage 10.0 (TID 94, host101, executor 2, partition 2, PROCESS_LOCAL, 7069 bytes)
17/05/18 17:06:30 INFO TaskSetManager: Starting task 2.0 in stage 10.0 (TID 95, host101, executor 2, partition 3, PROCESS_LOCAL, 7069 bytes)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on host99:42342 (size: 2.1 KB, free: 366.3 MB)
17/05/18 17:06:30 INFO BlockManagerInfo: Added broadcast_10_piece0 in memory on host101:45692 (size: 2.1 KB, free: 366.3 MB)
17/05/18 17:06:30 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to 10.3.242.99:35937
17/05/18 17:06:30 INFO TaskSetManager: Finished task 0.0 in stage 10.0 (TID 93) in 19 ms on host99 (executor 1) (1/3)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 2.0 in stage 10.0 (TID 95) in 21 ms on host101 (executor 2) (2/3)
17/05/18 17:06:30 INFO TaskSetManager: Finished task 1.0 in stage 10.0 (TID 94) in 22 ms on host101 (executor 2) (3/3)
17/05/18 17:06:30 INFO YarnScheduler: Removed TaskSet 10.0, whose tasks have all completed, from pool
17/05/18 17:06:30 INFO DAGScheduler: ResultStage 10 (print at FlumeEventCount.scala:30) finished in 0.025 s
17/05/18 17:06:30 INFO DAGScheduler: Job 5 finished: print at FlumeEventCount.scala:30, took 0.032431 s
-------------------------------------------
Time: 1495098390000 ms
-------------------------------------------
Received 5 flume events.
  • The next step is to have Flume do a first pass of filtering, dropping log events that contain no search record; a hedged interceptor sketch follows.
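  • One possible way to do that filtering is a regex_filter interceptor on the web-server agent. A sketch, with placeholder agent/source names, reusing the "?Input=" pattern that the Spark code below matches on:
    • # placeholder agent/source names; the regex mirrors the "?Input=" pattern below
      webAgent.sources.src1.interceptors = searchFilter
      webAgent.sources.src1.interceptors.searchFilter.type = regex_filter
      webAgent.sources.src1.interceptors.searchFilter.regex = \?Input=\S+
      # excludeEvents = false keeps only the events that match the regex
      webAgent.sources.src1.interceptors.searchFilter.excludeEvents = false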

Step 2: Spark Streaming

  • On the Spark Streaming side, the program only needs to:
    1. Create a StreamingContext.
    2. Use FlumeUtils.createStream to receive the events pushed from Flume (see the sketch after this list).
    3. mapPartitions (optimization): build one Pattern and one HashMap per partition.
    4. reduceByKeyAndWindow (optimization): over the sliding window, incrementally add and subtract the counts for identical keys.
    5. sortByKey: sort and take the top N.
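  • A minimal sketch of step 2, assuming the spark-streaming-flume artifact is on the classpath; the host/port reuse the values from the FlumeEventCount test above:
    • import org.apache.spark.SparkConf
      import org.apache.spark.storage.StorageLevel
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      object FlumeReceiverSketch {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf().setAppName("FlumeReceiverSketch")
          val ssc = new StreamingContext(conf, Seconds(60))
          // Push-based: the Flume avro sink pushes events to this host:port.
          val flumeStream = FlumeUtils.createStream(ssc, "10.3.242.99", 4545, StorageLevel.MEMORY_AND_DISK_SER_2)
          // Each event body is a byte buffer; decode it before pattern matching.
          val lines = flumeStream.map(e => new String(e.event.getBody.array(), "UTF-8"))
          lines.count().print()
          ssc.start()
          ssc.awaitTermination()
        }
      }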
  • Based on the approach above, test locally first:
    • Spark Streaming side:
    • import java.util.regex.Pattern

      import scala.collection.mutable

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Milliseconds, StreamingContext}
      import org.slf4j.LoggerFactory

      object LocalTest {
        val logger = LoggerFactory.getLogger("LocalTest")

        def main(args: Array[String]) {
          // Note: batchInterval is used below as the window length (10 s window, 5 s slide).
          val batchInterval = Milliseconds(10000)
          val slideInterval = Milliseconds(5000)
          val conf = new SparkConf()
            .setMaster("local[2]")
            .setAppName("LocalTest")
          // WARN StreamingContext: spark.master should be set as local[n], n > 1 in local mode if you have receivers to get data,
          // otherwise Spark jobs will not get resources to process the received data.
          val sc = new StreamingContext(conf, Milliseconds(5000))
          sc.checkpoint("flumeCheckpoint/")

          val stream = sc.socketTextStream("localhost", 9998)
          val counts = stream.mapPartitions { events =>
            // One Pattern and one HashMap per partition (the mapPartitions optimization).
            val pattern = Pattern.compile("\\?Input=[^\\s]*\\s")
            val map = new mutable.HashMap[String, Int]()
            logger.info("Handling events, events is empty: " + events.isEmpty)
            while (events.hasNext) { // events is an Iterator
              val line = events.next()
              val m = pattern.matcher(line)
              if (m.find()) {
                val words = line.substring(m.start(), m.end()).split("=")(1).toLowerCase()
                logger.info(s"Processing words $words")
                map.put(words, map.getOrElse(words, 0) + 1)
              }
            }
            map.iterator
          }

          val window = counts.reduceByKeyAndWindow(_ + _, _ - _, batchInterval, slideInterval)
          // window.print()
          // transform and its variant transformWith run an arbitrary RDD-to-RDD function on the DStream;
          // they can be used for RDD operations that are not exposed in the DStream API.
          val sorted = window.transform { rdd =>
            val sortRdd = rdd.map(t => (t._2, t._1)).sortByKey(false).map(t => (t._2, t._1))
            val more = sortRdd.take(2)
            more.foreach(println)
            sortRdd
          }
          sorted.print()

          sc.start()
          sc.awaitTermination()
        }
      }
    • At the same time, run another program that generates log lines and serves them on port 9998:
    • import java.io.PrintWriter
      import java.net.ServerSocket

      object GenerateChar {
        def main(args: Array[String]) {
          val listener = new ServerSocket(9998)
          while (true) {
            val socket = listener.accept()
            new Thread() {
              override def run() = {
                println("Got client connected from: " + socket.getInetAddress)
                val out = new PrintWriter(socket.getOutputStream, true)
                while (true) {
                  Thread.sleep(3000)
                  val context1 = "GET /result.html?Input=test1 HTTP/1.1"
                  println(context1)
                  val context2 = "GET /result.html?Input=test2 HTTP/1.1"
                  println(context2)
                  val context3 = "GET /result.html?Input=test3 HTTP/1.1"
                  println(context3)
                  // emit test1 once, test2 twice, test3 four times per round
                  out.write(context1 + '\n' + context2 + "\n" + context2 + "\n" + context3 + "\n" + context3 + "\n" + context3 + "\n" + context3 + "\n")
                  out.flush()
                }
                socket.close()
              }
            }.start()
          }
        }
      }
  • All of the above works perfectly locally. But as soon as it is packaged and run on the cluster, all kinds of bugs appear: there is no output at all, neither the logger.info lines nor System.out.println show up (the stdout file is empty...), and a shuffle exception is thrown.
  • Googling this mostly turns up memory problems, but my data volume could hardly be any smaller... I also tested on the cluster without connecting Flume, instead running a generateLog program locally on the driver that sends data to port 9998. The error below can still occur:
    • 17/05/24 15:07:17 ERROR ShuffleBlockFetcherIterator: Failed to get block(s) from host101:37940
      java.io.IOException: Failed to connect to host101/10.3.242.101:37940

      But the results are still correct...

TODO

  • There is still a lot of room for improvement in the overall architecture. Since I only have two machines left right now, I will leave it alone for the moment.
  • The biggest remaining problem is fault tolerance:
    • Flume is push-based, so a burst of events could hit HDFS with more writes than it can absorb and cause failures;
    • the Spark Streaming side also uses the push-based approach directly (no custom receiver, too much hassle), so it has the same problem; a pull-based sketch follows this list.
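  • Before moving to Kafka, a lighter-weight mitigation would be the pull-based Flume integration: Flume buffers events in a Spark sink and Spark Streaming polls them, which is more robust than the push-based receiver. A rough sketch, assuming spark-streaming-flume and the Flume Spark sink are deployed; the host/port are placeholders:
    • import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.flume.FlumeUtils

      object PollingSketch {
        def main(args: Array[String]): Unit = {
          val ssc = new StreamingContext(new SparkConf().setAppName("PollingSketch"), Seconds(60))
          // Flume side would use: <agent>.sinks.spark.type = org.apache.spark.streaming.flume.sink.SparkSink
          val events = FlumeUtils.createPollingStream(ssc, "10.3.242.99", 4545)
          events.count().print()
          ssc.start()
          ssc.awaitTermination()
        }
      }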
  • Going forward, the classic Kafka + Flume architecture is still the way to go; a direct-stream sketch is below.
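  • A sketch of the Kafka direct stream such an architecture would feed into, assuming spark-streaming-kafka-0-10 on the classpath; the broker address, topic, and group id are placeholder assumptions:
    • import org.apache.kafka.common.serialization.StringDeserializer
      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}
      import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
      import org.apache.spark.streaming.kafka010.KafkaUtils
      import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

      object KafkaSketch {
        def main(args: Array[String]): Unit = {
          val ssc = new StreamingContext(new SparkConf().setAppName("KafkaSketch"), Seconds(60))
          val kafkaParams = Map[String, Object](
            "bootstrap.servers" -> "host99:9092",   // placeholder broker address
            "key.deserializer" -> classOf[StringDeserializer],
            "value.deserializer" -> classOf[StringDeserializer],
            "group.id" -> "hot-search",             // placeholder group id
            "auto.offset.reset" -> "latest")
          val stream = KafkaUtils.createDirectStream[String, String](
            ssc, PreferConsistent, Subscribe[String, String](Seq("access_log"), kafkaParams))
          stream.map(_.value()).count().print()
          ssc.start()
          ssc.awaitTermination()
        }
      }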
