This lecture covers two topics:

  1. The causes and observable symptoms of data cleanup
  2. A walkthrough of the data cleanup source code

From the perspective of technical research, once you have studied Spark Streaming thoroughly on top of Spark Core, there is no Spark application you cannot handle.

Spark Streaming runs continuously and computes nonstop; every second of operation produces large numbers of objects such as accumulators and broadcast variables, so objects and metadata must be cleaned up periodically. Jobs keep getting triggered every batch duration, and afterwards the RDDs and metadata have to be cleaned. In client mode you can see the cleanup messages printed to the console, and the same cleanup entries appear in the log files.

Now let's look at what is going on behind the scenes:

Spark runs on the JVM. The JVM creates objects and must reclaim them; if object creation and collection (GC) were left unmanaged, the JVM would quickly run out of memory. What we are studying here is, in effect, Spark Streaming's own "GC": Spark Streaming's management of RDD data and metadata plays the same role as the JVM's management of garbage collection.

Data and metadata are produced by operations on DStreams, so to understand how they are reclaimed we have to study how DStreams produce and release them.

Let's look at the DStream class hierarchy:

Data is received through InputDStream, and data input, transformation and output, i.e. the entire lifecycle, are built on top of DStreams. The conclusion: DStream governs the lifecycle of its RDDs. RDDs are produced by DStreams, and operating on RDDs is really operating on DStreams, repeated batchDuration after batchDuration, so studying operations on RDDs amounts to studying operations on DStreams.
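As a point of reference, here is a minimal sketch of a streaming application (host, port and batch interval are illustrative assumptions, not taken from this lecture); every one-second batch it runs produces new RDDs that Spark Streaming will later have to unpersist and whose metadata it will have to clear:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CleanupDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("CleanupDemo")
    val ssc = new StreamingContext(conf, Seconds(1))      // batchDuration = 1 second

    // Each batch turns the received lines into fresh RDDs via these DStream operations.
    val lines = ssc.socketTextStream("localhost", 9999)   // assumed host and port
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}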

Source code analysis:

DirectKafkaInputDStream produces a KafkaRDD in its compute method:
override def compute(validTime: Time): Option[KafkaRDD[K, V, U, T, R]] = {
  val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
  val rdd = KafkaRDD[K, V, U, T, R](
    context.sparkContext, kafkaParams, currentOffsets, untilOffsets, messageHandler)
  // Report the record number and metadata of this batch interval to InputInfoTracker.
  val offsetRanges = currentOffsets.map { case (tp, fo) =>
    val uo = untilOffsets(tp)
    OffsetRange(tp.topic, tp.partition, fo, uo.offset)
  }
  val description = offsetRanges.filter { offsetRange =>
    // Don't display empty ranges.
    offsetRange.fromOffset != offsetRange.untilOffset
  }.map { offsetRange =>
    s"topic: ${offsetRange.topic}\tpartition: ${offsetRange.partition}\t" +
      s"offsets: ${offsetRange.fromOffset} to ${offsetRange.untilOffset}"
  }.mkString("\n")
  // Copy offsetRanges to immutable.List to prevent from being modified by the user
  val metadata = Map(
    "offsets" -> offsetRanges.toList,
    StreamInputInfo.METADATA_KEY_DESCRIPTION -> description)
  val inputInfo = StreamInputInfo(id, rdd.count, metadata)
  ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)
  currentOffsets = untilOffsets.map(kv => kv._1 -> kv._2.offset)
  Some(rdd)
}
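For context, a DirectKafkaInputDStream like the one above is what the Kafka 0.8 direct API hands back from KafkaUtils.createDirectStream; a rough usage sketch, where the broker list and topic name are placeholder assumptions and ssc is a StreamingContext such as the one in the earlier sketch:

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Placeholders: broker addresses and topic name are assumptions for illustration.
val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
val topics = Set("pageviews")

val kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)   // each batch's compute() builds a KafkaRDD as shown above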

foreachRDD creates a ForEachDStream:

/**
 * An internal DStream used to represent output operations like DStream.foreachRDD.
 * @param parent        Parent DStream
 * @param foreachFunc   Function to apply on each RDD generated by the parent DStream
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           by `foreachFunc` will be displayed in the UI; only the scope and
 *                           callsite of `DStream.foreachRDD` will be displayed.
 */
private[streaming]
class ForEachDStream[T: ClassTag] (
    parent: DStream[T],
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean
  ) extends DStream[Unit](parent.ssc) {
  override def dependencies: List[DStream[_]] = List(parent)
  override def slideDuration: Duration = parent.slideDuration
  override def compute(validTime: Time): Option[RDD[Unit]] = None
  override def generateJob(time: Time): Option[Job] = {
    parent.getOrCompute(time) match {
      case Some(rdd) =>
        val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
          foreachFunc(rdd, time)
        }
        Some(new Job(time, jobFunc))
      case None => None
    }
  }
}

Now look at foreachRDD in the DStream source:

/**
 * Apply a function to each RDD in this DStream. This is an output operator, so
 * 'this' DStream will be registered as an output stream and therefore materialized.
 * @param foreachFunc foreachRDD function
 * @param displayInnerRDDOps Whether the detailed callsites and scopes of the RDDs generated
 *                           in the `foreachFunc` to be displayed in the UI. If `false`, then
 *                           only the scopes and callsites of `foreachRDD` will override those
 *                           of the RDDs on the display.
 */
private def foreachRDD(
    foreachFunc: (RDD[T], Time) => Unit,
    displayInnerRDDOps: Boolean): Unit = {
  new ForEachDStream(this,
    context.sparkContext.clean(foreachFunc, false), displayInnerRDDOps).register()
}
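From the user's side, calling the public foreachRDD is what ends up constructing such a ForEachDStream; a small sketch (the counts DStream and the println sink are just illustrative):

// Registers a ForEachDStream as an output stream; `time` is the batch time that
// also keys the generatedRDDs map discussed below.
counts.foreachRDD { (rdd, time) =>
  println(s"Batch $time contains ${rdd.count()} records")
}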

/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {
      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)
        }
      }
      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}

As time moves forward, each DStream keeps an in-memory data structure, generatedRDDs, that maps each batch time to the RDD instance generated for that window; RDDs are stored there, and later deleted from there, batchDuration by batchDuration. Sometimes you call cache on a DStream; cache is just persist, and in the end it is a cache operation on the underlying RDDs.
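To make the shape of that bookkeeping concrete, here is a simplified stand-in (not Spark's actual field declaration) for a generatedRDDs-style table keyed by batch time, with a removal step that mirrors the rememberDuration filter we will see in clearMetadata:

import scala.collection.mutable

// Simplified model: batch time in milliseconds -> the RDD generated for that batch.
class GeneratedRddTable[R] {
  private val generated = new mutable.HashMap[Long, R]()

  def put(batchTimeMs: Long, rdd: R): Unit = generated.put(batchTimeMs, rdd)

  def get(batchTimeMs: Long): Option[R] = generated.get(batchTimeMs)

  // Drop every entry older than the remember window relative to the current batch,
  // returning the removed values so the caller could unpersist them.
  def clearOlderThan(currentMs: Long, rememberMs: Long): Seq[R] = {
    val old = generated.filter { case (t, _) => t <= currentMs - rememberMs }
    generated --= old.keys
    old.values.toSeq
  }
}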

Releasing an RDD has to consider both sides of how it was produced: the data from the source and the metadata; both need to be released. Since data is produced periodically and released periodically, there must be a clock driving the cycle, and that clock lives in JobGenerator:

private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")
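The same pattern in miniature, for intuition only (this is not Spark's RecurringTimer; the one-second interval is an assumption): a dedicated thread fires once per batch interval and stands in for posting a GenerateJobs event.

import java.util.concurrent.{Executors, TimeUnit}

val batchDurationMs = 1000L
val timerPool = Executors.newSingleThreadScheduledExecutor()
timerPool.scheduleAtFixedRate(new Runnable {
  // Stand-in for eventLoop.post(GenerateJobs(new Time(...)))
  override def run(): Unit = println(s"GenerateJobs(${System.currentTimeMillis()})")
}, batchDurationMs, batchDurationMs, TimeUnit.MILLISECONDS)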

On every tick the timer posts a GenerateJobs event carrying the current time to the eventLoop, so on the receiving side GenerateJobs events keep arriving and are handled here:

/** Generate jobs and perform checkpoint for the given `time`.  */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}

These few lines lay out the entire lifecycle of a job with complete clarity.

/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    case ClearMetadata(time) => clearMetadata(time)
    case DoCheckpoint(time, clearCheckpointDataLater) =>
      doCheckpoint(time, clearCheckpointDataLater)
    case ClearCheckpointData(time) => clearCheckpointData(time)
  }
}

Let's look at the clearMetadata method:

/** Clear DStream metadata for the given `time`. */
private def clearMetadata(time: Time) {
  ssc.graph.clearMetadata(time)
  // If checkpointing is enabled, then checkpoint,
  // else mark batch to be fully processed
  if (shouldCheckpoint) {
    eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = true))
  } else {
    // If checkpointing is not enabled, then delete metadata information about
    // received blocks (block data not saved in any case). Otherwise, wait for
    // checkpointing of this batch to complete.
    val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
    jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
    jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
    markBatchFullyProcessed(time)
  }
}

The InputInfoTracker is where the input metadata is kept. ssc.graph.clearMetadata, in turn, looks like this:

def clearMetadata(time: Time) {
  logDebug("Clearing metadata for time " + time)
  this.synchronized {
    outputStreams.foreach(_.clearMetadata(time))
  }
  logDebug("Cleared old metadata for time " + time)
}

A log line is written once the cleanup has finished.

There are many kinds of output, so the graph first clears its output DStreams; there can be different kinds of output DStream, but in practice they are ForEachDStreams.

Keep following into the DStream class's own cleanup method:

/**
 * Clear metadata that are older than `rememberDuration` of this DStream.
 * This is an internal method that should not be called directly. This default
 * implementation clears the old generated RDDs. Subclasses of DStream may override
 * this to clear their own metadata along with the generated RDDs.
 */
private[streaming] def clearMetadata(time: Time) {
  val unpersistData = ssc.conf.getBoolean("spark.streaming.unpersist", true)
  val oldRDDs = generatedRDDs.filter(_._1 <= (time - rememberDuration)) // rememberDuration is a multiple of batchDuration
  logDebug("Clearing references to old RDDs: [" +
    oldRDDs.map(x => s"${x._1} -> ${x._2.id}").mkString(", ") + "]")
  generatedRDDs --= oldRDDs.keys
  if (unpersistData) {
    logDebug("Unpersisting old RDDs: " + oldRDDs.values.map(_.id).mkString(", "))
    oldRDDs.values.foreach { rdd =>
      rdd.unpersist(false)
      // Explicitly remove blocks of BlockRDD
      rdd match {
        case b: BlockRDD[_] =>
          logInfo("Removing blocks of RDD " + b + " of time " + time)
          b.removeBlocks()
        case _ =>
      }
    }
  }
  logDebug("Cleared " + oldRDDs.size + " RDDs that were older than " +
    (time - rememberDuration) + ": " + oldRDDs.keys.mkString(", "))
  dependencies.foreach(_.clearMetadata(time))
}

Besides clearing the RDDs themselves, the metadata has to be cleared as well.

As time goes by, cleanup messages keep arriving, so there is no need to worry about the driver running out of memory.
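Two knobs govern how aggressive this cleanup is, both visible in the clearMetadata code above; a sketch with illustrative values (spark.streaming.unpersist is the flag read there, and StreamingContext.remember stretches rememberDuration, for example to keep RDDs around for ad-hoc queries):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val tunedConf = new SparkConf()
  .setAppName("CleanupKnobs")
  .setMaster("local[2]")
  .set("spark.streaming.unpersist", "true")   // default true: eagerly unpersist old RDDs
val tunedSsc = new StreamingContext(tunedConf, Seconds(1))
tunedSsc.remember(Seconds(60))                // keep generated RDDs for at least 60 seconds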

Next the RDD's underlying blocks have to be removed:

/**
 * Remove the data blocks that this BlockRDD is made from. NOTE: This is an
 * irreversible operation, as the data in the blocks cannot be recovered back
 * once removed. Use it with caution.
 */
private[spark] def removeBlocks() {
  blockIds.foreach { blockId =>
    sparkContext.env.blockManager.master.removeBlock(blockId)
  }
  _isValid = false
}

An RDD's data ultimately sits behind the BlockManager, so removing its blocks means asking the BlockManager master to do the removal.

After that, dependencies.foreach is invoked, so every parent DStream that this DStream depends on gets cleaned up as well.

One last question: when is this cleanup actually triggered?

From the source, the JobGenerator class that produces the jobs has the following callback methods:

/**
 * Callback called when a batch has been completely processed.
 */
def onBatchCompletion(time: Time) {
  eventLoop.post(ClearMetadata(time))
}
/**
 * Callback called when the checkpoint of a batch has been written.
 */
def onCheckpointCompletion(time: Time, clearCheckpointDataLater: Boolean) {
  if (clearCheckpointDataLater) {
    eventLoop.post(ClearCheckpointData(time))
  }
}

After every batchDuration has been fully processed, the callback fires and posts a message; likewise, once the checkpoint of a batch has been written, the checkpoint-data cleanup is posted. To see who invokes these callbacks we have to go back to how jobs run: the JobHandler in the JobScheduler class posts the events handled below.

private def processEvent(event: JobSchedulerEvent) {

  try {
    event match {
      case JobStarted(job, startTime) => handleJobStart(job, startTime)
      case JobCompleted(job, completedTime) => handleJobCompletion(job, completedTime)
      case ErrorReported(m, e) => handleError(m, e)
    }
  } catch {
    case e: Throwable =>
      reportError("Error in job scheduler", e)
  }
}
private def handleJobCompletion(job: Job, completedTime: Long) {
  val jobSet = jobSets.get(job.time)
  jobSet.handleJobCompletion(job)
  job.setEndTime(completedTime)
  listenerBus.post(StreamingListenerOutputOperationCompleted(job.toOutputOperationInfo))
  logInfo("Finished job " + job.id + " from job set of time " + jobSet.time)
  if (jobSet.hasCompleted) {
    jobSets.remove(jobSet.time)
    jobGenerator.onBatchCompletion(jobSet.time)
    logInfo("Total delay: %.3f s for time %s (execution: %.3f s)".format(
      jobSet.totalDelay / 1000.0, jobSet.time.toString,
      jobSet.processingDelay / 1000.0
    ))
    listenerBus.post(StreamingListenerBatchCompleted(jobSet.toBatchInfo))
  }
  job.result match {
    case Failure(e) =>
      reportError("Error running job " + job, e)
    case _ =>
  }
}

When the whole JobSet for a batch has completed, onBatchCompletion is called:

/**
 * Callback called when a batch has been completely processed.
 */
def onBatchCompletion(time: Time) {
  eventLoop.post(ClearMetadata(time))
}

Summary:

After each batchDuration has been processed, Spark Streaming cleans up what that batch produced: it clears the output DStreams and their dependency chains, and by default it also unpersists the generated RDD data and clears the associated metadata.

Thanks to teacher Wang Jialin for sharing his knowledge.

Spark Streaming release notes, part 16

Sina Weibo: http://weibo.com/ilovepains

WeChat public account: DT_Spark

Blog: http://blog.sina.com.cn/ilovepains

Mobile: 18610086859

QQ: 1740415547

Email: 18610086859@vip.126.com
