# SparkStreaming "Could not read data from write ahead log record": analysis and fix
```
org.apache.spark.SparkException: Could not read data from write ahead log record FileBasedWriteAheadLogSegment
```

After enabling checkpointing with the WAL, Spark Streaming occasionally throws the error above. It does not bring down the application as a whole; only the data of the failing job is lost. The root cause is that the WAL file has been deleted, and by Spark Streaming's own cleanup mechanism at that. It usually means the streaming job has some degree of rate mismatch or batch backlog.

Searching the driver log reveals entries like the following:

```
-- :: INFO  [Logging.scala:] Attempting to clear  old log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248380000:
-- :: INFO [Logging.scala:] Attempting to clear old log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248470000: hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata/log-1490248404471-1490248464471
-- :: INFO [Logging.scala:] Cleared log files in hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedBlockMetadata older than 1490248470000
-- :: ERROR [Logging.scala:] Task in stage 35.0 failed times; aborting job
-- :: ERROR [Logging.scala:] Error running job streaming job ms.
org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 35.0 failed times, most recent failure: Lost task 41.3 in stage 35.0 (TID , alps60): org.apache.spark.SparkException: Could not read data from write ahead log record FileBasedWriteAheadLogSegment(hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer/receivedData/0/log-1490248403649-1490248463649,44333482,118014)
at org.apache.spark.streaming.rdd.WriteAheadLogBackedBlockRDD.org$apache$spark$streaming$rdd$WriteAheadLogBackedBlockRDD$$getBlockFromWriteAheadLog$(WriteAheadLogBackedBlockRDD.scala:)
```

You can see that the log file with timestamp 1490248403649 was removed by the cleaner ("Cleared log files ... older than 1490248470000"), and right after that the WAL read on that segment failed.
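
A quick arithmetic check on the timestamps (all values copied from the log lines above) confirms that the failed read targeted a segment the cleaner had just declared expired:

```scala
// Epoch-millis values taken verbatim from the driver log above.
val threshTime     = 1490248470000L // the cleaner's "older than ..." threshold
val segmentEndTime = 1490248463649L // end time of log-1490248403649-1490248463649,
                                    // the segment the failed task tried to read
assert(segmentEndTime < threshTime) // holds, so the segment was eligible for deletion
```

In other words, the cleaner and the delayed (or retrying) job disagree about how long a WAL segment must be kept around.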

The official Spark documentation has nothing on configuring this, so we go straight to the source. (Spark has many pitfalls like this where only the source reveals the hack or the hidden configuration.)

1. Searching the source for the log message turns up the clean method in FileBasedWriteAheadLog (the rest of the method is the concrete deletion logic, which we can skip for now). The key question is how to adjust its threshTime parameter.

```scala
/**
 * Delete the log files that are older than the threshold time.
 *
 * Its important to note that the threshold time is based on the time stamps used in the log
 * files, which is usually based on the local system time. So if there is coordination necessary
 * between the node calculating the threshTime (say, driver node), and the local system time
 * (say, worker node), the caller has to take account of possible time skew.
 *
 * If waitForCompletion is set to true, this method will return only after old logs have been
 * deleted. This should be set to true only for testing. Else the files will be deleted
 * asynchronously.
 */
def clean(threshTime: Long, waitForCompletion: Boolean): Unit = {
  val oldLogFiles = synchronized {
    val expiredLogs = pastLogs.filter { _.endTime < threshTime }
    pastLogs --= expiredLogs
    expiredLogs
  }
  logInfo(s"Attempting to clear ${oldLogFiles.size} old log files in $logDirectory " +
    s"older than $threshTime: ${oldLogFiles.map { _.path }.mkString("\n")}")
  // ... (the actual asynchronous deletion follows)
```

2. Tracing the callers outward step by step: ReceivedBlockHandler -> ReceiverSupervisorImpl -> CleanUpOldBlocks. Here there is an RPC to the ReceiverTracker, so search for CleanUpOldBlocks directly: CleanUpOldBlocks -> ReceiverTracker -> JobGenerator.

JobGenerator.clearCheckpointData contains this logic:

```scala
/** Clear DStream checkpoint data for the given `time`. */
private def clearCheckpointData(time: Time) {
  ssc.graph.clearCheckpointData(time)

  // All the checkpoint information about which batches have been processed, etc have
  // been saved to checkpoints, so its safe to delete block metadata and data WAL files
  val maxRememberDuration = graph.getMaxInputStreamRememberDuration()
  jobScheduler.receiverTracker.cleanupOldBlocksAndBatches(time - maxRememberDuration)
  jobScheduler.inputInfoTracker.cleanup(time - maxRememberDuration)
  markBatchFullyProcessed(time)
}
```
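
To make the threshold concrete, here is a sketch of the computation for our job; the batch time is an assumption (the logged threshold plus the 30 s default derived in the reflection below), the rest follows the code above:

```scala
// Assumed: the batch that triggered the cleanup observed in the log.
val batchTime           = 1490248500000L
val maxRememberDuration = 30 * 1000L // 30 s default, derived below
val cleanupThreshold    = batchTime - maxRememberDuration // 1490248470000, matching the log
// Every WAL segment whose endTime < cleanupThreshold is deleted, even if a
// delayed or retrying job still needs to replay its blocks.
```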

So ssc.graph has a maxRememberDuration member, which means we have a chance to change it through ssc!

A search of the code quickly turns up the corresponding public API:

```java
jssc.remember(new Duration(2 * 3600 * 1000));
```
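
For context, a minimal sketch of where that call sits in an application (shown with the Scala API; jssc.remember above is the Java equivalent; the app name and checkpoint path are taken from the logs in this post, the rest is illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("RhinoWechatConsumer")
  .set("spark.streaming.receiver.writeAheadLog.enable", "true") // WAL on
val ssc = new StreamingContext(conf, Seconds(15)) // 15 s batches, as in this post
ssc.checkpoint("hdfs://alps-cluster/tmp/banyan/checkpoint/RhinoWechatConsumer")

// Keep generated RDDs (and hence their WAL segments) for 2 hours instead of
// the ~30 s default, so delayed or retried jobs can still replay their blocks.
ssc.remember(Minutes(120))
```

The trade-off is plain: WAL files now accumulate on HDFS for the whole remember window, so pick a duration that comfortably exceeds your worst-case batch backlog rather than an arbitrarily large one.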

Reflection:

The earlier logs show the default retention window is only a few tens of seconds, yet the code shows this value effectively gets set only once (each assignment takes effect only if the current value is null, and the initial value is null). So where do those few tens of seconds come from? It was not immediately obvious, so a project-wide search for remember located the initialization code in DStream (with slideDuration coming from InputDStream). Working through the arithmetic: our batchInterval is 15 s and neither of the other two durations is set, so checkpointDuration becomes 15 s and rememberDuration becomes 30 s.

```scala
override def slideDuration: Duration = {
  if (ssc == null) throw new Exception("ssc is null")
  if (ssc.graph.batchDuration == null) throw new Exception("batchDuration is null")
  ssc.graph.batchDuration
}

/**
 * Initialize the DStream by setting the "zero" time, based on which
 * the validity of future times is calculated. This method also recursively initializes
 * its parent DStreams.
 */
private[streaming] def initialize(time: Time) {
  if (zeroTime != null && zeroTime != time) {
    throw new SparkException("ZeroTime is already initialized to " + zeroTime
      + ", cannot initialize it again to " + time)
  }
  zeroTime = time

  // Set the checkpoint interval to be slideDuration or 10 seconds, which ever is larger
  if (mustCheckpoint && checkpointDuration == null) {
    checkpointDuration = slideDuration * math.ceil(Seconds(10) / slideDuration).toInt
    logInfo("Checkpoint interval automatically set to " + checkpointDuration)
  }

  // Set the minimum value of the rememberDuration if not already set
  var minRememberDuration = slideDuration
  if (checkpointDuration != null && minRememberDuration <= checkpointDuration) {
    // times 2 just to be sure that the latest checkpoint is not forgotten (#paranoia)
    minRememberDuration = checkpointDuration * 2
  }
  if (rememberDuration == null || rememberDuration < minRememberDuration) {
    rememberDuration = minRememberDuration
  }

  // Initialize the dependencies
  dependencies.foreach(_.initialize(zeroTime))
}
```
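
Reproducing that arithmetic for our job (a sketch using Spark's Duration operators; only the 15 s batch interval is ours, everything else follows the code above):

```scala
import org.apache.spark.streaming.{Duration, Seconds}

val slideDuration: Duration = Seconds(15) // for an InputDStream, slideDuration == batchDuration

// Checkpoint interval: slideDuration or 10 s, whichever is larger (rounded to whole slides).
val checkpointDuration =
  slideDuration * math.ceil(Seconds(10) / slideDuration).toInt // 15 s * ceil(10/15) = 15 s

// Remember duration: twice the checkpoint interval when nothing is set explicitly.
val rememberDuration = checkpointDuration * 2 // 30 s
```

That 30 s window is exactly why a job that falls even half a minute behind can find its WAL segments already deleted.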
