We are evaluating Spark 2.4 and Spark 2.3 as a replacement for the company's existing Hive workloads.

A few jobs fail in Spark with the following error:

java.io.EOFException: Premature EOF from inputStream
at com.hadoop.compression.lzo.LzopInputStream.readFully(LzopInputStream.java:74)
at com.hadoop.compression.lzo.LzopInputStream.readHeader(LzopInputStream.java:115)
at com.hadoop.compression.lzo.LzopInputStream.<init>(LzopInputStream.java:54)
at com.hadoop.compression.lzo.LzopCodec.createInputStream(LzopCodec.java:112)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:129)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:269)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:268)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:226)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:97)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:330)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:294)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

After digging in, the root cause turned out to be reading zero-byte (0-size) input files.
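To see why, here is a minimal reproduction sketch (spark-shell style; the path is hypothetical, `sc` is the shell's SparkContext, and it assumes LzopCodec is registered in io.compression.codecs). A zero-byte .lzo file still produces an input split, so LineRecordReader builds an LzopInputStream whose readHeader() hits end-of-stream while reading the LZO magic bytes, which is exactly the Premature EOF above:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())
// Create a zero-byte file; the .lzo extension makes TextInputFormat pick the LZO codec.
fs.create(new Path("/tmp/empty_test.lzo")).close()

// Fails in LzopInputStream.readHeader with "Premature EOF from inputStream".
sc.textFile("/tmp/empty_test.lzo").count()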

We did not find an upstream Spark fix for this bug.

As a workaround, we patched the code manually to filter out such files.

Modify HadoopRDD.scala at the corresponding location as shown below:

// We get our input bytes from thread-local Hadoop FileSystem statistics.
// If we do a coalesce, however, we are likely to compute multiple partitions in the same
// task and in the same thread, in which case we need to avoid override values written by
// previous partitions (SPARK-13071).
private def updateBytesRead(): Unit = {
  getBytesReadCallback.foreach { getBytesRead =>
    inputMetrics.setBytesRead(existingBytesRead + getBytesRead())
  }
}

private var reader: RecordReader[K, V] = null
private val inputFormat = getInputFormat(jobConf)
HadoopRDD.addLocalConfiguration(
  new SimpleDateFormat("yyyyMMddHHmmss", Locale.US).format(createTime),
  context.stageId, theSplit.index, context.attemptNumber, jobConf)

reader =
  try {
    if (split.inputSplit.value.getLength != 0) { // Non-zero length: read the split normally
      inputFormat.getRecordReader(split.inputSplit.value, jobConf, Reporter.NULL)
    } else {
      // Zero-length split: log it, mark the iterator finished, and skip the file
      logWarning(s"Skipped the file size 0 file: ${split.inputSplit}")
      finished = true
      null
    }
  } catch {
    case e: FileNotFoundException if ignoreMissingFiles =>
      logWarning(s"Skipped missing file: ${split.inputSplit}", e)
      finished = true
      null
    // Throw FileNotFoundException even if `ignoreCorruptFiles` is true
    case e: FileNotFoundException if !ignoreMissingFiles => throw e
    case e: IOException if ignoreCorruptFiles =>
      logWarning(s"Skipped the rest content in the corrupted file: ${split.inputSplit}", e)
      finished = true
      null
  }

// Register an on-task-completion callback to close the input stream.
context.addTaskCompletionListener[Unit] { context =>
  // Update the bytes read before closing is to make sure lingering bytesRead statistics in
  // this thread get correctly added.
  updateBytesRead()
  closeIfNeeded()
}
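
If rebuilding Spark is not practical, the same effect can be had on the application side by dropping zero-byte files before they ever reach HadoopRDD. A minimal sketch, assuming a flat input directory (the /data/input path is a placeholder):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(new Configuration())
val inputDir = new Path("/data/input") // hypothetical input directory
// Keep only regular files with a non-zero length.
val nonEmptyPaths = fs.listStatus(inputDir)
  .filter(status => status.isFile && status.getLen > 0)
  .map(_.getPath.toString)

// textFile accepts a comma-separated list of paths.
val rdd =
  if (nonEmptyPaths.nonEmpty) sc.textFile(nonEmptyPaths.mkString(","))
  else sc.emptyRDD[String]

The trade-off: this adds an extra NameNode listing per job and has to be repeated in every application, whereas the HadoopRDD patch handles all input paths transparently once the rebuilt jar is deployed.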
