Spark fails with java.lang.RuntimeException: serious problem at OrcInputFormat.generateSplitsInfo when reading an empty ORC file
Reproducing the problem:
G:\bigdata\spark-2.3.3-bin-hadoop2.7\bin>spark-shell
2020-12-26 10:20:48 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://DESKTOP-01KN1P4:4040
Spark context available as 'sc' (master = local[*], app id = local-1608949256544).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.3
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) Client VM, Java 1.8.0_201)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sql("create table empty_orc(a int) stored as orc location '/tmp/empty_orc'").show
++
||
++
++

(In another window, create an empty file:)
touch /tmp/empty_orc/zero.orc

scala> sql("select * from empty_orc").show
java.lang.RuntimeException: serious problem
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1021)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:340)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3278)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2489)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
at org.apache.spark.sql.Dataset.show(Dataset.scala:723)
at org.apache.spark.sql.Dataset.show(Dataset.scala:682)
at org.apache.spark.sql.Dataset.show(Dataset.scala:691)
... 49 elided
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1010)
... 99 more
The root cause: when Spark reads an ORC table through the Hive serde path, split generation throws a NullPointerException as soon as it encounters a zero-size file in the table directory. The bug is tracked at:
SPARK-19809: NullPointerException on zero-size ORC file (https://issues.apache.org/jira/browse/SPARK-19809)
SPARK-29773: Unable to process empty ORC files in Hive Table using Spark SQL (https://issues.apache.org/jira/browse/SPARK-29773)
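To confirm that a zero-size file is what trips the split generator, you can scan the table location with the Hadoop FileSystem API. This is an illustrative sketch, not part of the original reproduction; it assumes it runs inside spark-shell where `spark` is in scope, and uses the /tmp/empty_orc path from above:

import org.apache.hadoop.fs.{FileSystem, Path}

// Illustrative sketch: find zero-length files under the table location.
// FileSystem.get uses the session's Hadoop configuration, so the same code
// works against the local filesystem here or against HDFS.
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
fs.listStatus(new Path("/tmp/empty_orc"))
  .filter(s => s.isFile && s.getLen == 0)
  .foreach(s => println(s"zero-size file: ${s.getPath}"))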
Workaround: launch with spark.sql.hive.convertMetastoreOrc=true
G:\bigdata\spark-2.3.3-bin-hadoop2.7\bin>spark-shell --conf spark.sql.hive.convertMetastoreOrc=true
2020-12-26 10:29:06 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://DESKTOP-01KN1P4:4040
Spark context available as 'sc' (master = local[*], app id = local-1608949754291).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.3.3
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) Client VM, Java 1.8.0_201)
Type in expressions to have them evaluated.
Type :help for more information.

scala> sql("select * from empty_orc").show
+---+
| a|
+---+
+---+
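If restarting spark-shell with --conf is inconvenient, the same flag can be supplied when the SparkSession is built. A minimal sketch for a standalone application (the appName is illustrative; setting the flag at session-build time is the safe option, since whether a later runtime spark.conf.set takes effect for already-resolved relations may vary by version):

import org.apache.spark.sql.SparkSession

// Minimal sketch: enable convertMetastoreOrc so Spark's native ORC reader,
// which tolerates zero-size files, replaces the Hive serde reader.
val spark = SparkSession.builder()
  .appName("empty-orc-workaround")
  .enableHiveSupport()
  .config("spark.sql.hive.convertMetastoreOrc", "true")
  .getOrCreate()

spark.sql("select * from empty_orc").show()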
The Spark documentation describes the relevant behavior as follows:
ORC Files
Since Spark 2.3, Spark supports a vectorized ORC reader with a new ORC file format for ORC files. To do that, the following configurations are newly added. The vectorized reader is used for the native ORC tables (e.g., the ones created using the clause USING ORC) when spark.sql.orc.impl is set to native and spark.sql.orc.enableVectorizedReader is set to true. For the Hive ORC serde tables (e.g., the ones created using the clause USING HIVE OPTIONS (fileFormat 'ORC')), the vectorized reader is used when spark.sql.hive.convertMetastoreOrc is also set to true.
https://spark.apache.org/docs/2.3.3/sql-programming-guide.html#orc-files
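Putting the excerpt's configuration keys together in one place (a sketch with the values the documentation names, set from within a session; whether each value is already the default depends on the exact 2.3.x build):

// For native ORC tables (created with USING ORC), the vectorized reader needs:
spark.conf.set("spark.sql.orc.impl", "native")
spark.conf.set("spark.sql.orc.enableVectorizedReader", "true")
// For Hive ORC serde tables (e.g. STORED AS ORC), it additionally needs:
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")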