spark 2.1.1

Executing the following SQL in Spark fails:

insert overwrite table test_parquet_table select * from dummy

The error is:

org.apache.spark.SparkException: Task failed while writing rows.
at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer.writeToFile(hiveWriterContainers.scala:333)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:210)
at org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:210)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:99)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Parquet record is malformed: empty fields are illegal, the field should be ommited completely instead
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
at parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
at org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer.writeToFile(hiveWriterContainers.scala:321)
... 8 more
Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the field should be ommited completely instead
at parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:241)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
at org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
... 16 more

Tracing into the code:

org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter

    private void writeMap(Object value, MapObjectInspector inspector, GroupType type) {
        GroupType repeatedType = type.getType(0).asGroupType();
        this.recordConsumer.startGroup();
        // startField for the repeated key_value group: if the map is empty, the loop below
        // writes nothing and the matching endField at the end of this method throws
        this.recordConsumer.startField(repeatedType.getName(), 0);
        Map<?, ?> mapValues = inspector.getMap(value);
        Type keyType = repeatedType.getType(0);
        String keyName = keyType.getName();
        ObjectInspector keyInspector = inspector.getMapKeyObjectInspector();
        Type valuetype = repeatedType.getType(1);
        String valueName = valuetype.getName();
        ObjectInspector valueInspector = inspector.getMapValueObjectInspector();

        for(Iterator i$ = mapValues.entrySet().iterator(); i$.hasNext(); this.recordConsumer.endGroup()) {
            Entry<?, ?> keyValue = (Entry)i$.next();
            this.recordConsumer.startGroup();
            if (keyValue != null) {
                // the key is written unconditionally: if keyElement is null, writeValue()
                // writes nothing and the endField right below throws
                Object keyElement = keyValue.getKey();
                this.recordConsumer.startField(keyName, 0);
                this.writeValue(keyElement, keyInspector, keyType);
                this.recordConsumer.endField(keyName, 0);
                // the value, by contrast, is only written when it is not null
                Object valueElement = keyValue.getValue();
                if (valueElement != null) {
                    this.recordConsumer.startField(valueName, 1);
                    this.writeValue(valueElement, valueInspector, valuetype);
                    this.recordConsumer.endField(valueName, 1);
                }
            }
        }

        this.recordConsumer.endField(repeatedType.getName(), 0);
        this.recordConsumer.endGroup();
    }

    private void writeValue(Object value, ObjectInspector inspector, Type type) {
        if (type.isPrimitive()) {
            this.checkInspectorCategory(inspector, Category.PRIMITIVE);
            this.writePrimitive(value, (PrimitiveObjectInspector)inspector);
        } else {
            GroupType groupType = type.asGroupType();
            OriginalType originalType = type.getOriginalType();
            if (originalType != null && originalType.equals(OriginalType.LIST)) {
                this.checkInspectorCategory(inspector, Category.LIST);
                this.writeArray(value, (ListObjectInspector)inspector, groupType);
            } else if (originalType != null && originalType.equals(OriginalType.MAP)) {
                this.checkInspectorCategory(inspector, Category.MAP);
                this.writeMap(value, (MapObjectInspector)inspector, groupType);
            } else {
                this.checkInspectorCategory(inspector, Category.STRUCT);
                this.writeGroup(value, (StructObjectInspector)inspector, groupType);
            }
        }
    }

    private void writePrimitive(Object value, PrimitiveObjectInspector inspector) {
        // a null value (or the VOID category) is skipped silently: no addXxx call is made,
        // so emptyField on the record consumer stays true
        if (value != null) {
            switch(inspector.getPrimitiveCategory()) {
            case VOID:
                return;
            case DOUBLE:
                this.recordConsumer.addDouble(((DoubleObjectInspector)inspector).get(value));
                break;
            case BOOLEAN:
                this.recordConsumer.addBoolean(((BooleanObjectInspector)inspector).get(value));
                break;
            case FLOAT:
                this.recordConsumer.addFloat(((FloatObjectInspector)inspector).get(value));
                break;
            case BYTE:
                this.recordConsumer.addInteger(((ByteObjectInspector)inspector).get(value));
                break;
            case INT:
                this.recordConsumer.addInteger(((IntObjectInspector)inspector).get(value));
                break;
            case LONG:
                this.recordConsumer.addLong(((LongObjectInspector)inspector).get(value));
                break;
            case SHORT:
                this.recordConsumer.addInteger(((ShortObjectInspector)inspector).get(value));
                break;
            case STRING:
                String v = ((StringObjectInspector)inspector).getPrimitiveJavaObject(value);
                this.recordConsumer.addBinary(Binary.fromString(v));
                break;
            case CHAR:
                String vChar = ((HiveCharObjectInspector)inspector).getPrimitiveJavaObject(value).getStrippedValue();
                this.recordConsumer.addBinary(Binary.fromString(vChar));
                break;
            case VARCHAR:
                String vVarchar = ((HiveVarcharObjectInspector)inspector).getPrimitiveJavaObject(value).getValue();
                this.recordConsumer.addBinary(Binary.fromString(vVarchar));
                break;
            case BINARY:
                byte[] vBinary = ((BinaryObjectInspector)inspector).getPrimitiveJavaObject(value);
                this.recordConsumer.addBinary(Binary.fromByteArray(vBinary));
                break;
            case TIMESTAMP:
                Timestamp ts = ((TimestampObjectInspector)inspector).getPrimitiveJavaObject(value);
                this.recordConsumer.addBinary(NanoTimeUtils.getNanoTime(ts, false).toBinary());
                break;
            case DECIMAL:
                HiveDecimal vDecimal = (HiveDecimal)inspector.getPrimitiveJavaObject(value);
                DecimalTypeInfo decTypeInfo = (DecimalTypeInfo)inspector.getTypeInfo();
                this.recordConsumer.addBinary(this.decimalToBinary(vDecimal, decTypeInfo));
                break;
            case DATE:
                Date vDate = ((DateObjectInspector)inspector).getPrimitiveJavaObject(value);
                this.recordConsumer.addInteger(DateWritable.dateToDays(vDate));
                break;
            default:
                throw new IllegalArgumentException("Unsupported primitive data type: " + inspector.getPrimitiveCategory());
            }
        }
    }

parquet.io.MessageColumnIO.MessageColumnIORecordConsumer

    public void startField(String field, int index) {
        try {
            if (MessageColumnIO.DEBUG) {
                this.log("startField(" + field + ", " + index + ")");
            }
            this.currentColumnIO = ((GroupColumnIO)this.currentColumnIO).getChild(index);
            // every startField marks the current field as empty
            this.emptyField = true;
            if (MessageColumnIO.DEBUG) {
                this.printState();
            }
        } catch (RuntimeException var4) {
            throw new ParquetEncodingException("error starting field " + field + " at " + index, var4);
        }
    }

    public void endField(String field, int index) {
        if (MessageColumnIO.DEBUG) {
            this.log("endField(" + field + ", " + index + ")");
        }
        this.currentColumnIO = this.currentColumnIO.getParent();
        // if no addXxx call happened between startField and endField, the field is still empty
        if (this.emptyField) {
            throw new ParquetEncodingException("empty fields are illegal, the field should be ommited completely instead");
        } else {
            this.fieldsWritten[this.currentLevel].markWritten(index);
            this.r[this.currentLevel] = this.currentLevel == 0 ? 0 : this.r[this.currentLevel - 1];
            if (MessageColumnIO.DEBUG) {
                this.printState();
            }
        }
    }

    public void addInteger(int value) {
        if (MessageColumnIO.DEBUG) {
            this.log("addInt(" + value + ")");
        }
        // writing an actual value clears the flag
        this.emptyField = false;
        this.getColumnWriter().write(value, this.r[this.currentLevel], this.currentColumnIO.getDefinitionLevel());
        this.setRepetitionLevel();
        if (MessageColumnIO.DEBUG) {
            this.printState();
        }
    }

The key lines in DataWritableWriter that lead to the error are these:

    Object keyElement = keyValue.getKey();
    this.recordConsumer.startField(keyName, 0);
    this.writeValue(keyElement, keyInspector, keyType);
    this.recordConsumer.endField(keyName, 0);

The call flow works out as follows:

DataWritableWriter.writeMap
    MessageColumnIORecordConsumer.startField (for the repeated key_value group)
        note: this.emptyField = true;
    iterate over the map entries; for each entry, handle the key first:
        Object keyElement = keyValue.getKey();
        MessageColumnIORecordConsumer.startField (for the key field)
            note: this.emptyField = true;
        DataWritableWriter.writeValue
            type.isPrimitive() is true for a map key, so
            DataWritableWriter.writePrimitive
                1) if value == null, or the primitive category is VOID
                    note: nothing is written, this.emptyField is still true
                2) if value != null, e.g. MessageColumnIORecordConsumer.addInteger
                    note: this.emptyField = false;
        back in writeMap, this.recordConsumer.endField(keyName, 0) is called
            MessageColumnIORecordConsumer.endField
                note: if (this.emptyField) { throw new ParquetEncodingException("empty fields are illegal, the field should be ommited completely instead"); }

An empty map fails in the same way: startField is called for the repeated group, the entry loop never runs, and the endField at the end of writeMap finds emptyField still true.
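
Boiled down, everything hinges on the emptyField flag. Below is a tiny standalone model of that flag, only an illustration of the sequence traced above and not the real parquet-mr code; it can be pasted into a Scala REPL:

    // startField marks the field as empty; only an addXxx call clears the flag;
    // endField throws if the flag is still set
    var emptyField = false

    def startField(): Unit = { emptyField = true }
    def addInteger(v: Int): Unit = { emptyField = false }
    def endField(): Unit = {
      if (emptyField)
        throw new RuntimeException(
          "empty fields are illegal, the field should be ommited completely instead")
    }

    startField(); addInteger(1); endField()   // non-null key: ok
    startField(); endField()                  // null key / empty map: nothing added in between, throws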

So the error is triggered whenever a column of type map<?,?> or array<?> is written with an empty collection, or when a map contains an entry whose key is null.
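
The empty-collection case can be reproduced with a few lines. This is only a sketch: the column names id and m are made up, spark is assumed to be a Hive-enabled SparkSession (as in spark-shell), and the target table is unpartitioned here for simplicity:

    import spark.implicits._

    // a source row whose map column is an empty map -- the case traced above
    Seq((1, Map.empty[String, String])).toDF("id", "m").createOrReplaceTempView("dummy")

    // a Hive table stored as parquet, so the insert goes through Hive's DataWritableWriter
    spark.sql("create table test_parquet_table (id int, m map<string,string>) stored as parquet")

    // fails with ParquetEncodingException: empty fields are illegal ...
    spark.sql("insert overwrite table test_parquet_table select * from dummy")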

It turns out this has already been discussed upstream: https://issues.apache.org/jira/browse/HIVE-11625

There are two ways to avoid the problem:

1. Run the SQL with Hive instead of Spark;

2. Register a UDF such as filter_map that turns an empty map into null and, for a non-empty map, drops every entry whose key is null, then apply it to the map columns of the query (see the usage sketch after the registration line below):

    spark.udf.register("filter_map", (map: Map[String, String]) =>
      if (map != null && !map.isEmpty) map.filter(_._1 != null) else null)
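
A usage sketch (the column names id and m are made up; every map-typed column in the select list has to go through the UDF, and maps with other key/value types would each need their own filter_map variant):

    // apply the UDF to the map column so empty maps become null and null keys are dropped
    spark.sql("insert overwrite table test_parquet_table select id, filter_map(m) from dummy")

Turning an empty map into null helps because a null top-level value is skipped by DataWritableWriter (the field is never started), which is exactly the "omit the field completely" that Parquet asks for.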
