After submitting a Spark application recently I noticed it was running extremely slowly. Opening the Spark web UI showed it stuck on one stage of one job. That stage had 100,000 tasks, but the vast majority of them were assigned to just two executors while the other executors sat almost idle. What happened? Looking into Spark's task assignment logic, there is a feature called data locality (see https://www.cnblogs.com/barneywill/p/10152497.html), i.e. tasks are assigned according to the priority order of locality levels…
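A minimal sketch, not taken from the post above, of how the locality wait can be tuned so that tasks fall back to less-local executors instead of queuing on the few executors that hold the data. spark.locality.wait and its per-level variants are standard Spark settings; the values below are purely illustrative.

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object LocalityWaitSketch {
  def main(args: Array[String]): Unit = {
    // Shorten the time the scheduler waits for a data-local slot before
    // falling back to a less-local one (default is 3s; "0" disables the wait).
    val conf = new SparkConf()
      .setAppName("LocalityWaitSketch")
      .set("spark.locality.wait", "1s")
      .set("spark.locality.wait.node", "1s") // can also be tuned per level: process / node / rack
    val spark = SparkSession.builder().config(conf).getOrCreate()
    // ... job code ...
    spark.stop()
  }
}

Lowering the wait only spreads tasks across executors; it does not change where the data lives, so whether it is the right trade-off depends on the cost of the extra remote reads.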
spark 2.1.1. Some tasks in a Spark application are extremely slow and keep running for 10 hours. The log of one such task looks like this:
2019-01-24 21:38:56,024 [dispatcher-event-loop-22] INFO org.apache.spark.executor.CoarseGrainedExecutorBackend - Got assigned task 4031
2019-01-24 21:38:56,024 [Executor task launch worker for task 4…
spark 2.1.1. Running the following SQL in Spark fails: insert overwrite table test_parquet_table select * from dummy. The error is:
org.apache.spark.SparkException: Task failed while writing rows.
 at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer.writeToFile(hiveWrite…
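For context, a hedged sketch of the kind of statement that goes through SparkHiveDynamicPartitionWriterContainer: an insert overwrite into a partitioned Parquet Hive table with dynamic partitioning enabled. The table names match the excerpt, but the DDL and the partition column dt are hypothetical, since the excerpt does not show them.

import org.apache.spark.sql.SparkSession

object InsertOverwriteRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("InsertOverwriteRepro")
      .enableHiveSupport()
      .getOrCreate()

    // Dynamic partition inserts need nonstrict mode in Hive.
    spark.sql("set hive.exec.dynamic.partition=true")
    spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")

    // Hypothetical DDL; the real schemas of test_parquet_table and dummy are not shown.
    spark.sql("create table if not exists test_parquet_table (id int) partitioned by (dt string) stored as parquet")

    // The failing statement from the excerpt.
    spark.sql("insert overwrite table test_parquet_table select * from dummy")

    spark.stop()
  }
}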
spark 2.1.1. Part 1: reproducing the problem. The problem code looks like this:
object MethodPositionTest {
  val sparkConf = new SparkConf().setAppName("MethodPositionTest")
  val sc = new SparkContext(sparkConf)
  val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
  def main(args : Arra…
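The excerpt cuts off inside main, so the following is only a hedged sketch of the pattern the class name suggests is being tested: the SparkConf, SparkContext and SparkSession live as object-level vals instead of inside main, and a task closure calls a method of the same object. The helper method and the body of main are hypothetical; the sketch only illustrates that referencing an object method from a task forces the whole object, including its SparkContext val, to be initialized on the executor.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

object MethodPositionTest {
  // Object-level initialization runs in whichever JVM first touches the object,
  // which is not necessarily only the driver.
  val sparkConf = new SparkConf().setAppName("MethodPositionTest")
  val sc = new SparkContext(sparkConf)
  val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

  // Hypothetical helper; calling it inside a closure makes the task reference
  // MethodPositionTest, so the executor has to initialize the object first.
  def addOne(x: Int): Int = x + 1

  def main(args: Array[String]): Unit = {
    // Hypothetical body; the original is truncated in the excerpt.
    sc.parallelize(1 to 10).map(addOne).collect().foreach(println)
    spark.stop()
  }
}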
Spark 2.1.1. Recently Spark jobs have often been running for a very long time. The job in question looks like this:
Job Id | Description | Submitted | Duration | Stages: Succeeded/Total | Tasks (for all stages): Succeeded/Total
16 | (kill) treeReduce at CRFWithLBFGS.scala:160 | 2018/12/03 12:39:50 | 2.3 h | 0/5 | 196/4723
The stage currently running inside this job is as follows…
Recently we moved some SQL jobs from Hive to Spark and found that they actually run slower. The SQL consists mostly of insert overwrite statements, and the execution plan shows that InsertIntoHiveTable is used:
spark-sql> explain insert overwrite table test2 select * from test1;
== Physical Plan ==
InsertIntoHiveTable MetastoreRelation temp, test2, true, false
+- HiveTableSc…
spark 2.1.1. After connecting to Spark Thrift Server with beeline, executing use database sometimes hangs. On the server side, use database corresponds to setCurrentDatabase. Digging in, it turned out that Spark Thrift Server was executing an insert at the time:
org.apache.spark.sql.hive.execution.InsertIntoHiveTable
  protected override def doExecute(): RDD[InternalRow] = {…
Querying ORC-format data with Spark sometimes throws this error:
Caused by: java.lang.NullPointerException
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
 at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat…
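The BISplitStrategy in this stack trace is one of the split strategies the Hive ORC input format can pick; which one is used is controlled by hive.exec.orc.split.strategy (HYBRID, BI or ETL). Below is a hedged sketch, not necessarily the author's fix, of forcing the ETL strategy from a Spark session so the BI code path is not taken; whether that avoids the NPE in this particular case is not shown in the excerpt.

import org.apache.spark.sql.SparkSession

object OrcSplitStrategySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrcSplitStrategySketch")
      // spark.hadoop.* keys are copied into the Hadoop Configuration that the
      // Hive ORC input format reads its settings from.
      .config("spark.hadoop.hive.exec.orc.split.strategy", "ETL")
      .enableHiveSupport()
      .getOrCreate()

    // Query the ORC-backed table as usual; the table name here is hypothetical.
    spark.sql("select count(*) from some_orc_table").show()
    spark.stop()
  }
}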
spark 2.1.1. Spark throws an error when writing data to a Hive external table whose underlying data lives in HBase:
Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat cannot be cast to org.apache.hadoop.hive.ql.io.HiveOutputFormat
 at org.apache.spark.sql.hive.SparkHiveWrit…
Reproducing the problem: rdd.repartition(1).write.csv(outPath). After the write, the output files turn out to be compressed. During the write, hadoopConf is obtained first, and from it Spark decides whether to compress and which compression format to use:
org.apache.spark.sql.execution.datasources.DataSource
  def write(
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand
  val hadoopC…
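The excerpt stops at the hadoopConf lookup, but for completeness: the CSV writer also takes an explicit compression option, and setting it to none asks Spark to turn output compression off for that write instead of following the codec settings carried in hadoopConf. A hedged sketch, with df standing in for the DataFrame the excerpt calls rdd and outPath as in the excerpt:

import org.apache.spark.sql.{DataFrame, SaveMode}

// Write a single CSV output file with compression explicitly disabled.
def writeUncompressedCsv(df: DataFrame, outPath: String): Unit = {
  df.repartition(1)
    .write
    .mode(SaveMode.Overwrite)
    .option("compression", "none") // "none" disables output compression for this write
    .csv(outPath)
}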