Recently, when submitting Spark jobs in yarn-cluster mode, the submission intermittently fails (roughly 40% of the time) with the following error: 18/03/15 21:50:36 116 ERROR ApplicationMaster91: User class threw exception: org.apache.spark.sql.AnalysisException: java.lang.NoSuchFieldError: HIVE_MOVE_FILES_THREAD_COUNT; org.apache.spark.sql.Analy…
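A NoSuchFieldError on HIVE_MOVE_FILES_THREAD_COUNT usually points to a Hive jar version conflict: that field only exists in newer HiveConf classes, so an older hive jar shadowing the expected one on some nodes would also explain why only some submissions fail. As a diagnostic sketch (not the post's confirmed fix), this Scala snippet prints which jar HiveConf was actually loaded from:

    // Run at the start of the failing application on the cluster; if the
    // printed jar differs across submissions/nodes, it is a classpath conflict.
    val hiveConfClass = Class.forName("org.apache.hadoop.hive.conf.HiveConf")
    println("HiveConf loaded from: " +
      hiveConfClass.getProtectionDomain.getCodeSource.getLocation)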
One datanode in the cluster kept failing to start with: java.net.BindException: Problem binding to [$server1:50020] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException. Checking whether the port was occupied with # netstat -tnlp|grep 50020 showed no process listening on 50020…
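As a quick cross-check of what the datanode's bind attempt actually sees (a sketch, independent of the post's eventual diagnosis), one can try to bind the port directly; this throws the same BindException if anything holds the port, including an established connection using 50020 as its local port, which netstat -tnlp would not show because it only lists listeners:

    import java.net.ServerSocket
    // Succeeds and releases the port if 50020 is truly free;
    // throws java.net.BindException otherwise.
    val probe = new ServerSocket(50020)
    probe.close()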
kafka 0.8.1. 1. The problem. On December 22 the application suddenly started throwing: [2014/12/22 11:52:32.738]java.net.SocketException: Too many open files [2014/12/22 11:52:32.738]       at java.net.Socket.createImpl(Socket.java:447) [2014/12/22 11:52:32.738]       at java.net.Socket.connect(Socket.java:577) [201…
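The excerpt cuts off before the root cause, but "Too many open files" thrown from Socket.connect in a Kafka 0.8 client is commonly caused by creating a new producer per message and never closing it, leaking one socket each time. A minimal sketch of the safe pattern against the 0.8.1 Scala producer API (broker list and topic are placeholders):

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    object ReusedProducer {
      private val props = new Properties()
      props.put("metadata.broker.list", "broker1:9092")               // placeholder
      props.put("serializer.class", "kafka.serializer.StringEncoder")
      // One long-lived producer for the whole application, not one per send.
      private val producer = new Producer[String, String](new ProducerConfig(props))

      def send(msg: String): Unit =
        producer.send(new KeyedMessage[String, String]("my-topic", msg)) // placeholder topic

      def shutdown(): Unit = producer.close()  // releases the underlying sockets
    }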
While creating a table, the hive metastore reported: [pool-5-thread-2]: MetaException(message:Got exception: java.net.ConnectException Call From server2 to server1:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.…
Cause: the Oracle 10g JDBC driver supports at most 32768 parameters in one batch commit, so for tables with more than 32 columns the limit is exceeded and this exception is thrown. Fix: upgrade the Oracle driver to ojdbc6.jar…
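The 32-column threshold implies batches of roughly 1000 rows (33 columns × 1000 rows = 33000 parameters > 32768). If upgrading the driver is not immediately an option, a workaround is to cap the batch size so rows × columns stays under the limit; a sketch with hypothetical connection details and a stubbed row source:

    import java.sql.DriverManager

    object CappedBatchInsert {
      def main(args: Array[String]): Unit = {
        val conn = DriverManager.getConnection(
          "jdbc:oracle:thin:@host:1521:orcl", "user", "pass")   // placeholders
        val columns  = 33
        val maxBatch = 32768 / columns      // keep rows * columns below the driver limit
        val ps = conn.prepareStatement(
          "INSERT INTO t VALUES (" + Seq.fill(columns)("?").mkString(",") + ")")
        var pending = 0
        for (row <- fetchRows()) {
          row.zipWithIndex.foreach { case (v, i) => ps.setObject(i + 1, v) }
          ps.addBatch(); pending += 1
          if (pending >= maxBatch) { ps.executeBatch(); pending = 0 }
        }
        if (pending > 0) ps.executeBatch()
        ps.close(); conn.close()
      }
      // Stub standing in for the real data source.
      def fetchRows(): Iterator[Seq[Any]] = Iterator.empty
    }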
Error message: java.lang.NoClassDefFoundError: scala/xml/MetaData. Cause: a missing jar; add this dependency:

    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-xml</artifactId>
      <version>2.11.0-M4</version>
    </dependency>…
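For an sbt build, the same coordinates map mechanically to:

    libraryDependencies += "org.scala-lang" % "scala-xml" % "2.11.0-M4"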
spark 2.1.1. Running SQL in Spark fails: insert overwrite table test_parquet_table select * from dummy errors out with: org.apache.spark.SparkException: Task failed while writing rows. at org.apache.spark.sql.hive.SparkHiveDynamicPartitionWriterContainer.writeToFile(hiveWrite…
spark 2.1.1. 1. Reproducing the problem. Problem code:

    object MethodPositionTest {
      val sparkConf = new SparkConf().setAppName("MethodPositionTest")
      val sc = new SparkContext(sparkConf)
      val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
      def main(args : Arra…
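The excerpt stops before the failure, but building SparkConf, SparkContext, and SparkSession as fields of the object rather than inside main is a classic pitfall: any task closure that touches the object forces executors to re-run the object initializer and attempt to construct a driver-side context there. A sketch of the usual correction (assuming that is the issue the post goes on to describe):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SparkSession

    object MethodPositionTest {
      def main(args: Array[String]): Unit = {
        // Driver-side handles built inside main, so object initialization
        // on executors stays trivial.
        val sparkConf = new SparkConf().setAppName("MethodPositionTest")
        val sc = new SparkContext(sparkConf)
        val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
        // ... job logic ...
        sc.stop()
      }
    }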
spark 2.1.1. 1. Reproducing the problem. spark-submit --master local[*] --class app.package.AppClass --jars /jarpath/zkclient-0.3.jar --driver-memory 1g app.jar fails with: Java HotSpot(TM) 64-Bit Server VM warning: Setting CompressedClassSpaceSize has no effect when compressed cl…
When Spark code reads external data (hive, hbase, text files, etc.) into a DataFrame, we usually map over the rows and get each field; if a field of the original data is null and we convert it straight to a string without a check, we get java.lang.NullPointerException. Example code:

    val data = spark.sql(sql)
    val rdd = data.rdd.map(record => {
      val recordSize = record.size
      for(i <- 0 to…
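A minimal null-safe variant of that loop, assuming the goal is one string per column (the empty-string default for nulls is an assumption, not from the post):

    val data = spark.sql(sql)
    val rdd = data.rdd.map { record =>
      (0 until record.size).map { i =>
        // Guard with isNullAt before converting, avoiding the NPE on null fields.
        if (record.isNullAt(i)) "" else record.get(i).toString
      }
    }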