After creating a Kudu table from Impala, trying to read it directly from Hive or Spark SQL fails with: Caused by: java.lang.ClassNotFoundException: com.cloudera.kudu.hive.KuduStorageHandler at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424…
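The excerpt is cut off before the resolution; a commonly used workaround is to bypass the Hive storage handler entirely and read the table through the kudu-spark connector. A minimal sketch, assuming a kudu-spark2 jar on the classpath; the master address and table name are placeholders (Impala registers Kudu tables under the impala:: prefix):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("kudu-read").getOrCreate()
    // Read straight from Kudu instead of through the Hive metastore entry
    // that references com.cloudera.kudu.hive.KuduStorageHandler.
    val df = spark.read
      .options(Map("kudu.master" -> "master:7051",
                   "kudu.table"  -> "impala::default.my_table"))
      .format("org.apache.kudu.spark.kudu")
      .load()
    df.show()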
spark 2.4.3. Steps for Spark to read a Hive table: 1) hive-site.xml: put hive-site.xml under $SPARK_HOME/conf; 2) enableHiveSupport: SparkSession.builder.enableHiveSupport().getOrCreate(); 3) test code: val sparkConf = new SparkConf().setAppName(getName) val sc = new SparkContext(sparkConf)…
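A minimal, self-contained version of the test described above; the database and table names are placeholders:

    import org.apache.spark.sql.SparkSession

    // Assumes hive-site.xml has been copied to $SPARK_HOME/conf (step 1).
    val spark = SparkSession.builder()
      .appName("read-hive-test")
      .enableHiveSupport() // step 2: without this only an in-memory catalog is used
      .getOrCreate()

    spark.sql("select * from temp.test1").show()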
spark-2.4.2, kudu-1.7.0. First attempt: 1) manually add the jar to the classpath: spark-2.4.2-bin-hadoop2.6 + kudu-spark2_2.11-1.7.0-cdh5.16.1.jar # bin/spark-shell scala> val df = spark.read.options(Map("kudu.master" -> "master:7051", "kudu.table" ->…
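A plausible completion of the truncated spark-shell session, with the jar passed via --jars; the table name is a placeholder. One caveat worth noting: the prebuilt Spark 2.4.2 binaries are compiled against Scala 2.12, while kudu-spark2_2.11 targets Scala 2.11, so a Scala 2.11 build of Spark may be needed for this combination to work:

    # bin/spark-shell --jars kudu-spark2_2.11-1.7.0-cdh5.16.1.jar
    scala> val df = spark.read.options(Map("kudu.master" -> "master:7051",
         |   "kudu.table" -> "impala::default.test_table"))
         |   .format("org.apache.kudu.spark.kudu").load()
    scala> df.printSchema()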
When Spark SQL executes insert overwrite table, the number of files written to the new table or new partition is sometimes 200 and sometimes an arbitrary number. Why the difference? First look at how Spark SQL executes insert overwrite table: 1 create a staging directory, e.g. .hive-staging_hive_2018-06-23_00-39-39_825_3122897139441535352-2312/-ext-10000; 2 write the data to the staging directory; 3 execute loadTable or loadPartiti…
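The 200 matches the default of spark.sql.shuffle.partitions: when the plan ends in a shuffle, each of the 200 reduce tasks writes its own file into the staging directory; with no shuffle, the file count follows the number of input tasks. A sketch of controlling the output file count under that assumption; table names are placeholders:

    // Lower the shuffle parallelism for the whole session...
    spark.sql("set spark.sql.shuffle.partitions=50")

    // ...or fix the file count for one write by repartitioning explicitly.
    spark.table("test_table_another")
      .repartition(10)
      .write
      .mode("overwrite")
      .insertInto("test_table")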
spark 2.4. In Spark SQL, after executing set hive.exec.max.dynamic.partitions=10000; the next sql still fails with: org.apache.hadoop.hive.ql.metadata.HiveException: Number of dynamic partitions created is 1001, which is more than 1000. To solve this try to set hive.exec.max.dynamic.p…
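A commonly reported workaround (an assumption here, since the excerpt is cut off): the embedded Hive client reads its settings from the Hadoop configuration fixed at startup, so an in-session SET never reaches it. Passing the value through spark.hadoop.* at startup does:

    // spark-submit --conf spark.hadoop.hive.exec.max.dynamic.partitions=10000 ...
    // or equivalently, before the SparkSession is created:
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("dynamic-partitions")
      .config("spark.hadoop.hive.exec.max.dynamic.partitions", "10000")
      .enableHiveSupport()
      .getOrCreate()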
Ran into a problem today: a Spark application executes sql statements in a loop, each writing data to a table, e.g. insert overwrite table test_table partition(dt) select * from test_table_another; There is no logic other than executing the sql, and each sql corresponds to one job, yet the Spark web UI shows a pause of several minutes between jobs, very regularly, between every pair of jobs. Curious, right? The answer: when Spark executes insert overwrit…
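For reference, a minimal sketch of the loop pattern described above:

    // Each iteration issues one sql, i.e. one job; the multi-minute gap
    // shows up between consecutive iterations, not inside them.
    for (i <- 1 to 10) {
      spark.sql("insert overwrite table test_table partition(dt) " +
                "select * from test_table_another")
    }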
The implementation of limit in Hive was discussed earlier, see https://www.cnblogs.com/barneywill/p/10109217.html. Now look at the implementation of limit in Spark SQL, starting with the execution plan: spark-sql> explain select * from test1 limit 10; == Physical Plan == CollectLimit 10 +- HiveTableScan [id#35], MetastoreRelation temp, test1 Time taken…
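The same physical operator can be inspected through the DataFrame API, which is handy outside the spark-sql shell:

    spark.table("test1").limit(10).explain()
    // prints the same plan as above:
    // == Physical Plan ==
    // CollectLimit 10
    // +- HiveTableScan [id#35], MetastoreRelation temp, test1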
spark 2.1.1. We wanted to monitor the progress of Spark on YARN applications, but after submission the reported progress stays at 10% until the application succeeds or fails, at which point it suddenly jumps to 100%. Strange. Look at how Spark on YARN submits an application: at submit time Spark replaces the main class with Client: childMainClass = "org.apache.spark.deploy.yarn.Client" For details on the spark-submit process see: https://www.cnblog…
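For the monitoring side, a sketch of how such progress is typically polled with the YARN client API (an assumption: the excerpt does not show the monitoring code); the application id is a placeholder:

    import org.apache.hadoop.yarn.client.api.YarnClient
    import org.apache.hadoop.yarn.conf.YarnConfiguration
    import org.apache.hadoop.yarn.util.ConverterUtils

    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new YarnConfiguration())
    yarnClient.start()

    // getProgress returns a float in [0, 1]; for Spark apps it sits at
    // 0.1 (the 10% described above) until the application terminates.
    val appId = ConverterUtils.toApplicationId("application_1545000000000_0001")
    println(s"progress: ${yarnClient.getApplicationReport(appId).getProgress}")

    yarnClient.stop()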
After submitting with spark on yarn --deploy-mode cluster, the application is already running on YARN, but the spark-submit process stays around until the application finishes; only then does the submitting process exit. Sometimes this is inconvenient, and if you are not careful it can tie up a lot of resources, for example when submitting a Spark Streaming application. Recently I found a configuration in Spark that changes this behavior; just add one conf when submitting: --conf spark.yarn.submit.waitAppCompletion=false org.apache.spa…
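For reference, a full submit command with the flag in place; the class and jar names are placeholders:

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.submit.waitAppCompletion=false \
      --class com.example.StreamingApp \
      streaming-app.jar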
Running Spark locally fails with: 18/12/18 12:56:55 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1. (the same warning repeats for each retry)…
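The excerpt stops before the fix; a common remedy (an assumption here) is to pin the driver's bind address, either with the SPARK_LOCAL_IP environment variable or with spark.driver.bindAddress (available since Spark 2.1); 127.0.0.1 below is a placeholder for any locally resolvable address:

    // export SPARK_LOCAL_IP=127.0.0.1   (environment-variable route), or:
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("local-test")
      .config("spark.driver.bindAddress", "127.0.0.1")
      .getOrCreate()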