[Original] Big Data Basics: Drill (2) Running Drill 1.14 with Hive 2.1.1
Problem
The latest Drill release is 1.14. Starting with 1.13, the Hive version Drill bundles was upgraded to 2.3.2; see the 1.13 release notes:
The Hive client for Drill is updated to version 2.3.2. With the update, Drill supports queries on transactional (ACID) and non-transactional Hive bucketed ORC tables. The updated libraries are backward compatible with earlier versions of the Hive server and metastore. (DRILL-5978)
Despite the backward-compatibility claim, forcing Drill 1.14 to connect to Hive 2.1.1 runs into trouble because the metastore Thrift interface changed between the two Hive versions: the updated client calls get_table_req, a Thrift method the 2.1.1 metastore does not expose. The visible symptom is that show tables returns an empty result, with the following error in the log:
2018-10-10 13:03:54,355 [244277c5-ba8c-b6c8-8f99-2cdde9f3c4d8:frag:0:0] WARN o.a.d.e.s.h.DrillHiveMetaStoreClient - Failure while attempting to get hive table. Retries once.
org.apache.thrift.TApplicationException: Invalid method name: 'get_table_req'
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111) ~[drill-hive-exec-shaded-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$TableLoader.load(DrillHiveMetaStoreClient.java:531) [drill-storage-hive-core-1.14.0.jar:1.14.0]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache.get(LocalCache.java:3937) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) [guava-18.0.jar:na]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824) [guava-18.0.jar:na]
at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$HiveClientWithCaching.getHiveReadEntry(DrillHiveMetaStoreClient.java:495) [drill-storage-hive-core-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory$HiveSchema.getSelectionBaseOnName(HiveSchemaFactory.java:230) [drill-storage-hive-core-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory$HiveSchema.getDrillTable(HiveSchemaFactory.java:210) [drill-storage-hive-core-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.hive.schema.HiveDatabaseSchema.getTable(HiveDatabaseSchema.java:62) [drill-storage-hive-core-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.AbstractSchema.getTablesByNames(AbstractSchema.java:239) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.AbstractSchema.getTableNamesAndTypes(AbstractSchema.java:257) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator$Tables.visitTables(InfoSchemaRecordGenerator.java:301) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:216) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:209) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:209) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaRecordGenerator.scanSchema(InfoSchemaRecordGenerator.java:196) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaTableType.getRecordReader(InfoSchemaTableType.java:58) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:34) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.store.ischema.InfoSchemaBatchCreator.getBatch(InfoSchemaBatchCreator.java:30) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:159) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getRecordBatch(ImplCreator.java:137) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getChildren(ImplCreator.java:182) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getRootExec(ImplCreator.java:110) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.physical.impl.ImplCreator.getExec(ImplCreator.java:87) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:261) [drill-java-exec-1.14.0.jar:1.14.0]
at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.14.0.jar:1.14.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Recompiling
So I tried rebuilding Drill 1.14 with the Hive dependency downgraded to 2.1.1. Download the source:
http://mirror.bit.edu.cn/apache/drill/drill-1.14.0/apache-drill-1.14.0-src.tar.gz
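For example (illustrative; any Apache mirror works):
wget http://mirror.bit.edu.cn/apache/drill/drill-1.14.0/apache-drill-1.14.0-src.tar.gz
tar xzf apache-drill-1.14.0-src.tar.gz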
POM
Change the Hive version property in the root pom.xml:
<hive.version>2.3.2</hive.version>
becomes
<hive.version>2.1.1</hive.version>
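A quick way to see everywhere the Hive version and hive-exec are referenced across the build, before and after the change (illustrative; run from the source root):
grep -rn "<hive.version>" --include=pom.xml .
grep -rn "hive-exec" --include=pom.xml .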
Rebuilding and repackaging did not help, though: the problem persisted. Inspection showed that the version change only carried through to three Hive-related jars under jars/3rdparty, which went from 2.3.2 to 2.1.1:
hive-contrib-2.1.1.jar
hive-hbase-handler-2.1.1.jar
hive-metastore-2.1.1.jar
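A simple check of which jars actually picked up the new version (illustrative; run inside the unpacked distribution directory, assuming the default layout):
ls jars/3rdparty | grep hive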
The jar in the stack trace, however, is drill-hive-exec-shaded-1.14.0.jar, which bundles hive-exec and its dependencies via the maven-shade-plugin:
<artifactId>maven-shade-plugin</artifactId>
<configuration>
  <artifactSet>
    <includes>
      <include>org.apache.hive:hive-exec</include>
    </includes>
  </artifactSet>
</configuration>
Moreover, the hive-exec dependency it shades did not reference the configured hive.version property:
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <scope>compile</scope>
</dependency>
As a result, the hive-exec packed into the shaded jar was still 2.3.2. Add an explicit version element that points at hive.version:
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>${hive.version}</version>
  <scope>compile</scope>
</dependency>
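To confirm that Maven now resolves hive-exec 2.1.1 for the shaded module, a dependency-tree check like the following should work (illustrative; contrib/storage-hive/hive-exec-shade is the module path in the 1.14 source tree, but verify locally):
mvn dependency:tree -pl contrib/storage-hive/hive-exec-shade | grep hive-exec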
Repackage, and the problem disappears; show tables works normally.
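For reference, a minimal rebuild sketch (mvn clean install -DskipTests is the standard Drill build; the distribution tarball should land under distribution/target):
cd /path/to/apache-drill-1.14.0-src
mvn clean install -DskipTests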
Hadoop Location
The official documentation states:
Apache Drill users must tell Drill-on-YARN the location of your Hadoop install. Set the HADOOP_HOME environment variable in $DRILL_SITE/drill-env.sh to point to your Hadoop installation:
export HADOOP_HOME=/path/to/hadoop-home
But even with this set, two problems remained:
1) Error:
Diagnostics: File file:/user/drill/site.tar.gz does not exist
java.io.FileNotFoundException: File file:/user/drill/site.tar.gz does not exist
The cause is most likely that, without core-site.xml on Drill's classpath, fs.defaultFS falls back to the local file system, so the uploaded site archive is looked for under file:/ rather than on HDFS. Add a symlink:
ln -s $HADOOP_HOME/etc/hadoop/core-site.xml $DRILL_SITE/core-site.xml
2) Actual queries then fail with an error that the HDFS nameservice (hdfs_name) cannot be resolved; the HA nameservice mapping lives in hdfs-site.xml, so add another symlink:
ln -s $HADOOP_HOME/etc/hadoop/hdfs-site.xml $DRILL_SITE/hdfs-site.xml
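After adding both links, verify them and restart the Drill-on-YARN application (illustrative; drill-on-yarn.sh is the client script described in the Drill-on-YARN docs, check the exact invocation for your setup):
ls -l $DRILL_SITE/core-site.xml $DRILL_SITE/hdfs-site.xml
$DRILL_HOME/bin/drill-on-yarn.sh start
Then re-check in sqlline with use hive; followed by show tables;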