Will SparkSQL Replace Hive?
Tags: sparksql, hive
https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html
https://cwiki.apache.org/confluence/display/Hive/Home
【Serves data warehousing; supports strong standard SQL】
Apache Hive
The Apache Hive™ data warehouse software facilitates reading, writing, and managing large datasets that reside in distributed storage and are queried using SQL syntax.
【Spark is one of the execution engines】
Built on top of Apache Hadoop™, Hive provides the following features:
- Tools to enable easy access to data via SQL, thus enabling data warehousing tasks such as extract/transform/load (ETL), reporting, and data analysis.
- A mechanism to impose structure on a variety of data formats
- Access to files stored either directly in Apache HDFS™ or in other data storage systems such as Apache HBase™
- Query execution via Apache Tez™, Apache Spark™, or MapReduce (the engine is chosen per session; see the sketch after this list)
- Procedural language with HPL-SQL
- Sub-second query retrieval via Hive LLAP, Apache YARN and Apache Slider.
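Because the execution engine is a per-session Hive setting, the same HiveQL can run on MapReduce, Tez, or Spark without rewriting. Below is a minimal sketch via the HiveServer2 JDBC driver; the endpoint, credentials, and table name are assumptions for illustration:

```scala
import java.sql.DriverManager

object EngineSwitchSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical HiveServer2 endpoint; requires the Hive JDBC driver on the classpath.
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default", "hive", "")
    val stmt = conn.createStatement()

    // Choose the engine for this session: "mr", "tez", or "spark".
    stmt.execute("SET hive.execution.engine=spark")

    // The query itself is identical regardless of the engine underneath.
    val rs = stmt.executeQuery("SELECT count(*) FROM some_table")
    while (rs.next()) println(rs.getLong(1))
    conn.close()
  }
}
```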
Hive provides standard SQL functionality, including many of the later SQL:2003 and SQL:2011 features for analytics.
Hive's SQL can also be extended with user code via user defined functions (UDFs), user defined aggregates (UDAFs), and user defined table functions (UDTFs).
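To illustrate the UDF extension point, here is a minimal sketch of a scalar UDF written in Scala (Hive UDFs are typically Java, but any JVM language works); the class name and jar path below are hypothetical:

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// A classic one-value-in/one-value-out UDF; Hive finds evaluate() by reflection.
class UpperUdf extends UDF {
  def evaluate(s: Text): Text =
    if (s == null) null else new Text(s.toString.toUpperCase)
}
```

Once packaged into a jar, it is registered from SQL with `ADD JAR /path/to/udfs.jar;` followed by `CREATE TEMPORARY FUNCTION my_upper AS 'UpperUdf';`, and can then be used like any built-in function.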
There is not a single "Hive format" in which data must be stored. Hive comes with built-in connectors for comma- and tab-separated values (CSV/TSV) text files, Apache Parquet™, Apache ORC™, and other formats.
Users can extend Hive with connectors for other formats. Please see File Formats and Hive SerDe in the Developer Guide for details.
Hive is not designed for online transaction processing (OLTP) workloads. It is best used for traditional data warehousing tasks.
Hive is designed to maximize scalability (scale out with more machines added dynamically to the Hadoop cluster), performance, extensibility, fault-tolerance, and loose-coupling with its input formats.
Components of Hive include HCatalog and WebHCat.
- HCatalog is a component of Hive. It is a table and storage management layer for Hadoop that enables users with different data processing tools — including Pig and MapReduce — to more easily read and write data on the grid.
- WebHCat provides a service that you can use to run Hadoop MapReduce (or YARN), Pig, Hive jobs or perform Hive metadata operations using an HTTP (REST style) interface.
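As a taste of that REST interface, a minimal sketch that asks WebHCat for its status; the host name is an assumption, and 50111 is WebHCat's default port:

```scala
import scala.io.Source

object WebHCatStatusSketch {
  def main(args: Array[String]): Unit = {
    // WebHCat exposes its API under /templeton/v1; the host is hypothetical.
    val url = "http://webhcat-host:50111/templeton/v1/status"
    println(Source.fromURL(url).mkString) // expected: {"status":"ok","version":"v1"}
  }
}
```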
https://issues.apache.org/jira/browse/HIVE-7292
Spark, as an open-source data analytics cluster computing framework, has gained significant momentum recently. Many Hive users already have Spark installed as their computing backbone. To take advantage of Hive, they still need to have either MapReduce or Tez on their cluster. This initiative will provide users with a new alternative so that they can consolidate their backend.
Secondly, providing such an alternative further increases Hive's adoption, as it exposes Spark users to a viable, feature-rich, de facto standard SQL tool on Hadoop.
【Performs well when there are multiple reducer stages】
Finally, allowing Hive to run on Spark also has performance benefits. Hive queries, especially those involving multiple reducer stages, will run faster, thus improving user experience as Tez does.
This is an umbrella JIRA that will cover many coming subtasks. The design doc will be attached here shortly, and will be on the wiki as well. Feedback from the community is greatly appreciated!
【Shares Hive's metadata】
Does SparkSQL have no metadata of its own? Does it create temporary metadata on the fly, or use Hive's metastore directly?
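In fact Spark SQL can do either: without Hive support it keeps catalog information in a session-local catalog, and with Hive support enabled it talks to the shared Hive metastore. A minimal sketch, assuming a Spark build with Hive support and a hive-site.xml on the classpath; the table name is hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object SharedMetastoreSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-sql-on-hive-metastore")
      .enableHiveSupport() // reuse the Hive metastore instead of a temporary catalog
      .getOrCreate()

    // Tables created in Hive are visible here without copying any metadata.
    spark.sql("SHOW TABLES").show()
    spark.sql("SELECT count(*) FROM some_hive_table").show() // hypothetical table
    spark.stop()
  }
}
```

With this setup, tables created in Hive are queryable from Spark SQL and vice versa, since both read the same metastore.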