A few words before we begin:

Before going further, I strongly recommend reading the following two posts first:

Revisiting the easily overlooked details of JDBC programming against hive-1.0.0 and hive-1.2.1

Hadoop Hive concept study series: differences among and setup of the three Hive deployment modes, plus HiveServer2, HWI, and beeline environment setup (Part 5)

  Version 1

First start the Hadoop cluster, or at least HDFS.
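
If the Hadoop daemons are not already running, bringing up just HDFS and checking it looks roughly like this (a minimal sketch; the prompt path assumes $HADOOP_HOME points at the hadoop-2.4.1 install that shows up in the logs below):

[hadoop@weekend110 hadoop-2.4.1]$ cd $HADOOP_HOME
[hadoop@weekend110 hadoop-2.4.1]$ sbin/start-dfs.sh     # HDFS only; run sbin/start-yarn.sh as well if YARN is needed
[hadoop@weekend110 hadoop-2.4.1]$ jps                   # expect at least NameNode, DataNode, SecondaryNameNode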

You can also set MySQL to start automatically at boot: chkconfig mysqld on
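
For reference, enabling and then verifying the boot-time setting might look like this (run as root; chkconfig applies to SysV-init distributions such as CentOS 6):

[root@weekend110 ~]# chkconfig mysqld on         # start mysqld automatically at boot
[root@weekend110 ~]# chkconfig --list mysqld     # confirm it is "on" for runlevels 2-5
[root@weekend110 ~]# service mysqld status       # check whether it is running right now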

[hadoop@weekend110 app]$ su root
Password:
[root@weekend110 app]# service mysqld start  (my MySQL is installed under /home/hadoop/app)
Starting mysqld: [ OK ]
[root@weekend110 app]# su hadoop
[hadoop@weekend110 app]$ cd hive-0.12.0/
[hadoop@weekend110 hive-0.12.0]$ bin/hive
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/11/02 11:42:54 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive>
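
Once the hive> prompt appears, a quick sanity check that the CLI can reach the MySQL metastore is to run a couple of trivial statements (a minimal sketch; the database name is only an example):

hive> show databases;
hive> create database if not exists sanity_check;
hive> show databases;
hive> drop database sanity_check;
hive> quit;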

  Version 2

First start the Hadoop cluster, or at least HDFS.

[hadoop@djt002 hadoop-2.6.0]$ jps
2870 NodeManager
2612 SecondaryNameNode
2767 ResourceManager
3189 Jps
2430 DataNode
2334 NameNode
[hadoop@djt002 hadoop-2.6.0]$

  Again, MySQL can also be set to start at boot: chkconfig mysqld on

[hadoop@djt002 local]$ pwd
/usr/local
[hadoop@djt002 local]$ su root
Password:
[root@djt002 local]# service mysqld start      (my MySQL is installed under /usr/local)
Starting mysqld: [ OK ]
[root@djt002 local]# su hadoop
[hadoop@djt002 local]$ pwd
/usr/local
[hadoop@djt002 local]$ cd hive/hive-1.0.0/
[hadoop@djt002 hive-1.0.0]$ pwd
/usr/local/hive/hive-1.0.0
[hadoop@djt002 hive-1.0.0]$ bin/hive

Logging initialized using configuration in jar:file:/usr/local/hive/hive-1.0.0/lib/hive-common-1.0.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive/hive-1.0.0/lib/hive-jdbc-1.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive> exit;
hive>
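
Since the metastore in my setup lives in MySQL, another way to confirm that Hive initialized it properly is to inspect the metastore tables directly (a sketch that assumes the metastore database is named hive; substitute your own database name and credentials):

[hadoop@djt002 local]$ mysql -u root -p -e "use hive; show tables;"     # should list DBS, TBLS, COLUMNS_V2, SDS, ...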

  Version 3

  For example, my setup here is a 3-node cluster made up of HadoopMaster, HadoopSlave1, and HadoopSlave2, but Hive is installed only on the HadoopSlave1 node.

  Here the Hadoop version is again hadoop-2.6.0.tar.gz, but this time paired with hive-1.2.1. For the details of how to set this up, see the following posts:

Revisiting the easily overlooked details of JDBC programming against hive-1.0.0 and hive-1.2.1

A practical compendium of solutions to common Hive problems

Baffling errors that appear during a by-the-book Hive installation! (Completely solved) Either of the two methods works


  On the HadoopSlave1 node:
[hadoop@HadoopSlave1 hive-1.2.1]$ su root
Password:
[root@HadoopSlave1 hive-1.2.1]# cd /home/hadoop/app/
[root@HadoopSlave1 app]# service mysqld start        (my MySQL is installed under /home/hadoop/app)
Starting mysqld: [ OK ]

  On the HadoopMaster node:
[root@HadoopMaster app]# su hadoop
[hadoop@HadoopMaster app]$ cd $HADOOP_HOME
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
3612 NameNode
4374 Jps
3952 ResourceManager
3805 SecondaryNameNode

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
3004 Jps
2269 DataNode
2378 NodeManager

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2255 DataNode
2998 Jps
2365 NodeManager
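
As an aside, sbin/start-all.sh still works on Hadoop 2.x but is marked deprecated; the equivalent two-step form is:

[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-dfs.sh      # NameNode, SecondaryNameNode, DataNodes
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-yarn.sh     # ResourceManager, NodeManagers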

[hadoop@HadoopSlave1 hadoop-2.6.0]$ cd $HIVE_HOME
[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive
hive>

  In addition, open another remote session to HadoopSlave1:

[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive --service hiveserver2 &
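
HiveServer2 listens on port 10000 by default, so from another session you can verify it with beeline; a minimal sketch, assuming the hadoop user and no additional authentication configured:

[hadoop@HadoopSlave1 hive-1.2.1]$ bin/beeline -u jdbc:hive2://HadoopSlave1:10000 -n hadoop -e "show databases;"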
