HBase multi-node cluster: detailed startup steps (3 or 5 nodes). There are two cases:

  1. HBASE_MANAGES_ZK set to false (externally installed ZooKeeper) (recommended)

  2. HBASE_MANAGES_ZK set to true (ZooKeeper bundled with HBase)

1. HBASE_MANAGES_ZK set to false (recommended)

  In pseudo-distributed mode (e.g., on weekend110):
  The default value of HBASE_MANAGES_ZK in hbase-env.sh is true, which means HBase uses its own bundled ZooKeeper instance. However, that instance can only serve HBase running in standalone or pseudo-distributed mode.

  In fully distributed mode (e.g., HadoopMaster, HadoopSlave1, HadoopSlave2), you must set up your own ZooKeeper quorum.
  With HBASE_MANAGES_ZK set to true in hbase-env.sh, HBase runs ZooKeeper as part of itself when it starts; the ZooKeeper process shows up in jps as HQuorumPeer.
  With HBASE_MANAGES_ZK set to false, ZooKeeper must be started manually on every node first (each externally started server shows up in jps as QuorumPeerMain); HBase is then started on the master node, where its process shows up as HMaster (on HadoopMaster).
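
A minimal configuration sketch for the external-ZooKeeper case, using the host names from this post; the client port 2181 is the ZooKeeper default, and the property names are the standard HBase ones (shown here as comments rather than XML):

# conf/hbase-env.sh (a shell script sourced by the HBase start scripts):
export HBASE_MANAGES_ZK=false

# conf/hbase-site.xml must point HBase at the external quorum, e.g.:
#   hbase.zookeeper.quorum              = HadoopMaster,HadoopSlave1,HadoopSlave2
#   hbase.zookeeper.property.clientPort = 2181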

  If HBASE_MANAGES_ZK is set to false:
1. First start Hadoop on HadoopMaster.
2. Start ZooKeeper manually, node by node, on HadoopMaster, HadoopSlave1, and HadoopSlave2.
3. Then start HBase on HadoopMaster. (A wrapper-script sketch follows this list.)
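
If you prefer not to run the three steps by hand, a sketch of a wrapper script run from HadoopMaster could look like the following. It assumes passwordless ssh between the nodes and the installation paths that appear in the transcripts below; the script name start-cluster.sh is just an example:

#!/usr/bin/env bash
# start-cluster.sh -- start Hadoop, then ZooKeeper on every node, then HBase
set -e

HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0
ZK_HOME=/home/hadoop/app/zookeeper-3.4.6
HBASE_HOME=/home/hadoop/app/hbase-1.2.3
NODES="HadoopMaster HadoopSlave1 HadoopSlave2"

# 1. Hadoop (HDFS + YARN) on the master
"$HADOOP_HOME"/sbin/start-dfs.sh
"$HADOOP_HOME"/sbin/start-yarn.sh

# 2. ZooKeeper on every node, one ssh call per host
for node in $NODES; do
  ssh "$node" "$ZK_HOME/bin/zkServer.sh start"
done

# 3. HBase on the master (HBASE_MANAGES_ZK=false, so no HQuorumPeer is launched)
"$HBASE_HOME"/bin/start-hbase.sh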

  Step 1: start Hadoop on HadoopMaster.
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
1998 Jps
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/02 19:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HadoopMaster]
HadoopMaster: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-namenode-HadoopMaster.out
HadoopSlave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave1.out
HadoopSlave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-HadoopMaster.out
16/11/02 20:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-HadoopMaster.out
HadoopSlave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave2.out
HadoopSlave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave1.out
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
2281 SecondaryNameNode
2124 NameNode
2430 ResourceManager
2736 Jps

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
1877 Jps
[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2003 NodeManager
2199 Jps
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
1893 Jps
[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2195 Jps
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$
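
Rather than logging in to each slave just to run jps, a quick check from HadoopMaster (assuming passwordless ssh, which the start scripts already require) is:

# print the Java daemons running on every node
for node in HadoopMaster HadoopSlave1 HadoopSlave2; do
  echo "== $node =="
  ssh "$node" jps
done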

  Step 2: start ZooKeeper manually, node by node, on HadoopMaster, HadoopSlave1, and HadoopSlave2.
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd zookeeper-3.4.6/
[hadoop@HadoopMaster zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave1 zookeeper-3.4.6]$ bin/zkServer.sh start
[hadoop@HadoopSlave2 zookeeper-3.4.6]$ bin/zkServer.sh start
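
After the three servers are started, zkServer.sh status reports whether each one came up as leader or follower; checked from HadoopMaster in one loop (again assuming passwordless ssh):

for node in HadoopMaster HadoopSlave1 HadoopSlave2; do
  echo "== $node =="
  ssh "$node" /home/hadoop/app/zookeeper-3.4.6/bin/zkServer.sh status
done
# each externally started server also shows up in jps as QuorumPeerMain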

  Step 3: start HBase on HadoopMaster.
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
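
Once start-hbase.sh returns, the master web UI (port 16010 by default in HBase 1.x) is a quick way to confirm that the HMaster is up and the regionservers have registered; the curl call below is only a reachability check and assumes the default info port:

# expect 200 if the master UI is serving; open the URL in a browser for details
curl -s -o /dev/null -w "%{http_code}\n" http://HadoopMaster:16010/master-status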

  Enter the hbase shell. In this setup it can only be entered on HadoopMaster; on the slave nodes the hbase command is not on the PATH, as the attempts below show:
[hadoop@HadoopMaster hbase-1.2.3]$ hbase shell
2016-11-02 20:07:31,288 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-1.2.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0>

[hadoop@HadoopSlave1 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave2 hadoop-2.6.0]$
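
From the shell on HadoopMaster, a short smoke test confirms that the cluster can serve writes and reads; the table name smoke_test and column family cf are just examples:

status                                        # cluster summary: servers, dead servers, load
create 'smoke_test', 'cf'                     # create a table with one column family
put 'smoke_test', 'row1', 'cf:msg', 'hello'   # write one cell
scan 'smoke_test'                             # should print the row just written
disable 'smoke_test'
drop 'smoke_test'                             # clean up the test table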

Exit the hbase shell:
hbase(main):001:0> exit
[hadoop@HadoopMaster hbase-1.2.3]$

2. HBASE_MANAGES_ZK set to true

  In pseudo-distributed mode (e.g., on weekend110 or djt002):
The default value of HBASE_MANAGES_ZK in hbase-env.sh is true, which means HBase uses its own bundled ZooKeeper instance.
However, that instance can only serve HBase running in standalone or pseudo-distributed mode.

  In fully distributed mode (e.g., HadoopMaster, HadoopSlave1, HadoopSlave2), you must set up your own ZooKeeper quorum.
  With HBASE_MANAGES_ZK set to true in hbase-env.sh, HBase runs ZooKeeper as part of itself when it starts; the ZooKeeper process shows up in jps as HQuorumPeer.
  With HBASE_MANAGES_ZK set to false, ZooKeeper must be started manually on every node first (each externally started server shows up in jps as QuorumPeerMain); HBase is then started on the master node, where its process shows up as HMaster (on HadoopMaster).

  If HBASE_MANAGES_ZK is set to true:
1. First start Hadoop on HadoopMaster.
2. Then start HBase. (A minimal hbase-env.sh sketch follows this list.)
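
A minimal hbase-env.sh sketch for this case; hbase.zookeeper.quorum in conf/hbase-site.xml still lists HadoopMaster, HadoopSlave1 and HadoopSlave2, and start-hbase.sh then launches an HQuorumPeer on each of those hosts:

# conf/hbase-env.sh: let HBase start and stop ZooKeeper itself
export HBASE_MANAGES_ZK=true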

  Step 1: start Hadoop on HadoopMaster.
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
1998 Jps
[hadoop@HadoopMaster hadoop-2.6.0]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/02 19:59:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [HadoopMaster]
HadoopMaster: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-namenode-HadoopMaster.out
HadoopSlave1: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave1.out
HadoopSlave2: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-datanode-HadoopSlave2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-HadoopMaster.out
16/11/02 20:00:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-HadoopMaster.out
HadoopSlave2: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave2.out
HadoopSlave1: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-HadoopSlave1.out
[hadoop@HadoopMaster hadoop-2.6.0]$ jps
2281 SecondaryNameNode
2124 NameNode
2430 ResourceManager
2736 Jps

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
1877 Jps
[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2003 NodeManager
2199 Jps
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
1893 Jps
[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2195 Jps
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$

  Step 2: then start HBase.
[hadoop@HadoopMaster hadoop-2.6.0]$ cd ..
[hadoop@HadoopMaster app]$ cd hbase-1.2.3
[hadoop@HadoopMaster hbase-1.2.3]$ bin/start-hbase.sh
HadoopSlave2: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave2.out
HadoopSlave1: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopSlave1.out
HadoopMaster: starting zookeeper, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-HadoopMaster.out
starting master, logging to /home/hadoop/app/hbase-1.2.3/logs/hbase-hadoop-master-HadoopMaster.out
HadoopSlave1: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave1.out
HadoopSlave2: starting regionserver, logging to /home/hadoop/app/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-HadoopSlave2.out
[hadoop@HadoopMaster hbase-1.2.3]$ jps
3201 Jps
2281 SecondaryNameNode
2951 HQuorumPeer
2124 NameNode
2430 ResourceManager
3013 HMaster
[hadoop@HadoopMaster hbase-1.2.3]$

[hadoop@HadoopSlave1 hadoop-2.6.0]$ jps
2336 HRegionServer
2003 NodeManager
2396 Jps
2257 HQuorumPeer
1928 DataNode
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ jps
2019 NodeManager
2254 HQuorumPeer
2451 Jps
2333 HRegionServer
1945 DataNode
[hadoop@HadoopSlave2 hadoop-2.6.0]$
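
To confirm that the HBase-managed quorum is actually serving, the ZooKeeper client bundled with HBase can be pointed at it; hbase zkcli ships with HBase 1.x and connects using the quorum from hbase-site.xml:

[hadoop@HadoopMaster hbase-1.2.3]$ bin/hbase zkcli
# inside the client, `ls /hbase` should list znodes such as master, rs and meta-region-server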

  Enter the hbase shell. In this setup it can only be entered on HadoopMaster; on the slave nodes the hbase command is not on the PATH, as the attempts below show:
[hadoop@HadoopMaster hbase-1.2.3]$ hbase shell
2016-11-02 20:07:31,288 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hbase-1.2.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016

hbase(main):001:0>

[hadoop@HadoopSlave1 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave1 hadoop-2.6.0]$

[hadoop@HadoopSlave2 hadoop-2.6.0]$ hbase shell
-bash: hbase: command not found
[hadoop@HadoopSlave2 hadoop-2.6.0]$

Exit the hbase shell:
hbase(main):001:0> exit
[hadoop@HadoopMaster hbase-1.2.3]$

  
