Installing an HBase Cluster from Tarballs (CDH5.0.2): HBase Errors Caused by Hadoop NameNode HA Failover, and How to Configure HBase Against the NameNode HA Nameservice
When I first configured HBase, I followed the instructions on the CDH site and set hbase.rootdir to the nameservice backed by the Hadoop NameNode HA pair:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hbasecluster/hbase</value>
</property>
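On the HDFS side, the nameservice itself is defined by the HA client settings in hdfs-site.xml. A minimal sketch of the properties involved, assuming the nameservice is named hbasecluster and the NameNode IDs are nn1 and nn2 (the host:port values are assumptions based on this cluster's layout):

<!-- hdfs-site.xml: HA client settings (hosts and ports are assumptions) -->
<property>
  <name>dfs.nameservices</name>
  <value>hbasecluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.hbasecluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hbasecluster.nn1</name>
  <value>nn1:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hbasecluster.nn2</name>
  <value>nn2:8020</value>
</property>
<!-- lets clients discover whichever NameNode is currently active -->
<property>
  <name>dfs.client.failover.proxy.provider.hbasecluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>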
With this configuration, starting the cluster with start-hbase.sh only brought up the HMasters listed in the backup-masters file; all the RegionServers (co-located with the Hadoop DataNodes) failed to start, each reporting that it could not connect to hdfs://hbasecluster:8020. In other words, they were treating the nameservice name as an ordinary hostname. Searching online turned up nothing at the time, so I fell back to addressing a single NameNode directly:
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://nn1:8020/hbase</value>
</property>
This time HBase started successfully, and both hbase shell tests and a remote Java client test passed, so the leftover configuration issue gradually slipped my mind. Then yesterday, on a whim, I re-tested NameNode HA failover. The failover itself worked without a hitch, but the Java client could no longer reach HBase. On the servers, the HMaster process on the primary node zk1 was gone, and although zk2 and zk3 still had HMaster processes, they could not be connected to either. So I restarted the HBase cluster from zk1:
stop-hbase.sh
start-hbase.sh
Checking with jps right afterwards, the HMaster process disappeared within seconds, and the log showed the following error:
2014-07-24 08:51:47,023 FATAL [master:zk1:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1565)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1183)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3492)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:764)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:764)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
at org.apache.hadoop.ipc.Client.call(Client.java:1409)
at org.apache.hadoop.ipc.Client.call(Client.java:1362)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1757)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1398)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:438)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:789)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
at java.lang.Thread.run(Thread.java:745)
2014-07-24 08:51:47,025 INFO [master:zk1:60000] master.HMaster: Aborting
2014-07-24 08:51:47,026 DEBUG [master:zk1:60000] master.HMaster: Stopping service threads
2014-07-24 08:51:47,026 INFO [master:zk1:60000] ipc.RpcServer: Stopping server on 60000
2014-07-24 08:51:47,026 INFO [RpcServer.handler=0,port=60000] ipc.RpcServer: RpcServer.handler=0,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=1,port=60000] ipc.RpcServer: RpcServer.handler=1,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=3,port=60000] ipc.RpcServer: RpcServer.handler=3,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=2,port=60000] ipc.RpcServer: RpcServer.handler=2,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=4,port=60000] ipc.RpcServer: RpcServer.handler=4,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=5,port=60000] ipc.RpcServer: RpcServer.handler=5,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=6,port=60000] ipc.RpcServer: RpcServer.handler=6,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=7,port=60000] ipc.RpcServer: RpcServer.handler=7,port=60000: exiting
2014-07-24 08:51:47,027 INFO [RpcServer.handler=8,port=60000] ipc.RpcServer: RpcServer.handler=8,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=9,port=60000] ipc.RpcServer: RpcServer.handler=9,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=10,port=60000] ipc.RpcServer: RpcServer.handler=10,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=11,port=60000] ipc.RpcServer: RpcServer.handler=11,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=12,port=60000] ipc.RpcServer: RpcServer.handler=12,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=13,port=60000] ipc.RpcServer: RpcServer.handler=13,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=14,port=60000] ipc.RpcServer: RpcServer.handler=14,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=15,port=60000] ipc.RpcServer: RpcServer.handler=15,port=60000: exiting
2014-07-24 08:51:47,028 INFO [RpcServer.handler=16,port=60000] ipc.RpcServer: RpcServer.handler=16,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=17,port=60000] ipc.RpcServer: RpcServer.handler=17,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=19,port=60000] ipc.RpcServer: RpcServer.handler=19,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=18,port=60000] ipc.RpcServer: RpcServer.handler=18,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=20,port=60000] ipc.RpcServer: RpcServer.handler=20,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=21,port=60000] ipc.RpcServer: RpcServer.handler=21,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=22,port=60000] ipc.RpcServer: RpcServer.handler=22,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=24,port=60000] ipc.RpcServer: RpcServer.handler=24,port=60000: exiting
2014-07-24 08:51:47,029 INFO [RpcServer.handler=23,port=60000] ipc.RpcServer: RpcServer.handler=23,port=60000: exiting
2014-07-24 08:51:47,030 INFO [RpcServer.handler=26,port=60000] ipc.RpcServer: RpcServer.handler=26,port=60000: exiting
2014-07-24 08:51:47,030 INFO [RpcServer.handler=27,port=60000] ipc.RpcServer: RpcServer.handler=27,port=60000: exiting
2014-07-24 08:51:47,030 INFO [RpcServer.handler=25,port=60000] ipc.RpcServer: RpcServer.handler=25,port=60000: exiting
2014-07-24 08:51:47,030 INFO [RpcServer.handler=28,port=60000] ipc.RpcServer: RpcServer.handler=28,port=60000: exiting
2014-07-24 08:51:47,030 INFO [RpcServer.handler=29,port=60000] ipc.RpcServer: RpcServer.handler=29,port=60000: exiting
2014-07-24 08:51:47,036 INFO [Replication.RpcServer.handler=0,port=60000] ipc.RpcServer: Replication.RpcServer.handler=0,port=60000: exiting
2014-07-24 08:51:47,036 INFO [Replication.RpcServer.handler=1,port=60000] ipc.RpcServer: Replication.RpcServer.handler=1,port=60000: exiting
2014-07-24 08:51:47,037 INFO [Replication.RpcServer.handler=2,port=60000] ipc.RpcServer: Replication.RpcServer.handler=2,port=60000: exiting
2014-07-24 08:51:47,038 INFO [RpcServer.listener,port=60000] ipc.RpcServer: RpcServer.listener,port=60000: stopping
2014-07-24 08:51:47,039 INFO [master:zk1:60000] master.HMaster: Stopping infoServer
2014-07-24 08:51:47,043 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopped
2014-07-24 08:51:47,043 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: stopping
2014-07-24 08:51:47,055 INFO [master:zk1:60000] mortbay.log: Stopped SelectChannelConnector@0.0.0.0:60010
2014-07-24 08:51:47,190 INFO [master:zk1:60000] zookeeper.ZooKeeper: Session: 0x164756c65ba5008e closed
2014-07-24 08:51:47,191 INFO [master:zk1:60000] master.HMaster: HMaster main thread exiting
2014-07-24 08:51:47,191 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:192)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:134)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2789)
2014-07-24 08:51:47,191 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
Looking at the trace carefully, the error is actually raised by the NameNode. Checking the NameNode states showed that nn1 was now standby and nn2 active, which reminded me of yesterday's failover test. That settled my determination to fix this properly, and after a good deal of Googling I finally found the following passage in the HBase reference guide:
2.2.2.2.3. HDFS Client Configuration
Of note, if you have made HDFS client configuration on your Hadoop cluster -- i.e. configuration you want HDFS clients to use as opposed to server-side configurations -- HBase will not see this configuration unless you do one of the following:

1. Add a pointer to your HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-env.sh.
2. Add a copy of hdfs-site.xml (or hadoop-site.xml) or, better, symlinks, under ${HBASE_HOME}/conf, or
3. if only a small set of HDFS client configurations, add them to hbase-site.xml.

An example of such an HDFS client configuration is dfs.replication. If for example, you want to run with a replication factor of 5, hbase will create files with the default of 3 unless you do the above to make the configuration available to HBase.
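For reference, the first option is a one-line change to hbase-env.sh. A minimal sketch, assuming a tarball layout where the Hadoop client configs live in /home/hadoop/hadoop/etc/hadoop (the path is an assumption):

# hbase-env.sh: put the Hadoop client configuration directory on HBase's classpath
export HBASE_CLASSPATH=/home/hadoop/hadoop/etc/hadoop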
I went with the second option: on every HMaster and RegionServer node I created a symlink to hdfs-site.xml under ${HBASE_HOME}/conf (distributing the symlinks with scp does not work, as they arrive as real copies of the file), set hbase.rootdir back to the nameservice form, and tried starting the HBase cluster again. Success!
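What I ran on each node amounts to the following (a sketch; the install paths are assumptions for a typical tarball layout, and the command must be executed locally on each node, since scp would dereference the link):

# run locally on every HMaster and RegionServer node
ln -s /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml /home/hadoop/hbase/conf/hdfs-site.xml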
A further test of Hadoop NameNode failover showed HBase was now unaffected. The problem that had troubled me for days was finally solved.
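For anyone re-running the verification, a minimal sketch (assuming manual failover is configured; nn1 and nn2 are the NameNode IDs from the HA configuration):

hdfs haadmin -getServiceState nn1     # confirm which NameNode is currently active
hdfs haadmin -failover nn1 nn2        # hand the active role to nn2
echo "status" | hbase shell           # HBase should still respond normally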
是Hadley Wickham的新作,主要用于数据清洗和整理,该包专注dataframe数据格式,从而大幅提高了数据处理速度,并且提供了与其它数据库的接口:tidyr包的作者是Hadley Wickh ...