A Collection of Hadoop Errors on Ubuntu 14.10
1 FATAL org.apache.hadoop.ha.ZKFailoverController: Unable to start failover controller. Parent znode does not exist.
This error prevented the DFSZKFailoverController from starting, so no Active node could be elected and both of my Hadoop NameNodes stayed in Standby. Here is how I fixed it:
Stop all Hadoop processes, then reformat the ZKFC state in ZooKeeper:
hdfs zkfc -formatZK
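A minimal sketch of the whole sequence, assuming the standard Hadoop sbin scripts are on the PATH and automatic failover is enabled (dfs.ha.automatic-failover.enabled=true), so start-dfs.sh also brings up the ZKFC daemons:
# stop everything first
stop-yarn.sh
stop-dfs.sh
# recreate the parent znode used by the failover controllers
hdfs zkfc -formatZK
# bring HDFS back up; with automatic failover configured this also starts the ZKFCs
start-dfs.sh
start-yarn.sh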
2 Right after the previous problem: once ZooKeeper had been reformatted, YARN would no longer start
2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1076)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1095)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:948)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.removeRMDTMasterKeyState(ZKRMStateStore.java:844)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.removeRMDTMasterKey(RMStateStore.java:733)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager.removeStoredMasterKey(RMDelegationTokenSecretManager.java:99)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.removeExpiredKeys(AbstractDelegationTokenSecretManager.java:371)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.rollMasterKey(AbstractDelegationTokenSecretManager.java:348)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:646)
at java.lang.Thread.run(Thread.java:745)
2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Maxed out ZK retries. Giving up!
2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: removing master key with keyID 55
2015-08-05 19:00:33,738 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
(the same NoNodeException stack trace as above repeats)
This was presumably still a ZooKeeper problem, but without changing anything I simply restarted YARN... and it came up successfully:
start-yarn.sh
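My guess at what happened: the ResourceManager's ZK-backed state store had lost its znode, and on a fresh start the RM recreates the missing paths. If you want to verify rather than guess, you can inspect the state-store parent path (default /rmstore, set by yarn.resourcemanager.zk-state-store.parent-path) from the ZooKeeper CLI. The host below is an assumption based on my cluster; substitute one of your own ZooKeeper servers:
# connect to a ZooKeeper server (hypothetical host/port)
zkCli.sh -server spark-1421-0000:2181
# inside the CLI: list the RM state store znodes
ls /rmstore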
3 The NameNode fails to start
java.lang.IllegalArgumentException: Unable to construct journal, qjournal://spark-1421-0000:8485;spark-1421-0003:8485;spark-1421-0004:8485;spark-1421-0005:8485;spark-1421-0006:8485/hadoop-journal
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1593)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:254)
at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:776)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:621)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1591)
... 13 more
Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:146)
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:156)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:367)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:149)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:116)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:105)
... 18 more
The key line in my error is:
tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel
Further digging showed the culprit was the class com.google.common.base.Stopwatch, which lives in the Guava jar. Hadoop ships Guava 11.0.2, but there was another Guava jar, version 18.*, sitting in $JAVA_HOME/jre/lib/ext. Jars in jre/lib/ext are loaded by the extension classloader, which takes precedence over the application classpath, so the newer Guava shadowed Hadoop's copy; in recent Guava releases the no-argument Stopwatch constructor is no longer public, hence the IllegalAccessError. My fix: delete the Guava jar from $JAVA_HOME/jre/lib/ext.
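A quick way to find and remove the offending copy; the exact jar filename below is an assumption, so check what ls actually reports on your machine:
# list Guava jars the extension classloader will pick up
ls "$JAVA_HOME"/jre/lib/ext/ | grep -i guava
# compare against the copy Hadoop ships
find "$HADOOP_HOME" -name 'guava-*.jar'
# move the extension-directory copy aside rather than deleting it outright
sudo mv "$JAVA_HOME"/jre/lib/ext/guava-18.0.jar /tmp/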
4 Fixing "Error: JAVA_HOME is not set and could not be found" on Hadoop startup
This happens when JAVA_HOME is not configured in the etc/hadoop/hadoop-env.sh configuration file; setting JAVA_HOME there to the absolute path of the JDK fixes it.
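For example (the JDK path below is an assumption, typical for OpenJDK 7 on Ubuntu; use the path of your own installation):
# etc/hadoop/hadoop-env.sh
# an absolute path works even when the daemon is started over ssh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64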
5 Hadoop file operations report Permission denied: user=dr.who, access=READ_EXECUTE, inode="/tmp":root:supergroup:drwx------
Fix the permissions:
hdfs dfs -chmod -R 755 /tmp
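Note that dr.who is the static user the Hadoop web UI acts as by default (controlled by the hadoop.http.staticuser.user property), so an alternative to opening /tmp up is pointing that property at a user that already has access. To confirm the chmod took effect:
# show the mode of /tmp itself, not its contents
hdfs dfs -ls -d /tmp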