1 FATAL org.apache.hadoop.ha.ZKFailoverController: Unable to start failover controller. Parent znode does not exist.

  This error prevents the DFSZKFailoverController from starting, so no Active NameNode can be elected and both Hadoop NameNodes stay in Standby. Here is what I did:

  Stop all Hadoop processes, then re-format the HA state in ZooKeeper:

hdfs zkfc -formatZK
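A minimal sketch of the full sequence, assuming Hadoop's standard sbin scripts are on the PATH (stop everything, recreate the parent znode, then bring the cluster back up):

# Stop YARN and HDFS so nothing is holding a ZooKeeper session
stop-yarn.sh
stop-dfs.sh
# Recreate the ZKFC parent znode (/hadoop-ha/<nameservice>) in ZooKeeper
hdfs zkfc -formatZK
# Restart; the two ZKFCs can now elect an Active NameNode
start-dfs.sh
start-yarn.sh

Note that -formatZK only wipes the ZKFC election state in ZooKeeper; it does not touch HDFS metadata.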

2 Right after the previous problem: having re-formatted ZooKeeper, I found that YARN would not start

2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1076)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1095)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:948)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.removeRMDTMasterKeyState(ZKRMStateStore.java:844)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.removeRMDTMasterKey(RMStateStore.java:733)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager.removeStoredMasterKey(RMDelegationTokenSecretManager.java:99)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.removeExpiredKeys(AbstractDelegationTokenSecretManager.java:371)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.rollMasterKey(AbstractDelegationTokenSecretManager.java:348)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:646)
at java.lang.Thread.run(Thread.java:745)
2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Maxed out ZK retries. Giving up!
2015-08-05 19:00:33,718 INFO org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: removing master key with keyID 55
2015-08-05 19:00:33,738 INFO org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore: Exception while executing a ZK operation.
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:949)
at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:937)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$4.run(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1076)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1095)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:934)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doMultiWithRetries(ZKRMStateStore.java:948)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.removeRMDTMasterKeyState(ZKRMStateStore.java:844)
at org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.removeRMDTMasterKey(RMStateStore.java:733)
at org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager.removeStoredMasterKey(RMDelegationTokenSecretManager.java:99)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.removeExpiredKeys(AbstractDelegationTokenSecretManager.java:371)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.rollMasterKey(AbstractDelegationTokenSecretManager.java:348)
at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:646)
at java.lang.Thread.run(Thread.java:745)

This problem was presumably also caused by ZooKeeper, but without changing anything I simply restarted YARN... and, surprisingly, it came up fine

start-yarn.sh
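A plausible explanation, though it is my assumption rather than anything confirmed in the logs above: on startup the ResourceManager's ZKRMStateStore recreates its parent path in ZooKeeper if it is missing, so once the restart rebuilt it the NoNode errors stopped. You can inspect that path with the standard ZooKeeper CLI; /rmstore is the default value of yarn.resourcemanager.zk-state-store.parent-path, and spark-1421-0000:2181 stands in for one of your ZooKeeper servers:

# Connect to a ZooKeeper server and list the RM state-store znode
zkCli.sh -server spark-1421-0000:2181
ls /rmstore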

3 NameNode fails to start

java.lang.IllegalArgumentException: Unable to construct journal, qjournal://spark-1421-0000:8485;spark-1421-0003:8485;spark-1421-0004:8485;spark-1421-0005:8485;spark-1421-0006:8485/hadoop-journal
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1593)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:254)
at org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:776)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:621)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1591)
... 13 more
Caused by: java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:146)
at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:156)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:367)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:149)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:116)
at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:105)
... 18 more

The key line in this error is:

tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel

Digging further, I found the cause was the class com.google.common.base.Stopwatch, which ships in the Guava jar. Hadoop bundles guava 11.0.2, but $JAVA_HOME/jre/lib/ext contained another guava jar at version 18.*. Jars in jre/lib/ext are picked up by the extension classloader ahead of the application classpath, and in newer Guava the no-arg Stopwatch constructor is no longer public, hence the IllegalAccessError. My fix was to delete the guava jar from $JAVA_HOME/jre/lib/ext.
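A hedged sketch of the cleanup (the exact jar filename will differ on your machine, so list before deleting):

# Find the conflicting Guava jar in the JRE extension directory
ls "$JAVA_HOME/jre/lib/ext/" | grep -i guava
# Remove it so Hadoop's bundled guava-11.0.2 wins
rm "$JAVA_HOME/jre/lib/ext/"guava-*.jar

Restart the NameNode afterwards.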

4 Hadoop startup reports "Error: JAVA_HOME is not set and could not be found"

This happens because JAVA_HOME is not set in the configuration file etc/hadoop/hadoop-env.sh. Setting JAVA_HOME there to the absolute path of your JDK fixes it.
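A minimal sketch of the fix in etc/hadoop/hadoop-env.sh; the path below is an example for OpenJDK 7 on Ubuntu, so substitute your actual JDK location:

# Use an explicit absolute path rather than ${JAVA_HOME} inherited from
# the shell, since the daemons may start without your login environment
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64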

5 HDFS file operations report Permission denied: user=dr.who, access=READ_EXECUTE, inode="/tmp":root:supergroup:drwx------

Fix it by loosening the permissions:

hdfs dfs -chmod -R 755 /tmp
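For context (my understanding, not something stated in the error itself): dr.who is the default static user the Hadoop web UI acts as (the hadoop.http.staticuser.user property), so giving /tmp world read/execute is what lets the UI browse it. A quick check that the change took effect:

# Show the directory entry itself; the mode should now be drwxr-xr-x
hdfs dfs -ls -d /tmp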
