Problem After Uninstalling Cloudera: Cannot Add HDFS, Reports Cannot Find CDH's bigtop-detect-javahome
1. Problem
We wrote a shell script to uninstall Cloudera Manager (CM) on a cluster of 3 Linux servers. After running the script we reinstalled CM normally, but when we tried to set up HDFS we hit a problem: **failed to format NameNode. Cannot find CDH's bigtop-detect-javahome.**

2. Thinking
- Googling suggested the problem might be caused by a leftover `/dfs` folder, but after running `rm -rf /dfs` the problem still occurred.
- The error message pointed at `/usr/libexec/bigtop-detect-javahome`, so we suspected a problem with the JAVA_HOME variable. First we commented out the JAVA_HOME settings in `/etc/profile`; to make the change take effect we ran `source /etc/profile` and reconnected the Xshell session. The problem still occurred.
- Rechecking the uninstall script, we found it had deleted the files `/usr/bin/hadoop*`, which are important (a quick check like the sketch below makes this visible).
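A check of this kind (a minimal sketch; adjust the paths to whatever your own uninstall script touches) makes the missing files obvious:
```bash
# List what is left of the Hadoop wrapper scripts and libraries.
# On a healthy CDH node these globs match several entries; on our
# broken nodes they matched nothing.
ls -l  /usr/bin/hadoop*  2>/dev/null || echo "/usr/bin/hadoop* is missing"
ls -ld /usr/lib/hadoop*  2>/dev/null || echo "/usr/lib/hadoop* is missing"
ls -l  /usr/lib/bigtop-utils/bigtop-detect-javahome 2>/dev/null \
    || echo "bigtop-detect-javahome is missing"
```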
After the change, the relevant lines in /etc/profile look like this:
```bash
#export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
#export PATH=$JAVA_HOME/bin:$PATH
#export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```
3. Solution
Perform the following steps on every server in the cluster.
3.1 Check that the file hadoop is available
cd to the folder /usr/bin and check whether the file `hadoop` is available. Its content is as follows:
```bash
#!/bin/bash
# Autodetect JAVA_HOME if not defined
. /usr/lib/bigtop-utils/bigtop-detect-javahome
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_MAPRED_HOME=/usr/lib/hadoop
export HADOOP_LIBEXEC_DIR=//usr/lib/hadoop/libexec
export HADOOP_CONF_DIR=/etc/hadoop/conf
exec /usr/lib/hadoop/bin/hadoop "$@"
```
We can see that it sources bigtop-detect-javahome and sets HADOOP_HOME.
Unfortunately, /usr/bin/hadoop* and /usr/lib/hadoop* had been deleted by our uninstall script, so we copied those files and folders over from another cluster.
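For reference, here is a minimal sketch of how such a copy could be done; `healthy-node` is a placeholder for a node that still has CDH installed, and root SSH access is assumed:
```bash
GOOD_HOST=healthy-node   # placeholder: any node that still has CDH installed

# Wrapper scripts such as /usr/bin/hadoop, /usr/bin/hdfs, ...
scp "root@${GOOD_HOST}:/usr/bin/hadoop*" /usr/bin/

# The Hadoop library directories, copied recursively.
rsync -a "root@${GOOD_HOST}:/usr/lib/hadoop*" /usr/lib/

# Make sure the wrappers are executable again.
chmod +x /usr/bin/hadoop*
```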
3.2 Check that the file bigtop-detect-javahome is available
cd to the folder /usr/lib/bigtop-utils and check whether the file `bigtop-detect-javahome` is available. Its content is as follows:
```bash
# attempt to find java
if [ -z "${JAVA_HOME}" ]; then
  for candidate_regex in ${JAVA_HOME_CANDIDATES[@]}; do
    for candidate in `ls -rvd ${candidate_regex}* 2>/dev/null`; do
      if [ -e ${candidate}/bin/java ]; then
        export JAVA_HOME=${candidate}
        break 2
      fi
    done
  done
fi
```
This file sets the JAVA_HOME variable when the system does not define JAVA_HOME in /etc/profile.
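To check that the detection actually works, a sketch like this can be run in a fresh shell (so that no inherited JAVA_HOME hides a failure):
```bash
# Start from a clean environment and let bigtop-detect-javahome find the JDK.
unset JAVA_HOME
. /usr/lib/bigtop-utils/bigtop-detect-javahome
echo "Detected JAVA_HOME=${JAVA_HOME:-<not found>}"

# If detection worked, the wrapper script from section 3.1 runs without
# the "Cannot find CDH's bigtop-detect-javahome" error.
hadoop version
```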
In the end we found that HDFS could be installed whether or not /etc/profile defines the JAVA_HOME variable.
3.3 Further notes
Going through the first two steps solves the problem, but further googling showed that /etc/default/ is the real config folder!
Reference: http://blog.sina.com.cn/s/blog_5d9aca630101pxr1.html
Add `export JAVA_HOME=path` to /etc/default/bigtop-utils, then `source /etc/default/bigtop-utils`.
You only need to do this once, and all CDH components will use that variable to figure out where the JDK is.
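A minimal sketch of that change, using the JDK path from our /etc/profile above (adjust it to your own installation):
```bash
# Tell bigtop-utils (and therefore the CDH components) where the JDK lives.
echo 'export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera' >> /etc/default/bigtop-utils

# Apply it to the current shell; services pick it up when they are (re)started.
. /etc/default/bigtop-utils
echo "JAVA_HOME is now ${JAVA_HOME}"
```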
4. Conclusion
- Always focus on the log content and how it relates to the error.
- Make the best use of your time and don't hesitate to ask others for advice; otherwise, the problem will persist.