1. After the Hadoop cluster is configured, running start-dfs.sh fails with a string of "permission denied" errors

zf sbin $ ./start-dfs.sh
Starting namenodes on [master]
master: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
master: starting namenode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
master: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out' for reading: No such file or directory
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
slave-1: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-1: starting datanode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-2: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-2: starting datanode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
slave-1: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out' for reading: No such file or directory
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-2: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out' for reading: No such file or directory
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
Starting secondary namenodes [slave-1]
slave-1: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-1: starting secondarynamenode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied
slave-1: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out' for reading: No such file or directory
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied

Solution: inside the Hadoop installation directory, run sudo chmod a+w * to open write permission on the files, so the start scripts can create their log files.
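The output shows that user zf cannot take ownership of, or write into, the logs directory under /home/zf/hadoop/hadoop-2.9.1, so the daemons cannot create their .out files. A minimal sketch of the fix, assuming the installation path and user shown above and the same layout on every node (master, slave-1, slave-2):

# Option 1: the fix used here - open write permission inside the install directory
cd /home/zf/hadoop/hadoop-2.9.1
sudo chmod a+w *

# Option 2 (an alternative, not what the text used): give the whole tree back to user zf
sudo chown -R zf:zf /home/zf/hadoop/hadoop-2.9.1

The chown variant is tighter, since chmod a+w leaves the installation world-writable.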

2. After running ./start-dfs.sh and ./start-yarn.sh, jps on the master shows that NameNode and ResourceManager did not start, while the DataNodes on slave-1 and slave-2 come up normally.

Checking the logs shows the following errors:

**************namenode.log****************************

-- ::, ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: master:
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
... more
-- ::, INFO org.apache.hadoop.util.ExitUtil: Exiting with status : java.net.BindException: Port in use: master:
-- ::, INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/182.61.39.233
************************************************************/
*************secondarynamenode.log****************************
-- ::, INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: master:
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
... more
-- ::, FATAL org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
java.net.BindException: Port in use: master:
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.Net.bind(Net.java:)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:)
... more

The key lines are evidently:

1. java.net.BindException: Port in use: master:50070/9001

2. Caused by: java.net.BindException: Cannot assign requested address

"Port in use" is the immediate error, but its cause is that the requested address cannot be assigned. Since the problem is address-related, the /etc/hosts file is the natural place to look.
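Before editing /etc/hosts it is worth ruling out a genuine port conflict. A quick check, assuming the default NameNode web UI port 50070 (adjust if hdfs-site.xml overrides it):

# Show any process already listening on the NameNode web UI port
sudo netstat -tlnp | grep 50070
# or, on systems that ship ss instead of netstat
sudo ss -tlnp | grep 50070

If nothing is listening, the "Port in use" message is misleading and the real problem is the bind address itself, which is what the hosts-file change below fixes.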

My original /etc/hosts was:

127.0.0.1 localhost localhost.localdomain
182.xx.xx.33 master    (public IP, hostname)
106.xx.xx.72 slave-1
106.xx.xx.73 slave-2

After replacing all the public IPs with the machines' internal (private network) IPs, everything started correctly:

127.0.0.1 localhost localhost.localdomain
172.xx.x.2 master
172.xx.x.5 slave-1
172.xx.x.6 slave-2

I am not entirely sure of the underlying reason; all three machines are Baidu Cloud servers. The likely explanation is that on cloud instances the public IP is mapped to the machine through NAT and is never actually configured on the local network interface, so a process trying to bind a listening socket to that address fails with "Cannot assign requested address", while binding to the private IP works.
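A quick way to confirm this on each node (standard Linux commands; the addresses shown will be whatever your provider assigned):

# List the addresses actually configured on the network interfaces;
# on a typical cloud VM only the private IP (e.g. 172.xx.x.2) appears here,
# the public IP does not, so it cannot be bound.
ip addr show

# Confirm that the cluster hostnames now resolve to the private IPs from /etc/hosts
getent hosts master slave-1 slave-2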

3. If a DataNode fails to start after Hadoop is configured, a very common cause is that a fresh NameNode format conflicts with data left over from a previous format, so delete the old data first. Alternatively, shut Hadoop down cleanly after each use and do not reformat the NameNode the next time you start it.
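A minimal sketch of the cleanup, assuming the data lives under the default hadoop.tmp.dir (/tmp/hadoop-<user>); if core-site.xml or hdfs-site.xml points dfs.namenode.name.dir / dfs.datanode.data.dir elsewhere, remove those directories instead:

# Stop HDFS first
/home/zf/hadoop/hadoop-2.9.1/sbin/stop-dfs.sh

# On the NameNode and on every DataNode, remove the leftover metadata/data
# (the path below is an assumption based on the default hadoop.tmp.dir)
rm -rf /tmp/hadoop-zf/dfs

# Reformat the NameNode once, then restart
/home/zf/hadoop/hadoop-2.9.1/bin/hdfs namenode -format
/home/zf/hadoop/hadoop-2.9.1/sbin/start-dfs.sh

A lighter alternative is to delete only the DataNode data directory on the slaves, since the usual symptom is a clusterID mismatch between the freshly formatted NameNode and the old DataNode data.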
