Uploading a file to Hadoop fails with the error:

....

There are 0 datanode(s) running and no node(s) are excluded in this operation.

....
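For context, a typical upload command that triggers this error when no DataNode has registered with the NameNode (the local file name and HDFS path here are only placeholders):

hadoop fs -put test.txt /input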

Check the configuration

$HADOOP_HOME/etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/sparkuser/myspark/hadoop/hdfs/name</value>
</property>
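For reference, the DataNode storage directory is usually configured alongside it. A minimal sketch assuming the same base directory (the data path below is an assumption, not taken from the original configuration):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/sparkuser/myspark/hadoop/hdfs/data</value>
</property>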

Solution

Delete all files under the hdfs directory configured above (a command sketch follows).
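A minimal command sequence for this, assuming the directory layout above and that the cluster's data can be discarded (reformatting the NameNode erases all HDFS metadata; the data directory path is an assumption):

stop-dfs.sh
rm -rf /home/sparkuser/myspark/hadoop/hdfs/name/*
rm -rf /home/sparkuser/myspark/hadoop/hdfs/data/*   # assumed DataNode directory
hdfs namenode -format
start-dfs.sh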

1. Check whether port 9000 on the NameNode is open (the port configured for fs.default.name in core-site.xml), because every DataNode connects to the NameNode through this port. A quick check is sketched below.
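A quick way to verify, assuming the NameNode host is named master (replace with your own hostname):

# On the NameNode: confirm something is listening on 9000
netstat -tlnp | grep 9000

# From a DataNode: confirm the port is reachable
telnet master 9000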

2. Turn off the firewall, since it may be blocking connections from other machines. Use the following commands:

**Check and disable the firewall**

service iptables status

service iptables stop

chkconfig iptables off

3. Comment out the following entries in /etc/hosts:

127.0.0.1 localhost
::1 localhost6 Master
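After commenting those out, /etc/hosts should map each node's hostname to its real LAN address. A hypothetical example (the IPs and slave names are placeholders for your own cluster):

192.168.1.100  Master
192.168.1.101  Slave1
192.168.1.102  Slave2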
