Problems running benchmarks on a Hadoop cluster: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /benchmarks/TestDFSIO/io_data/test_
Run the following on the master node (i.e. host2):
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -write -nrFiles 12 -fileSize 10240 -resFile test
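For reference, the matching read and clean-up runs would look roughly like this (a sketch only - it assumes DFSCIOTest accepts the usual -read and -clean switches of the Hadoop 1.x test jar, as TestDFSIO does):

# read back the files written above, with the same counts and sizes
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -read -nrFiles 12 -fileSize 10240 -resFile test

# remove the benchmark data from HDFS when finished
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -clean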
In the end the job failed. To see why, I looked at the logs:
org.apache.hadoop.ipc.RemoteException: java.io.IOException:
File /benchmarks/TestDFSIO/io_data/test_io_0 could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy2.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3686)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3546)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2749)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2989)
(the same RemoteException and stack trace are logged again for each of the other failed task attempts)
attempt_201607141305_0001_m_000003_2: log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
attempt_201607141305_0001_m_000003_2: log4j:WARN Please initialize the log4j system properly.
16/07/14 15:14:04 INFO mapred.JobClient: Job complete: job_201607141305_0001
16/07/14 15:14:04 INFO mapred.JobClient: Counters: 7
16/07/14 15:14:04 INFO mapred.JobClient: Job Counters
16/07/14 15:14:04 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=7002226
16/07/14 15:14:04 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
16/07/14 15:14:04 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
16/07/14 15:14:04 INFO mapred.JobClient: Launched map tasks=20
16/07/14 15:14:04 INFO mapred.JobClient: Data-local map tasks=20
16/07/14 15:14:04 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
16/07/14 15:14:04 INFO mapred.JobClient: Failed map tasks=1
16/07/14 15:14:04 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201607141305_0001_m_000002
java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1327)
at org.apache.hadoop.fs.TestDFSIO.runIOTest(TestDFSIO.java:257)
at org.apache.hadoop.fs.TestDFSIO.writeTest(TestDFSIO.java:237)
at org.apache.hadoop.fs.TestDFSIO.run(TestDFSIO.java:457)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.fs.DFSCIOTest.main(DFSCIOTest.java:403)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.test.AllTestDriver.main(AllTestDriver.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
I searched around online quite a bit; the following post summed the problem up well:
When the exception occurs
This is the error Hadoop throws when writing files into HDFS, i.e. the exception produced by "hadoop fs -put [local path] [HDFS directory]" and the like.
There are plenty of answers to this problem online; they boil down to checking the following (each item can be verified from the shell - see the sketch after the next paragraph):
1. Does the system / HDFS have enough free space? (the poster's own case was caused by a full disk)
2. Are the DataNodes up and reporting in?
3. Is the NameNode in safe mode?
4. Is a firewall blocking the DataNode ports?
5. As a last resort: stop Hadoop, reformat the NameNode, and restart.
Personally I think my own disk simply ran out of space: I had put all of Hadoop's data under the home directory, which pushed the disk to about 97% usage and caused the failure.
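A quick way to check each item in that list from the master node is sketched below (assuming a Hadoop 1.x install such as the 1.1.2 used here, and a CentOS-style service command for the firewall check):

# 1. free space on the local disks backing dfs.data.dir
df -h

# 1 and 2. HDFS capacity and the number of live DataNodes
hadoop dfsadmin -report

# 2. DataNode/TaskTracker processes should be listed on every slave
jps

# 3. the NameNode must report "Safe mode is OFF"
hadoop dfsadmin -safemode get

# 4. firewall status (CentOS 6 syntax; adjust for other distributions)
service iptables status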



Solutions (reference links):
http://blog.csdn.net/zuiaituantuan/article/details/6533867
http://blog.csdn.net/wanghai__/article/details/5744158
http://www.docin.com/p-629030380.html
http://www.cnblogs.com/linjiqin/archive/2013/03/13/2957310.html
http://f.dataguru.cn/thread-32858-1-1.html
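Putting the advice from those links together with item 5 of the checklist above, the disk-space fix boils down to roughly the following (a sketch only - /data/hdfs is a hypothetical mount point with more free space, dfs.data.dir is the Hadoop 1.x property set in conf/hdfs-site.xml, and the original data directory under /home is also just an example):

# stop the cluster before touching the storage directories
stop-all.sh

# move the DataNode storage to a bigger disk, then point dfs.data.dir
# in conf/hdfs-site.xml at the new location (paths are examples)
mv /home/hadoop/dfs/data /data/hdfs/data

# last resort only - reformatting wipes everything stored in HDFS
# hadoop namenode -format

start-all.sh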