org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://xxx:49000/user/hadoop/input
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
        at org.apache.hadoop.examples.Grep.run(Grep.java:69)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.Grep.main(Grep.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

Solution:

The job fails because the input path /user/hadoop/input does not exist on HDFS. Simply upload the local input/ directory first:
hadoop fs -put input/ input
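
For reference, a quick way to verify the fix from the command line (the paths below assume the default HDFS user directory /user/hadoop seen in the stack trace; adjust them to your own setup):

hadoop fs -ls /user/hadoop            # check what the job actually sees before the upload
hadoop fs -put input/ input           # copy the local input/ directory to /user/hadoop/input
hadoop fs -ls /user/hadoop/input      # confirm the input files are now present

Once the listing shows the input files, rerunning the same job should get past the InvalidInputException.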
