Before running the example, first make sure Hadoop has started correctly.

  Master.Hadoop:

[hadoop@Master input]$ jps
6736 Jps
6036 NameNode
4697 SecondaryNameNode
4849 ResourceManager
[hadoop@Master input]$

  Slave1.Hadoop:

[hadoop@Slave1 sources]$ jps
8086 SecondaryNameNode
8961 Jps
8320 NodeManager
7935 DataNode
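
If any of these daemons is missing (for example no DataNode on the slaves), HDFS operations will fail later, so it is worth checking both nodes before starting the test. A small hedged sketch run from the master, assuming passwordless SSH between the nodes (which a Hadoop cluster already requires):

# daemons on the master itself
jps
# daemons on the slave, checked remotely
ssh Slave1.Hadoop jps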

Errors encountered during testing and their solutions:

Problem 1:
[hadoop@Master input]$ hadoop fs -ls /
ls: Call From Master.Hadoop/192.168.160.131 to Master.Hadoop:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Solution:
1. Format the HDFS filesystem: hadoop namenode -format
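
Formatting wipes any existing HDFS metadata, so before doing it it is worth checking whether the NameNode is simply not running, or not listening on port 9000 (the port named in the error above). A minimal sketch of those checks, assuming fs.defaultFS is hdfs://Master.Hadoop:9000:

# is the NameNode process alive?
jps | grep NameNode
# is anything listening on the RPC port from the error message?
netstat -tlnp | grep 9000
# if HDFS is simply down, restarting it is usually enough
start-dfs.sh
# format only if the filesystem has never been formatted (this erases HDFS)
hadoop namenode -format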
 
Problem 2:
Local directory being uploaded: /home/hadoop/WordCount/input
[hadoop@Master input]$ hadoop fs -put ./ /input
15/06/30 17:10:45 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/input/test1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1441)
Solution (a consolidated command sketch follows this list):

1. Run stop-all.sh to stop all services.

2. On every Slave node, delete the tmp and logs directories, then recreate them.

3. Format the HDFS filesystem: hadoop namenode -format
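
The three steps above, spelled out as one command sequence. This is only a sketch: the tmp and logs paths must match hadoop.tmp.dir and the log directory actually configured for this cluster (assumed here to live under /home/hadoop), and reformatting erases everything previously stored in HDFS.

# stop the whole cluster
stop-all.sh
# on each slave node (repeat for every slave):
ssh Slave1.Hadoop 'rm -rf /home/hadoop/tmp /home/hadoop/logs && mkdir /home/hadoop/tmp /home/hadoop/logs'
# reformat the NameNode, then bring everything back up
hadoop namenode -format
start-all.sh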


Reprinted from:

http://linux.it.net.cn/e/cluster/hadoop/2014/1215/10427.html

Reprinted content:

To test a freshly installed Hadoop with the WordCount example program: first create two arbitrary text files on the local filesystem, upload them to HDFS, run the program to count the words in those files, and finally check the result.

Create two arbitrary files on the local filesystem:

For example:
[hadoop@hadoop01 input]$ ls
test1.txt  test2.txt
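
The files can be created with nothing more than echo. A sketch that produces exactly the contents shown further below, assuming an input directory such as the /home/hadoop/WordCount/input used earlier:

mkdir -p /home/hadoop/WordCount/input
cd /home/hadoop/WordCount/input
echo "hello world"  > test1.txt
echo "hello hadoop" > test2.txt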

List the root of the Hadoop filesystem:

[hadoop@hadoop01 input]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2013-10-30 00:00 /input
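
The /input directory already exists in this listing; on a fresh cluster it would first be created with the FsShell mkdir command, e.g.:

hadoop fs -mkdir /input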

Upload them to /input in HDFS:

[hadoop@hadoop01 input]$ hadoop fs -put ./ /input
[hadoop@hadoop01 input]$ hadoop fs -ls /input
Found 2 items
-rw-r--r--   3 hadoop supergroup         12 2013-10-30 00:00 /input/test1.txt
-rw-r--r--   3 hadoop supergroup         13 2013-10-30 00:00 /input/test2.txt

View the contents of the two files with HDFS shell commands:

[hadoop@hadoop01 test]$ hadoop fs -cat /input/test1.txt 
hello world
[hadoop@hadoop01 test]$ hadoop fs -cat /input/test2.txt
hello hadoop
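
One caveat before running the job: WordCount writes its result to /output, and MapReduce refuses to start if that directory already exists. When repeating the test, remove any old output first (the -rm -r form is the Hadoop 2.x syntax):

hadoop fs -rm -r /output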

Run the example program (WordCount):

[hadoop@hadoop01 test]$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /input /output
13/11/06 21:33:40 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
13/11/06 21:33:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/11/06 21:33:40 INFO input.FileInputFormat: Total input paths to process : 2
13/11/06 21:33:41 INFO mapreduce.JobSubmitter: number of splits:2
13/11/06 21:33:41 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/11/06 21:33:41 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class 
13/11/06 21:33:41 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/11/06 21:33:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local382050821_0001
13/11/06 21:33:41 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/staging/hadoop382050821/.staging/job_local382050821_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/11/06 21:33:41 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/staging/hadoop382050821/.staging/job_local382050821_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
13/11/06 21:33:42 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local382050821_0001/job_local382050821_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/11/06 21:33:42 WARN conf.Configuration: file:/hadoop/hdfs/tmp/hadoop-hadoop/mapred/local/localRunner/hadoop/job_local382050821_0001/job_local382050821_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
13/11/06 21:33:42 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
13/11/06 21:33:42 INFO mapreduce.Job: Running job: job_local382050821_0001
13/11/06 21:33:42 INFO mapred.LocalJobRunner: OutputCommitter set in config null
13/11/06 21:33:42 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
13/11/06 21:33:42 INFO mapred.LocalJobRunner: Waiting for map tasks
13/11/06 21:33:42 INFO mapred.LocalJobRunner: Starting task: attempt_local382050821_0001_m_000000_0
13/11/06 21:33:42 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:42 INFO mapred.MapTask: Processing split: hdfs://hadoop01:9000/input/test2.txt:0+13
13/11/06 21:33:42 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/11/06 21:33:42 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
13/11/06 21:33:42 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
13/11/06 21:33:42 INFO mapred.MapTask: soft limit at 83886080
13/11/06 21:33:42 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
13/11/06 21:33:42 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO mapred.MapTask: Starting flush of map output
13/11/06 21:33:43 INFO mapred.MapTask: Spilling map output
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufend = 21; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
13/11/06 21:33:43 INFO mapred.MapTask: Finished spill 0
13/11/06 21:33:43 INFO mapred.Task: Task:attempt_local382050821_0001_m_000000_0 is done. And is in the process of committing 
13/11/06 21:33:43 INFO mapreduce.Job: Job job_local382050821_0001 running in uber mode : false
13/11/06 21:33:43 INFO mapreduce.Job:  map 0% reduce 0%
13/11/06 21:33:43 INFO mapred.LocalJobRunner: map
13/11/06 21:33:43 INFO mapred.Task: Task 'attempt_local382050821_0001_m_000000_0' done.
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local382050821_0001_m_000000_0
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Starting task: attempt_local382050821_0001_m_000001_0
13/11/06 21:33:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:43 INFO mapred.MapTask: Processing split: hdfs://hadoop01:9000/input/test1.txt:0+12
13/11/06 21:33:43 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
13/11/06 21:33:43 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
13/11/06 21:33:43 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100 
13/11/06 21:33:43 INFO mapred.MapTask: soft limit at 83886080
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO mapred.MapTask: Starting flush of map output
13/11/06 21:33:43 INFO mapred.MapTask: Spilling map output
13/11/06 21:33:43 INFO mapred.MapTask: bufstart = 0; bufend = 20; bufvoid = 104857600
13/11/06 21:33:43 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
13/11/06 21:33:43 INFO mapred.MapTask: Finished spill 0
13/11/06 21:33:43 INFO mapred.Task: Task:attempt_local382050821_0001_m_000001_0 is done. And is in the process of committing
13/11/06 21:33:43 INFO mapred.LocalJobRunner: map
13/11/06 21:33:43 INFO mapred.Task: Task 'attempt_local382050821_0001_m_000001_0' done.
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Finishing task: attempt_local382050821_0001_m_000001_0 
13/11/06 21:33:43 INFO mapred.LocalJobRunner: Map task executor complete.
13/11/06 21:33:43 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/11/06 21:33:43 INFO mapred.Merger: Merging 2 sorted segments
13/11/06 21:33:43 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 36 bytes
13/11/06 21:33:43 INFO mapred.LocalJobRunner: 
13/11/06 21:33:43 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
13/11/06 21:33:44 INFO mapreduce.Job:  map 100% reduce 0%
13/11/06 21:33:44 INFO mapred.Task: Task:attempt_local382050821_0001_r_000000_0 is done. And is in the process of committing
13/11/06 21:33:44 INFO mapred.LocalJobRunner: 
13/11/06 21:33:44 INFO mapred.Task: Task attempt_local382050821_0001_r_000000_0 is allowed to commit now
13/11/06 21:33:44 INFO output.FileOutputCommitter: Saved output of task 'attempt_local382050821_0001_r_000000_0' to hdfs://hadoop01:9000/output/_temporary/0/task_local382050821_0001_r_000000 
13/11/06 21:33:44 INFO mapred.LocalJobRunner: reduce > reduce
13/11/06 21:33:44 INFO mapred.Task: Task 'attempt_local382050821_0001_r_000000_0' done.
13/11/06 21:33:45 INFO mapreduce.Job:  map 100% reduce 100%
13/11/06 21:33:45 INFO mapreduce.Job: Job job_local382050821_0001 completed successfully
13/11/06 21:33:45 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=812174
                FILE: Number of bytes written=1395157
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=63 
                HDFS: Number of bytes written=25
                HDFS: Number of read operations=25
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=5
        Map-Reduce Framework
                Map input records=2
                Map output records=4
                Map output bytes=41
                Map output materialized bytes=61
                Input split bytes=202
                Combine input records=4
                Combine output records=4 
                Reduce input groups=3
                Reduce shuffle bytes=0
                Reduce input records=4
                Reduce output records=3
                Spilled Records=8
                Shuffled Maps =0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=146
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
                Total committed heap usage (bytes)=456732672
        File Input Format Counters 
                Bytes Read=25
        File Output Format Counters 
                Bytes Written=25
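
The run above invokes the sources jar with a fully qualified class name. The compiled examples jar, shipped alongside it, is normally invoked with the short program name instead; an equivalent (hedged) invocation would be:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output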

View the program's output:

[hadoop@hadoop01 test]$ hadoop fs -cat /output/part-r-00000
hadoop  1
hello   2
world   1
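
To keep a local copy of the result, the output can be pulled out of HDFS with standard FsShell commands; getmerge is convenient when a job produces several part-r-* files:

hadoop fs -get /output ./output
# or merge all part files into a single local file
hadoop fs -getmerge /output ./wordcount-result.txt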
