Hadoop Examples

In addition to the tests covered in "Hadoop Benchmarking (Part 1)", Hadoop ships with a number of example programs, such as WordCount and TeraSort, packaged in hadoop-examples-2.6.0-mr1-cdh5.16.1.jar and hadoop-examples.jar. Run the following command:

hadoop jar hadoop-examples-2.6.0-mr1-cdh5.16.1.jar

to list all of the example programs:

bash-4.2$ hadoop jar hadoop-examples-2.6.0-mr1-cdh5.16.1.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
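Any of these can be run directly as a quick smoke test of the cluster. For example, the pi estimator takes the number of map tasks and the number of samples per map as arguments (the values 10 and 100 below are merely illustrative):

hadoop jar hadoop-examples-2.6.0-mr1-cdh5.16.1.jar pi 10 100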

Word Count Test

Go into a folder created by the hdfs user, run the command vim words.txt, and enter the following content:

hello hadoop hbase mytest
hadoop-node1
hadoop-master
hadoop-node2
this is my test
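(Equivalently, assuming a POSIX shell, the same file can be created without an editor using a heredoc:)

cat > words.txt <<'EOF'
hello hadoop hbase mytest
hadoop-node1
hadoop-master
hadoop-node2
this is my test
EOF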

Run the following command:

../bin/hadoop fs -put words.txt /tmp/

This uploads the file to HDFS.
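To confirm the upload, you can list the target directory and print the file back from HDFS:

hadoop fs -ls /tmp/
hadoop fs -cat /tmp/words.txt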

Run the following command to count the words in the given file using MapReduce and write the results to the specified output path:

hadoop jar ../jars/hadoop-examples-2.6.0-mr1-cdh5.16.1.jar wordcount /tmp/words.txt /tmp/words_result.txt
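Note that MapReduce will refuse to start if the output path already exists, so when re-running the job, delete the previous result first:

hadoop fs -rm -r /tmp/words_result.txt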

The wordcount job returns the following information:

bash-4.2$ hadoop jar ../jars/hadoop-examples-2.6.0-mr1-cdh5.16.1.jar wordcount /tmp/words.txt /tmp/words_result.txt
// :: INFO client.RMProxy: Connecting to ResourceManager at node1/
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1552358721447_0060
// :: INFO impl.YarnClientImpl: Submitted application application_1552358721447_0060
// :: INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1552358721447_0060/
// :: INFO mapreduce.Job: Running job: job_1552358721447_0060
// :: INFO mapreduce.Job: Job job_1552358721447_0060 running in uber mode : false
// :: INFO mapreduce.Job:  map % reduce %
// :: INFO mapreduce.Job:  map % reduce %
// :: INFO mapreduce.Job:  map % reduce %
// :: INFO mapreduce.Job:  map % reduce %
// :: INFO mapreduce.Job: Job job_1552358721447_0060 completed successfully
// :: INFO mapreduce.Job: Counters:
        File System Counters
                FILE: Number of bytes read=
                FILE: Number of bytes written=
                FILE: Number of read operations=
                FILE: Number of large read operations=
                FILE: Number of write operations=
                HDFS: Number of bytes read=
                HDFS: Number of bytes written=
                HDFS: Number of read operations=
                HDFS: Number of large read operations=
                HDFS: Number of write operations=
        Job Counters
                Launched map tasks=
                Launched reduce tasks=
                Data-local map tasks=
                Total time spent by all maps in occupied slots (ms)=
                Total time spent by all reduces in occupied slots (ms)=
                Total time spent by all map tasks (ms)=
                Total time spent by all reduce tasks (ms)=
                Total vcore-milliseconds taken by all map tasks=
                Total vcore-milliseconds taken by all reduce tasks=
                Total megabyte-milliseconds taken by all map tasks=
                Total megabyte-milliseconds taken by all reduce tasks=
        Map-Reduce Framework
                Map input records=
                Map output records=
                Map output bytes=
                Map output materialized bytes=
                Input split bytes=
                Combine input records=
                Combine output records=
                Reduce input groups=
                Reduce shuffle bytes=
                Reduce input records=
                Reduce output records=
                Spilled Records=
                Shuffled Maps =
                Failed Shuffles=
                Merged Map outputs=
                GC time elapsed (ms)=
                CPU time spent (ms)=
                Physical memory (bytes) snapshot=
                Virtual memory (bytes) snapshot=
                Total committed heap usage (bytes)=
        Shuffle Errors
                BAD_ID=
                CONNECTION=
                IO_ERROR=
                WRONG_LENGTH=
                WRONG_MAP=
                WRONG_REDUCE=
        File Input Format Counters
                Bytes Read=
        File Output Format Counters
                Bytes Written=
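While the job is running, it can be followed at the tracking URL shown in the log; the same information is available from the shell by querying the job ID from this run:

mapred job -status job_1552358721447_0060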

The job's result files are saved under the /tmp/words_result.txt directory on HDFS.
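They can be listed with:

hadoop fs -ls /tmp/words_result.txt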

The result part files are numbered from part-r-00000 through part-r-00047, 48 in total.

Each part file corresponds to one reduce task.
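A 48-way reduce suggests the reduce count was configured cluster-wide. Since the example programs accept generic Hadoop options, the number of reducers can also be set per run; a sketch, with 4 as an illustrative value:

hadoop jar ../jars/hadoop-examples-2.6.0-mr1-cdh5.16.1.jar wordcount -D mapreduce.job.reduces=4 /tmp/words.txt /tmp/words_result.txt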

Run the following command to view the results of the job:

bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-*****

The returned results are as follows:

bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00000
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00011
is	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00015
this	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00022
hadoop	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00024
hbase	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00040
hadoop-node1	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00041
hadoop-master	1
hadoop-node2	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00045
my	1
bash-4.2$ hadoop fs -cat hdfs:///tmp/words_result.txt/part-r-00047
mytest	1
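Instead of cat-ing 48 part files one at a time, all of the output can be merged into a single local file:

hadoop fs -getmerge /tmp/words_result.txt ./words_result.txt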

Reference: https://jeoygin.org/2012/02/22/running-hadoop-on-centos-single-node-cluster/
