HDFS Basic Commands and Running a Hadoop MapReduce Program
I. HDFS Basic Commands
1. Create a directory: -mkdir
[jun@master ~]$ hadoop fs -mkdir /test
[jun@master ~]$ hadoop fs -mkdir /test/input
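If the parent directory might not exist yet, the two commands above can be collapsed into one with the -p flag, which creates any missing parent directories along the way (the same semantics as the Linux mkdir -p):
[jun@master ~]$ hadoop fs -mkdir -p /test/input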
2. List files and directories: -ls
[jun@master ~]$ hadoop fs -ls /
Found 1 items
drwxr-xr-x - jun supergroup 0 ... /test
[jun@master ~]$ hadoop fs -ls /test
Found 1 items
drwxr-xr-x - jun supergroup 0 ... /test/input
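To inspect a whole directory tree in one command, -ls also accepts -R for a recursive listing:
[jun@master ~]$ hadoop fs -ls -R /test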
3. Upload files to HDFS
First create two files, jun.dat and jun.txt, under /home/jun.
(1) Use -put to copy a file from the local filesystem to the HDFS cluster
[jun@master ~]$ hadoop fs -put /home/jun/jun.dat /test/input/jun.dat
(2) Use -copyFromLocal to copy a file from the local filesystem to the HDFS cluster
[jun@master ~]$ hadoop fs -copyFromLocal -f /home/jun/jun.txt /test/input/jun.txt
(3) Verify that the copies succeeded
[jun@master ~]$ hadoop fs -ls /test/input
Found 2 items
-rw-r--r-- jun supergroup ... /test/input/jun.dat
-rw-r--r-- jun supergroup ... /test/input/jun.txt
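For local sources, -put and -copyFromLocal behave identically; -copyFromLocal is simply restricted to local files. The -f flag used above forces an overwrite when the destination already exists; without it, copying onto an existing path fails. -put accepts -f as well, so re-uploading a changed file looks like:
[jun@master ~]$ hadoop fs -put -f /home/jun/jun.dat /test/input/jun.dat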
4. Download files to the local filesystem
(1) Use -get to copy a file from the HDFS cluster to the local filesystem
[jun@master ~]$ hadoop fs -get /test/input/jun.dat /home/jun/jun1.dat
(2) Use -copyToLocal to copy a file from the HDFS cluster to the local filesystem
[jun@master ~]$ hadoop fs -copyToLocal /test/input/jun.txt /home/jun/jun1.txt
(3) Verify that the copies succeeded
[jun@master ~]$ ls -l /home/jun/
total ...
drwxr-xr-x. jun jun ... Desktop
drwxr-xr-x. jun jun ... Documents
drwxr-xr-x. jun jun ... Downloads
drwxr-xr-x. jun jun ... hadoop
drwxrwxr-x. jun jun ... hadoopdata
-rw-r--r--. jun jun ... jun1.dat
-rw-r--r--. jun jun ... jun1.txt
-rw-rw-r--. jun jun ... jun.dat
-rw-rw-r--. jun jun ... jun.txt
drwxr-xr-x. jun jun ... Music
drwxr-xr-x. jun jun ... Pictures
drwxr-xr-x. jun jun ... Public
drwxr-xr-x. jun jun ... Resources
drwxr-xr-x. jun jun ... Templates
drwxr-xr-x. jun jun ... Videos
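Similarly, -get and -copyToLocal are equivalent for local destinations. When a job has written many part files into one HDFS directory, -getmerge is handy: it concatenates every file in the directory into a single local file. A sketch (the local output path here is just an illustration):
[jun@master ~]$ hadoop fs -getmerge /test/input /home/jun/merged.txt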
5. View a file stored in HDFS
[jun@master ~]$ hadoop fs -cat /test/input/jun.txt
This is the txt file.
[jun@master ~]$ hadoop fs -text /test/input/jun.txt
This is the txt file.
[jun@master ~]$ hadoop fs -tail /test/input/jun.txt
This is the txt file.
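The three commands differ in scope: -cat prints the raw bytes of the whole file, -text additionally decodes compressed files and SequenceFiles into readable text, and -tail shows only the last kilobyte. -tail also supports -f to keep following a file as it is appended, much like the Linux tail -f:
[jun@master ~]$ hadoop fs -tail -f /test/input/jun.txt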
6. Delete a file from HDFS
[jun@master ~]$ hadoop fs -rm /test/input/jun.txt
Deleted /test/input/jun.txt
[jun@master ~]$ hadoop fs -ls /test/input
Found 1 items
-rw-r--r-- jun supergroup ... /test/input/jun.dat
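Note that -rm by itself only removes files; deleting a directory and everything under it requires -r. If the HDFS trash feature is enabled, removed files are first moved to the user's trash directory, and -skipTrash deletes them permanently right away. For example, to remove the whole test tree once it is no longer needed:
[jun@master ~]$ hadoop fs -rm -r /test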
7. Commands can also be executed on a slave node
[jun@slave0 ~]$ hadoop fs -ls /test/input
Found 1 items
-rw-r--r-- jun supergroup ... /test/input/jun.dat
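Any node with the Hadoop client configuration can issue these commands, since they all talk to the same NameNode. The hdfs dfs form is equivalent to hadoop fs when the target filesystem is HDFS:
[jun@slave0 ~]$ hdfs dfs -ls /test/input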
II. Running a Program on the Hadoop Cluster
The Hadoop distribution ships with a jar of MapReduce example programs, among them a Java program that estimates the value of pi.
Argument breakdown: pi (the example program name), 10 (the number of map tasks), and 10 (the number of sample points generated per map).
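A brief sketch of how the example works (it is implemented by the QuasiMonteCarlo class in the examples jar, if memory serves): each map task generates its share of sample points inside the unit square, using a Halton sequence rather than truly random numbers, and counts how many fall within the circle inscribed in the square; a single reduce task sums the counts. The final estimate is

    pi ≈ 4 × (points inside the circle) / (total points)

so this run draws only 10 × 10 = 100 points in total, which explains the coarse result below.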
[jun@master ~]$ hadoop jar /home/jun/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8..jar pi 10 10
Number of Maps  = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
INFO client.RMProxy: Connecting to ResourceManager at master/192.168.1.100:...
INFO input.FileInputFormat: Total input files to process : 10
INFO mapreduce.JobSubmitter: number of splits:10
INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1532226440522_0001
INFO impl.YarnClientImpl: Submitted application application_1532226440522_0001
INFO mapreduce.Job: The url to track the job: http://master:18088/proxy/application_1532226440522_0001/
INFO mapreduce.Job: Running job: job_1532226440522_0001
INFO mapreduce.Job: Job job_1532226440522_0001 running in uber mode : false
INFO mapreduce.Job:  map 0% reduce 0%
...
INFO mapreduce.Job:  map 100% reduce 100%
INFO mapreduce.Job: Job job_1532226440522_0001 completed successfully
INFO mapreduce.Job: Counters: ...
    File System Counters
        FILE: Number of bytes read=...
        FILE: Number of bytes written=...
        FILE: Number of read operations=...
        FILE: Number of large read operations=...
        FILE: Number of write operations=...
        HDFS: Number of bytes read=...
        HDFS: Number of bytes written=...
        HDFS: Number of read operations=...
        HDFS: Number of large read operations=...
        HDFS: Number of write operations=...
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=...
        Data-local map tasks=...
        Total time spent by all maps in occupied slots (ms)=...
        Total time spent by all reduces in occupied slots (ms)=...
        Total time spent by all map tasks (ms)=...
        Total time spent by all reduce tasks (ms)=...
        Total vcore-milliseconds taken by all map tasks=...
        Total vcore-milliseconds taken by all reduce tasks=...
        Total megabyte-milliseconds taken by all map tasks=...
        Total megabyte-milliseconds taken by all reduce tasks=...
    Map-Reduce Framework
        Map input records=...
        Map output records=...
        Map output bytes=...
        Map output materialized bytes=...
        Input split bytes=...
        Combine input records=...
        Combine output records=...
        Reduce input groups=...
        Reduce shuffle bytes=...
        Reduce input records=...
        Reduce output records=...
        Spilled Records=...
        Shuffled Maps =...
        Failed Shuffles=...
        Merged Map outputs=...
        GC time elapsed (ms)=...
        CPU time spent (ms)=...
        Physical memory (bytes) snapshot=...
        Virtual memory (bytes) snapshot=...
        Total committed heap usage (bytes)=...
    Shuffle Errors
        BAD_ID=...
        CONNECTION=...
        IO_ERROR=...
        WRONG_LENGTH=...
        WRONG_MAP=...
        WRONG_REDUCE=...
    File Input Format Counters
        Bytes Read=...
    File Output Format Counters
        Bytes Written=...
Job Finished in 88.689 seconds
Estimated value of Pi is 3.20000000000000000000
As the output shows, the estimate comes out to roughly 3.2; since the estimator returns 4 × (hits inside the circle) / (total samples), this means 80 of the 100 sample points landed inside the circle.
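The precision is limited by the tiny sample count, not by Hadoop. Assuming the same examples jar, a heavier run such as the following (16 maps with one million samples each; these argument values are just an illustration) should land much closer to 3.14159:
[jun@master ~]$ hadoop jar /home/jun/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 16 1000000
Running the examples jar with no program name prints the full list of bundled examples (wordcount, grep, terasort, and so on), any of which can be launched the same way.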