The HDFS Shell
File system (FS) shell commands are invoked in the form $HADOOP_HOME/bin/hadoop fs.
All FS shell commands take URI paths as arguments.
The URI format is scheme://authority/path.
For HDFS the scheme is hdfs, as in hdfs://localhost:9000/; for the local file system the scheme is file, as in file://.
The scheme and authority are both optional; if unspecified, the default scheme from the configuration is used.
For example, /parent/child can be written as hdfs://namenode:namenodePort/parent/child, or simply /parent/child (assuming the configuration points to namenode:namenodePort).
Most FS shell commands behave much like their corresponding Unix shell commands.
For further reading, see http://www.cnblogs.com/zlslch/p/5107847.html
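As an illustration, with the default file system configured as hdfs://localhost:9000 (the path below is hypothetical), these two commands are equivalent:
- hadoop fs -ls hdfs://localhost:9000/user/hadoop
- hadoop fs -ls /user/hadoop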
bin/hdfs dfs commands
appendToFile
Usage: hdfs dfs -appendToFile <localsrc> ... <dst>
Appends one or more local files to the specified file on HDFS. Can also read input from stdin.
- hdfs dfs -appendToFile localfile /user/hadoop/hadoopfile
- hdfs dfs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile
- hdfs dfs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile
- hdfs dfs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
Exit Code:
Returns 0 on success and 1 on error.
cat
Usage: hdfs dfs -cat URI [URI ...]
Displays file contents (copies source paths to stdout).
Example:
- hdfs dfs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
- hdfs dfs -cat file:///file3 /user/hadoop/file4
Exit Code:
Returns 0 on success and -1 on error.
chgrp (change group)
Usage: hdfs dfs -chgrp [-R] GROUP URI [URI ...]
Changes the group association of files.
Options
- The -R option will make the change recursively through the directory structure.
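A typical invocation (the group and paths here are illustrative):
- hdfs dfs -chgrp hadoop /user/hadoop/file1
- hdfs dfs -chgrp -R hadoop /user/hadoop/dir1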
chmod
Usage: hdfs dfs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]
Changes the permissions of files.
Options
- The -R option will make the change recursively through the directory structure.
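For instance, using an octal mode and a recursive symbolic mode (the paths are illustrative):
- hdfs dfs -chmod 644 /user/hadoop/file1
- hdfs dfs -chmod -R go-w /user/hadoop/dir1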
chown
Usage: hdfs dfs -chown [-R] [OWNER][:[GROUP]] URI [URI ...]
Changes the owner of files.
Options
- The -R option will make the change recursively through the directory structure.
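For instance (the owner, group, and paths are illustrative):
- hdfs dfs -chown hadoop:hadoop /user/hadoop/file1
- hdfs dfs -chown -R hadoop /user/hadoop/dir1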
copyFromLocal
Usage: hdfs dfs -copyFromLocal <localsrc> URI
Similar to the put command, except that the source is restricted to a local file reference.
Options:
- The -f option will overwrite the destination if it already exists.
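For instance (file names are illustrative; -f overwrites an existing destination):
- hdfs dfs -copyFromLocal localfile /user/hadoop/hadoopfile
- hdfs dfs -copyFromLocal -f localfile /user/hadoop/hadoopfile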
copyToLocal
Usage: hdfs dfs -copyToLocal [-ignorecrc] [-crc] URI <localdst>
Similar to the get command, except that the destination is restricted to a local file reference.
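For instance, mirroring the get examples below (file names are illustrative):
- hdfs dfs -copyToLocal /user/hadoop/file localfile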
count
Usage: hdfs dfs -count [-q] [-h] <paths>
Counts the number of directories, files, and bytes under the given paths. The output columns with -count are: DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The output columns with -count -q are: QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA, DIR_COUNT, FILE_COUNT, CONTENT_SIZE, FILE_NAME
The -h option shows sizes in human readable format.
Example:
- hdfs dfs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2
- hdfs dfs -count -q hdfs://nn1.example.com/file1
- hdfs dfs -count -q -h hdfs://nn1.example.com/file1
Exit Code:
Returns 0 on success and -1 on error.
cp
Usage: hdfs dfs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest>
Copies files or directories. The destination can be overwritten, and the original file attributes can be preserved.
Options:
- The -f option will overwrite the destination if it already exists.
- The -p option will preserve file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no arg, then it preserves timestamps, ownership, and permission. If -pa is specified, then it preserves permission also, because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag.
Example:
- hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2
- hdfs dfs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
du
Usage: hdfs dfs -du [-s] [-h] URI [URI ...]
Displays the sizes of files and directories.
Options:
- The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.
- The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864)
Example:
- hdfs dfs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.
dus
Usage: hdfs dfs -dus <args>
Displays a summary of file lengths.
Note: This command is deprecated. Instead use hdfs dfs -du -s.
expunge
Usage: hdfs dfs -expunge
Empties the trash.
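The command takes no arguments:
- hdfs dfs -expunge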
get
Usage: hdfs dfs -get [-ignorecrc] [-crc] <src> <localdst>
Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
Example:
- hdfs dfs -get /user/hadoop/file localfile
- hdfs dfs -get hdfs://nn.example.com/user/hadoop/file localfile
Exit Code:
Returns 0 on success and -1 on error.
getfacl
Usage: hdfs dfs -getfacl [-R] <path>
Displays the Access Control Lists (ACLs) of files and directories.
Options:
- -R: List the ACLs of all files and directories recursively.
- path: File or directory to list.
Examples:
- hdfs dfs -getfacl /file
- hdfs dfs -getfacl -R /dir
Exit Code:
Returns 0 on success and non-zero on error.
getfattr
Usage: hdfs dfs -getfattr [-R] -n name | -d [-e en] <path>
Displays the extended attribute names and values (if any) for a file or directory.
Options:
- -R: Recursively list the attributes for all files and directories.
- -n name: Dump the named extended attribute value.
- -d: Dump all extended attribute values associated with pathname.
- -e encoding: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
- path: The file or directory.
Examples:
- hdfs dfs -getfattr -d /file
- hdfs dfs -getfattr -R -n user.myAttr /dir
Exit Code:
Returns 0 on success and non-zero on error.
getmerge
Usage: hdfs dfs -getmerge <src> <localdst> [addnl]
Takes a source directory and a destination file as input, and concatenates the files under src into a single local destination file. The optional addnl argument adds a newline character at the end of each file.
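A typical invocation (the paths are illustrative; addnl is the optional literal argument from the usage line):
- hdfs dfs -getmerge /user/hadoop/dir1 ./merged.txt
- hdfs dfs -getmerge /user/hadoop/dir1 ./merged.txt addnl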
ls
Usage: hdfs dfs -ls [-R] <args>
Options:
- The -R option will return stat recursively through the directory structure.
For a file returns stat on the file with the following format:
permissions number_of_replicas userid groupid filesize modification_date modification_time filename
For a directory it returns the list of its direct children, as in Unix. A directory is listed as:
permissions userid groupid modification_date modification_time dirname
Example:
- hdfs dfs -ls /user/hadoop/file1
Exit Code:
Returns 0 on success and -1 on error.
lsr
Usage: hdfs dfs -lsr <args>
Recursive version of ls.
Note: This command is deprecated. Instead use hdfs dfs -ls -R
mkdir
Usage: hdfs dfs -mkdir [-p] <paths>
Takes path URIs as arguments and creates directories.
Options:
- The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
Example:
- hdfs dfs -mkdir /user/hadoop/dir1 /user/hadoop/dir2
- hdfs dfs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir
Exit Code:
Returns 0 on success and -1 on error.
moveFromLocal
Usage: hdfs dfs -moveFromLocal <localsrc> <dst>
Similar to the put command, except that the source localsrc is deleted after it is copied.
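For instance (file names are illustrative):
- hdfs dfs -moveFromLocal localfile /user/hadoop/hadoopfile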
moveToLocal
Usage: hdfs dfs -moveToLocal [-crc] <src> <dst>
Displays a "Not implemented yet" message.
mv
Usage: hdfs dfs -mv URI [URI ...] <dest>
Moves files from source to destination. This command allows multiple sources as well in which case the destination needs to be a directory. Moving files across file systems is not permitted.
Example:
- hdfs dfs -mv /user/hadoop/file1 /user/hadoop/file2
- hdfs dfs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1
Exit Code:
Returns 0 on success and -1 on error.
put
Usage: hdfs dfs -put <localsrc> ... <dst>
Copy single src, or multiple srcs from local file system to the destination file system. Also reads input from stdin and writes to destination file system.
- hdfs dfs -put localfile /user/hadoop/hadoopfile
- hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
- hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
- hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
Exit Code:
Returns 0 on success and -1 on error.
rm
Usage: hdfs dfs -rm [-f] [-r|-R] [-skipTrash] URI [URI ...]
Delete files specified as args.
Options:
- The -f option will not display a diagnostic message or modify the exit status to reflect an error if the file does not exist.
- The -R option deletes the directory and any content under it recursively.
- The -r option is equivalent to -R.
- The -skipTrash option will bypass trash, if enabled, and delete the specified file(s) immediately. This can be useful when it is necessary to delete files from an over-quota directory.
Example:
- hdfs dfs -rm hdfs://nn.example.com/file /user/hadoop/emptydir
Exit Code:
Returns 0 on success and -1 on error.
rmr
Usage: hdfs dfs -rmr [-skipTrash] URI [URI ...]
Recursive version of delete.
Note: This command is deprecated. Instead use hdfs dfs -rm -r
setfacl
Usage: hdfs dfs -setfacl [-R] [-b|-k -m|-x <acl_spec> <path>]|[--set <acl_spec> <path>]
Sets Access Control Lists (ACLs) of files and directories.
Options:
- -b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
- -k: Remove the default ACL.
- -R: Apply operations to all files and directories recursively.
- -m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
- -x: Remove specified ACL entries. Other ACL entries are retained.
- --set: Fully replace the ACL, discarding all existing entries. The acl_spec must include entries for user, group, and others for compatibility with permission bits.
- acl_spec: Comma separated list of ACL entries.
- path: File or directory to modify.
Examples:
- hdfs dfs -setfacl -m user:hadoop:rw- /file
- hdfs dfs -setfacl -x user:hadoop /file
- hdfs dfs -setfacl -b /file
- hdfs dfs -setfacl -k /dir
- hdfs dfs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file
- hdfs dfs -setfacl -R -m user:hadoop:r-x /dir
- hdfs dfs -setfacl -m default:user:hadoop:r-x /dir
Exit Code:
Returns 0 on success and non-zero on error.
setfattr
Usage: hdfs dfs -setfattr -n name [-v value] | -x name <path>
Sets an extended attribute name and value for a file or directory.
Options:
- -n name: The extended attribute name.
- -v value: The extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, then the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, then it is taken as a hexadecimal number. If the argument begins with 0s or 0S, then it is taken as a base64 encoding.
- -x name: Remove the extended attribute.
- path: The file or directory.
Examples:
- hdfs dfs -setfattr -n user.myAttr -v myValue /file
- hdfs dfs -setfattr -n user.noValue /file
- hdfs dfs -setfattr -x user.myAttr /file
Exit Code:
Returns 0 on success and non-zero on error.
setrep
Usage: hdfs dfs -setrep [-R] [-w] <numReplicas> <path>
Changes the replication factor of a file. If path is a directory then the command recursively changes the replication factor of all files under the directory tree rooted at path.
Options:
- The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
- The -R flag is accepted for backwards compatibility. It has no effect.
Example:
- hdfs dfs -setrep -w 3 /user/hadoop/dir1
Exit Code:
Returns 0 on success and -1 on error.
stat
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
Example:
- hdfs dfs -stat path
Exit Code: Returns 0 on success and -1 on error.
tail
Usage: hdfs dfs -tail [-f] URI
Displays last kilobyte of the file to stdout.
Options:
- The -f option will output appended data as the file grows, as in Unix.
Example:
- hdfs dfs -tail pathname
Exit Code: Returns 0 on success and -1 on error.
test
Usage: hdfs dfs -test -[ezd] URI
Options:
- The -e option will check to see if the file exists, returning 0 if true.
- The -z option will check to see if the file is zero length, returning 0 if true.
- The -d option will check to see if the path is directory, returning 0 if true.
Example:
- hdfs dfs -test -e filename
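Since the result is reported only through the exit code, a shell conditional is the usual way to consume it (the path is illustrative):
- hdfs dfs -test -e /user/hadoop/file1 && echo "exists" || echo "missing"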
text
Usage: hdfs dfs -text <src>
Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
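For instance, to print a SequenceFile as text (the path is illustrative):
- hdfs dfs -text /user/hadoop/records.seq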
touchz
Usage: hdfs dfs -touchz URI [URI ...]
Create a file of zero length.
Example:
- hdfs dfs -touchz pathname
Exit Code: Returns 0 on success and -1 on error.