Common HDFS File Operations in Hadoop, and Pitfalls to Watch For (Updated)
1.Copy a file from the local file system to HDFS
The srcFile variable needs to contain the full name (path + file name) of the file in the local file system.
The dstFile variable needs to contain the desired full name of the file in the Hadoop file system.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);
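For reference, here is a minimal self-contained version of the same operation (the paths are hypothetical). The three imports shown are the ones every snippet in this post needs; later snippets additionally use org.apache.hadoop.fs.FSDataOutputStream, org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.BlockLocation, org.apache.hadoop.hdfs.DistributedFileSystem and org.apache.hadoop.hdfs.protocol.DatanodeInfo.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHdfs {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        FileSystem hdfs = FileSystem.get(config);
        Path srcPath = new Path("/tmp/local-file.txt");       // hypothetical local source
        Path dstPath = new Path("/user/demo/local-file.txt"); // hypothetical HDFS destination
        hdfs.copyFromLocalFile(srcPath, dstPath);
        hdfs.close();
    }
}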
2.Create HDFS file
The fileName variable contains the file name and path in the Hadoop file system.
The content of the file is the buff variable which is an array of bytes.
//byte[] buff - The content of the file
// Creates an HDFS file and writes the contents of the buff array to it.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FSDataOutputStream outputStream = hdfs.create(path);
outputStream.write(buff, 0, buff.length);
outputStream.close(); // close the stream so the data is flushed and the lease is released
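For illustration, the two inputs could be built like this (hypothetical content and path):
byte[] buff = "hello hdfs".getBytes("UTF-8");
String fileName = "/user/demo/hello.txt"; // hypothetical HDFS path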
3.Rename HDFS file
In order to rename a file in the Hadoop file system, we need the full name (path + name) of
the file we want to rename. The rename method returns true if the file was renamed, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path fromPath = new Path(fromFileName);
Path toPath = new Path(toFileName);
boolean isRenamed = hdfs.rename(fromPath, toPath);
4.Delete HDFS file
In order to delete a file in the Hadoop file system, we need the full name (path + name)
of the file we want to delete. The delete method returns true if the file was deleted, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, false); // non-recursive: deleting a non-empty directory fails
To delete a directory together with everything under it, pass true for the recursive flag:
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, true); // recursive delete
5.Get HDFS file last modification time
In order to get the last modification time of a file in the Hadoop file system,
we need the full name (path + name) of the file.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime(); // milliseconds since the epoch
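Since the returned value is epoch milliseconds, converting it into a readable date is straightforward:
java.util.Date lastModified = new java.util.Date(modificationTime);
System.out.println("Last modified: " + lastModified);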
6.Check if a file exists in HDFS
In order to check the existence of a file in the Hadoop file system,
we need the full name (path + name) of the file we want to check.
The exists method returns true if the file exists, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isExists = hdfs.exists(path);
7.Get the locations of a file in the HDFS cluster
A file can exist on more than one node in the Hadoop file system cluster, for two reasons:
Based on the HDFS cluster configuration, Hadoop splits each file into blocks and stores those blocks on different nodes in the cluster.
Based on the HDFS cluster configuration, Hadoop keeps more than one copy (replica) of each block on different nodes for redundancy (the default is three replicas).
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i = 0; i < blkCount; i++) {
String[] hosts = blkLocations[i].getHosts();
// Do something with the block hosts
}
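As a rough usage sketch, the placeholder in the loop above could print each block's offset, length, and hosts:
for (int i = 0; i < blkCount; i++) {
    BlockLocation blk = blkLocations[i];
    System.out.println("Block " + i
            + " offset=" + blk.getOffset()
            + " length=" + blk.getLength()
            + " hosts=" + java.util.Arrays.toString(blk.getHosts()));
}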
8. Get a list of all the nodes host names in the HDFS cluster
This method casts the FileSystem object to a DistributedFileSystem object.
It works only when Hadoop is configured as a cluster.
Running Hadoop on the local machine only, in a non-cluster configuration, will cause this method to throw an exception.
Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
names[i] = dataNodeStats[i].getHostName();
}
Problems encountered and their solutions:
1. The file append problem
Since version 1.0.4, the Hadoop API has supported appending to files, but using it in production is not recommended, for the reason given in the Hadoop FAQ: "Does HDFS allow appends to files? This is currently set to false because there are bugs in the 'append code' and it is not supported in any production cluster."
If you want to test it, you need to set the dfs.support.append parameter to true; otherwise the client will get an error when writing:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: Append to hdfs not supported. Please refer to dfs.support.append configuration parameter.
Solution: modify hdfs-site.xml on the namenode (the flag is checked on the server side, so setting it only on the client does not help) and restart HDFS for the change to take effect:
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>
2. Using append with Hadoop-1.0.4 and Hadoop-2.2. Requirement: append to a file, and if the file does not exist, create it first.
Exception:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /huangq/dailyRolling/mommy-dailyRolling for DFSClient_-1456545217 on client 10.1.85.243 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1374)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1246)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1426)
at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:643)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
Code version 1 (the code that produces the error above):
FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
fs.create(dstPath);
}
FSDataOutputStream fsout = fs.append(dstPath);
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout));
Cause of the exception: FSDataOutputStream create(Path f) opens an output stream that must be closed once the file has been created. Until it is closed, the client still holds the lease on the file, so the subsequent append() looks to the namenode like the same lease holder trying to recreate the file.
Solution: close the FSDataOutputStream after creating the file.
FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
fs.create(dstPath).close(); // close immediately so the lease is released before append()
}
FSDataOutputStream fsout = fs.append(dstPath);
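For completeness, here is a self-contained sketch of the create-if-missing-then-append flow (the path and content are hypothetical; it assumes dfs.support.append is enabled on the cluster):
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dstPath = new Path("/user/demo/daily.log"); // hypothetical HDFS path
        if (!fs.exists(dstPath)) {
            fs.create(dstPath).close(); // create an empty file and release the lease
        }
        FSDataOutputStream fsout = fs.append(dstPath);
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout, "UTF-8"));
        bw.write("one appended line");
        bw.newLine();
        bw.close(); // flushes and closes the underlying stream, releasing the lease
        fs.close();
    }
}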