Common Hadoop HDFS File Operations and Notes (Updated)
1. Copy a file from the local file system to HDFS
The srcFile variable needs to contain the full name (path + file name) of the file in the local file system.
The dstFile variable needs to contain the desired full name of the file in the Hadoop file system.
// Imports used by this and the following snippets
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstFile);
hdfs.copyFromLocalFile(srcPath, dstPath);
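If the destination may already exist, FileSystem also provides an overload with explicit delSrc and overwrite flags; a minimal sketch (the flag values here are illustrative):
// Keep the local source (delSrc = false) and overwrite any existing destination (overwrite = true).
hdfs.copyFromLocalFile(false, true, srcPath, dstPath);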
2. Create an HDFS file
The fileName variable contains the file name and path in the Hadoop file system.
The content of the file comes from the buff variable, an array of bytes.
// byte[] buff - the content of the file
// Creates an HDFS file and writes the contents of the buff array into it.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FSDataOutputStream outputStream = hdfs.create(path);
outputStream.write(buff, 0, buff.length);
outputStream.close(); // release the lease so later opens of the file do not fail
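By default create() overwrites an existing file; if that is not wanted, there is an overload with an explicit overwrite flag. A short sketch:
// With overwrite = false, create() throws an IOException instead of silently replacing the file.
FSDataOutputStream out = hdfs.create(path, false);
out.write(buff, 0, buff.length);
out.close();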
3. Rename an HDFS file
In order to rename a file in the Hadoop file system, we need the full name (path + name) of
the file we want to rename. The rename method returns true if the file was renamed, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path fromPath = new Path(fromFileName);
Path toPath = new Path(toFileName);
boolean isRenamed = hdfs.rename(fromPath, toPath);
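Note that rename reports failure through its return value rather than an exception (for example, when the destination already exists), so the boolean should be checked:
if (!isRenamed) {
    System.err.println("Rename failed: " + fromPath + " -> " + toPath);
}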
4. Delete an HDFS file
In order to delete a file in the Hadoop file system, we need the full name (path + name)
of the file we want to delete. The delete method returns true if the file was deleted, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, false); // non-recursive: refuses to delete a non-empty directory
To remove a directory together with everything under it, pass true as the second (recursive) argument:
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isDeleted = hdfs.delete(path, true);
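delete also returns false (rather than throwing) when the path does not exist, so a common pattern is to guard the call; a small sketch:
// Only attempt the delete when the path is actually there.
if (hdfs.exists(path)) {
    hdfs.delete(path, true);
}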
5. Get the last modification time of an HDFS file
In order to get the last modification time of a file in the Hadoop file system,
we need the full name (path + name) of the file.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
long modificationTime = fileStatus.getModificationTime(); // epoch milliseconds
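The raw value is in epoch milliseconds; to print something readable it can be wrapped in a Date, for example:
// Format the timestamp for display.
System.out.println(new java.util.Date(modificationTime));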
6. Check if a file exists in HDFS
In order to check the existence of a file in the Hadoop file system,
we need the full name (path + name) of the file we want to check.
The exists method returns true if the file exists, otherwise false.
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
boolean isExists = hdfs.exists(path);
7. Get the locations of a file in the HDFS cluster
A file can exist on more than one node in the Hadoop file system cluster for two reasons:
Based on the HDFS cluster configuration, Hadoop splits files into blocks and stores those blocks on different nodes in the cluster.
Based on the HDFS cluster configuration, Hadoop keeps more than one copy of each block on different nodes for redundancy (the default replication factor is three).
Configuration config = new Configuration();
FileSystem hdfs = FileSystem.get(config);
Path path = new Path(fileName);
FileStatus fileStatus = hdfs.getFileStatus(path);
BlockLocation[] blkLocations = hdfs.getFileBlockLocations(fileStatus, 0, fileStatus.getLen());
int blkCount = blkLocations.length;
for (int i = 0; i < blkCount; i++) {
    String[] hosts = blkLocations[i].getHosts();
    // Do something with the block hosts
}
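As an illustration of what "do something" might look like, the following sketch prints each block's offset, length, and the datanodes that hold a replica:
for (BlockLocation blk : blkLocations) {
    System.out.println("offset " + blk.getOffset() + ", length " + blk.getLength()
            + ", hosts " + java.util.Arrays.toString(blk.getHosts()));
}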
8. Get a list of all the node host names in the HDFS cluster
This method casts the FileSystem object to a DistributedFileSystem object.
It will therefore work only when Hadoop is configured as a cluster.
Running Hadoop on the local machine only, in a non-cluster configuration, will cause this method to throw an exception.
Configuration config = new Configuration();
FileSystem fs = FileSystem.get(config);
DistributedFileSystem hdfs = (DistributedFileSystem) fs;
DatanodeInfo[] dataNodeStats = hdfs.getDataNodeStats();
String[] names = new String[dataNodeStats.length];
for (int i = 0; i < dataNodeStats.length; i++) {
    names[i] = dataNodeStats[i].getHostName();
}
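Since it is the cast that fails on a local (non-HDFS) FileSystem, a defensive version checks the runtime type first; a minimal sketch:
// Guard the cast so a local file:// configuration fails gracefully.
if (fs instanceof DistributedFileSystem) {
    DatanodeInfo[] stats = ((DistributedFileSystem) fs).getDataNodeStats();
    // ... collect host names as above ...
} else {
    System.err.println("Not an HDFS cluster: " + fs.getUri());
}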
Problems encountered and solutions:
1. The file append problem
Since Hadoop 1.0.4 the API has supported appending to files, but it is not recommended for production use. The Hadoop FAQ explains why: "Does HDFS allow appends to files? This is currently set to false because there are bugs in the 'append code' and it is not supported in any production cluster."
If you want to test it anyway, set the dfs.support.append parameter to true; otherwise client writes fail with:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: Append to hdfs not supported. Please refer to dfs.support.append configuration parameter.
Solution: edit hdfs-site.xml on the namenode:
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>
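Once the property is enabled and the cluster restarted, appending itself is a single call on the FileSystem API; a minimal sketch (the path and payload here are purely illustrative):
FileSystem hdfs = FileSystem.get(new Configuration());
FSDataOutputStream out = hdfs.append(new Path("/tmp/example.log")); // needs dfs.support.append = true
out.write("one more line\n".getBytes("UTF-8"));
out.close();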
2. Using append on Hadoop-1.0.4 and Hadoop-2.2, with the requirement: append to a file, and if the file does not exist, create it first.
Exception:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /huangq/dailyRolling/mommy-dailyRolling for DFSClient_-1456545217 on client 10.1.85.243 because current leaseholder is trying to recreate file.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1374)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1246)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1426)
at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:643)
at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
Code version 1 (the code that produces the error above):
FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
    fs.create(dstPath); // the returned stream is never closed - this is the bug
}
FSDataOutputStream fsout = fs.append(dstPath);
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout));
Cause of the exception: FSDataOutputStream create(Path f) opens an output stream; until that stream is closed the client holds the file's lease, so the append that follows is rejected.
Solution: close the FSDataOutputStream as soon as the file has been created.
FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
    fs.create(dstPath).close(); // close immediately to release the lease
}
FSDataOutputStream fsout = fs.append(dstPath);
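For completeness, the append can then be finished with the same BufferedWriter wrapper used in version 1; closing the writer flushes the data and releases the lease (the content written is illustrative):
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout));
bw.write("appended line");
bw.newLine();
bw.close(); // also closes the underlying FSDataOutputStream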