[Original] Uncle's Troubleshooting Series (20): HDFS create writes fine, but append fails
Recently I hit a problem writing files to HDFS: writing with create succeeded, but writing with append failed, reproducibly every time. A minimal example:
FileSystem fs = FileSystem.get(conf);
OutputStream out = fs.create(file);
IOUtils.copyBytes(in, out, 4096, true); // works (the true flag also closes in and out)
out = fs.append(file);                  // throws: "... is not sufficiently replicated yet"
IOUtils.copyBytes(in, out, 4096, true); // never reached; in would need reopening anyway
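For completeness, a self-contained version of the repro looks roughly like this (a minimal sketch: the path /tmp/append-test.txt and the inline test bytes are placeholders of mine, not from the original setup):

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class AppendRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/append-test.txt");

        // First write: create() succeeds regardless of the replica state.
        InputStream in = new ByteArrayInputStream("hello\n".getBytes("UTF-8"));
        OutputStream out = fs.create(file);
        IOUtils.copyBytes(in, out, 4096, true); // the true flag closes both streams

        // Second write: append() fails when the file's last block is
        // complete but under-replicated (the NameNode-side check traced below).
        in = new ByteArrayInputStream("world\n".getBytes("UTF-8"));
        out = fs.append(file);
        IOUtils.copyBytes(in, out, 4096, true);
    }
}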
Checking the problematic file with hdfs fsck showed that it had only one replica. Could that be the cause?
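For reference, an fsck invocation along the following lines prints per-block replica counts, the DataNodes holding each replica, and their rack assignments (the path is a placeholder; -racks turns out to be relevant given the root cause below):

hdfs fsck /path/to/file -files -blocks -locations -racks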
Let's trace the FileSystem.append call path. The entry point is the abstract method:
org.apache.hadoop.fs.FileSystem
public abstract FSDataOutputStream append(Path var1, int var2, Progressable var3) throws IOException;
Our one-argument fs.append(file) forwards here with the default buffer size (io.file.buffer.size, 4096 by default). The HDFS implementation is in DistributedFileSystem:
org.apache.hadoop.hdfs.DistributedFileSystem
public FSDataOutputStream append(Path f, final int bufferSize, final Progressable progress) throws IOException {
    this.statistics.incrementWriteOps(1);
    Path absF = this.fixRelativePart(f);
    return (FSDataOutputStream)(new FileSystemLinkResolver<FSDataOutputStream>() {
        public FSDataOutputStream doCall(Path p) throws IOException, UnresolvedLinkException {
            return DistributedFileSystem.this.dfs.append(DistributedFileSystem.this.getPathName(p), bufferSize, progress, DistributedFileSystem.this.statistics);
        }

        public FSDataOutputStream next(FileSystem fs, Path p) throws IOException {
            return fs.append(p, bufferSize);
        }
    }).resolve(this, absF);
}
doCall runs against the resolved HDFS path (next handles the case where a symlink points into another file system), so this hands off to DFSClient.append:
org.apache.hadoop.hdfs.DFSClient
private DFSOutputStream append(String src, int buffersize, Progressable progress) throws IOException {
    this.checkOpen();
    DFSOutputStream result = this.callAppend(src, buffersize, progress);
    this.beginFileLease(result.getFileId(), result);
    return result;
}

private DFSOutputStream callAppend(String src, int buffersize, Progressable progress) throws IOException {
    LocatedBlock lastBlock = null;
    try {
        // Ask the NameNode for permission to append; returns the file's last
        // (possibly partial) block for the client to continue writing into.
        lastBlock = this.namenode.append(src, this.clientName);
    } catch (RemoteException var6) {
        throw var6.unwrapRemoteException(new Class[]{AccessControlException.class, FileNotFoundException.class, SafeModeException.class, DSQuotaExceededException.class, UnsupportedOperationException.class, UnresolvedPathException.class, SnapshotAccessControlException.class});
    }

    HdfsFileStatus newStat = this.getFileInfo(src);
    return DFSOutputStream.newStreamForAppend(this, src, buffersize, progress, lastBlock, newStat, this.dfsClientConf.createChecksum());
}
callAppend then crosses the ClientProtocol RPC boundary (this.namenode) and lands in NameNodeRpcServer.append on the NameNode:
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer
public LocatedBlock append(String src, String clientName) throws IOException {
    this.checkNNStartup();
    String clientMachine = getClientMachine();
    if (stateChangeLog.isDebugEnabled()) {
        stateChangeLog.debug("*DIR* NameNode.append: file " + src + " for " + clientName + " at " + clientMachine);
    }

    this.namesystem.checkOperation(OperationCategory.WRITE);
    LocatedBlock info = this.namesystem.appendFile(src, clientName, clientMachine);
    this.metrics.incrFilesAppended();
    return info;
}
which delegates to FSNamesystem.appendFile:
org.apache.hadoop.hdfs.server.namenode.FSNamesystem
LocatedBlock appendFile(String src, String holder, String clientMachine) throws AccessControlException, SafeModeException,
...
    lb = this.appendFileInt(src, holder, clientMachine, cacheEntry != null);

private LocatedBlock appendFileInt(String srcArg, String holder, String clientMachine, boolean logRetryCache) throws
...
    lb = this.appendFileInternal(pc, src, holder, clientMachine, logRetryCache);

private LocatedBlock appendFileInternal(FSPermissionChecker pc, String src, String holder, String clientMachine, boolean logRetryCache) throws AccessControlException, UnresolvedLinkException, FileNotFoundException, IOException {
    assert this.hasWriteLock();

    INodesInPath iip = this.dir.getINodesInPath4Write(src);
    INode inode = iip.getLastINode();
    if (inode != null && inode.isDirectory()) {
        throw new FileAlreadyExistsException("Cannot append to directory " + src + "; already exists as a directory.");
    } else {
        if (this.isPermissionEnabled) {
            this.checkPathAccess(pc, src, FsAction.WRITE);
        }

        try {
            if (inode == null) {
                throw new FileNotFoundException("failed to append to non-existent file " + src + " for client " + clientMachine);
            } else {
                INodeFile myFile = INodeFile.valueOf(inode, src, true);
                BlockStoragePolicy lpPolicy = this.blockManager.getStoragePolicy("LAZY_PERSIST");
                if (lpPolicy != null && lpPolicy.getId() == myFile.getStoragePolicyID()) {
                    throw new UnsupportedOperationException("Cannot append to lazy persist file " + src);
                } else {
                    this.recoverLeaseInternal(myFile, src, holder, clientMachine, false);
                    myFile = INodeFile.valueOf(this.dir.getINode(src), src, true);
                    BlockInfo lastBlock = myFile.getLastBlock();
                    // This is the check our append trips over:
                    if (lastBlock != null && lastBlock.isComplete() && !this.getBlockManager().isSufficientlyReplicated(lastBlock)) {
                        throw new IOException("append: lastBlock=" + lastBlock + " of src=" + src + " is not sufficiently replicated yet.");
                    } else {
                        return this.prepareFileForWrite(src, iip, holder, clientMachine, true, logRetryCache);
                    }
                }
            }
        } catch (IOException var11) {
            NameNode.stateChangeLog.warn("DIR* NameSystem.append: " + var11.getMessage());
            throw var11;
        }
    }
}

isSufficientlyReplicated is implemented in BlockManager:

public boolean isSufficientlyReplicated(BlockInfo b) {
    int replication = Math.min(this.minReplication, this.getDatanodeManager().getNumLiveDataNodes());
    return this.countNodes(b).liveReplicas() >= replication;
}
So on append, the NameNode first fetches the file's last block and checks whether it meets the replication requirement: if the block is complete but its live replica count is below min(dfs.namenode.replication.min, number of live DataNodes), it throws the exception above; otherwise it prepares the file for writing.
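To make the threshold arithmetic concrete, here is the check restated standalone with hypothetical numbers (the values are illustrative assumptions, not measurements from the cluster):

public class ReplicationCheckExample {
    public static void main(String[] args) {
        int minReplication = 2; // assumption: dfs.namenode.replication.min raised to 2
        int liveDataNodes  = 3; // live DataNodes known to the DatanodeManager
        int liveReplicas   = 1; // live replicas of the file's last block

        int required = Math.min(minReplication, liveDataNodes); // = 2
        System.out.println("append allowed: " + (liveReplicas >= required)); // false

        // Note: with the default minReplication of 1 and at least one live
        // DataNode, a single live replica already passes, so the check as
        // shown can only reject a one-replica block if the minimum was
        // raised or the lone replica is not counted as live.
    }
}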
So the cause was indeed that the file had too few replicas, which makes append fail. That left the question of why a newly created file had only one replica in the first place; the answer turned out to be a broken rack-awareness configuration, detailed at https://www.cnblogs.com/barneywill/p/10114504.html