Fix Corrupt Blocks on HDFS
Source: http://centoshowtos.org/hadoop/fix-corrupt-blocks-on-hdfs/
How do I know if my Hadoop HDFS filesystem has corrupt blocks, and how do I fix them?
The easiest way to determine this is to run fsck on the filesystem. If you have set up your Hadoop environment variables, you should be able to use a path of /; if not, use a full URI such as hdfs://ip.or.hostname:8020/ (the NameNode RPC port, commonly 8020; note that 50070 is the NameNode's HTTP UI port and will not work in an hdfs:// URI).
hdfs fsck /
or
hdfs fsck hdfs://ip.or.hostname:8020/
If the end of your output looks something like this, you have corrupt blocks on your filesystem.
.............................Status: CORRUPT
Total size: 3453345169348 B (Total open files size: 664 B)
Total dirs: 15233
Total files: 14029
Total symlinks: 0 (Files currently being written: 8)
Total blocks (validated): 40961 (avg. block size 84308126 B) (Total open file blocks (not validated): 8)
********************************
CORRUPT FILES: 2
MISSING BLOCKS: 2
MISSING SIZE: 15731297 B
CORRUPT BLOCKS: 2
********************************
Corrupt blocks: 2
Number of data-nodes: 12
Number of racks: 2
FSCK ended at Fri Mar 27 XX:03:21 UTC 201X in XXX milliseconds
The filesystem under path '/' is CORRUPT
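If you only want the verdict and not the full report, you can grep for the summary line. A minimal sketch; the Status line appears near the end of fsck's output, and a healthy cluster reports "Status: HEALTHY" instead.
hdfs fsck / | grep -i 'Status:'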
How do I know which files have blocks that are corrupt?
The output of the fsck above will be very verbose, but it will mention which blocks are corrupt. We can grep the output so that we aren't "reading through a firehose".
hdfs fsck / | egrep -v '^\.+$' | grep -v replica | grep -v Replica
or
hdfs fsck hdfs://ip.or.host:8020/ | egrep -v '^\.+$' | grep -v replica | grep -v Replica
This lists the affected files without the wall of dots, and also filters out messages about under-replicated blocks (which aren't necessarily a problem). The output should include a line like this for each affected file.
/path/to/filename.fileextension: CORRUPT blockpool BP-1016133662-10.29.100.41-1415825958975 block blk_1073904305
/path/to/filename.fileextension: MISSING 1 blocks of total size 15620361 B
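Depending on your Hadoop version, fsck can also print just the list of corrupt files directly, which avoids the grep pipeline above. Check hdfs fsck -help to confirm your build supports the flag before relying on it.
hdfs fsck / -list-corruptfileblocks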
The next step is to determine the importance of the file: can it simply be removed and copied back into place, or does it contain data that would need to be regenerated?
If it's easy enough just to replace the file, that's the route I would take.
Remove the corrupted file from your Hadoop cluster
This command will move the corrupted file to the trash.
hdfs dfs -rm /path/to/filename.fileextension
hdfs dfs -rm hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension
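If you deleted to the trash and change your mind, the file should sit under your HDFS trash directory until the trash interval expires. A sketch, assuming the default trash location of /user/<username>/.Trash and that your local username matches your HDFS user:
hdfs dfs -ls /user/$USER/.Trash/Current/path/to/
hdfs dfs -mv /user/$USER/.Trash/Current/path/to/filename.fileextension /path/to/filename.fileextension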
Or you can skip the trash and permanently delete the file (which is probably what you want to do):
hdfs dfs -rm -skipTrash /path/to/filename.fileextension
hdfs dfs -rm -skipTrash hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension
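Once the corrupt copy is gone, put a known-good copy back into place. A minimal sketch, assuming you still have the file on local disk (the local backup path here is hypothetical):
hdfs dfs -put /local/backup/filename.fileextension /path/to/filename.fileextension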
How would I repair a corrupted file if it was not easy to replace?
This might or might not be possible, but the first step is to gather information about the file's blocks and their locations.
hdfs fsck /path/to/filename.fileextension -locations -blocks -files
hdfs fsck hdfs://ip.or.hostname.of.namenode:8020/path/to/filename.fileextension -locations -blocks -files
From this data, you can track down the nodes where the corrupt blocks live. On those nodes, you can look through the logs to determine what the issue is: a replaced disk, I/O errors on the server, and so on. If you can recover the data on that machine and bring the partition holding the blocks back online, the DataNode will report the blocks back to the NameNode and the file will be healthy again. If that isn't possible, you will unfortunately have to find another way to regenerate the file.
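If regeneration isn't possible and you just need the namespace back to a healthy state, fsck itself can dispose of the corrupt files in bulk. A sketch; -move relocates corrupt files to /lost+found while -delete removes them outright, so review the affected file list before running either:
hdfs fsck / -move
hdfs fsck / -delete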