From: http://centoshowtos.org/hadoop/fix-corrupt-blocks-on-hdfs/

How do I know if my Hadoop HDFS filesystem has corrupt blocks, and how do I fix them?

The easiest way to determine this is to run an fsck on the filesystem. If you have set up your Hadoop environment variables, you should be able to use a path of /; if not, use hdfs://ip.or.hostname:50070/.

hdfs fsck /

or:

hdfs fsck hdfs://ip.or.hostname:50070/

If the end of your output looks something like this, you have corrupt blocks on your filesystem:

.............................Status: CORRUPT
Total size: 3453345169348 B (Total open files size: 664 B)
Total dirs: 15233
Total files: 14029
Total symlinks: 0 (Files currently being written: 8)
Total blocks (validated): 40961 (avg. block size 84308126 B) (Total open file blocks (not validated): 8)
********************************
CORRUPT FILES: 2
MISSING BLOCKS: 2
MISSING SIZE: 15731297 B
CORRUPT BLOCKS: 2
********************************
Corrupt blocks: 2
Number of data-nodes: 12
Number of racks: 2
FSCK ended at Fri Mar 27 XX:03:21 UTC 201X in XXX milliseconds

The filesystem under path '/' is CORRUPT

How do I know which files have blocks that are corrupt?

The output of the fsck above will be very verbose, but it will mention which blocks are corrupt. We can filter that output with grep so that we aren't "reading through a firehose".

hdfs fsck / | egrep -v '^\.+$' | grep -v replica | grep -v Replica

or:

hdfs fsck hdfs://ip.or.host:50070/ | egrep -v '^\.+$' | grep -v replica | grep -v Replica

This lists the affected files without the long runs of dots, and also filters out files that merely have under-replicated blocks (which isn't necessarily an issue). The output should include something like this for each affected file:

/path/to/filename.fileextension: CORRUPT blockpool BP-1016133662-10.29.100.41-1415825958975 block blk_1073904305

/path/to/filename.fileextension: MISSING 1 blocks of total size 15620361 B
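Depending on your Hadoop version, fsck can also print just the corrupt files directly, which saves the grepping (check that your release supports the flag before relying on it):

hdfs fsck / -list-corruptfileblocks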

The next step is to determine the importance of the file: can it simply be removed and copied back into place, or does it hold data that needs to be regenerated?

If it's easy enough just to replace the file, that's the route I would take.

Remove the corrupted file from your Hadoop cluster

These commands will move the corrupted file to the trash:

hdfs dfs -rm /path/to/filename.fileextension
hdfs dfs -rm hdfs://ip.or.hostname.of.namenode:50070/path/to/filename.fileextension

Or you can skip the trash and permanently delete the file (which is probably what you want to do):

hdfs dfs -rm -skipTrash /path/to/filename.fileextension
hdfs dfs -rm -skipTrash hdfs://ip.or.hostname.of.namenode:50070/path/to/filename.fileextension
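If many files are affected, it may be easier to let fsck handle the cleanup itself. These are standard fsck options, but double-check their behavior on your version before running them against production data:

hdfs fsck / -move      # move corrupted files to /lost+found
hdfs fsck / -delete    # permanently delete corrupted files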

How would I repair a corrupted file if it was not easy to replace?

This might or might not be possible, but the first step is to gather information about the file's location and blocks.

hdfs fsck /path/to/filename.fileextension -locations -blocks -files
hdfs fsck hdfs://ip.or.hostname.of.namenode:50070/path/to/filename.fileextension -locations -blocks -files

From this data, you can track down the nodes where the corrupt blocks live. On those nodes, look through the logs to determine what the issue is: a replaced disk, I/O errors on the server, and so on. If it is possible to recover on that machine and bring the partition holding the blocks back online, the DataNode will report back to Hadoop and the file will be healthy again. If that isn't possible, you will unfortunately have to find another way to regenerate the file.
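As a rough sketch of that investigation on an affected DataNode: the data directory and log paths below are assumptions (check dfs.datanode.data.dir in hdfs-site.xml and your distribution's log location), and the block ID is the one from the earlier fsck example:

# locate the on-disk replica of the corrupt block (data directory is
# an assumed example; use the paths from dfs.datanode.data.dir)
find /data/dfs/dn -name 'blk_1073904305*' 2>/dev/null
# search the DataNode logs for errors mentioning that block
# (log path varies by distribution)
grep blk_1073904305 /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log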
