Hadoop: data loss after a sudden power outage
HDFS-Could not obtain block
MapReduce Total cumulative CPU time: 33 seconds 380 msec
Ended Job = job_201308291142_4635 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://xxx /jobdetails.jsp?jobid=job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000019 (and more) from job job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000007 (and more) from job job_201308291142_4635
Examining task ID: task_201308291142_4635_m_000009 (and more) from job job_201308291142_4635
Task with the most failures(5):
-----
Task ID:
task_201308291142_4635_m_000009
URL:
-----
Diagnostic Messages for this Task:
java.io.IOException: java.io.IOException: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1555036314-10.115.5.16-1375773346340:blk_-2678705702538243931_541142 file=/user/hive/warehouse/playtime/dt=20131119/access_pt.log.2013111904.log
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:330)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:246)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)
at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
- Reason: after a sudden power outage, blocks were lost on the datanodes, so HDFS could not obtain the block that the Hive job's input file needed.
- Solution: confirm the HDFS status, remove the corrupted files, and drop the affected Hive partition.
HDFS FILE
- If an HDFS block is missing
1. Confirm the status
Confirm whether any blocks are missing.
If the missing-block count is 1 or more, the affected files cannot be read.
$ hadoop dfsadmin -report
Configured Capacity: 411114887479296 (373.91 TB)
Present Capacity: 411091477784158 (373.89 TB)
DFS Remaining: 411068945908611 (373.87 TB)
DFS Used: 22531875547 (20.98 GB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 20 (20 total, 0 dead)
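The key field in the report is "Missing blocks". As a sketch (assuming the report format shown above; a sample report is inlined here so the snippet runs without a cluster), the count can be extracted and checked like this:

```shell
# Sketch: extract the "Missing blocks" count from a dfsadmin report.
# A sample report is inlined for illustration; on a real cluster, pipe
# the output of `hadoop dfsadmin -report` in instead.
report='DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0'

# Split each line on ": " and print the value of the "Missing blocks" line.
missing=$(printf '%s\n' "$report" | awk -F': ' '/^Missing blocks:/ {print $2}')
echo "missing blocks: $missing"

if [ "$missing" -gt 0 ]; then
    echo "at least one file is unreadable; run fsck to locate it"
fi
```

On a live cluster the same check becomes `hadoop dfsadmin -report | awk -F': ' '/^Missing blocks:/ {print $2}'`.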
2. Inspect block details
$ hadoop fsck /
...
Status: HEALTHY
Total size: 4056908575 B (Total open files size: 3505453 B)
Total dirs: 533
Total files: 15525 (Files currently being written: 2)
Total blocks (validated): 15479 (avg. block size 262091 B) (Total open file blocks (not validated): 2)
Minimally replicated blocks: 15479 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0094967
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 20
Number of racks: 1
FSCK ended at Tue Nov 19 10:17:19 KST 2013 in 351 milliseconds
The filesystem under path '/' is HEALTHY
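Scripting this check follows the same pattern. Assuming the fsck summary format above (a sample is inlined so the snippet runs standalone), the overall status and corrupt-block count can be verified before deciding whether the cleanup in the next step is needed:

```shell
# Sketch: parse an fsck summary for overall status and corrupt-block count.
# A sample summary is inlined; on a real cluster use the output of: hadoop fsck /
fsck_out='Status: HEALTHY
 Total size:    4056908575 B
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)'

# Split on ":" plus any padding spaces, then pick the relevant lines.
status=$(printf '%s\n' "$fsck_out" | awk -F': *' '/Status:/ {print $2; exit}')
corrupt=$(printf '%s\n' "$fsck_out" | awk -F': *' '/Corrupt blocks:/ {print $2}')
echo "status=$status corrupt=$corrupt"

if [ "$status" != "HEALTHY" ] || [ "$corrupt" -gt 0 ]; then
    echo "corrupt files present; consider: hadoop fsck / -delete"
fi
```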
3. Remove the corrupted files, then re-check
$ hadoop fsck / -delete
$ hadoop fsck /
...
Status: HEALTHY
Total size: 4062473881 B (Total open files size: 3505453 B)
Total dirs: 533
Total files: 15525 (Files currently being written: 2)
Total blocks (validated): 15479 (avg. block size 262450 B) (Total open file blocks (not validated): 2)
Minimally replicated blocks: 15479 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0094967
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 20
Number of racks: 1
FSCK ended at Tue Nov 19 10:21:41 KST 2013 in 294 milliseconds
The filesystem under path '/' is HEALTHY
HIVE FILE
- If a block backing a Hive table file is missing
Drop the affected partition so queries no longer touch the lost file. For the failing file in the stack trace above (table playtime, partition dt=20131119):
hive> ALTER TABLE playtime DROP PARTITION (dt='20131119');