While restarting the HDFS cluster, the NameNode logged a large number of GC pause warnings.
Symptoms:
2019-03-11 12:30:52,174 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7653ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7692ms
2019-03-11 12:31:00,573 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7899ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7951ms
2019-03-11 12:31:08,952 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7878ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7937ms
2019-03-11 12:31:17,405 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7951ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8037ms
2019-03-11 12:31:26,611 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8705ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8835ms
2019-03-11 12:31:35,009 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7897ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8083ms
2019-03-11 12:31:43,806 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8296ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8416ms
2019-03-11 12:31:52,317 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8010ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8163ms
2019-03-11 12:32:00,680 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7862ms
After the GC pauses went on for a while, this appeared:
2019-03-11 12:27:15,820 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: Java heap space
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)
at java.lang.StringCoding.encode(StringCoding.java:344)
at java.lang.String.getBytes(String.java:918)
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
at java.io.File.exists(File.java:819)
at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:1282)
at sun.misc.URLClassPath.getResource(URLClassPath.java:239)
at java.net.URLClassLoader$1.run(URLClassLoader.java:365)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.close(JournalSet.java:244)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:400)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:112)
at org.apache.hadoop.hdfs.server.namenode.FSImage.close(FSImage.java:1408)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1079)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
2019-03-11 12:27:15,827 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.OutOfMemoryError: Java heap space
2019-03-11 12:27:15,830 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
Or this error appeared instead:
2019-03-11 11:09:16,124 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.google.protobuf.CodedInputStream.<init>(CodedInputStream.java:573)
at com.google.protobuf.CodedInputStream.newInstance(CodedInputStream.java:55)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:199)
at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeSection$INode.parseDelimitedFrom(FsImageProto.java:10867)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:233)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:250)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:176)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:937)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:921)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:794)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:724)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1052)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
2019-03-11 11:09:16,127 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.OutOfMemoryError: GC overhead limit exceeded
Solution:
Open hadoop-env.sh and find HADOOP_HEAPSIZE= and HADOOP_NAMENODE_INIT_HEAPSIZE=. How far to raise them depends on your cluster; the default is 1000m, i.e. about 1 GB. I adjusted them as follows:
export HADOOP_HEAPSIZE=32000
export HADOOP_NAMENODE_INIT_HEAPSIZE=16000
Remove the leading # from both lines so they take effect, and make the same change on both NameNode hosts.
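After the restart described below, it is worth confirming that the new heap settings actually reached the NameNode JVM. A minimal sketch using standard JDK and Linux tools (my addition, not part of the original fix):

# Find the NameNode PID, then print the -Xms/-Xmx flags it was started with.
pid=$(jps | awk '$2 == "NameNode" {print $1}')
ps -o args= -p "$pid" | tr ' ' '\n' | grep -E '^-Xm[sx]'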
Then restart HDFS. If that is still not enough, open hadoop-env.sh again and find HADOOP_NAMENODE_OPTS:
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS" ---- this is the default value
Adjust it as follows:
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xms6000m -Xmx6000m -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -XX:SoftRefLRUPolicyMSPerMB=0 $HADOOP_NAMENODE_OPTS"
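If the pauses continue after this change, GC logs make it much easier to tell whether CMS is falling behind or the heap is simply still too small. The flags below are standard HotSpot (JDK 8) logging options and the log path is a placeholder; treat this as a suggested sketch, not part of the original fix:

# Append GC logging to the NameNode options in hadoop-env.sh
# (/var/log/hadoop/namenode-gc.log is a hypothetical path -- adapt it).
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -Xloggc:/var/log/hadoop/namenode-gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime"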
Then restart HDFS again. If the same errors keep appearing, continue increasing the HADOOP_HEAPSIZE and HADOOP_NAMENODE_INIT_HEAPSIZE values.
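As for how large to go, a widely cited rule of thumb (my addition, not from the original post) is roughly 1 GB of NameNode heap per million filesystem objects (files + blocks), since each object costs on the order of 150 bytes of metadata plus overhead. A back-of-the-envelope sketch:

# Hypothetical sizing sketch; read the object count from the NameNode web UI
# ("files and directories" plus "blocks" on the overview page).
objects=50000000                   # assumed: 50 million files + blocks
heap_gb=$(( objects / 1000000 ))   # ~1 GB per million objects => 50 GB
echo "suggested heap: ${heap_gb}g" # round up and leave headroom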