Symptoms:

2019-03-11 12:30:52,174 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7653ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7692ms
2019-03-11 12:31:00,573 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7899ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7951ms
2019-03-11 12:31:08,952 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7878ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7937ms
2019-03-11 12:31:17,405 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7951ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8037ms
2019-03-11 12:31:26,611 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8705ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8835ms
2019-03-11 12:31:35,009 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7897ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8083ms
2019-03-11 12:31:43,806 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8296ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8416ms
2019-03-11 12:31:52,317 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8010ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8163ms
2019-03-11 12:32:00,680 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7862ms
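
To confirm in real time that these pauses are back-to-back CMS full collections, jstat is a quick check. A minimal sketch, assuming jps and jstat from the same JDK are on the PATH:

# Look up the NameNode PID, then sample GC counters every 1000 ms.
# O = old-gen occupancy (%); FGC/FGCT = full-GC count and cumulative time.
NN_PID=$(jps | awk '$2 == "NameNode" {print $1}')
jstat -gcutil "$NN_PID" 1000

If the O column stays pinned near 100% and FGC climbs on every sample, the old generation is too small for the namespace being loaded, which matches the errors below.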

  

After the GC pauses continue for a while, the following appears:

2019-03-11 12:27:15,820 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: Java heap space
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)
at java.lang.StringCoding.encode(StringCoding.java:344)
at java.lang.String.getBytes(String.java:918)
at java.io.UnixFileSystem.getBooleanAttributes0(Native Method)
at java.io.UnixFileSystem.getBooleanAttributes(UnixFileSystem.java:242)
at java.io.File.exists(File.java:819)
at sun.misc.URLClassPath$FileLoader.getResource(URLClassPath.java:1282)
at sun.misc.URLClassPath.getResource(URLClassPath.java:239)
at java.net.URLClassLoader$1.run(URLClassLoader.java:365)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.hdfs.server.namenode.JournalSet.close(JournalSet.java:244)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:400)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogAsync.close(FSEditLogAsync.java:112)
at org.apache.hadoop.hdfs.server.namenode.FSImage.close(FSImage.java:1408)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1079)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
2019-03-11 12:27:15,827 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.OutOfMemoryError: Java heap space
2019-03-11 12:27:15,830 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:

  

Or the following error appears instead:

2019-03-11 11:09:16,124 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.google.protobuf.CodedInputStream.<init>(CodedInputStream.java:573)
at com.google.protobuf.CodedInputStream.newInstance(CodedInputStream.java:55)
at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:199)
at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeSection$INode.parseDelimitedFrom(FsImageProto.java:10867)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Loader.loadINodeSection(FSImageFormatPBINode.java:233)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:250)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:176)
at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:937)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:921)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:794)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:724)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:322)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1052)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:666)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:728)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
2019-03-11 11:09:16,127 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.lang.OutOfMemoryError: GC overhead limit exceeded

  

Solution:

Open hadoop-env.sh and find HADOOP_HEAPSIZE= and HADOOP_NAMENODE_INIT_HEAPSIZE=. How much to raise these two parameters depends on your situation; the default is 1000 MB, i.e. roughly 1 GB. I adjusted them as follows:

export HADOOP_HEAPSIZE=32000
export HADOOP_NAMENODE_INIT_HEAPSIZE=16000

Uncomment both lines (remove the leading # signs). Make the same change on both NameNode hosts.
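
After restarting, it is worth verifying that the new heap size actually took effect rather than assuming it did. A quick check, assuming a HotSpot JDK with jinfo available:

# Confirm the running NameNode picked up the larger heap.
NN_PID=$(jps | awk '$2 == "NameNode" {print $1}')
jinfo -flag MaxHeapSize "$NN_PID"                          # prints -XX:MaxHeapSize=<bytes>
ps -o args= -p "$NN_PID" | tr ' ' '\n' | grep -i -- -Xmx   # or read it off the command line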

  

Then restart HDFS. If that is still not enough, open hadoop-env.sh again and find HADOOP_NAMENODE_OPTS:

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender}  $HADOOP_NAMENODE_OPTS"    ---- this is the system default
Adjust it as follows:
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xms6000m -Xmx6000m -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -XX:SoftRefLRUPolicyMSPerMB=0 $HADOOP_NAMENODE_OPTS"
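
A note on these flags: -Xms6000m -Xmx6000m pin the NameNode heap at 6 GB, and -XX:CMSInitiatingOccupancyFraction=75 with -XX:+UseCMSInitiatingOccupancyOnly makes CMS start collecting before the old generation fills completely. Be aware that an explicit -Xmx here takes precedence over HADOOP_HEAPSIZE for the NameNode, since it appears later on the java command line, so keep the two settings consistent. To verify that the tuning actually shortens the pauses, you can optionally append JDK 8 GC-logging flags to the same variable (the log path below is only an example, not a value from the original post; these flags were removed in JDK 9+):

# Optional: log every collection so pause times can be compared before and after tuning.
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hadoop/namenode-gc.log"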

  

Then restart HDFS once more. If the errors above still appear, keep increasing the values of HADOOP_HEAPSIZE and HADOOP_NAMENODE_INIT_HEAPSIZE.
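
As a rough sizing guide (a commonly cited rule of thumb, not a figure from this incident): the NameNode needs on the order of 1 GB of heap per million namespace objects (files + blocks), since each object costs roughly 150 bytes of heap. You can estimate the cluster's object count with fsck; note that fsck over a large namespace is itself expensive, and the exact output labels vary slightly across Hadoop versions:

# Print totals for files and blocks; sum them and scale by ~1 GB per million objects.
hdfs fsck / | grep -E 'Total (files|blocks)'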
