2018-03-13 11:15:17,215 WARN [org.apache.flume.sink.hdfs.HDFSEventSink] - HDFS IO error
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1754)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1337)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2492)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2446)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.create(AuthorizationProviderProxyClientProtocol.java:108)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:388)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1678)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
at org.apache.hadoop.ipc.Client.call(Client.java:1411)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.create(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:264)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1612)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1488)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1413)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:387)
at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:383)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:383)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:327)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:773)
at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:273)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:262)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:722)
at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:183)
at org.apache.flume.sink.hdfs.BucketWriter.access$1700(BucketWriter.java:59)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:719)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)

Looking at the Hadoop source code:

The HDFS write is being rejected — the create() call fails in cases such as the client having no write permission or the filesystem running out of space.

Reference: http://blog.csdn.net/xueyao0201/article/details/79530130
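To rule out the causes mentioned above (write permission, free space) and to reproduce the same create() path the Flume sink uses, a small standalone probe can be run against the cluster. This is only a sketch: `nameservice1` and `/flume/events` are hypothetical placeholders, and the HA client settings (dfs.nameservices, dfs.ha.namenodes.*, the failover proxy provider) are assumed to already be present in the hdfs-site.xml on the classpath. If the client is instead pointed at a single NameNode host, create() will surface this same StandbyException whenever that host happens to be the standby.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.fs.Path;

public class HdfsWriteProbe {
    public static void main(String[] args) throws IOException {
        // core-site.xml / hdfs-site.xml on the classpath are expected to carry
        // the HA settings (dfs.nameservices, dfs.ha.namenodes.*, failover proxy provider).
        Configuration conf = new Configuration();
        // Placeholder nameservice URI -- use the cluster's own dfs.nameservices value,
        // not a single NameNode host (a single host may be the standby after a failover).
        conf.set("fs.defaultFS", "hdfs://nameservice1");

        try (FileSystem fs = FileSystem.get(conf)) {
            // Remaining capacity: rules out the "insufficient space" case.
            FsStatus status = fs.getStatus();
            System.out.printf("remaining bytes: %d%n", status.getRemaining());

            // Permissions and owner of the directory the Flume sink writes into
            // ("/flume/events" is a placeholder path).
            Path dir = new Path("/flume/events");
            System.out.println("permission: " + fs.getFileStatus(dir).getPermission()
                    + " owner: " + fs.getFileStatus(dir).getOwner());

            // Same call chain as the stack trace: FileSystem.create ->
            // DistributedFileSystem.create -> DFSClient.create. With HA failover
            // configured, the retry proxy re-sends create() to the active NameNode
            // instead of surfacing StandbyException.
            try (FSDataOutputStream out = fs.create(new Path(dir, "probe.tmp"))) {
                out.writeBytes("hdfs write probe\n");
            }
        }
    }
}
```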
