2017-08-16 11:57:28,237 WARN [org.apache.hadoop.hdfs.LeaseRenewer][458] - <Failed to renew lease for [DFSClient_NONMAPREDUCE_-1756242047_26] for 30 seconds.  Will retry shortly ...>
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1404)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4968)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewLease(AuthorizationProviderProxyClientProtocol.java:357)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:633)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy50.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy51.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:879)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:745)
The URL in the message, https://s.apache.org/sbnn-error, explains this clearly; it points to the following FAQ entry:
3.17. What does the message "Operation category READ/WRITE is not supported in state standby" mean?

In an HA-enabled cluster, DFS clients cannot know in advance which namenode is active at a given time. So when a client contacts a namenode and it happens to be the standby, the READ or WRITE operation will be refused and this message is logged. The client will then automatically contact the other namenode and try the operation again. As long as there is one active and one standby namenode in the cluster, this message can be safely ignored.

If an application is configured to contact only one namenode always, this message indicates that the application is failing to perform any read/write operation. In such situations, the application would need to be modified to use the HA configuration for the cluster. The jira HDFS-3447 deals with lowering the severity of this message (and similar ones) to DEBUG so as to reduce noise in the logs, but is unresolved as of July 2015.
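The fix below follows exactly this advice: let the client use the cluster's HA configuration instead of a fixed namenode. For reference, here is what the client-side HA settings amount to when set programmatically — a minimal sketch, assuming a nameservice named nameservice1 with namenodes nn1/nn2 on host1/host2:8020 (all placeholder names; in practice the same keys normally live in hdfs-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HaClientConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point clients at the logical nameservice, not a concrete namenode host
    conf.set("fs.defaultFS", "hdfs://nameservice1");
    conf.set("dfs.nameservices", "nameservice1");
    conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn1", "host1:8020");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn2", "host2:8020");
    // The failover proxy provider retries against whichever namenode is currently active
    conf.set("dfs.client.failover.proxy.provider.nameservice1",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
    FileSystem fs = FileSystem.get(conf);
    System.out.println(fs.getUri()); // hdfs://nameservice1
    fs.close();
  }
}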

The HdfsStorage class in kafka-connect-hdfs, which performs the actual HDFS operations, needs the following change:

/**
 * Copyright 2015 Confluent Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 **/

package io.confluent.connect.hdfs.storage;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.kafka.common.TopicPartition;

import java.io.IOException;
import java.net.URI;

import io.confluent.connect.hdfs.wal.FSWAL;
import io.confluent.connect.hdfs.wal.WAL;

public class HdfsStorage implements Storage {

  private final FileSystem fs;
  private final Configuration conf;
  private final String url;

  public HdfsStorage(Configuration conf, String url) throws IOException {
    // Original: bound the client to the single namenode named in the url.
    // fs = FileSystem.newInstance(URI.create(url), conf);
    // Modified: resolve the filesystem from the Hadoop configuration instead,
    // so the HA failover proxy provider can find the active namenode.
    fs = FileSystem.newInstance(conf);
    this.conf = conf;
    this.url = url;
  }

  @Override
  public FileStatus[] listStatus(String path, PathFilter filter) throws IOException {
    return fs.listStatus(new Path(path), filter);
  }

  @Override
  public FileStatus[] listStatus(String path) throws IOException {
    return fs.listStatus(new Path(path));
  }

  @Override
  public void append(String filename, Object object) throws IOException {
    // Not used by this storage implementation.
  }

  @Override
  public boolean mkdirs(String filename) throws IOException {
    return fs.mkdirs(new Path(filename));
  }

  @Override
  public boolean exists(String filename) throws IOException {
    return fs.exists(new Path(filename));
  }

  @Override
  public void commit(String tempFile, String committedFile) throws IOException {
    renameFile(tempFile, committedFile);
  }

  @Override
  public void delete(String filename) throws IOException {
    fs.delete(new Path(filename), true);
  }

  @Override
  public void close() throws IOException {
    if (fs != null) {
      fs.close();
    }
  }

  @Override
  public WAL wal(String topicsDir, TopicPartition topicPart) {
    return new FSWAL(topicsDir, topicPart, this);
  }

  @Override
  public Configuration conf() {
    return conf;
  }

  @Override
  public String url() {
    return url;
  }

  private void renameFile(String sourcePath, String targetPath) throws IOException {
    if (sourcePath.equals(targetPath)) {
      return;
    }
    final Path srcPath = new Path(sourcePath);
    final Path dstPath = new Path(targetPath);
    if (fs.exists(srcPath)) {
      fs.rename(srcPath, dstPath);
    }
  }
}
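With this change, FileSystem.newInstance(conf) resolves the filesystem from fs.defaultFS in the Hadoop configuration on the classpath, so the connector's hadoop.conf.dir must point at config files that define the nameservice. A quick way to check the resolution — a sketch, with /etc/hadoop/conf as an assumed config path:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ActiveFsCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Load the cluster's HA-enabled config explicitly
    conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
    conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
    FileSystem fs = FileSystem.newInstance(conf);
    // Should print the logical nameservice URI, not a single namenode host
    System.out.println(fs.getUri());
    fs.close();
  }
}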

Of course, the corresponding url configuration then has to be changed to hdfs://nameservice/*, because we want HA. The original requirement no longer applies; it read as follows:

// HDFS Group
public static final String HDFS_URL_CONFIG = "hdfs.url";
private static final String HDFS_URL_DOC =
    "The HDFS connection URL. This configuration has the format of hdfs:://hostname:port and "
    + "specifies the HDFS to export data to.";
private static final String HDFS_URL_DISPLAY = "HDFS URL";
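In connector terms, that means pointing hdfs.url at the logical nameservice and letting hadoop.conf.dir supply the HA settings. A sketch of the relevant sink properties — nameservice1 and the directory paths are assumed values:

hdfs.url=hdfs://nameservice1
hadoop.conf.dir=/etc/hadoop/conf
topics.dir=/topics
logs.dir=/logs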

Although the url is no longer used when instantiating the storage, it is still needed for loading into Hive.

url = connectorConfig.getString(HdfsSinkConnectorConfig.HDFS_URL_CONFIG);
topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
String logsDir = connectorConfig.getString(HdfsSinkConnectorConfig.LOGS_DIR_CONFIG);

@SuppressWarnings("unchecked")
Class<? extends Storage> storageClass = (Class<? extends Storage>) Class
    .forName(connectorConfig.getString(HdfsSinkConnectorConfig.STORAGE_CLASS_CONFIG));
storage = StorageFactory.createStorage(storageClass, conf, url);
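Since StorageFactory.createStorage instantiates the storage class via reflection with (conf, url), the constructor signature has to stay as it is; the modified HdfsStorage simply stores url for callers such as the Hive integration instead of using it to pick a namenode.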

It turns out that when kafka-connect-hdfs connects to Hadoop HDFS, it goes through a single namenode — scary... so I switched it to HA without hesitation.
