It turns out that kafka-connect-hdfs talks to Hadoop HDFS through a single NameNode, which is scary... so I promptly switched it over to HA.
2017-08-16 11:57:28,237 WARN [org.apache.hadoop.hdfs.LeaseRenewer][458] - <Failed to renew lease for [DFSClient_NONMAPREDUCE_-1756242047_26] for 30 seconds. Will retry shortly ...>
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1826)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1404)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewLease(FSNamesystem.java:4968)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewLease(NameNodeRpcServer.java:875)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewLease(AuthorizationProviderProxyClientProtocol.java:357)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewLease(ClientNamenodeProtocolServerSideTranslatorPB.java:633)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy50.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy51.renewLease(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:879)
at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
at java.lang.Thread.run(Thread.java:745)
The URL included in the error message explains it clearly enough: https://s.apache.org/sbnn-error
3.17. What does the message "Operation category READ/WRITE is not supported in state standby" mean?

In an HA-enabled cluster, DFS clients cannot know in advance which namenode is active at a given time. So when a client contacts a namenode and it happens to be the standby, the READ or WRITE operation will be refused and this message is logged. The client will then automatically contact the other namenode and try the operation again. As long as there is one active and one standby namenode in the cluster, this message can be safely ignored. If an application is configured to contact only one namenode always, this message indicates that the application is failing to perform any read/write operation. In such situations, the application would need to be modified to use the HA configuration for the cluster. The jira HDFS-3447 deals with lowering the severity of this message (and similar ones) to DEBUG so as to reduce noise in the logs, but is unresolved as of July 2015.
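For the failover described above to actually happen, the client has to see the HA properties for the nameservice. The sketch below sets them programmatically only to show which properties are involved; in practice they come from core-site.xml/hdfs-site.xml, and the nameservice name nameservice1 and the NameNode hosts are placeholders, not values from this cluster.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HaClientSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Logical nameservice instead of a single NameNode host:port
    conf.set("fs.defaultFS", "hdfs://nameservice1");
    conf.set("dfs.nameservices", "nameservice1");
    conf.set("dfs.ha.namenodes.nameservice1", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn1", "namenode1.example.com:8020");
    conf.set("dfs.namenode.rpc-address.nameservice1.nn2", "namenode2.example.com:8020");
    // The failover proxy provider is what lets the client retry against the active NameNode
    conf.set("dfs.client.failover.proxy.provider.nameservice1",
        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

    FileSystem fs = FileSystem.get(conf);  // resolves hdfs://nameservice1 through the proxy provider
    System.out.println(fs.exists(new Path("/")));
    fs.close();
  }
}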
The HdfsStorage class, which handles the HDFS operations in kafka-connect-hdfs, needs to be modified:
/**
* Copyright 2015 Confluent Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
**/

package io.confluent.connect.hdfs.storage;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.kafka.common.TopicPartition;

import java.io.IOException;
import java.net.URI;

import io.confluent.connect.hdfs.wal.FSWAL;
import io.confluent.connect.hdfs.wal.WAL;

public class HdfsStorage implements Storage {

  private final FileSystem fs;
  private final Configuration conf;
  private final String url;

  public HdfsStorage(Configuration conf, String url) throws IOException {
    // fs = FileSystem.newInstance(URI.create(url), conf);  // original: pinned to the single NameNode in hdfs.url
    fs = FileSystem.newInstance(conf);  // modified: resolve the file system from the (HA-aware) Hadoop configuration
    this.conf = conf;
    this.url = url;
  }

  @Override
  public FileStatus[] listStatus(String path, PathFilter filter) throws IOException {
    return fs.listStatus(new Path(path), filter);
  }

  @Override
  public FileStatus[] listStatus(String path) throws IOException {
    return fs.listStatus(new Path(path));
  }

  @Override
  public void append(String filename, Object object) throws IOException {}

  @Override
  public boolean mkdirs(String filename) throws IOException {
    return fs.mkdirs(new Path(filename));
  }

  @Override
  public boolean exists(String filename) throws IOException {
    return fs.exists(new Path(filename));
  }

  @Override
  public void commit(String tempFile, String committedFile) throws IOException {
    renameFile(tempFile, committedFile);
  }

  @Override
  public void delete(String filename) throws IOException {
    fs.delete(new Path(filename), true);
  }

  @Override
  public void close() throws IOException {
    if (fs != null) {
      fs.close();
    }
  }

  @Override
  public WAL wal(String topicsDir, TopicPartition topicPart) {
    return new FSWAL(topicsDir, topicPart, this);
  }

  @Override
  public Configuration conf() {
    return conf;
  }

  @Override
  public String url() {
    return url;
  }

  private void renameFile(String sourcePath, String targetPath) throws IOException {
    if (sourcePath.equals(targetPath)) {
      return;
    }
    final Path srcPath = new Path(sourcePath);
    final Path dstPath = new Path(targetPath);
    if (fs.exists(srcPath)) {
      fs.rename(srcPath, dstPath);
    }
  }
}
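Why this works: FileSystem.newInstance(conf) picks the target file system from fs.defaultFS in the Hadoop Configuration, so as long as the HA nameservice and failover proxy provider are in the core-site.xml/hdfs-site.xml that the Configuration loads, the client finds the active NameNode by itself. The original FileSystem.newInstance(URI.create(url), conf) pinned the client to whatever single host:port was in hdfs.url. A minimal sketch of how a caller could feed the site files into the Configuration before constructing the storage (the /etc/hadoop/conf path is only an example, and this is not the connector's actual wiring):

Configuration conf = new Configuration();
// Pull in the cluster's HA settings; without them fs.defaultFS falls back to the default file system
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
HdfsStorage storage = new HdfsStorage(conf, "hdfs://nameservice1");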
Of course, the corresponding url configuration then has to be changed to hdfs://nameservice/*, since we want HA. The original requirement no longer applies; it read as follows:
// HDFS Group
public static final String HDFS_URL_CONFIG = "hdfs.url";
private static final String HDFS_URL_DOC =
"The HDFS connection URL. This configuration has the format of hdfs:://hostname:port and "
+ "specifies the HDFS to export data to.";
private static final String HDFS_URL_DISPLAY = "HDFS URL";
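With HA, the connector config would then look roughly like this (an illustrative example modeled on the Confluent quickstart; the nameservice name, topic, and hadoop.conf.dir path are placeholders, and hadoop.conf.dir should point at the directory holding the HA-enabled core-site.xml/hdfs-site.xml):

name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
topics=test_hdfs
hdfs.url=hdfs://nameservice1
hadoop.conf.dir=/etc/hadoop/conf
flush.size=3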
Although the url is no longer used when instantiating the storage, it is still needed for loading data into Hive.
url = connectorConfig.getString(HdfsSinkConnectorConfig.HDFS_URL_CONFIG);
topicsDir = connectorConfig.getString(HdfsSinkConnectorConfig.TOPICS_DIR_CONFIG);
String logsDir = connectorConfig.getString(HdfsSinkConnectorConfig.LOGS_DIR_CONFIG);

@SuppressWarnings("unchecked")
Class<? extends Storage> storageClass = (Class<? extends Storage>) Class
.forName(connectorConfig.getString(HdfsSinkConnectorConfig.STORAGE_CLASS_CONFIG));
storage = StorageFactory.createStorage(storageClass, conf, url);
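This is why Hive still needs the url: the location of the external table is assembled from hdfs.url plus topics.dir, so if the url pointed at a single NameNode, the Hive metadata would too. A rough illustration of the dependency (hypothetical path construction, not the connector's actual helper):

// Hypothetical sketch of how the Hive external-table location depends on hdfs.url
String hdfsUrl = "hdfs://nameservice1";  // must be the HA nameservice, not a single NameNode host:port
String topicsDir = "topics";
String topic = "test_hdfs";
String tableLocation = hdfsUrl + "/" + topicsDir + "/" + topic;  // e.g. hdfs://nameservice1/topics/test_hdfs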