Even granting that this is normal behavior (kafka-connect-hdfs restarts, re-enters the RECOVERY state, and re-acquires its lease from HDFS), the wait is far too long: it runs to minutes. The WAL step inside RECOVERY feels somewhat redundant; in the time spent waiting, the connector could long since have re-read the data from Kafka and written it to HDFS. Purely my personal take. For reference, the recovery state machine in TopicPartitionWriter.recover() looks like this:

@SuppressWarnings("fallthrough")
public boolean recover() {
  try {
    // Deliberate case fallthrough: each case performs one recovery step and
    // advances the state, so a fresh start runs every step in order, while a
    // retry resumes from the step that previously failed.
    switch (state) {
      case RECOVERY_STARTED:
        log.info("Started recovery for topic partition {}", tp);
        pause();
        nextState();
      case RECOVERY_PARTITION_PAUSED:
        applyWAL();
        nextState();
      case WAL_APPLIED:
        truncateWAL();
        nextState();
      case WAL_TRUNCATED:
        resetOffsets();
        nextState();
      case OFFSET_RESET:
        resume();
        nextState();
        log.info("Finished recovery for topic partition {}", tp);
        break;
      default:
        log.error("{} is not a valid state to perform recovery for topic partition {}.", state, tp);
    }
  } catch (ConnectException e) {
    // Any failed step aborts recovery; a retry is scheduled and recover()
    // will be called again later, resuming at the failed state.
    log.error("Recovery failed at state {}", state, e);
    setRetryTimeout(timeoutMs);
    return false;
  }
  return true;
}
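The exception below is thrown while applyWAL() re-opens the WAL file for append (FSWAL.acquireLease). HDFS allows only one writer per file, so before the restarted task can append, the NameNode must first recover the previous writer's lease; while that recovery is in progress, every append attempt is rejected with RecoveryInProgressException ("Try again later"), recover() returns false, setRetryTimeout() schedules another attempt, and the cycle repeats until lease recovery completes, which is where the minutes go. As a minimal sketch of the general pattern (not the connector's actual FSWAL code), a client can explicitly ask the NameNode to recover the lease and poll until the file is closed; walPath here is a hypothetical stand-in for the masked ******/log file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryWait {
    // Poll DistributedFileSystem.recoverLease() until the NameNode reports the
    // file closed (returns true) or the deadline passes. Once it returns true,
    // a new writer may append to the file.
    static boolean waitForLease(DistributedFileSystem fs, Path walPath, long timeoutMs)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        long sleepMs = 1000L;
        while (System.currentTimeMillis() < deadline) {
            if (fs.recoverLease(walPath)) {
                return true;
            }
            Thread.sleep(sleepMs);
            sleepMs = Math.min(sleepMs * 2, 16000L); // exponential backoff, capped
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // picks up core-site.xml / hdfs-site.xml
        Path walPath = new Path(args[0]);          // e.g. the connector's .../log WAL file
        DistributedFileSystem fs = (DistributedFileSystem) walPath.getFileSystem(conf);
        System.out.println("lease recovered: " + waitForLease(fs, walPath, 60000L));
    }
}

Note that recoverLease() revokes the old writer's lease, so forcing recovery this way is only safe when you are certain the previous writer is really dead.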
2017-08-18 01:35:53,716 ERROR [io.confluent.connect.hdfs.TopicPartitionWriter][215] - <Recovery failed at state RECOVERY_PARTITION_PAUSED>
org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file ******/log. Lease recovery is in progress. Try again later.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3113)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3189)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3153)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:612)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:125)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:414)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:88)
at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:105)
at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:550)
at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:198)
at io.confluent.connect.hdfs.DataWriter.recover(DataWriter.java:247)
at io.confluent.connect.hdfs.DataWriter.open(DataWriter.java:289)
at io.confluent.connect.hdfs.HdfsSinkTask.open(HdfsSinkTask.java:118)
at org.apache.kafka.connect.runtime.WorkerSinkTask.openPartitions(WorkerSinkTask.java:428)
at org.apache.kafka.connect.runtime.WorkerSinkTask.access$1000(WorkerSinkTask.java:54)
at org.apache.kafka.connect.runtime.WorkerSinkTask$HandleRebalance.onPartitionsAssigned(WorkerSinkTask.java:464)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:234)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$2.onSuccess(AbstractCoordinator.java:255)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$2.onSuccess(AbstractCoordinator.java:250)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:459)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:445)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:702)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:681)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:266)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:366)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:975)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.kafka.connect.runtime.WorkerSinkTask.pollConsumer(WorkerSinkTask.java:316)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:222)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to close file ******/log. Lease recovery is in progress. Try again later.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3113)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3189)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3153)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:612)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:125)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:414)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy50.append(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy51.append(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1767)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1803)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1796)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:323)
at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:319)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:319)
at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1173)
at io.confluent.connect.hdfs.wal.WALFile$Writer.<init>(WALFile.java:221)
at io.confluent.connect.hdfs.wal.WALFile.createWriter(WALFile.java:67)
at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:73)
... 45 more
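If a WAL file stays stuck like this, lease recovery can also be nudged along from the shell. Assuming Hadoop 2.7 or later, where the hdfs debug subcommand is available (the stack trace above looks like an older release, so check your version first), and with the path below as a placeholder for the masked ******/log file:

hdfs debug recoverLease -path /path/to/topic-partition/log -retries 5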
