Please credit the original source when reposting: http://www.cnblogs.com/dongxiao-yang/p/7652337.html

1 WindowFunction type mismatch: code fails to compile

Flink version: 1.3.0

A demo written by following https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/windows.html#windowfunction-with-incremental-aggregation stopped compiling once MyWindowFunction was passed to reduce, with an error reporting mismatched parameter types.

The code is as follows:

    MyReduceFunction a = new MyReduceFunction();

    DataStream<Tuple3<String, String, Integer>> counts4 = source
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
            .reduce(a,
                    new WindowFunction<Tuple2<String, Integer>, Tuple3<String, String, Integer>, String, TimeWindow>() {
                        private static final long serialVersionUID = 1L;

                        @Override
                        public void apply(
                                String key,
                                TimeWindow window,
                                Iterable<Tuple2<String, Integer>> values,
                                Collector<Tuple3<String, String, Integer>> out) throws Exception {
                            for (Tuple2<String, Integer> in : values) {
                                out.collect(new Tuple3<>(in.f0, in.f0, in.f1));
                            }
                        }
                    });

    public static class MyReduceFunction implements ReduceFunction<Tuple2<String, Integer>> {
        @Override
        public Tuple2<String, Integer> reduce(Tuple2<String, Integer> value1,
                Tuple2<String, Integer> value2) throws Exception {
            return new Tuple2<String, Integer>(value1.f0, value1.f1 + value2.f1);
        }
    }

Once the WindowFunction is added to the reduce call, the code above fails to compile, reporting that the parameters of reduce have mismatched types. The real culprit is the keyBy(0) usage: both public KeyedStream<T, Tuple> keyBy(int... fields) and public KeyedStream<T, Tuple> keyBy(String... fields) on DataStream delegate to

    private KeyedStream<T, Tuple> keyBy(Keys<T> keys) {
        return new KeyedStream<>(this, clean(KeySelectorUtil.getSelectorForKeys(keys,
                getType(), getExecutionConfig())));
    }

Here, KeySelectorUtil.getSelectorForKeys returns a KeySelector whose concrete type is ComparableKeySelector, declared as

public static final class ComparableKeySelector<IN> implements KeySelector<IN, Tuple>, ResultTypeQueryable<Tuple>

From this declaration, every key that ComparableKeySelector emits has type Tuple, so using String as the third generic parameter of the WindowFunction above is wrong.
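In other words, after keyBy(0) the compiler sees the key type as Tuple, not String; a minimal illustration:

    // What keyBy(int...) effectively produces for this stream:
    KeyedStream<Tuple2<String, Integer>, Tuple> keyed = source.keyBy(0);
    // Any WindowFunction applied downstream must declare Tuple as its key
    // generic, or a KeySelector with a concrete key type must be used instead.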

Solutions

1 Define a custom KeySelector

    private static class TupleKeySelector implements
            KeySelector<Tuple2<String, Integer>, String> {
        @Override
        public String getKey(Tuple2<String, Integer> value) throws Exception {
            return value.f0;
        }
    }

    DataStream<Tuple3<String, String, Integer>> counts4 = source
            .keyBy(new TupleKeySelector())

The code above first defines a TupleKeySelector whose key type is String, then passes new TupleKeySelector() to keyBy, so the KeyedStream is partitioned by a String key and the String key generic in the WindowFunction now matches.

2 Change the WindowFunction's third generic parameter (the key type) from String to Tuple, as sketched below.
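A minimal sketch of this variant, assuming org.apache.flink.api.java.tuple.Tuple is imported; key.getField(0) unwraps the String field selected by keyBy(0), and emitting it as the first output field is an illustrative choice:

    DataStream<Tuple3<String, String, Integer>> counts4 = source
            .keyBy(0)
            .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.SECONDS)))
            .reduce(a,
                    new WindowFunction<Tuple2<String, Integer>, Tuple3<String, String, Integer>, Tuple, TimeWindow>() {
                        private static final long serialVersionUID = 1L;

                        @Override
                        public void apply(
                                Tuple key,
                                TimeWindow window,
                                Iterable<Tuple2<String, Integer>> values,
                                Collector<Tuple3<String, String, Integer>> out) throws Exception {
                            String keyField = key.getField(0); // the String wrapped by keyBy(0)
                            for (Tuple2<String, Integer> in : values) {
                                out.collect(new Tuple3<>(keyField, in.f0, in.f1));
                            }
                        }
                    });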

Related question: https://stackoverflow.com/questions/36917586/cant-apply-custom-functions-to-a-windowedstream-on-flink

2 flink-connector-elasticsearch5 integration problems

Flink version: 1.3.2

Problem 1:

java.lang.UnsupportedClassVersionError: org/elasticsearch/action/bulk/BulkProcessor$Listener : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)

Solution: Elasticsearch 5.x and later require JDK 1.8 as the minimum version; if the job's default runtime JDK is older than 1.8, the Java version has to be specified manually.
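In flink-conf.yaml that looks like the sketch below (the JDK path is the one used later in this note; adjust to the actual installation):

    # flink-conf.yaml: point the Flink processes at a JDK 8 installation
    env.java.home: /opt/jdk1.8.0_121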

Problem 1.1:

Even after setting env.java.home: /opt/jdk1.8.0_121, the error above persists.

It appears that in yarn-cluster submission mode the env.java.home setting has no effect on the containers' JAVA_HOME.

Solution: add the -yD yarn.taskmanager.env.JAVA_HOME=/opt/jdk1.8.0_121 parameter to the job submission script.
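A sketch of a submission command carrying this flag; the jar name and program arguments are placeholders, not taken from the original:

    ./bin/flink run -m yarn-cluster \
        -yD yarn.taskmanager.env.JAVA_HOME=/opt/jdk1.8.0_121 \
        ./my-flink-job.jar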

Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.1/setup/config.html#full-reference. Curiously, this parameter is mentioned in the 1.1 documentation but has disappeared from the 1.3 docs.

Problem 2: the web UI returns a 500 error, with errors in the YARN log

ERROR org.apache.flink.runtime.webmonitor.files.StaticFileServerHandler  - Caught exception
java.lang.AbstractMethodError
at io.netty.util.ReferenceCountUtil.touch(ReferenceCountUtil.java:73)
at io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.handler.codec.http.router.Handler.routed(Handler.java:62)
at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:57)
at io.netty.handler.codec.http.router.DualAbstractHandler.channelRead0(DualAbstractHandler.java:20)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:105)
at org.apache.flink.runtime.webmonitor.HttpRequestHandler.channelRead0(HttpRequestHandler.java:65)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:435)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:250)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)

The job itself runs fine, but clicking ApplicationMaster to open the web UI gives a 500 error; the YARN error log is shown above.

The cause is that flink-connector-elasticsearch5 depends on a netty-common jar whose io.netty.util.ReferenceCountUtil class conflicts with the class of the same name in Flink's bundled netty-all.

Solution: netty-common cannot simply be excluded, or Elasticsearch will be missing other classes at runtime. The interim workaround is to disable the <scope>provided</scope> on flink-streaming-java_2.10, so that Flink's own ReferenceCountUtil is packed into the fat jar and overrides the conflicting class:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-java_2.10</artifactId>
        <version>${flink.version}</version>
        <!-- <scope>provided</scope> -->
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-elasticsearch5_2.10</artifactId>
        <version>${flink.version}</version>
    </dependency>

3 Supporting the YARN node-label feature in Flink 1.4.1

1 Manually merge the PR https://github.com/apache/flink/pull/5593.

2 Bump hadoop.version in the pom to a YARN release that supports node labels: <hadoop.version>2.8.2</hadoop.version>, as sketched below.
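In pom terms, step 2 is just the property override below (hadoop.version is the property name used by the Flink build):

    <properties>
        <hadoop.version>2.8.2</hadoop.version>
    </properties>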

4 Caused by: java.lang.ClassNotFoundException: javax.ws.rs.ext.MessageBodyReader

Flink version: 1.5.0

java.lang.NoClassDefFoundError: javax/ws/rs/ext/MessageBodyReader
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.hadoop.yarn.util.timeline.TimelineUtils.<clinit>(TimelineUtils.java:50)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:179)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.getClusterDescriptor(FlinkYarnSessionCli.java:966)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createDescriptor(FlinkYarnSessionCli.java:269)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createClusterDescriptor(FlinkYarnSessionCli.java:444)
at org.apache.flink.yarn.cli.FlinkYarnSessionCli.createClusterDescriptor(FlinkYarnSessionCli.java:92)
at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:221)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:210)
at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1020)
at org.apache.flink.client.cli.CliFrontend.lambda$main$14(CliFrontend.java:1096)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1096)
Caused by: java.lang.ClassNotFoundException: javax.ws.rs.ext.MessageBodyReader
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 52 more

The error above shows up after packaging and launching. The fix is to remove the exclusion of jersey-core from the hadoop-yarn-common dependency in the flink-shaded-hadoop2 pom:

    <!--<exclusion>-->
        <!--<groupId>com.sun.jersey</groupId>-->
        <!--<artifactId>jersey-core</artifactId>-->
    <!--</exclusion>-->
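For context, a sketch of where this sits in the pom; the surrounding hadoop-yarn-common dependency is reconstructed from its usual Maven coordinates, and only the commented-out exclusion comes from the original:

    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-yarn-common</artifactId>
        <version>${hadoop.version}</version>
        <exclusions>
            <!-- other exclusions unchanged; jersey-core stays commented out -->
            <!--<exclusion>-->
                <!--<groupId>com.sun.jersey</groupId>-->
                <!--<artifactId>jersey-core</artifactId>-->
            <!--</exclusion>-->
        </exclusions>
    </dependency>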

5 could not find implicit value for evidence parameter 

Reference explanation: https://www.iteblog.com/archives/2047.html

Solution: add import org.apache.flink.streaming.api.scala._. This wildcard import provides the implicit TypeInformation evidence that the Scala DataStream API requires.
