Function code:

import org.apache.spark.sql.SparkSession

class MySparkJob {
  def entry(spark: SparkSession): Unit = {
    // Nested helper method; `getInnerRsrp _` below eta-expands into a closure
    // whose $outer field references this MySparkJob instance.
    def getInnerRsrp(outer_rsrp: Double, wear_loss: Double, path_loss: Double): Double = {
      val innerRsrp: Double = outer_rsrp - wear_loss - (XX) * path_loss
      innerRsrp
    }

    // Register under the same name the SQL below (and the stack trace) uses.
    spark.udf.register("getInnerRsrp", getInnerRsrp _)

    import spark.sql
    sql(
      s"""|select getInnerRsrp(t10.outer_rsrp, t10.wear_loss, t10.path_loss) as rsrp, xx
          |from yy""".stripMargin)
  }
}
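For context, the stack trace further down shows the job being driven from com.dx.App; a minimal driver along those lines might look like the sketch below. The builder settings are assumptions, since App.scala is not shown in the post:

import org.apache.spark.sql.SparkSession

object App {
  def main(args: Array[String]): Unit = {
    // Assumed setup; the real configuration in com.dx.App is not shown.
    val spark = SparkSession.builder().appName("MySparkJob").getOrCreate()
    // entry is invoked on an instance, which is why the UDF closure's
    // $outer ends up pointing at a MySparkJob object.
    new MySparkJob().entry(spark)
    spark.stop()
  }
}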

Submitting the job with spark-submit throws the following exception:

User class threw exception: org.apache.spark.SparkException: Task not serializable 

org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:)
at org.apache.spark.SparkContext.clean(SparkContext.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$.apply(RDD.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.RDD.mapPartitionsWithIndex(RDD.scala:)
at com.dx.fpd_withscenetype.MySparkJob.entry(MySparkJob.scala:)
at com.dx.App$.main(App.scala:)
at com.dx.App.main(App.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$.run(ApplicationMaster.scala:)
Caused by: java.io.NotSerializableException: com.dx.fpd_withscenetype.MySparkJob
Serialization stack:
- object not serializable (class: com.dx.fpd_withscenetype.MySparkJob, value: com.dx.fpd_withscenetype.MySparkJob@e4d4393)
- field (class: com.dx.fpd_withscenetype.MySparkJob$$anonfun$entry$, name: $outer, type: class com.dx.fpd_withscenetype.MySparkJob)
- object (class com.dx.fpd_withscenetype.MySparkJob$$anonfun$entry$, <function2>)
- field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$, name: func$, type: interface scala.Function2)
- object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$, <function1>)
- field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF, name: f, type: interface scala.Function1)
- object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF, UDF:getInnerRsrp(cast(input[, double, true] as int), cast(input[, double, true] as int), cast(input[, double, true] as int)))
- element of array (index: )
- array (class [Ljava.lang.Object;, size )
- field (class: org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$, name: references$, type: class [Ljava.lang.Object;)
- object (class org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$, <function2>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:)
... more

Solution:

As the serialization stack above shows, the registered function is a method nested inside entry, and the closure produced by getInnerRsrp _ holds an $outer reference to the enclosing MySparkJob instance; because MySparkJob is not serializable, Spark cannot ship the task to executors. The fix is to make MySparkJob extend Serializable:

class MySparkJob extends Serializable {
  xxx
}
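Alternatively (a common pattern, not part of the original post), you can avoid capturing the enclosing instance altogether by defining the function in a standalone object. RsrpUdfs is a hypothetical name, and (XX) remains the coefficient the post elides:

import org.apache.spark.sql.SparkSession

// Hypothetical helper object; methods of a top-level object do not
// capture an $outer instance, so the resulting closure serializes cleanly.
object RsrpUdfs {
  def getInnerRsrp(outer_rsrp: Double, wear_loss: Double, path_loss: Double): Double =
    outer_rsrp - wear_loss - (XX) * path_loss // (XX) is elided in the post
}

class MySparkJob {
  def entry(spark: SparkSession): Unit = {
    spark.udf.register("getInnerRsrp", RsrpUdfs.getInnerRsrp _)
    // ...run the SQL query as before
  }
}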
