Business requirement: some of the fields are dynamic, so the program has to load their expressions at runtime and evaluate them:

Approach 1): evaluate the expressions with FelEngine inside a MapFunction or MapPartitionsFunction:

FelEngine fel = FelEngine.instance;
FelContext ctx = fel.getContext();
ctx.set("rsrp", 100);
ctx.set("rsrq", 80);
double expValue = Double.valueOf(String.valueOf(fel.eval("rsrp*10-rsrq*8")));

Approach 2): use the selectExpr() function

package com.dx.streaming.drivers.test;

import org.apache.spark.api.java.function.MapPartitionsFunction;
import org.apache.spark.sql.*;
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder;
import org.apache.spark.sql.catalyst.encoders.RowEncoder;
import org.apache.spark.sql.streaming.OutputMode;
import org.apache.spark.sql.streaming.StreamingQueryException;
import org.apache.spark.sql.streaming.Trigger;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import scala.collection.JavaConversions;
import scala.collection.Seq;

import java.util.*;
import java.util.concurrent.TimeUnit;

public class MrsExpressionDoWithSelectExp {
    public static void main(String[] args) {
        SparkSession sparkSession = SparkSession.builder().appName("test").master("local[*]").getOrCreate();

        StructType type = new StructType();
        type = type.add("id", DataTypes.StringType);
        type = type.add("cellname", DataTypes.StringType);
        type = type.add("rsrp", DataTypes.StringType);
        type = type.add("rsrq", DataTypes.StringType);
        ExpressionEncoder<Row> encoder = RowEncoder.apply(type);

        Dataset<String> ds = sparkSession.readStream().textFile("E:\\test-structured-streaming-dir\\*");
        Dataset<Row> rows = ds.mapPartitions(new MapPartitionsFunction<String, Row>() {
            private static final long serialVersionUID = -1988302292518096148L;

            @Override
            public Iterator<Row> call(Iterator<String> input) throws Exception {
                List<Row> rows = new ArrayList<>();
                while (input.hasNext()) {
                    String line = input.next();
                    String[] items = line.split(",");
                    rows.add(RowFactory.create(items));
                }
                return rows.iterator();
            }
        }, encoder);
        rows.printSchema();

        int dynamicExprLength = 10;
        Map<String, String> expMap = new LinkedHashMap<>();
        // In production these formulas are loaded from a configuration file.
        expMap.put("rsrpq_count", "rsrp+rsrp");
        expMap.put("rsrpq_sum", "rsrp*10+rsrq*10");
        for (int i = 0; i < dynamicExprLength; i++) {
            expMap.put("rsrpq_sum" + i, "rsrp*10+rsrq*10");
        }
        expMap.put("$rsrpq_avg", "rsrpq_sum/rsrpq_count");

        List<String> firstLayerExpList = new ArrayList<>();
        List<String> secondLayerExpList = new ArrayList<>();
        firstLayerExpList.add("*");
        secondLayerExpList.add("*");
        // Keys prefixed with "$" reference aliases produced by the first layer
        // (e.g. rsrpq_sum), so they must go into a second selectExpr() pass:
        // a SELECT list cannot reference an alias defined in the same SELECT.
        for (Map.Entry<String, String> kv : expMap.entrySet()) {
            if (kv.getKey().startsWith("$")) {
                secondLayerExpList.add("(" + kv.getValue() + ") as " + kv.getKey().replace("$", ""));
            } else {
                firstLayerExpList.add("(" + kv.getValue() + ") as " + kv.getKey());
            }
        }

        // First-layer computation: select *,(rsrp+rsrp) as rsrpq_count,(rsrp*10+rsrq*10) as rsrpq_sum,...
        //rows = rows.selectExpr(firstLayerExpList.toArray(new String[firstLayerExpList.size()]));
        Seq<String> firstLayerExpSeq = JavaConversions.asScalaBuffer(firstLayerExpList);
        rows = rows.selectExpr(firstLayerExpSeq);
        //rows.show();

        // Second-layer computation: select *,(rsrpq_sum/rsrpq_count) as rsrpq_avg
        //rows = rows.selectExpr(secondLayerExpList.toArray(new String[secondLayerExpList.size()]));
        Seq<String> secondLayerExpSeq = JavaConversions.asScalaBuffer(secondLayerExpList);
        rows = rows.selectExpr(secondLayerExpSeq);
        rows.printSchema();
        //rows.show();

        rows.writeStream().format("console").outputMode(OutputMode.Append()).trigger(Trigger.ProcessingTime(1, TimeUnit.MINUTES)).start();
        try {
            sparkSession.streams().awaitAnyTermination();
        } catch (StreamingQueryException e) {
            e.printStackTrace();
        }
    }
}
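For orientation, the first rows.printSchema() call should print the four base string columns; the following is a sketch of the expected output written from the schema definition above, not captured from a run:

root
 |-- id: string (nullable = true)
 |-- cellname: string (nullable = true)
 |-- rsrp: string (nullable = true)
 |-- rsrq: string (nullable = true)

After the two selectExpr() passes, the second printSchema() should additionally list rsrpq_count, rsrpq_sum, rsrpq_sum0 through rsrpq_sum9, and rsrpq_avg, typed as double, since Spark SQL implicitly casts string operands of arithmetic expressions to double.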

With dynamicExprLength set to 10 dynamic columns, the job produces output normally.
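To reproduce this locally, any comma-separated lines matching the id,cellname,rsrp,rsrq schema will do; a hypothetical file under E:\test-structured-streaming-dir (the values below are made up for illustration) might contain:

1,cell-001,100,80
2,cell-002,95,78
3,cell-003,102,79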

Problem found with ds.selectExpr():

When the number of dynamic columns is raised to 500 or 1000, local testing shows the following problem:

19/07/18 14:18:18 INFO CodeGenerator: Code generated in 105.715218 ms
19/07/18 14:18:19 WARN CodeGenerator: Error calculating stats of compiled class.
java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at org.codehaus.janino.util.ClassFile.loadAttribute(ClassFile.java:1509)
at org.codehaus.janino.util.ClassFile.loadAttributes(ClassFile.java:644)
at org.codehaus.janino.util.ClassFile.loadFields(ClassFile.java:623)
at org.codehaus.janino.util.ClassFile.<init>(ClassFile.java:280)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:996)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anonfun$recordCompilationStats$1.apply(CodeGenerator.scala:993)
at scala.collection.Iterator$class.foreach(Iterator.scala:750)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1202)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.recordCompilationStats(CodeGenerator.scala:993)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:961)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1027)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1024)
at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:906)
at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.create(GenerateUnsafeProjection.scala:412)
at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.create(GenerateUnsafeProjection.scala:366)
at org.apache.spark.sql.catalyst.expressions.codegen.GenerateUnsafeProjection$.create(GenerateUnsafeProjection.scala:32)
at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:890)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.extractProjection$lzycompute(ExpressionEncoder.scala:263)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.extractProjection(ExpressionEncoder.scala:263)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:287)
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:573)
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:573)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:235)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19/07/18 14:18:19 INFO CodeGenerator: Code generated in 1354.475257 ms

When the job is deployed on YARN, it hangs in both yarn-client and yarn-cluster mode: the driver and executors come up and are all allocated resources, but no tasks are ever assigned.

No error is logged either; the job just stays stuck, potentially forever.
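The WARN above comes from recordCompilationStats failing to re-parse the compiled class with Janino, which suggests the generated projection code becomes very large as the column count grows. As a mitigation sketch, an assumption worth testing rather than a confirmed fix for this hang, whole-stage code generation can be switched off so Spark does not fuse the plan into one huge generated class (spark.sql.codegen.wholeStage is a standard Spark 2.x option; note the encoder projection in the stack trace is generated regardless):

// Mitigation sketch, not a verified fix: disable whole-stage codegen so
// the wide projection is not compiled into a single very large class.
SparkSession sparkSession = SparkSession.builder()
        .appName("test")
        .master("local[*]")
        .config("spark.sql.codegen.wholeStage", "false")
        .getOrCreate();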
