Examples of Dataset groupBy/agg and join in Spark Structured Streaming (Java API)
Dataset groupBy/agg example:
Dataset<Row> resultDs = dsParsed
        .groupBy("enodeb_id", "ecell_id")
        .agg(
                functions.first("scan_start_time").alias("scan_start_time1"),
                functions.first("insert_time").alias("insert_time1"),
                functions.first("mr_type").alias("mr_type1"),
                functions.first("mr_ltescphr").alias("mr_ltescphr1"),
                functions.first("mr_ltescpuschprbnum").alias("mr_ltescpuschprbnum1"),
                functions.count("enodeb_id").alias("rows1"))
        .selectExpr(
                "ecell_id",
                "enodeb_id",
                "scan_start_time1 as scan_start_time",
                "insert_time1 as insert_time",
                "mr_type1 as mr_type",
                "mr_ltescphr1 as mr_ltescphr",
                "mr_ltescpuschprbnum1 as mr_ltescpuschprbnum",
                "rows1 as rows");
Dataset join example:
Dataset<Row> ncRes = sparkSession.read().option("delimiter", "|").option("header", true).csv("/user/csv");
Dataset<Row> mro = sparkSession.sql("...");
Dataset<Row> ncJoinMro = ncRes
        .join(mro, mro.col("id").equalTo(ncRes.col("id")).and(mro.col("calid").equalTo(ncRes.col("calid"))), "left_outer")
        .select(ncRes.col("id").as("int_id"),
                mro.col("vendor_id"),
                ...
        );
Another way to express the join condition (this snippet is written in Scala style):
leftDfWithWatermark.join(rightDfWithWatermark,
expr(""" leftDfId = rightDfId AND leftDfTime >= rightDfTime AND leftDfTime <= rightDfTime + interval 1 hour"""),
joinType = "leftOuter" )
BroadcastHashJoin example:
package com.dx.testbroadcast;

import org.apache.spark.SparkConf;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;

import java.io.*;

public class Test {
    public static void main(String[] args) {
        String personPath = "E:\\person.csv";
        String personOrderPath = "E:\\personOrder.csv";
        //writeToPerson(personPath);
        //writeToPersonOrder(personOrderPath);

        SparkConf conf = new SparkConf();
        SparkSession sparkSession = SparkSession.builder()
                .config(conf)
                .appName("test-broadcast-app")
                .master("local[*]")
                .getOrCreate();

        Dataset<Row> person = sparkSession.read()
                .option("header", "true")
                .option("inferSchema", "true") // whether to infer column types automatically
                .option("delimiter", ",")
                .csv(personPath).as("person");
        person.printSchema();

        Dataset<Row> personOrder = sparkSession.read()
                .option("header", "true")
                .option("inferSchema", "true") // whether to infer column types automatically
                .option("delimiter", ",")
                .csv(personOrderPath).as("personOrder");
        personOrder.printSchema();

        // The join type defaults to `inner`. Must be one of: `inner`, `cross`, `outer`, `full`, `full_outer`,
        // `left`, `left_outer`, `right`, `right_outer`, `left_semi`, `left_anti`.
        Dataset<Row> resultDs = personOrder.join(
                functions.broadcast(person),
                personOrder.col("personid").equalTo(person.col("id")),
                "left");
        resultDs.explain();
        resultDs.show(10);
    }

    private static void writeToPerson(String personPath) {
        BufferedWriter personWriter = null;
        try {
            personWriter = new BufferedWriter(new FileWriter(personPath));
            personWriter.write("id,name,age,address\r\n");
            // The loop bounds were lost in the original post; 10,000 rows is an assumed value.
            for (int i = 0; i < 10000; i++) {
                personWriter.write("" + i + ",person-" + i + "," + i + ",address-address-address-address-address-address-address" + i + "\r\n");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (personWriter != null) {
                try {
                    personWriter.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void writeToPersonOrder(String personOrderPath) {
        BufferedWriter personOrderWriter = null;
        try {
            personOrderWriter = new BufferedWriter(new FileWriter(personOrderPath));
            personOrderWriter.write("personid,name,age,address\r\n");
            // The loop bounds were lost in the original post; 10,000 rows is an assumed value.
            for (int i = 0; i < 10000; i++) {
                personOrderWriter.write("" + i + ",person-" + i + "," + i + ",address-address-address-address-address-address-address" + i + "\r\n");
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (personOrderWriter != null) {
                try {
                    personOrderWriter.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Printed output:
== Physical Plan ==
*() BroadcastHashJoin [personid#], [id#], LeftOuter, BuildRight
:- *() FileScan csv [personid#,name#,age#,address#] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/E:/personOrder.csv], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<personid:int,name:string,age:int,address:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[, int, true] as bigint)))
+- *() Project [id#, name#, age#, address#]
+- *() Filter isnotnull(id#)
+- *() FileScan csv [id#,name#,age#,address#] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/E:/person.csv], PartitionFilters: [], PushedFilters: [IsNotNull(id)], ReadSchema: struct<id:int,name:string,age:int,address:string>

+--------+--------+---+--------------------+---+--------+---+--------------------+
|personid| name|age| address| id| name|age| address|
+--------+--------+---+--------------------+---+--------+---+--------------------+
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
| |person-| |address-address-a...| |person-| |address-address-a...|
+--------+--------+---+--------------------+---+--------+---+--------------------+
only showing top rows
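Besides calling functions.broadcast() explicitly, Spark also chooses BroadcastHashJoin on its own when the estimated size of one side is below spark.sql.autoBroadcastJoinThreshold (10 MB by default). A minimal sketch of adjusting that threshold; the 50 MB value is only an example:

// Raise the automatic broadcast threshold to about 50 MB (value is in bytes).
sparkSession.conf().set("spark.sql.autoBroadcastJoinThreshold", 50L * 1024 * 1024);
// Setting it to -1 disables automatic broadcast joins entirely.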
SparkSQL Broadcast HashJoin (using a SQL hint):
person.createOrReplaceTempView("temp_person");
personOrder.createOrReplaceTempView("temp_person_order");
Dataset<Row> sqlResult = sparkSession.sql(
        " SELECT /*+ BROADCAST (t11) */" +
        " t11.id, t11.name, t11.age, t11.address," +
        " t10.personid as person_id, t10.name as persion_order_name" +
        " FROM temp_person_order as t10" +
        " inner join temp_person as t11" +
        " on t11.id = t10.personid ");
sqlResult.show();
sqlResult.explain();
Printed log:
+---+--------+---+--------------------+---------+------------------+
| id| name|age| address|person_id|persion_order_name|
+---+--------+---+--------------------+---------+------------------+
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
| |person-| |address-address-a...| | person-|
+---+--------+---+--------------------+---------+------------------+
only showing top rows

// :: INFO FileSourceStrategy: Pruning directories with:
// :: INFO FileSourceStrategy: Post-Scan Filters: isnotnull(personid#)
// :: INFO FileSourceStrategy: Output Data Schema: struct<personid: int, name: string>
// :: INFO FileSourceScanExec: Pushed Filters: IsNotNull(personid)
// :: INFO FileSourceStrategy: Pruning directories with:
// :: INFO FileSourceStrategy: Post-Scan Filters: isnotnull(id#)
// :: INFO FileSourceStrategy: Output Data Schema: struct<id: int, name: string, age: int, address: string ... more fields>
// :: INFO FileSourceScanExec: Pushed Filters: IsNotNull(id)
== Physical Plan ==
*() Project [id#, name#, age#, address#, personid# AS person_id#, name# AS persion_order_name#]
+- *() BroadcastHashJoin [personid#], [id#], Inner, BuildRight
:- *() Project [personid#, name#]
: +- *() Filter isnotnull(personid#)
: +- *() FileScan csv [personid#,name#] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/E:/personOrder.csv], PartitionFilters: [], PushedFilters: [IsNotNull(personid)], ReadSchema: struct<personid:int,name:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[, int, true] as bigint)))
+- *() Project [id#, name#, age#, address#]
+- *() Filter isnotnull(id#)
+- *() FileScan csv [id#,name#,age#,address#] Batched: false, Format: CSV, Location: InMemoryFileIndex[file:/E:/person.csv], PartitionFilters: [], PushedFilters: [IsNotNull(id)], ReadSchema: struct<id:int,name:string,age:int,address:string>
// :: INFO SparkContext: Invoking stop() from shutdown hook
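The broadcast hint can also be attached through the Dataset API (Dataset.hint, available since Spark 2.2) instead of embedding it in the SQL text. A minimal sketch reusing the person and personOrder Datasets from the example above:

Dataset<Row> hinted = personOrder.join(
        person.hint("broadcast"),
        personOrder.col("personid").equalTo(person.col("id")),
        "inner");
hinted.explain();
hinted.show(10);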