/**
* Returns a new Dataset that has exactly `numPartitions` partitions, when the fewer partitions
* are requested. If a larger number of partitions is requested, it will stay at the current
* number of partitions. Similar to coalesce defined on an `RDD`, this operation results in
* a narrow dependency, e.g. if you go from 1000 partitions to 100 partitions, there will not
* be a shuffle, instead each of the 100 new partitions will claim 10 of the current partitions.
*
* However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
* this may result in your computation taking place on fewer nodes than
* you like (e.g. one node in the case of numPartitions = 1). To avoid this,
* you can call repartition. This will add a shuffle step, but means the
* current upstream partitions will be executed in parallel (per whatever
* the current partitioning is).
*
* @group typedrel
* @since 1.6.0
*/
def coalesce(numPartitions: Int): Dataset[T] = withTypedPlan {
  Repartition(numPartitions, shuffle = false, planWithBarrier)
}

About coalesce:

1. Used to reduce the number of partitions. If the requested numPartitions exceeds the current number of partitions, the partition count stays unchanged.

2. It creates a narrow dependency, so no shuffle occurs.

3. A drastic coalesce can hurt performance: coalesce(1), for example, runs a single task on one node. In that case repartition is recommended instead; although repartition does shuffle, the upstream computation still runs in parallel across the existing partitions.

4. Typical use case: after filtering a large dataset, call coalesce to compact the remaining partitions, as sketched below.
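A minimal sketch of that filter-then-coalesce pattern. The SparkSession setup, the path /data/events, the column status, and the partition counts are hypothetical, chosen only for illustration:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("coalesce-demo").getOrCreate()
import spark.implicits._

// Hypothetical input: a large Parquet dataset that is read with many partitions.
val filtered = spark.read.parquet("/data/events")
  .filter($"status" === "ERROR") // a selective filter leaves most partitions nearly empty

// Narrow dependency, no shuffle: each of the 50 output partitions merges several inputs.
val compacted = filtered.coalesce(50)

// Requesting more partitions than currently exist leaves the count unchanged.
println(filtered.coalesce(1000000).rdd.getNumPartitions)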

/**
 * Returns a new Dataset that has exactly `numPartitions` partitions.
 *
 * @group typedrel
 * @since 1.6.0
 */
def repartition(numPartitions: Int): Dataset[T] = withTypedPlan {
  Repartition(numPartitions, shuffle = true, planWithBarrier)
}

About repartition:

Like coalesce, it sets an exact number of partitions, but repartition is a wide dependency and triggers a shuffle. The main thing to note is that repartition does not affect the parallelism of the upstream computation.

If you need strict control over the number of files produced, e.g. to avoid generating many small files, repartition is the safer choice.

Test: run a query against a large table and write the result into another table.

coalesce test: with coalesce(100) there is only one stage, and its parallelism is 100; the scan runs inside that same stage, so it too is limited to 100 tasks.

repartition test: repartition(100) triggers a shuffle, producing two stages. Stage 0 runs the filtered query against the large table with a default parallelism of roughly table size / 128 MB (128 MB being the default of spark.sql.files.maxPartitionBytes). When the result is written out, the parallelism becomes the 100 we set in repartition(100): the data is shuffled and then written.
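A sketch of the two write paths in that test; the table names and the filter condition are hypothetical:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("repartition-vs-coalesce").getOrCreate()

val result = spark.table("warehouse.big_table").where("dt = '2019-01-01'")

// One stage: scan and write run together, both limited to parallelism 100.
result.coalesce(100).write.mode("overwrite").saveAsTable("warehouse.out_coalesce")

// Two stages: the scan keeps its own parallelism, then a shuffle feeds 100 write tasks.
result.repartition(100).write.mode("overwrite").saveAsTable("warehouse.out_repartition")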


/**
 * Return a new RDD that has exactly numPartitions partitions.
 *
 * Can increase or decrease the level of parallelism in this RDD. Internally, this uses
 * a shuffle to redistribute data.
 *
 * If you are decreasing the number of partitions in this RDD, consider using `coalesce`,
 * which can avoid performing a shuffle.
 *
 * TODO Fix the Shuffle+Repartition data loss issue described in SPARK-23207.
 */
def repartition(numPartitions: Int)(implicit ord: Ordering[T] = null): RDD[T] = withScope {
  coalesce(numPartitions, shuffle = true)
}

/**
* Return a new RDD that is reduced into `numPartitions` partitions.
*
* This results in a narrow dependency, e.g. if you go from 1000 partitions
* to 100 partitions, there will not be a shuffle, instead each of the 100
* new partitions will claim 10 of the current partitions. If a larger number
* of partitions is requested, it will stay at the current number of partitions.
*
* However, if you're doing a drastic coalesce, e.g. to numPartitions = 1,
* this may result in your computation taking place on fewer nodes than
* you like (e.g. one node in the case of numPartitions = 1). To avoid this,
* you can pass shuffle = true. This will add a shuffle step, but means the
* current upstream partitions will be executed in parallel (per whatever
* the current partitioning is).
*
* @note With shuffle = true, you can actually coalesce to a larger number
* of partitions. This is useful if you have a small number of partitions,
* say 100, potentially with a few partitions being abnormally large. Calling
* coalesce(1000, shuffle = true) will result in 1000 partitions with the
* data distributed using a hash partitioner. The optional partition coalescer
* passed in must be serializable.
*/
def coalesce(numPartitions: Int, shuffle: Boolean = false,
             partitionCoalescer: Option[PartitionCoalescer] = Option.empty)
            (implicit ord: Ordering[T] = null)
    : RDD[T] = withScope {
  require(numPartitions > 0, s"Number of partitions ($numPartitions) must be positive.")
  if (shuffle) {
    /** Distributes elements evenly across output partitions, starting from a random partition. */
    val distributePartition = (index: Int, items: Iterator[T]) => {
      var position = new Random(hashing.byteswap32(index)).nextInt(numPartitions)
      items.map { t =>
        // Note that the hash code of the key will just be the key itself. The HashPartitioner
        // will mod it with the number of total partitions.
        position = position + 1
        (position, t)
      }
    } : Iterator[(Int, T)]

    // include a shuffle step so that our upstream tasks are still distributed
    new CoalescedRDD(
      new ShuffledRDD[Int, T, T](mapPartitionsWithIndex(distributePartition),
      new HashPartitioner(numPartitions)),
      numPartitions,
      partitionCoalescer).values
  } else {
    new CoalescedRDD(this, numPartitions, partitionCoalescer)
  }
}
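A small sketch of the shuffle = true path above; the element and partition counts are arbitrary:

import org.apache.spark.SparkContext

val sc = SparkContext.getOrCreate()

// Start from 100 partitions, some of which may be abnormally large.
val rdd = sc.parallelize(1 to 1000000, 100)

// With shuffle = true, coalesce can *increase* the partition count: each input task
// scatters its elements round-robin (starting from a random offset) across the 1000
// output partitions via a HashPartitioner, which evens out skew.
val rebalanced = rdd.coalesce(1000, shuffle = true)
println(rebalanced.getNumPartitions) // 1000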
