println("--------------------"+data.rdd.getNumPartitions) // 获取DF中partition的数目
val partitions = data.rdd.glom().collect() // 获取所有data下所有的partition,返回一个partition的集合
for(part <- partitions){
println(part.getClass.getName + "::::::::" + part.length) // 每个partition中的数据量
}
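
Note that glom().collect() materializes every row of every partition in driver memory, so on a large dataset it can OOM the driver. If only the per-partition counts are needed, a lighter-weight sketch (assuming the same data DataFrame is in scope) counts rows on the executors and ships back only (index, count) pairs:

// Count rows per partition without collecting the rows themselves
val counts = data.rdd
  .mapPartitionsWithIndex((idx, it) => Iterator((idx, it.size)))
  .collect()
counts.foreach { case (idx, n) => println(s"partition $idx: $n rows") }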

Result:

--------------------100
[Lorg.apache.spark.sql.Row;::::::::61516
[Lorg.apache.spark.sql.Row;::::::::61656
[Lorg.apache.spark.sql.Row;::::::::61991
[Lorg.apache.spark.sql.Row;::::::::61269
[Lorg.apache.spark.sql.Row;::::::::61654
[Lorg.apache.spark.sql.Row;::::::::61780
[Lorg.apache.spark.sql.Row;::::::::62059
[Lorg.apache.spark.sql.Row;::::::::61675
[Lorg.apache.spark.sql.Row;::::::::61339
[Lorg.apache.spark.sql.Row;::::::::61783
[Lorg.apache.spark.sql.Row;::::::::61620
[Lorg.apache.spark.sql.Row;::::::::61883
[Lorg.apache.spark.sql.Row;::::::::61631
[Lorg.apache.spark.sql.Row;::::::::61930
[Lorg.apache.spark.sql.Row;::::::::61451
[Lorg.apache.spark.sql.Row;::::::::61797
[Lorg.apache.spark.sql.Row;::::::::61367
[Lorg.apache.spark.sql.Row;::::::::61647
[Lorg.apache.spark.sql.Row;::::::::61488
[Lorg.apache.spark.sql.Row;::::::::61584
[Lorg.apache.spark.sql.Row;::::::::61733
[Lorg.apache.spark.sql.Row;::::::::61491
[Lorg.apache.spark.sql.Row;::::::::61809
[Lorg.apache.spark.sql.Row;::::::::61062
[Lorg.apache.spark.sql.Row;::::::::61658
[Lorg.apache.spark.sql.Row;::::::::61599
[Lorg.apache.spark.sql.Row;::::::::61911
[Lorg.apache.spark.sql.Row;::::::::61602
[Lorg.apache.spark.sql.Row;::::::::61348
[Lorg.apache.spark.sql.Row;::::::::61677
[Lorg.apache.spark.sql.Row;::::::::61722
[Lorg.apache.spark.sql.Row;::::::::61482
[Lorg.apache.spark.sql.Row;::::::::61714
[Lorg.apache.spark.sql.Row;::::::::61241
[Lorg.apache.spark.sql.Row;::::::::61737
[Lorg.apache.spark.sql.Row;::::::::62015
[Lorg.apache.spark.sql.Row;::::::::62062
[Lorg.apache.spark.sql.Row;::::::::61557
[Lorg.apache.spark.sql.Row;::::::::61607
[Lorg.apache.spark.sql.Row;::::::::61175
[Lorg.apache.spark.sql.Row;::::::::61653
[Lorg.apache.spark.sql.Row;::::::::61460
[Lorg.apache.spark.sql.Row;::::::::61705
[Lorg.apache.spark.sql.Row;::::::::61492
[Lorg.apache.spark.sql.Row;::::::::61340
[Lorg.apache.spark.sql.Row;::::::::61767
[Lorg.apache.spark.sql.Row;::::::::61756
[Lorg.apache.spark.sql.Row;::::::::61793
[Lorg.apache.spark.sql.Row;::::::::61417
[Lorg.apache.spark.sql.Row;::::::::61376
[Lorg.apache.spark.sql.Row;::::::::62039
[Lorg.apache.spark.sql.Row;::::::::61571
[Lorg.apache.spark.sql.Row;::::::::61849
[Lorg.apache.spark.sql.Row;::::::::61553
[Lorg.apache.spark.sql.Row;::::::::61612
[Lorg.apache.spark.sql.Row;::::::::61980
[Lorg.apache.spark.sql.Row;::::::::61714
[Lorg.apache.spark.sql.Row;::::::::62376
[Lorg.apache.spark.sql.Row;::::::::61884
[Lorg.apache.spark.sql.Row;::::::::61273
[Lorg.apache.spark.sql.Row;::::::::61669
[Lorg.apache.spark.sql.Row;::::::::61695
[Lorg.apache.spark.sql.Row;::::::::61515
[Lorg.apache.spark.sql.Row;::::::::61247
[Lorg.apache.spark.sql.Row;::::::::61909
[Lorg.apache.spark.sql.Row;::::::::61879
[Lorg.apache.spark.sql.Row;::::::::61913
[Lorg.apache.spark.sql.Row;::::::::61199
[Lorg.apache.spark.sql.Row;::::::::61678
[Lorg.apache.spark.sql.Row;::::::::61619
[Lorg.apache.spark.sql.Row;::::::::61909
[Lorg.apache.spark.sql.Row;::::::::61406
[Lorg.apache.spark.sql.Row;::::::::61775
[Lorg.apache.spark.sql.Row;::::::::61559
[Lorg.apache.spark.sql.Row;::::::::61773
[Lorg.apache.spark.sql.Row;::::::::61888
[Lorg.apache.spark.sql.Row;::::::::61634
[Lorg.apache.spark.sql.Row;::::::::61786
[Lorg.apache.spark.sql.Row;::::::::61666
[Lorg.apache.spark.sql.Row;::::::::61519
[Lorg.apache.spark.sql.Row;::::::::61563
[Lorg.apache.spark.sql.Row;::::::::61481
[Lorg.apache.spark.sql.Row;::::::::61295
[Lorg.apache.spark.sql.Row;::::::::61343
[Lorg.apache.spark.sql.Row;::::::::61750
[Lorg.apache.spark.sql.Row;::::::::61328
[Lorg.apache.spark.sql.Row;::::::::61650
[Lorg.apache.spark.sql.Row;::::::::61541
[Lorg.apache.spark.sql.Row;::::::::61397
[Lorg.apache.spark.sql.Row;::::::::61505
[Lorg.apache.spark.sql.Row;::::::::61761
[Lorg.apache.spark.sql.Row;::::::::61795
[Lorg.apache.spark.sql.Row;::::::::62291
[Lorg.apache.spark.sql.Row;::::::::61566
[Lorg.apache.spark.sql.Row;::::::::61213
[Lorg.apache.spark.sql.Row;::::::::62028
[Lorg.apache.spark.sql.Row;::::::::62634
[Lorg.apache.spark.sql.Row;::::::::61838
[Lorg.apache.spark.sql.Row;::::::::61243
[Lorg.apache.spark.sql.Row;::::::::61585
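
The [Lorg.apache.spark.sql.Row; prefix is simply the JVM class name for an Array[Row]; the counts show the 100 partitions holding roughly 61,000-62,600 rows each, i.e. a fairly even distribution. If the counts are all you need, a DataFrame-native sketch (assuming Spark 1.6+, which provides spark_partition_id) computes the same numbers as an ordinary Spark job, without pulling any rows to the driver:

import org.apache.spark.sql.functions.spark_partition_id

// Group rows by the partition they live in and count them
data.groupBy(spark_partition_id().alias("partition"))
  .count()
  .orderBy("partition")
  .show(100)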

