For the source of the `data` DataFrame used below, see my blog post: http://www.cnblogs.com/wwxbi/p/6063613.html

import org.apache.spark.sql.DataFrameStatFunctions

import org.apache.spark.sql.functions._

Correlation coefficient

val df = Range(0,10,step=1).toDF("id").withColumn("rand1", rand(seed=10)).withColumn("rand2", rand(seed=27))
df: org.apache.spark.sql.DataFrame = [id: int, rand1: double ... 1 more field]

df.show
+---+-------------------+-------------------+
| id| rand1| rand2|
+---+-------------------+-------------------+
| 0|0.41371264720975787| 0.714105256846827|
| 1| 0.7311719281896606| 0.8143487574232506|
| 2| 0.9031701155118229| 0.5282207324381174|
| 3|0.09430205113458567| 0.4420100497826609|
| 4|0.38340505276222947| 0.9387162206758006|
| 5| 0.5569246135523511| 0.6398126862647711|
| 6| 0.4977441406613893| 0.9895498513115722|
| 7| 0.2076666106201438| 0.3398720242725498|
| 8| 0.9571919406508957|0.15042237695815963|
| 9| 0.7429395461204413| 0.7302723457066639|
+---+-------------------+-------------------+

df.stat.corr("rand1", "rand2", "pearson")
res24: Double = -0.10993962467082698
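
Note that "pearson" is currently the only method corr supports. The same stat API also exposes the sample covariance; a minimal sketch continuing the session above:

// Sample covariance between the two random columns; like corr, this
// triggers a job and returns a Double
val cov = df.stat.cov("rand1", "rand2")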

Viewing the statistical distribution of the data

val colArray = Array("age", "yearsmarried", "religiousness", "education", "occupation", "rating")

// View the statistical distribution of the data
val descrDF = data.describe("age", "yearsmarried", "religiousness", "education", "occupation", "rating")
descrDF: org.apache.spark.sql.DataFrame = [summary: string, age: string ... 5 more fields]

descrDF.selectExpr("summary",
"round(age,2) as age",
"round(yearsmarried,2) as yearsmarried",
"round(religiousness,2) as religiousness",
"round(education,2) as education",
"round(occupation,2) as occupation",
"round(rating,2) as rating").show(10, truncate = false)
+-------+-----+------------+-------------+---------+----------+------+
|summary|age |yearsmarried|religiousness|education|occupation|rating|
+-------+-----+------------+-------------+---------+----------+------+
|count |601.0|601.0 |601.0 |601.0 |601.0 |601.0 |
|mean |32.49|8.18 |3.12 |16.17 |4.19 |3.93 |
|stddev |9.29 |5.57 |1.17 |2.4 |1.82 |1.1 |
|min |17.5 |0.13 |1.0 |9.0 |1.0 |1.0 |
|max |57.0 |15.0 |5.0 |20.0 |7.0 |5.0 |
+-------+-----+------------+-------------+---------+----------+------+
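
describe only reports count, mean, stddev, min, and max. If quantiles are needed as well, DataFrameStatFunctions offers approxQuantile (Spark 2.0+); a sketch, with 0.01 as an assumed relative-error tolerance:

// Approximate quartiles of "age"; the last argument is the relative
// error (0.0 means exact, at a higher computation cost)
val quartiles = data.stat.approxQuantile("age", Array(0.25, 0.5, 0.75), 0.01)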

Counting the number of frequent items per column

// Count the number of frequent items per column
val fi = data.stat.freqItems(colArray)
fi: org.apache.spark.sql.DataFrame = [age_freqItems: array<double>, yearsmarried_freqItems: array<double> ... 4 more fields]

fi.printSchema()
root
|-- age_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- yearsmarried_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- religiousness_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- education_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- occupation_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- rating_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)

val f = fi.selectExpr(
  "size(age_freqItems)",
  "size(yearsmarried_freqItems)",
  "size(religiousness_freqItems)",
  "size(education_freqItems)",
  "size(occupation_freqItems)",
  "size(rating_freqItems)")
f: org.apache.spark.sql.DataFrame = [size(age_freqItems): int, size(yearsmarried_freqItems): int ... 4 more fields]

f.show(10, truncate = false)
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
|size(age_freqItems)|size(yearsmarried_freqItems)|size(religiousness_freqItems)|size(education_freqItems)|size(occupation_freqItems)|size(rating_freqItems)|
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
|9 |8 |5 |7 |7 |5 |
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
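
freqItems uses a default minimum support of 1%. A sketch with an explicit, assumed threshold of 30%, which returns fewer but more frequent items:

// The second argument is the minimum support (default 0.01); note that
// freqItems is approximate and its result may contain false positives
val fiTop = data.stat.freqItems(colArray, 0.3)
fiTop.show(truncate = false)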

Frequent items for a subset of columns

// Frequent items for a subset of columns
val f1 = data.stat.freqItems(Array("age", "yearsmarried", "religiousness"))
f1: org.apache.spark.sql.DataFrame = [age_freqItems: array<double>, yearsmarried_freqItems: array<double> ... 1 more field]

f1.show(10, truncate = false)
+------------------------------------------------------+-----------------------------------------------+-------------------------+
|age_freqItems |yearsmarried_freqItems |religiousness_freqItems |
+------------------------------------------------------+-----------------------------------------------+-------------------------+
|[32.0, 47.0, 22.0, 52.0, 37.0, 17.5, 27.0, 57.0, 42.0]|[0.75, 0.125, 1.5, 0.417, 4.0, 7.0, 10.0, 15.0]|[2.0, 5.0, 4.0, 1.0, 3.0]|
+------------------------------------------------------+-----------------------------------------------+-------------------------+

// Sort the array elements
f1.selectExpr("sort_array(age_freqItems)", "sort_array(yearsmarried_freqItems)", "sort_array(religiousness_freqItems)").show(10, truncate = false)
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+
|sort_array(age_freqItems, true) |sort_array(yearsmarried_freqItems, true) |sort_array(religiousness_freqItems, true)|
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+
|[17.5, 22.0, 27.0, 32.0, 37.0, 42.0, 47.0, 52.0, 57.0]|[0.125, 0.417, 0.75, 1.5, 4.0, 7.0, 10.0, 15.0]|[1.0, 2.0, 3.0, 4.0, 5.0] |
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+

// Frequent items for a subset of columns
val f2 = data.stat.freqItems(Array("education", "occupation", "rating"))
f2: org.apache.spark.sql.DataFrame = [education_freqItems: array<double>, occupation_freqItems: array<double> ... 1 more field]

f2.show(10, truncate = false)
+-----------------------------------------+-----------------------------------+-------------------------+
|education_freqItems |occupation_freqItems |rating_freqItems |
+-----------------------------------------+-----------------------------------+-------------------------+
|[17.0, 20.0, 14.0, 16.0, 9.0, 18.0, 12.0]|[2.0, 5.0, 4.0, 7.0, 1.0, 3.0, 6.0]|[2.0, 5.0, 4.0, 1.0, 3.0]|
+-----------------------------------------+-----------------------------------+-------------------------+

// Sort the array elements
f2.selectExpr("sort_array(education_freqItems)", "sort_array(occupation_freqItems)", "sort_array(rating_freqItems)").show(10, truncate = false)
+-----------------------------------------+--------------------------------------+----------------------------------+
|sort_array(education_freqItems, true) |sort_array(occupation_freqItems, true)|sort_array(rating_freqItems, true)|
+-----------------------------------------+--------------------------------------+----------------------------------+
|[9.0, 12.0, 14.0, 16.0, 17.0, 18.0, 20.0]|[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] |[1.0, 2.0, 3.0, 4.0, 5.0] |
+-----------------------------------------+--------------------------------------+----------------------------------+
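
Beyond frequent items, the same stat API can build a pairwise contingency table with crosstab; a minimal sketch over two of the columns above:

// One row per distinct "education" value, one column per distinct
// "rating" value; each cell holds the co-occurrence count
data.stat.crosstab("education", "rating").show(truncate = false)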
