Spark2 Exploratory Data Statistical Analysis
For the data source (data), see my blog post: http://www.cnblogs.com/wwxbi/p/6063613.html
import org.apache.spark.sql.DataFrameStatFunctions
import org.apache.spark.sql.functions._
Correlation coefficient
val df = Range(0,10,step=1).toDF("id").withColumn("rand1", rand(seed=10)).withColumn("rand2", rand(seed=27))
df: org.apache.spark.sql.DataFrame = [id: int, rand1: double ... 1 more field]

df.show
+---+-------------------+-------------------+
| id| rand1| rand2|
+---+-------------------+-------------------+
| 0|0.41371264720975787| 0.714105256846827|
| 1| 0.7311719281896606| 0.8143487574232506|
| 2| 0.9031701155118229| 0.5282207324381174|
| 3|0.09430205113458567| 0.4420100497826609|
| 4|0.38340505276222947| 0.9387162206758006|
| 5| 0.5569246135523511| 0.6398126862647711|
| 6| 0.4977441406613893| 0.9895498513115722|
| 7| 0.2076666106201438| 0.3398720242725498|
| 8| 0.9571919406508957|0.15042237695815963|
| 9| 0.7429395461204413| 0.7302723457066639|
+---+-------------------+-------------------+

df.stat.corr("rand1", "rand2", "pearson")
res24: Double = -0.10993962467082698
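As a side note, corr can be paired with cov from the same DataFrameStatFunctions API, and looping over a list of columns yields a simple pairwise correlation matrix. The snippet below is a minimal sketch against the df built above; the column list is just an illustration.

// Covariance of the two random columns (DataFrameStatFunctions.cov)
val covariance = df.stat.cov("rand1", "rand2")

// A simple pairwise Pearson correlation matrix over a list of numeric columns
val cols = Array("id", "rand1", "rand2")
for (i <- cols.indices; j <- cols.indices if i < j) {
  val r = df.stat.corr(cols(i), cols(j), "pearson")
  println(f"corr(${cols(i)}, ${cols(j)}) = $r%.4f")
}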
View the statistical distribution of the data
val colArray = Array("age", "yearsmarried", "religiousness", "education", "occupation", "rating") // view the statistical distribution of the data
val descrDF = data.describe("age", "yearsmarried", "religiousness", "education", "occupation", "rating")
descrDF: org.apache.spark.sql.DataFrame = [summary: string, age: string ... 5 more fields]

descrDF.selectExpr("summary",
"round(age,2) as age",
"round(yearsmarried,2) as yearsmarried",
"round(religiousness,2) as religiousness",
"round(education,2) as education",
"round(occupation,2) as occupation",
"round(rating,2) as rating").show(10, truncate = false)
+-------+-----+------------+-------------+---------+----------+------+
|summary|age |yearsmarried|religiousness|education|occupation|rating|
+-------+-----+------------+-------------+---------+----------+------+
|count |601.0|601.0 |601.0 |601.0 |601.0 |601.0 |
|mean |32.49|8.18 |3.12 |16.17 |4.19 |3.93 |
|stddev |9.29 |5.57 |1.17 |2.4 |1.82 |1.1 |
|min |17.5 |0.13 |1.0 |9.0 |1.0 |1.0 |
|max |57.0 |15.0 |5.0 |20.0 |7.0 |5.0 |
+-------+-----+------------+-------------+---------+----------+------+
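describe only reports count, mean, stddev, min and max; quantiles have to be requested separately. The lines below are a minimal sketch, assuming the same data DataFrame, that uses approxQuantile (available since Spark 2.0) to get approximate quartiles for one column.

// Approximate 25th/50th/75th percentiles of "age";
// the last argument is the relative error (0.0 means exact, but slower)
val quartiles = data.stat.approxQuantile("age", Array(0.25, 0.5, 0.75), 0.01)
println(quartiles.mkString("age quartiles: ", ", ", ""))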
Count the number of frequent items in each column
// count the number of frequent items in each column
val fi = data.stat.freqItems(colArray)
fi: org.apache.spark.sql.DataFrame = [age_freqItems: array<double>, yearsmarried_freqItems: array<double> ... 4 more fields]

fi.printSchema()
root
|-- age_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- yearsmarried_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- religiousness_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- education_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- occupation_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)
|-- rating_freqItems: array (nullable = true)
| |-- element: double (containsNull = false)

val f = fi.selectExpr(
  "size(age_freqItems)",
  "size(yearsmarried_freqItems)",
  "size(religiousness_freqItems)",
  "size(education_freqItems)",
  "size(occupation_freqItems)",
  "size(rating_freqItems)")
f: org.apache.spark.sql.DataFrame = [size(age_freqItems): int, size(yearsmarried_freqItems): int ... 4 more fields]

f.show(10, truncate = false)
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
|size(age_freqItems)|size(yearsmarried_freqItems)|size(religiousness_freqItems)|size(education_freqItems)|size(occupation_freqItems)|size(rating_freqItems)|
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
|9 |8 |5 |7 |7 |5 |
+-------------------+----------------------------+-----------------------------+-------------------------+--------------------------+----------------------+
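By default freqItems searches for items with a support of at least 1%; a stricter threshold can be passed explicitly. A minimal sketch, reusing the data and colArray defined above:

// Only keep items that appear in at least 30% of the rows
// (freqItems(cols, support); the support parameter defaults to 0.01)
val fi30 = data.stat.freqItems(colArray, 0.3)
fi30.show(truncate = false)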
Frequent items for a group of columns
// frequent items for a group of columns
val f1 = data.stat.freqItems(Array("age", "yearsmarried", "religiousness"))
f1: org.apache.spark.sql.DataFrame = [age_freqItems: array<double>, yearsmarried_freqItems: array<double> ... 1 more field]

f1.show(10, truncate = false)
+------------------------------------------------------+-----------------------------------------------+-------------------------+
|age_freqItems |yearsmarried_freqItems |religiousness_freqItems |
+------------------------------------------------------+-----------------------------------------------+-------------------------+
|[32.0, 47.0, 22.0, 52.0, 37.0, 17.5, 27.0, 57.0, 42.0]|[0.75, 0.125, 1.5, 0.417, 4.0, 7.0, 10.0, 15.0]|[2.0, 5.0, 4.0, 1.0, 3.0]|
+------------------------------------------------------+-----------------------------------------------+-------------------------+

// sort the elements of each array
f1.selectExpr("sort_array(age_freqItems)", "sort_array(yearsmarried_freqItems)", "sort_array(religiousness_freqItems)").show(10, truncate = false)
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+
|sort_array(age_freqItems, true) |sort_array(yearsmarried_freqItems, true) |sort_array(religiousness_freqItems, true)|
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+
|[17.5, 22.0, 27.0, 32.0, 37.0, 42.0, 47.0, 52.0, 57.0]|[0.125, 0.417, 0.75, 1.5, 4.0, 7.0, 10.0, 15.0]|[1.0, 2.0, 3.0, 4.0, 5.0] |
+------------------------------------------------------+-----------------------------------------------+-----------------------------------------+

// frequent items for a group of columns
val f2 = data.stat.freqItems(Array("education", "occupation", "rating"))
f2: org.apache.spark.sql.DataFrame = [education_freqItems: array<double>, occupation_freqItems: array<double> ... 1 more field]

f2.show(10, truncate = false)
+-----------------------------------------+-----------------------------------+-------------------------+
|education_freqItems |occupation_freqItems |rating_freqItems |
+-----------------------------------------+-----------------------------------+-------------------------+
|[17.0, 20.0, 14.0, 16.0, 9.0, 18.0, 12.0]|[2.0, 5.0, 4.0, 7.0, 1.0, 3.0, 6.0]|[2.0, 5.0, 4.0, 1.0, 3.0]|
+-----------------------------------------+-----------------------------------+-------------------------+

// sort the elements of each array
f2.selectExpr("sort_array(education_freqItems)", "sort_array(occupation_freqItems)", "sort_array(rating_freqItems)").show(10, truncate = false)
+-----------------------------------------+--------------------------------------+----------------------------------+
|sort_array(education_freqItems, true) |sort_array(occupation_freqItems, true)|sort_array(rating_freqItems, true)|
+-----------------------------------------+--------------------------------------+----------------------------------+
|[9.0, 12.0, 14.0, 16.0, 17.0, 18.0, 20.0]|[1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0] |[1.0, 2.0, 3.0, 4.0, 5.0] |
+-----------------------------------------+--------------------------------------+----------------------------------+
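Beyond frequent items, DataFrameStatFunctions also provides a contingency table via crosstab, which is useful for two low-cardinality columns. A minimal sketch, assuming the same data DataFrame; the chosen column pair is only an illustration:

// Contingency table of rating vs. religiousness
// (the row-key column is named "rating_religiousness"; the remaining cells hold co-occurrence counts)
val ct = data.stat.crosstab("rating", "religiousness")
ct.orderBy("rating_religiousness").show(truncate = false)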