Spark RDDs vs DataFrames vs SparkSQL
Introduction
A performance comparison of Spark RDDs, DataFrames, and SparkSQL.
The comparison covers two tasks:
Random lookup of a single record
Aggregation followed by sorted output
Each task is solved with the following three Spark APIs, and their performance is compared:
Using RDDs
Using DataFrames
Using SparkSQL
Data source
9 million unique records stored across 3 files in HDFS
11 fields per record
Total size: 1.4 GB
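For orientation, the parsing code below implies that each record is pipe-delimited with the eleven fields named in the Row(...) mapping. The sketch below shows that assumed layout; the sample values are made up for illustration (96922894 is simply the order ID used in the lookup examples), not taken from the actual data set.
# Assumed layout of one pipe-delimited record, inferred from the Row(...)
# mapping in the scripts below; the sample values are illustrative only.
fields = ["cust_id", "order_id", "email_hash", "ssn_hash", "product_id",
          "product_desc", "country", "state", "shipping_carrier",
          "shipping_type", "shipping_class"]
sample = "1001|96922894|e3b0c442|9f86d081|501|widget|US|CA|UPS|ground|standard"
record = dict(zip(fields, sample.split("|")))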
Test environment
HDP 2.4
Hadoop version 2.7
Spark 1.6
HDP Sandbox
Test results
Plain RDDs outperformed both DataFrames and SparkSQL
DataFrames and SparkSQL performed almost identically
DataFrames and SparkSQL are more intuitive to work with than RDDs
Each job was run on its own, with no interference from other jobs
The two operations
Random lookup of 1 order ID among 9 million unique order IDs
GROUP all the different products with their total COUNTS and SORT DESCENDING by count

Code
RDD Random Lookup
#!/usr/bin/env python
from time import time
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("rdd_random_lookup")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## filter where the order_id, the second field, is equal to 96922894
print lines.map(lambda line: line.split('|')).filter(lambda line: int(line[1]) == 96922894).collect()

tt = str(time() - t0)
print "RDD lookup performed in " + tt + " seconds"
DataFrame Random Lookup
#!/usr/bin/env python
from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("data_frame_random_lookup")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)
sqlContext = SQLContext(sc)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
    .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], product_id=int(p[4]), product_desc=p[5], \
    country=p[6], state=p[7], shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10] ) ) )

## filter where the order_id, the second field, is equal to 96922894
orders_df.where(orders_df['order_id'] == 96922894).show()

tt = str(time() - t0)
print "DataFrame performed in " + tt + " seconds"
SparkSQL Random Lookup
#!/usr/bin/env python
from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("spark_sql_random_lookup")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)
sqlContext = SQLContext(sc)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
    .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], product_id=int(p[4]), product_desc=p[5], \
    country=p[6], state=p[7], shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10] ) ) )

## register data frame as a temporary table
orders_df.registerTempTable("orders")

## filter where the order_id, the second field, is equal to 96922894
print sqlContext.sql("SELECT * FROM orders where order_id = 96922894").collect()

tt = str(time() - t0)
print "SparkSQL performed in " + tt + " seconds"
RDD with GroupBy, Count, and Sort Descending
#!/usr/bin/env python
from time import time
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("rdd_aggregation_and_sort")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## count records per product description, then sort by count descending
counts = lines.map(lambda line: line.split('|')) \
    .map(lambda x: (x[5], 1)) \
    .reduceByKey(lambda a, b: a + b) \
    .map(lambda x: (x[1], x[0])) \
    .sortByKey(ascending=False)

for x in counts.collect():
    print x[1] + '\t' + str(x[0])

tt = str(time() - t0)
print "RDD GroupBy performed in " + tt + " seconds"
DataFrame with GroupBy, Count, and Sort Descending
#!/usr/bin/env python
from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("data_frame_aggregation_and_sort")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)
sqlContext = SQLContext(sc)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
    .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], product_id=int(p[4]), product_desc=p[5], \
    country=p[6], state=p[7], shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10] ) ) )

## group by product description, count, and sort by count descending
results = orders_df.groupBy(orders_df['product_desc']).count().sort("count", ascending=False)

for x in results.collect():
    print x

tt = str(time() - t0)
print "DataFrame performed in " + tt + " seconds"
SparkSQL with GroupBy, Count, and Sort Descending
#!/usr/bin/env python
from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
    .setAppName("spark_sql_aggregation_and_sort")
    .set("spark.executor.instances", "")
    .set("spark.executor.cores", 2)
    .set("spark.dynamicAllocation.enabled", "false")
    .set("spark.shuffle.service.enabled", "false")
    .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf = conf)
sqlContext = SQLContext(sc)

t0 = time()
path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame(lines.map(lambda l: l.split("|")) \
    .map(lambda r: Row(product=r[5])))

## register data frame as a temporary table
orders_df.registerTempTable("orders")

results = sqlContext.sql("SELECT product, count(*) AS total_count FROM orders GROUP BY product ORDER BY total_count DESC")

for x in results.collect():
    print x

tt = str(time() - t0)
print "SparkSQL performed in " + tt + " seconds"
Source: https://community.hortonworks.com/articles/42027/rdd-vs-dataframe-vs-sparksql.html