Introduction

A performance comparison of Spark RDDs, DataFrames, and SparkSQL.

Two operations are compared:

  1. Random lookup of a single record

  2. Aggregation followed by sorted output

Each of the two problems above is solved with the following three Spark APIs, and their performance is compared (a condensed comparison of the three follows this list).

  1. Using RDDs

  2. Using DataFrames

  3. Using SparkSQL
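
For orientation, the core of the order-ID lookup looks like this in each of the three APIs. This is a condensed sketch of the full scripts in the Code section below and assumes their setup: lines is the raw text RDD read from HDFS, orders_df is the DataFrame built from it, and sqlContext is the SQLContext with the "orders" temporary table registered.

## RDD: split each pipe-delimited line and filter on the second field
lines.map(lambda line: line.split('|')).filter(lambda line: int(line[1]) == 96922894).collect()

## DataFrame: filter on a named column
orders_df.where(orders_df['order_id'] == 96922894).show()

## SparkSQL: query the registered temporary table
sqlContext.sql("SELECT * FROM orders where order_id = 96922894").collect()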

Data Source

  • 9 million unique records stored across 3 files in HDFS

  • 11 fields per record (see the sample record sketch after this list)

  • 1.4 GB total size
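
The scripts below read the files with sc.textFile and split each line on "|", so each record is a pipe-delimited line with the 11 fields named in the DataFrame code. A minimal sketch of the layout follows; the sample values are invented for illustration and are not taken from the actual data set.

#!/usr/bin/env python
## hypothetical sample record -- values invented, only the field order matches the scripts below
sample = "1000001|96922894|e3b0c442|a665a459|501|Wireless Mouse|US|CA|UPS|Ground|Standard"
fields = sample.split('|')
## field order: cust_id, order_id, email_hash, ssn_hash, product_id, product_desc,
##              country, state, shipping_carrier, shipping_type, shipping_class
print "order_id = " + fields[1] + ", product_desc = " + fields[5]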

Test Environment

  • HDP 2.4

  • Hadoop version 2.7

  • Spark 1.6

  • HDP Sandbox

Results

  • Raw RDDs outperformed both DataFrames and SparkSQL

  • DataFrames and SparkSQL performed almost identically

  • DataFrames and SparkSQL are more intuitive to work with than raw RDDs

  • Each job ran in isolation, with no interference from other jobs

The Two Operations

  1. Random lookup against 1 order ID from 9 million unique order IDs

  2. GROUP all the different products with their total COUNTS and SORT DESCENDING by count

Code

RDD Random Lookup

#!/usr/bin/env python

from time import time
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("rdd_random_lookup")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## filter where the order_id, the second field, is equal to 96922894
print lines.map(lambda line: line.split('|')).filter(lambda line: int(line[1]) == 96922894).collect()

tt = str(time() - t0)
print "RDD lookup performed in " + tt + " seconds"

DataFrame Random Lookup

#!/usr/bin/env python

from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("data_frame_random_lookup")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
         .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], \
                            product_id=int(p[4]), product_desc=p[5], country=p[6], state=p[7], \
                            shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10])))

## filter where the order_id, the second field, is equal to 96922894
orders_df.where(orders_df['order_id'] == 96922894).show()

tt = str(time() - t0)
print "DataFrame performed in " + tt + " seconds"

SparkSQL Random Lookup

#!/usr/bin/env python

from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("spark_sql_random_lookup")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
         .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], \
                            product_id=int(p[4]), product_desc=p[5], country=p[6], state=p[7], \
                            shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10])))

## register data frame as a temporary table
orders_df.registerTempTable("orders")

## filter where the order_id, the second field, is equal to 96922894
print sqlContext.sql("SELECT * FROM orders where order_id = 96922894").collect()

tt = str(time() - t0)
print "SparkSQL performed in " + tt + " seconds"

RDD with GroupBy, Count, and Sort Descending

#!/usr/bin/env python

from time import time
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("rdd_aggregation_and_sort")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## count records per product_desc (the sixth field), then sort by count descending
counts = lines.map(lambda line: line.split('|')) \
    .map(lambda x: (x[5], 1)) \
    .reduceByKey(lambda a, b: a + b) \
    .map(lambda x: (x[1], x[0])) \
    .sortByKey(ascending=False)

for x in counts.collect():
    print x[1] + '\t' + str(x[0])

tt = str(time() - t0)
print "RDD GroupBy performed in " + tt + " seconds"

DataFrame with GroupBy, Count, and Sort Descending

#!/usr/bin/env python

from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("data_frame_aggregation_and_sort")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame
orders_df = sqlContext.createDataFrame( \
    lines.map(lambda l: l.split("|")) \
         .map(lambda p: Row(cust_id=int(p[0]), order_id=int(p[1]), email_hash=p[2], ssn_hash=p[3], \
                            product_id=int(p[4]), product_desc=p[5], country=p[6], state=p[7], \
                            shipping_carrier=p[8], shipping_type=p[9], shipping_class=p[10])))

## group by product_desc, count, and sort by count descending
results = orders_df.groupBy(orders_df['product_desc']).count().sort("count", ascending=False)

for x in results.collect():
    print x

tt = str(time() - t0)
print "DataFrame performed in " + tt + " seconds"

SparkSQL with GroupBy, Count, and Sort Descending

#!/usr/bin/env python

from time import time
from pyspark.sql import *
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("spark_sql_aggregation_and_sort")
        .set("spark.executor.instances", "")
        .set("spark.executor.cores", 2)
        .set("spark.dynamicAllocation.enabled", "false")
        .set("spark.shuffle.service.enabled", "false")
        .set("spark.executor.memory", "500MB"))
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

t0 = time()

path = "/data/customer_orders*"
lines = sc.textFile(path)

## create data frame (only the product field is needed for this query)
orders_df = sqlContext.createDataFrame(lines.map(lambda l: l.split("|")) \
    .map(lambda r: Row(product=r[5])))

## register data frame as a temporary table
orders_df.registerTempTable("orders")

results = sqlContext.sql("SELECT product, count(*) AS total_count FROM orders GROUP BY product ORDER BY total_count DESC")

for x in results.collect():
    print x

tt = str(time() - t0)
print "SparkSQL performed in " + tt + " seconds"

Original article: https://community.hortonworks.com/articles/42027/rdd-vs-dataframe-vs-sparksql.html
