1 Overview
Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine.
 
2 DataFrames
A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python.
 
2.1 SQLContext
The entry point into all functionality in Spark SQL is the SQLContext class, or one of its descendants. To create a basic SQLContext, all you need is a SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
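Many of the later examples also rely on Spark SQL's implicit conversions (for example, turning an RDD of case classes into a DataFrame). A minimal sketch of the usual spark-shell setup, assuming sc is the existing SparkContext:
// Used to implicitly convert an RDD to a DataFrame
import sqlContext.implicits._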
 
2.2 Creating DataFrames
val df = sqlContext.read.json("/home/slh/data/people.json")
 
2.3 Operations
// Assuming df is the DataFrame created above
df.show()                                   // display the DataFrame contents
df.printSchema()                            // print the schema in a tree format
df.select("name").show()                    // select only the "name" column
df.select(df("name"), df("age") + 1).show() // select every row, incrementing the age by 1
df.filter(df("age") > 13).show()            // select people older than 13
df.groupBy("age").count().show()            // count people by age
 
2.4 Running SQL Queries
The sql function on a SQLContext enables applications to run SQL queries programmatically and returns the result as a DataFrame.
val df = sqlContext.sql("SELECT * FROM table")
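The table referenced in a query must be known to the SQLContext; a DataFrame can be registered under a table name first. A short sketch, assuming the people.json DataFrame loaded earlier and an illustrative table name:
// Register the DataFrame as a temporary table so it can be referenced by name in SQL
df.registerTempTable("people")
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
teenagers.show()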
 
2.5 Interoperating with RDDs
Spark SQL supports two different methods for converting existing RDDs into DataFrames. The first method uses reflection to infer the schema of an RDD that contains specific types of objects. This reflection-based approach leads to more concise code and works well when you already know the schema while writing your Spark application.
The second method is a programmatic interface that lets you construct a schema and then apply it to an existing RDD. While this method is more verbose, it allows you to construct DataFrames when the columns and their types are not known until runtime.
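A brief sketch of both approaches, assuming a text file people.txt with comma-separated name,age lines (the file path and the Person case class are illustrative):
// Method 1: infer the schema by reflection from a case class
// (requires import sqlContext.implicits._ for .toDF())
case class Person(name: String, age: Int)
val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
  .toDF()

// Method 2: construct the schema programmatically and apply it to an RDD of Rows
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType}
val schema = StructType("name age".split(" ").map(fieldName => StructField(fieldName, StringType, true)))
val rowRDD = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Row(p(0), p(1).trim))
val peopleDF = sqlContext.createDataFrame(rowRDD, schema)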
 
3 Data Sources
Spark SQL supports operating on a variety of data sources through the DataFrame interface. A DataFrame can be operated on as a normal RDD and can also be registered as a temporary table. Registering a DataFrame as a table allows you to run SQL queries over its data. This section describes the general methods for loading and saving data using the Spark Data Sources and then goes into the specific options that are available for the built-in data sources.
 
3.1 Load/Save Functions
Generic load/save (using the default data source):
val df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
Manually specifying the format:
val df = sqlContext.read.format("json").load("examples/src/main/resources/people.json")
df.select("name", "age").write.format("parquet").save("namesAndAges.parquet")
Save Modes (a usage sketch follows the list):
SaveMode.ErrorIfExists (default)
SaveMode.Append
SaveMode.Overwrite
SaveMode.Ignore
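These modes can be passed to the writer; a minimal sketch, reusing the illustrative output path from above:
import org.apache.spark.sql.SaveMode

// Overwrite any existing data at the target path instead of raising an error
df.select("name", "age").write.mode(SaveMode.Overwrite).format("parquet").save("namesAndAges.parquet")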
 
3.2 Parquet Files
Parquet is a columnar format that is supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema of the original data.
 
// The people RDD is implicitly converted to a DataFrame, allowing it to be saved as a Parquet file.
people.write.parquet("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a Parquet file is also a DataFrame.
val parquetFile = sqlContext.read.parquet("people.parquet")
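The Parquet-backed DataFrame can then be used like any other, for example by registering it as a temporary table (the table name below is illustrative):
// Register the Parquet DataFrame as a temporary table and query it with SQL
parquetFile.registerTempTable("parquetFile")
val names = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
names.show()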

 
3.3 JSON Datasets
// Load a JSON file; Spark SQL infers the schema automatically
val path = "examples/src/main/resources/people.json"
val people = sqlContext.read.json(path)

// Alternatively, create a DataFrame from an RDD of strings, one JSON object per string
val anotherPeopleRDD = sc.parallelize(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val anotherPeople = sqlContext.read.json(anotherPeopleRDD)
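The inferred schema, including nested structures, can be inspected directly; a short sketch (the dotted column path assumes the nested address field from the example above):
// Print the schema inferred from the JSON documents
people.printSchema()

// Nested fields can be selected with a dotted path
anotherPeople.select("address.city").show()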

 
3.4 Hive Tables
Spark SQL also supports reading and writing data stored in Apache Hive. However, since Hive has a large number of dependencies, it is not included in the default Spark assembly. Hive support is enabled by adding the -Phive and -Phive-thriftserver flags to Spark’s build; this builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present on all of the worker nodes, as they need access to the Hive serialization and deserialization libraries (SerDes) in order to access data stored in Hive.
 
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
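Queries against a HiveContext are expressed in HiveQL and can create and read Hive tables; a brief sketch using the table and sample file names from the Spark examples (illustrative):
// Create a Hive table and load data into it using HiveQL
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")

// Queries return DataFrames and support HiveQL syntax
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)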
