Spark Tutorial (5): Getting Started with PySpark
Start PySpark:
[root@node1 ~]# pyspark
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.0
      /_/
Using Python version 2.7.5 (default, Nov 6 2016 00:28:07)
SparkContext available as sc, HiveContext available as sqlContext.
The shell already provides sc and sqlContext, as the startup message notes:
SparkContext available as sc, HiveContext available as sqlContext.
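A quick check (not part of the original session) confirms what the banner reports:
>>> sc.version
u'1.6.0'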
Run the script:
>>> from __future__ import print_function
>>> import os
>>> import sys
>>> from pyspark import SparkContext
>>> from pyspark.sql import SQLContext
>>> from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType
# RDD is created from a list of rows
>>> some_rdd = sc.parallelize([Row(name="John", age=19), Row(name="Smith", age=23), Row(name="Sarah", age=18)])
# Infer schema from the first row, create a DataFrame and print the schema
>>> some_df = sqlContext.createDataFrame(some_rdd)
>>> some_df.printSchema()
root
|-- age: long (nullable = true)
|-- name: string (nullable = true)
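To inspect the data itself, show() prints the DataFrame as a table. This is a quick check added here, not part of the original session; note that Row sorts its keyword arguments alphabetically, which is why age precedes name. The output should look approximately like:
>>> some_df.show()
+---+-----+
|age| name|
+---+-----+
| 19| John|
| 23|Smith|
| 18|Sarah|
+---+-----+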
# Another RDD is created from a list of tuples
>>> another_rdd = sc.parallelize([("John", 19), ("Smith", 23), ("Sarah", 18)])
# Schema with two fields - person_name and person_age
>>> schema = StructType([StructField("person_name", StringType(), False), StructField("person_age", IntegerType(), False)])
# Create a DataFrame by applying the schema to the RDD and print the schema
>>> another_df = sqlContext.createDataFrame(another_rdd, schema)
>>> another_df.printSchema()
root
|-- person_name: string (nullable = false)
|-- person_age: integer (nullable = false)
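The explicitly-typed columns can be used in ordinary DataFrame operations. A sketch, not from the original session; the expected output is shown approximately:
>>> another_df.filter(another_df.person_age > 20).show()
+-----------+----------+
|person_name|person_age|
+-----------+----------+
|      Smith|        23|
+-----------+----------+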
Download the people.json file from GitHub:

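The file (examples/src/main/resources/people.json in the Spark repository) contains one JSON object per line:
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
Justin, at 19, is the only person in the 13-19 age range queried later.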
Then upload it to HDFS:

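A typical upload looks like this (assuming the /user/cf target directory used by the script below):
[root@node1 ~]# hdfs dfs -mkdir -p /user/cf
[root@node1 ~]# hdfs dfs -put people.json /user/cf/people.json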
Continue with the script:
# A JSON dataset is pointed to by path.
# The path can be either a single text file or a directory storing text files.
>>> if len(sys.argv) < 2:
...     path = "/user/cf/people.json"
... else:
...     path = sys.argv[1]
...
# Create a DataFrame from the file(s) pointed to by path
>>> people = sqlContext.jsonFile(path)
[Stage 5:> (0 + 1) / 2]19/07/04 10:34:33 WARN spark.ExecutorAllocationManager: No stages are running, but numRunningTasks != 0
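As an aside, jsonFile is deprecated in later Spark 1.x releases in favor of the DataFrameReader API; the equivalent call here would be:
>>> people = sqlContext.read.json(path)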
# The inferred schema can be visualized using the printSchema() method.
>>> people.printSchema()
root
|-- age: long (nullable = true)
|-- name: string (nullable = true)
# Register this DataFrame as a table.
>>> people.registerAsTable("people")
/opt/cloudera/parcels/CDH-5.14.2-1.cdh5.14.2.p0.3/lib/spark/python/pyspark/sql/dataframe.py:142: UserWarning: Use registerTempTable instead of registerAsTable.
warnings.warn("Use registerTempTable instead of registerAsTable.")
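As the warning says, the preferred method is registerTempTable; the deprecated name still works here, but the replacement call is simply:
>>> people.registerTempTable("people")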
# SQL statements can be run by using the sql methods provided by sqlContext
>>> teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
>>> for each in teenagers.collect():
... print(each[0])
...
Justin
End the session:
>>> sc.stop()
>>>
Reference program:
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import os
import sys
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import Row, StructField, StructType, StringType, IntegerType
if __name__ == "__main__":
    sc = SparkContext(appName="PythonSQL")
    sqlContext = SQLContext(sc)

    # RDD is created from a list of rows
    some_rdd = sc.parallelize([Row(name="John", age=19),
                               Row(name="Smith", age=23),
                               Row(name="Sarah", age=18)])

    # Infer schema from the first row, create a DataFrame and print the schema
    some_df = sqlContext.createDataFrame(some_rdd)
    some_df.printSchema()
    # root
    #  |-- age: long (nullable = true)
    #  |-- name: string (nullable = true)

    # Another RDD is created from a list of tuples
    another_rdd = sc.parallelize([("John", 19), ("Smith", 23), ("Sarah", 18)])

    # Schema with two fields - person_name and person_age
    schema = StructType([StructField("person_name", StringType(), False),
                         StructField("person_age", IntegerType(), False)])

    # Create a DataFrame by applying the schema to the RDD and print the schema
    another_df = sqlContext.createDataFrame(another_rdd, schema)
    another_df.printSchema()
    # root
    #  |-- person_name: string (nullable = false)
    #  |-- person_age: integer (nullable = false)

    # A JSON dataset is pointed to by path.
    # The path can be either a single text file or a directory storing text files.
    if len(sys.argv) < 2:
        path = "file://" + \
            os.path.join(os.environ['SPARK_HOME'], "examples/src/main/resources/people.json")
    else:
        path = sys.argv[1]

    # Create a DataFrame from the file(s) pointed to by path
    # (jsonFile is deprecated; sqlContext.read.json(path) is the current equivalent)
    people = sqlContext.jsonFile(path)

    # The inferred schema can be visualized using the printSchema() method.
    people.printSchema()
    # root
    #  |-- age: long (nullable = true)
    #  |-- name: string (nullable = true)

    # Register this DataFrame as a temporary table
    # (registerTempTable replaces the deprecated registerAsTable)
    people.registerTempTable("people")

    # SQL statements can be run by using the sql methods provided by sqlContext
    teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
    for each in teenagers.collect():
        print(each[0])

    sc.stop()
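To run the reference program outside the shell, save it as sql.py and hand it to spark-submit; passing the HDFS path from earlier overrides the SPARK_HOME default (the path argument is this tutorial's, not part of the upstream example):
[root@node1 ~]# spark-submit sql.py /user/cf/people.json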