[Spark][python] An example of opening a JSON file as a DataFrame

The session below puts a small JSON file into HDFS and reads it into a Spark DataFrame (a CDH 5.7 / Spark 1.x environment, as the logs show):
[training@localhost ~]$ cat people.json
{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}
[training@localhost ~]$
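Each record occupies one line, which is the format Spark's JSON reader expects (JSON Lines: one complete object per line, not one pretty-printed document). Records may carry different fields, and Spark infers the union of them as the schema. As a quick local sanity check, a minimal sketch (assumes people.json sits in the current directory):

import json

# Every line must parse as a standalone JSON object;
# json.loads raises a ValueError on a malformed record.
with open("people.json") as f:
    for line in f:
        json.loads(line)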
[training@localhost ~]$ hdfs dfs -put people.json
[training@localhost ~]$ hdfs dfs -cat people.json
{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}
In [1]: sqlContext = HiveContext(sc)
In [2]: peopleDF = sqlContext.read.json("people.json")
17/10/01 05:20:22 INFO hive.HiveContext: Initializing execution hive, version 1.1.0
17/10/01 05:20:22 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0-cdh5.7.0
17/10/01 05:20:22 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.7.0
17/10/01 05:20:23 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
17/10/01 05:20:23 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/10/01 05:20:23 INFO hive.metastore: Connected to metastore.
17/10/01 05:20:23 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-839b35f5-91a1-436c-aae5-922ebacb27f1/scratch/training
17/10/01 05:20:23 INFO session.SessionState: Created local directory: /tmp/b3e52bfc-fe3a-4abe-ac7b-da071104b2f9_resources
17/10/01 05:20:23 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-839b35f5-91a1-436c-aae5-922ebacb27f1/scratch/training/b3e52bfc-fe3a-4abe-ac7b-da071104b2f9
17/10/01 05:20:23 INFO session.SessionState: Created local directory: /tmp/training/b3e52bfc-fe3a-4abe-ac7b-da071104b2f9
17/10/01 05:20:23 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-839b35f5-91a1-436c-aae5-922ebacb27f1/scratch/training/b3e52bfc-fe3a-4abe-ac7b-da071104b2f9/_tmp_space.db
17/10/01 05:20:23 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
17/10/01 05:20:23 INFO json.JSONRelation: Listing hdfs://localhost:8020/user/training/people.json on driver
17/10/01 05:20:25 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 251.1 KB, free 251.1 KB)
17/10/01 05:20:25 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 21.6 KB, free 272.7 KB)
17/10/01 05:20:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:42171 (size: 21.6 KB, free: 208.8 MB)
17/10/01 05:20:25 INFO spark.SparkContext: Created broadcast 0 from json at NativeMethodAccessorImpl.java:-2
17/10/01 05:20:26 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/01 05:20:26 INFO spark.SparkContext: Starting job: json at NativeMethodAccessorImpl.java:-2
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Got job 0 (json at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (json at NativeMethodAccessorImpl.java:-2)
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/10/01 05:20:26 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 277.1 KB)
17/10/01 05:20:26 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.4 KB, free 279.5 KB)
17/10/01 05:20:26 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:42171 (size: 2.4 KB, free: 208.8 MB)
17/10/01 05:20:26 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/10/01 05:20:26 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at NativeMethodAccessorImpl.java:-2)
17/10/01 05:20:26 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/10/01 05:20:26 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/01 05:20:26 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
17/10/01 05:20:26 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/01 05:20:27 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/10/01 05:20:27 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/10/01 05:20:27 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/10/01 05:20:27 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/10/01 05:20:27 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/10/01 05:20:27 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2354 bytes result sent to driver
17/10/01 05:20:27 INFO scheduler.DAGScheduler: ResultStage 0 (json at NativeMethodAccessorImpl.java:-2) finished in 0.715 s
17/10/01 05:20:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 667 ms on localhost (1/1)
17/10/01 05:20:27 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/10/01 05:20:27 INFO scheduler.DAGScheduler: Job 0 finished: json at NativeMethodAccessorImpl.java:-2, took 1.084685 s
17/10/01 05:20:27 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
17/10/01 05:20:28 INFO hive.HiveContext: Initializing metastore client version 1.1.0 using Spark classes.
17/10/01 05:20:28 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0-cdh5.7.0
17/10/01 05:20:28 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.7.0
17/10/01 05:20:28 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on localhost:42171 in memory (size: 2.4 KB, free: 208.8 MB)
17/10/01 05:20:28 INFO spark.ContextCleaner: Cleaned accumulator 2
17/10/01 05:20:30 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
17/10/01 05:20:30 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/10/01 05:20:30 INFO hive.metastore: Connected to metastore.
17/10/01 05:20:30 INFO session.SessionState: Created HDFS directory: /tmp/hive/training
17/10/01 05:20:30 INFO session.SessionState: Created local directory: /tmp/8c1eba54-7260-4314-abbf-7b7de85bdf0a_resources
17/10/01 05:20:30 INFO session.SessionState: Created HDFS directory: /tmp/hive/training/8c1eba54-7260-4314-abbf-7b7de85bdf0a
17/10/01 05:20:30 INFO session.SessionState: Created local directory: /tmp/training/8c1eba54-7260-4314-abbf-7b7de85bdf0a
17/10/01 05:20:30 INFO session.SessionState: Created HDFS directory: /tmp/hive/training/8c1eba54-7260-4314-abbf-7b7de85bdf0a/_tmp_space.db
17/10/01 05:20:30 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
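Note that the load already runs a Spark job (Job 0 in the logs above): read.json scans the file once to infer column names and types. If the schema is known up front, it can be passed explicitly so Spark can skip that inference scan; a minimal sketch (field names taken from the file above, the types are assumptions):

from pyspark.sql.types import StructType, StructField, StringType, LongType

schema = StructType([
    StructField("age",   LongType(),   True),
    StructField("name",  StringType(), True),
    StructField("pcode", StringType(), True),
])

# With an explicit schema, Spark need not scan the data at load time.
peopleDF = sqlContext.read.json("people.json", schema=schema)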
In [3]: type(peopleDF)
Out[3]: pyspark.sql.dataframe.DataFrame
In [4]:
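From here the inferred schema and the rows can be inspected; as a sketch (output omitted):

peopleDF.printSchema()   # should list age, name and pcode, all nullable
peopleDF.show()          # prints the five records as a table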