[Spark][Python] Example of retrieving a limited number of records from a DataFrame:
from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)                    # sc is the SparkContext provided by the pyspark shell
peopleDF = sqlContext.read.json("people.json")  # schema is inferred from the JSON records
peopleDF.limit(3).show()                        # print at most the first 3 records
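Note that limit(3) is a transformation: it returns a new DataFrame, and no Spark job runs for it until an action such as show() is called (in the session log below, the showString job only starts at In [3]). A minimal sketch, assuming peopleDF has been created as above, contrasting limit() with the take() action, which instead returns Row objects to the driver:

first3DF = peopleDF.limit(3)   # transformation: still a DataFrame, nothing executed yet
first3DF.show()                # action: triggers a job and prints up to 3 rows

rows = peopleDF.take(3)        # action: returns a list of up to 3 Row objects on the driver
for row in rows:
    print(row)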
===
[training@localhost ~]$ hdfs dfs -cat people.json
{"name":"Alice","pcode":"94304"}
{"name":"Brayden","age":30,"pcode":"94304"}
{"name":"Carla","age":19,"pcoe":"10036"}
{"name":"Diana","age":46}
{"name":"Etienne","pcode":"94104"}
[training@localhost ~]$
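Note that the record for Carla spells its field "pcoe" rather than "pcode" (apparently a typo in the data). Because read.json infers the schema by scanning the records, the resulting DataFrame carries both a pcode and a pcoe column, which is why both appear in the output of show() at the end of the session. A quick way to verify the inferred schema, as a sketch (the commented output is approximate):

peopleDF.printSchema()
# root
#  |-- age: long (nullable = true)
#  |-- name: string (nullable = true)
#  |-- pcode: string (nullable = true)
#  |-- pcoe: string (nullable = true)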
In [1]: sqlContext = HiveContext(sc)
In [2]: peopleDF = sqlContext.read.json("people.json")
17/10/05 05:03:11 INFO hive.HiveContext: Initializing execution hive, version 1.1.0
17/10/05 05:03:11 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0-cdh5.7.0
17/10/05 05:03:11 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.7.0
17/10/05 05:03:14 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
17/10/05 05:03:14 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/10/05 05:03:15 INFO hive.metastore: Connected to metastore.
17/10/05 05:03:16 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-99a33db4-b69a-46a9-8032-f87d63299040/scratch/training
17/10/05 05:03:16 INFO session.SessionState: Created local directory: /tmp/4e1c5259-7ae8-482c-ae77-94d3a0c51f91_resources
17/10/05 05:03:16 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-99a33db4-b69a-46a9-8032-f87d63299040/scratch/training/4e1c5259-7ae8-482c-ae77-94d3a0c51f91
17/10/05 05:03:16 INFO session.SessionState: Created local directory: /tmp/training/4e1c5259-7ae8-482c-ae77-94d3a0c51f91
17/10/05 05:03:16 INFO session.SessionState: Created HDFS directory: file:/tmp/spark-99a33db4-b69a-46a9-8032-f87d63299040/scratch/training/4e1c5259-7ae8-482c-ae77-94d3a0c51f91/_tmp_space.db
17/10/05 05:03:16 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
17/10/05 05:03:16 INFO json.JSONRelation: Listing hdfs://localhost:8020/user/training/people.json on driver
17/10/05 05:03:19 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 251.1 KB, free 251.1 KB)
17/10/05 05:03:20 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 21.6 KB, free 272.7 KB)
17/10/05 05:03:20 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55073 (size: 21.6 KB, free: 208.8 MB)
17/10/05 05:03:20 INFO spark.SparkContext: Created broadcast 0 from json at NativeMethodAccessorImpl.java:-2
17/10/05 05:03:20 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/05 05:03:21 INFO spark.SparkContext: Starting job: json at NativeMethodAccessorImpl.java:-2
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Got job 0 (json at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (json at NativeMethodAccessorImpl.java:-2)
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/10/05 05:03:21 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.3 KB, free 277.1 KB)
17/10/05 05:03:21 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.4 KB, free 279.5 KB)
17/10/05 05:03:21 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55073 (size: 2.4 KB, free: 208.8 MB)
17/10/05 05:03:21 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/10/05 05:03:21 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at NativeMethodAccessorImpl.java:-2)
17/10/05 05:03:21 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/10/05 05:03:21 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/05 05:03:21 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
17/10/05 05:03:21 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/05 05:03:21 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
17/10/05 05:03:21 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
17/10/05 05:03:21 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
17/10/05 05:03:21 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
17/10/05 05:03:21 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
17/10/05 05:03:22 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2354 bytes result sent to driver
17/10/05 05:03:22 INFO scheduler.DAGScheduler: ResultStage 0 (json at NativeMethodAccessorImpl.java:-2) finished in 0.931 s
17/10/05 05:03:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 850 ms on localhost (1/1)
17/10/05 05:03:22 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/10/05 05:03:22 INFO scheduler.DAGScheduler: Job 0 finished: json at NativeMethodAccessorImpl.java:-2, took 1.388410 s
17/10/05 05:03:23 INFO hive.HiveContext: default warehouse location is /user/hive/warehouse
17/10/05 05:03:23 INFO hive.HiveContext: Initializing metastore client version 1.1.0 using Spark classes.
17/10/05 05:03:23 INFO client.ClientWrapper: Inspected Hadoop version: 2.6.0-cdh5.7.0
17/10/05 05:03:23 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.6.0-cdh5.7.0
17/10/05 05:03:23 INFO spark.ContextCleaner: Cleaned accumulator 2
17/10/05 05:03:23 INFO storage.BlockManagerInfo: Removed broadcast_1_piece0 on localhost:55073 in memory (size: 2.4 KB, free: 208.8 MB)
17/10/05 05:03:25 INFO hive.metastore: Trying to connect to metastore with URI thrift://localhost.localdomain:9083
17/10/05 05:03:25 INFO hive.metastore: Opened a connection to metastore, current connections: 1
17/10/05 05:03:25 INFO hive.metastore: Connected to metastore.
17/10/05 05:03:25 INFO session.SessionState: Created local directory: /tmp/684b38e5-72f0-4712-81d4-4c439e093f5c_resources
17/10/05 05:03:25 INFO session.SessionState: Created HDFS directory: /tmp/hive/training/684b38e5-72f0-4712-81d4-4c439e093f5c
17/10/05 05:03:25 INFO session.SessionState: Created local directory: /tmp/training/684b38e5-72f0-4712-81d4-4c439e093f5c
17/10/05 05:03:25 INFO session.SessionState: Created HDFS directory: /tmp/hive/training/684b38e5-72f0-4712-81d4-4c439e093f5c/_tmp_space.db
17/10/05 05:03:25 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
In [3]: peopleDF.limit(3).show()
17/10/05 05:04:09 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 65.5 KB, free 338.2 KB)
17/10/05 05:04:10 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 21.4 KB, free 359.6 KB)
17/10/05 05:04:10 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:55073 (size: 21.4 KB, free: 208.8 MB)
17/10/05 05:04:10 INFO spark.SparkContext: Created broadcast 2 from showString at NativeMethodAccessorImpl.java:-2
17/10/05 05:04:10 INFO storage.MemoryStore: Block broadcast_3 stored as values in memory (estimated size 251.1 KB, free 610.7 KB)
17/10/05 05:04:11 INFO storage.MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 21.6 KB, free 632.4 KB)
17/10/05 05:04:11 INFO storage.BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:55073 (size: 21.6 KB, free: 208.7 MB)
17/10/05 05:04:11 INFO spark.SparkContext: Created broadcast 3 from showString at NativeMethodAccessorImpl.java:-2
17/10/05 05:04:12 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/05 05:04:12 INFO spark.SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Got job 1 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Final stage: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2)
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[9] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/10/05 05:04:12 INFO storage.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 5.9 KB, free 638.2 KB)
17/10/05 05:04:12 INFO storage.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 3.3 KB, free 641.5 KB)
17/10/05 05:04:12 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:55073 (size: 3.3 KB, free: 208.7 MB)
17/10/05 05:04:12 INFO spark.SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006
17/10/05 05:04:12 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at showString at NativeMethodAccessorImpl.java:-2)
17/10/05 05:04:12 INFO scheduler.TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/10/05 05:04:12 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/05 05:04:12 INFO executor.Executor: Running task 0.0 in stage 1.0 (TID 1)
17/10/05 05:04:12 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/05 05:04:14 INFO codegen.GenerateUnsafeProjection: Code generated in 1563.240244 ms
17/10/05 05:04:14 INFO codegen.GenerateSafeProjection: Code generated in 182.529448 ms
17/10/05 05:04:15 INFO executor.Executor: Finished task 0.0 in stage 1.0 (TID 1). 2328 bytes result sent to driver
17/10/05 05:04:15 INFO scheduler.DAGScheduler: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2) finished in 2.549 s
17/10/05 05:04:15 INFO scheduler.DAGScheduler: Job 1 finished: showString at NativeMethodAccessorImpl.java:-2, took 2.852393 s
17/10/05 05:04:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 1) in 2547 ms on localhost (1/1)
17/10/05 05:04:15 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
+----+-------+-----+-----+
| age| name|pcode| pcoe|
+----+-------+-----+-----+
|null| Alice|94304| null|
| 30|Brayden|94304| null|
| 19| Carla| null|10036|
+----+-------+-----+-----+
In [4]:
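In the result, fields that are absent from a record come back as null: Alice has no age, and only Carla has a value in the typo column pcoe. Here the file fits in a single partition, so limit(3) returns the first three records in file order, but in general limit() on an unsorted DataFrame makes no ordering guarantee. If a deterministic subset is needed, one option, as a sketch, is to sort first:

peopleDF.orderBy("name").limit(3).show()   # first 3 records by name, deterministically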