[Spark][Python] DataFrame select example

A continuation of "[Spark][Python] Example of taking a limited number of records from a DataFrame".
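The session below assumes that peopleDF was already created, as in the previous post. A minimal sketch of that setup (assuming a Spark 1.x PySpark shell, where sc is the predefined SparkContext, and using the HDFS path that appears later in the HadoopRDD input-split log line):

    # Assumed setup from the previous post (Spark 1.x PySpark shell).
    # The path is taken from the input-split log line further down; adjust as needed.
    from pyspark.sql import HiveContext

    sqlContext = HiveContext(sc)   # sc is provided by the PySpark shell
    peopleDF = sqlContext.read.json("hdfs://localhost:8020/user/training/people.json")

With peopleDF in place, selecting a single column is just a transformation; nothing is computed until an action such as take() runs: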

In [4]: peopleDF.select("age")
Out[4]: DataFrame[age: bigint]

In [5]: myDF=people.select("age")
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-5-b5b723b62a49> in <module>()
----> 1 myDF=people.select("age")

NameError: name 'people' is not defined
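The NameError is expected here: the name people was never defined in this session. The DataFrame created earlier is called peopleDF, so the corrected call follows.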

In [6]: myDF=peopleDF.select("age")

In [7]: myDF.take(3)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 230.1 KB, free 871.7 KB)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 21.4 KB, free 893.1 KB)
17/10/05 05:13:02 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:55073 (size: 21.4 KB, free: 208.7 MB)
17/10/05 05:13:02 INFO spark.SparkContext: Created broadcast 5 from take at <ipython-input-7-745486715568>:1
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 251.1 KB, free 1144.2 KB)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 21.6 KB, free 1165.8 KB)
17/10/05 05:13:02 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:55073 (size: 21.6 KB, free: 208.7 MB)
17/10/05 05:13:02 INFO spark.SparkContext: Created broadcast 6 from take at <ipython-input-7-745486715568>:1
17/10/05 05:13:03 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/05 05:13:03 INFO spark.SparkContext: Starting job: take at <ipython-input-7-745486715568>:1
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Got job 2 (take at <ipython-input-7-745486715568>:1) with 1 output partitions
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (take at <ipython-input-7-745486715568>:1)
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[14] at take at <ipython-input-7-745486715568>:1), which has no missing parents
17/10/05 05:13:03 INFO storage.MemoryStore: Block broadcast_7 stored as values in memory (estimated size 4.3 KB, free 1170.2 KB)
17/10/05 05:13:03 INFO storage.MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 2.5 KB, free 1172.6 KB)
17/10/05 05:13:03 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on localhost:55073 (size: 2.5 KB, free: 208.7 MB)
17/10/05 05:13:03 INFO spark.SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1006
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[14] at take at <ipython-input-7-745486715568>:1)
17/10/05 05:13:03 INFO scheduler.TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
17/10/05 05:13:03 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/05 05:13:03 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 2)
17/10/05 05:13:03 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/05 05:13:03 INFO codegen.GenerateUnsafeProjection: Code generated in 113.719806 ms
17/10/05 05:13:03 INFO executor.Executor: Finished task 0.0 in stage 2.0 (TID 2). 2235 bytes result sent to driver
17/10/05 05:13:03 INFO scheduler.DAGScheduler: ResultStage 2 (take at <ipython-input-7-745486715568>:1) finished in 0.493 s
17/10/05 05:13:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 487 ms on localhost (1/1)
17/10/05 05:13:03 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Job 2 finished: take at <ipython-input-7-745486715568>:1, took 0.737231 s
Out[7]: [Row(age=None), Row(age=30), Row(age=19)]

In [8]:
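Stripping away the log noise, the whole pattern fits in a few lines. This is a sketch under the same assumptions as above (Spark 1.x sqlContext and the same people.json):

    # select() is a lazy transformation: it only returns DataFrame[age: bigint].
    ageDF = peopleDF.select("age")

    # take(3) is an action: it triggers the job seen in the log above and
    # returns up to three Row objects to the driver.
    rows = ageDF.take(3)
    print(rows)          # e.g. [Row(age=None), Row(age=30), Row(age=19)]

    # show(3) is an alternative that prints the rows in tabular form
    # instead of returning them.
    ageDF.show(3)

Note that Row(age=None) corresponds to a record in people.json that has no age field, and that the scheduler log shows the take() job needed only a single task on one partition, since three rows could be satisfied from the first partition of the file.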
