[Spark][Python] Spark map processing
map applies a function to every element of an RDD and produces a new RDD of the results. Like all RDD transformations it is lazy: no job runs when map is called, only when an action such as take() or collect() is invoked, which is why the Spark log output in the session below appears only at the take() calls.
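As a minimal sketch of the idea (assuming an interactive PySpark shell where sc is already available, as in the session below), map produces exactly one output element per input element:

nums = sc.parallelize([1, 2, 3, 4])      # build an RDD from a local Python list
doubled = nums.map(lambda x: x * 2)      # transformation only; nothing runs yet
doubled.collect()                        # action; returns [2, 4, 6, 8]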
[training@localhost ~]$ cat names.txt
Year,First Name,County,Sex,Count
2012,DOMINIC,CAYUGA,M,6
2012,ADDISON,ONONDAGA,F,14
2012,ADDISON,ONONDAGA,F,14
2012,JULIA,ONONDAGA,F,15
[training@localhost ~]$ hdfs dfs -put names.txt
[training@localhost ~]$ hdfs dfs -cat names.txt
Year,First Name,County,Sex,Count
2012,DOMINIC,CAYUGA,M,6
2012,ADDISON,ONONDAGA,F,14
2012,ADDISON,ONONDAGA,F,14
2012,JULIA,ONONDAGA,F,15
[training@localhost ~]$
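Note that the relative path given to sc.textFile below is resolved against the current user's HDFS home directory (/user/training here, as the log output later confirms). An equivalent call with an explicit URI, using the namenode address that appears in those logs, would be:

t_names = sc.textFile("hdfs://localhost:8020/user/training/names.txt")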
In [98]: t_names = sc.textFile("names.txt")
...
17/09/24 06:24:23 INFO spark.SparkContext: Created broadcast 27 from textFile at NativeMethodAccessorImpl.java:-2
In [99]: rows=t_names.map(lambda line: line.split(","))
In [100]: rows.take(1)
17/09/24 06:25:23 INFO mapred.FileInputFormat: Total input paths to process : 1
17/09/24 06:25:23 INFO spark.SparkContext: Starting job: runJob at PythonRDD.scala:393
17/09/24 06:25:23 INFO scheduler.DAGScheduler: Got job 15 (runJob at PythonRDD.scala:393) with 1 output partitions
...
17/09/24 06:25:24 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/names.txt:0+136
...
17/09/24 06:25:24 INFO scheduler.DAGScheduler: Job 15 finished: runJob at PythonRDD.scala:393, took 1.160085 s
Out[100]: [[u'Year', u'First Name', u'County', u'Sex', u'Count']]
In [101]: rows.take(2)
17/09/24 06:25:29 INFO spark.SparkContext: Starting job: runJob at PythonRDD.scala:393
17/09/24 06:25:29 INFO scheduler.DAGScheduler: Got job 16 (runJob at PythonRDD.scala:393) with 1 output partitions
...
17/09/24 06:25:30 INFO scheduler.DAGScheduler: Job 16 finished: runJob at PythonRDD.scala:393, took 0.408908 s
Out[101]:
[[u'Year', u'First Name', u'County', u'Sex', u'Count'],
[u'2012', u'DOMINIC', u'CAYUGA', u'M', u'6']]
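Note that the header line of the CSV file comes back as an ordinary data element, because textFile treats every line of the file the same way. A common next step, sketched here against the same rows RDD (the header and name_counts names are only illustrative), is to filter the header out and cast the numeric field:

header = rows.first()                                # ['Year', 'First Name', 'County', 'Sex', 'Count']
data = rows.filter(lambda r: r != header)            # drop the header row
name_counts = data.map(lambda r: (r[1], int(r[4])))  # (First Name, Count) pairs
name_counts.take(2)                                  # e.g. [(u'DOMINIC', 6), (u'ADDISON', 14)]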
Source:
https://www.supergloo.com/fieldnotes/apache-spark-transformations-python-examples/