I've got a big RDD (1 GB) in a YARN cluster. On the local machine that uses this cluster I have only 512 MB, and I'd like to iterate over the values of the RDD on my local machine. I can't use collect(), because it would create an array locally that is bigger than my heap. I need some iterative way. There is the method iterator(), but it requires additional information that I can't provide.

UPD: committed the toLocalIterator method

asked Feb 11 '14 at 9:55
epahomov

toLocalIterator is not ideal if you want to iterate locally over a partition at a time – Landon Kuhn Oct 29 '14 at 2:25
2  
@LandonKuhn why not? – Tom Yubing Dong Aug 4 '15 at 23:02

5 Answers

Update: The RDD.toLocalIterator method, which appeared after this answer was originally written, is a more efficient way to do the job. It uses runJob to evaluate only a single partition at each step.
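
For example, a minimal sketch of using it (not part of the original answer; it assumes each individual partition fits in driver memory):

// toLocalIterator streams the RDD to the driver one partition at a time,
// so the driver only ever holds a single partition's worth of data.
rdd.toLocalIterator.foreach { record =>
  // process each record locally, e.g. write it to a file or a tool's stdin
  println(record)
}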

TL;DR: the original answer below still gives a rough idea of how it works:

First of all, get the array of partition indexes:

val parts = rdd.partitions

Then create smaller RDDs by filtering out everything but a single partition. Collect the data from the smaller RDDs and iterate over the values of a single partition:

for (p <- parts) {
  val idx = p.index
  val partRdd = rdd.mapPartitionsWithIndex(
    (index, it) => if (index == idx) it else Iterator(),
    preservesPartitioning = true) // true to avoid reshuffling the RDD
  val data = partRdd.collect() // data contains all values from a single partition
                               // in the form of an array
  // Now you can do with the data whatever you want: iterate, save to a file, etc.
}

I didn't try this code, but it should work. Please write a comment if it doesn't compile. Of course, it will only work if the partitions are small enough. If they aren't, you can always increase the number of partitions with rdd.coalesce(numParts, true).
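
For instance, a hedged one-liner (64 is just an assumed partition count, not a value from the answer):

val manyPartsRdd = rdd.coalesce(64, shuffle = true) // more, smaller partitions so each collect() stays within driver memory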

answered Feb 15 '14 at 18:33
Wildfire

does this code cause each partition to be computed serially when it loops through and calls mapPartitionsWithIndex? What's the best way to remedy this? – foboi1122 Nov 18 '15 at 0:42
    
@foboi1122 Please see updated answer – Wildfire Nov 18 '15 at 8:36 
    
@Wildfire Will this approach resolve this? If not, how can it be resolved, with this approach or another? – ChikuMiku 2 days ago


Wildfire's answer seems semantically correct, but I'm sure you should be able to be vastly more efficient by using Spark's API. If you want to process each partition in turn, I don't see why you can't use map/filter/reduce/reduceByKey/mapPartitions operations. The only time you'd want to have everything in one place in one array is when you're going to perform a non-monoidal operation - but that doesn't seem to be what you want. You should be able to do something like:

rdd.mapPartitions(recordsIterator => your code that processes a single chunk)

Or this

rdd.foreachPartition(partition => {
  val data = partition.toArray // materialize this partition's records on the executor
  // Your code
})
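
As a concrete illustration of that style (my own sketch, not from the answer: it just counts the records in each partition on the cluster, so only one small number per partition ever reaches the driver):

// Each partition is reduced to a single Int on the executors;
// collect() then only brings back one number per partition.
val countsPerPartition: Array[Int] = rdd
  .mapPartitions(recordsIterator => Iterator(recordsIterator.size))
  .collect()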
answered Mar 30 '14 at 11:05
samthebest

Don't these operators execute on the cluster? – epahomov Apr 3 '14 at 7:05
1  
Yes they will, but why are you avoiding that? If you can process each chunk in turn, you should be able to write the code in such a way that it can be distributed - like using aggregate. – samthebest Apr 3 '14 at 15:54
    
Isn't the iterator returned by foreachPartition the data iterator for a single partition - and not an iterator over all partitions? – javadba May 20 at 8:23

Here is the same approach as suggested by @Wildfire but written in PySpark.

The nice thing about this approach is that it lets the user access records in the RDD in order. I'm using this code to feed data from the RDD into the STDIN of a machine learning tool's process.

rdd = sc.parallelize(range(100), 10)

def make_part_filter(index):
    def part_filter(split_index, iterator):
        if split_index == index:
            for el in iterator:
                yield el
    return part_filter

for part_id in range(rdd.getNumPartitions()):
    part_rdd = rdd.mapPartitionsWithIndex(make_part_filter(part_id), True)
    data_from_part_rdd = part_rdd.collect()
    print "partition id: %s elements: %s" % (part_id, data_from_part_rdd)

Produces output:

partition id: 0 elements: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
partition id: 1 elements: [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
partition id: 2 elements: [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
partition id: 3 elements: [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]
partition id: 4 elements: [40, 41, 42, 43, 44, 45, 46, 47, 48, 49]
partition id: 5 elements: [50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
partition id: 6 elements: [60, 61, 62, 63, 64, 65, 66, 67, 68, 69]
partition id: 7 elements: [70, 71, 72, 73, 74, 75, 76, 77, 78, 79]
partition id: 8 elements: [80, 81, 82, 83, 84, 85, 86, 87, 88, 89]
partition id: 9 elements: [90, 91, 92, 93, 94, 95, 96, 97, 98, 99]
answered Jun 5 '15 at 20:07
vvladymyrov


Map/filter/reduce using Spark and download the results later? I think the usual Hadoop approach will work.

The API says that there are map, filter and saveAsTextFile operations: https://spark.incubator.apache.org/docs/0.8.1/scala-programming-guide.html#transformations
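
A minimal sketch of that approach (the predicate and the output path are assumptions for illustration, not from the answer):

// Filter on the cluster and write the (much smaller) result to HDFS;
// the driver never has to hold the full 1 GB in memory.
val filtered = rdd.filter(record => record.toString.contains("ERROR")) // hypothetical predicate
filtered.saveAsTextFile("hdfs:///tmp/filtered-output")                 // assumed output path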

answered Feb 11 '14 at 10:09
ya_pulser

Bad option. I don't want to do serialization/deserialization. So I want this data retrieved from Spark itself. – epahomov Feb 11 '14 at 10:37
    
How do you intend to get 1 GB without serde (i.e. storing on disk) on a node with 512 MB? – scrapcodes Feb 12 '14 at 9:13
1  
By iterating over the RDD. You should be able to get each partition in sequence to send each data item in sequence to the master, which can then pull them off the network and work on them. – interfect Feb 12 '14 at 18:07

For Spark 1.3.1, the format is as follows:

val parts = rdd.partitions
for (p <- parts) {
  val idx = p.index
  val partRdd = rdd.mapPartitionsWithIndex {
    case (index: Int, value: Iterator[(String, String, Float)]) =>
      if (index == idx) value else Iterator()
  }
  val dataPartitioned = partRdd.collect()
  // Apply further processing on dataPartitioned
}

 
