Understanding mapper-reducer through groupby
Note: before the reduce phase runs, the map output has already been shuffled, i.e. sorted and grouped by key.

mapper.py
#!/usr/bin/env python
"""mapper.py"""
import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print('%s\t%s' % (word, 1))
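For a single input line such as "foo foo bar", this mapper emits one tab-separated pair per word occurrence; no aggregation happens yet, that is the reducer's job:

foo	1
foo	1
bar	1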
reducer.py
#!/usr/bin/env python
"""reducer.py"""
import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue
    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
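Because Hadoop Streaming merely connects these scripts with sorted text between them, the pair can be smoke-tested without a cluster. The sketch below simulates the whole job in pure Python (the input lines are made up for illustration; sorted() stands in for Hadoop's shuffle):

# Simulation of the Streaming pipeline: map -> shuffle -> reduce,
# equivalent in spirit to: cat input | ./mapper.py | sort | ./reducer.py
lines = ['foo foo quux', 'labs foo bar quux']   # hypothetical input

# Map phase: emit a (word, 1) pair for every word occurrence.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle phase: Hadoop sorts the map output by key before reducing.
shuffled = sorted(mapped)

# Reduce phase: the same IF-switch as above; it works only because
# equal keys are now adjacent.
current_word, current_count = None, 0
for word, count in shuffled:
    if current_word == word:
        current_count += count
    else:
        if current_word is not None:
            print('%s\t%d' % (current_word, current_count))
        current_word, current_count = word, count
if current_word is not None:
    print('%s\t%d' % (current_word, current_count))
# Output: bar 1, foo 3, labs 1, quux 2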
Improved Mapper and Reducer code: using Python iterators and generators
mapper.py
#!/usr/bin/env python
"""A more advanced Mapper, using Python iterators and generators."""
import sys

def read_input(file):
    for line in file:
        # split the line into words
        yield line.split()

def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_input(sys.stdin)
    for words in data:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        for word in words:
            print('%s%s%d' % (word, separator, 1))

if __name__ == "__main__":
    main()
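A quick way to watch the generator at work (an interactive sketch, assuming read_input from the mapper above is in scope; io.StringIO stands in for sys.stdin):

import io

stream = io.StringIO(u'foo foo bar\nbaz\n')
gen = read_input(stream)   # nothing is read yet: generators are lazy
print(next(gen))           # ['foo', 'foo', 'bar'] -- only the first line was consumed
print(next(gen))           # ['baz']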
reducer.py
#!/usr/bin/env python
"""A more advanced Reducer, using Python iterators and generators."""
from itertools import groupby
from operator import itemgetter
import sys

def read_mapper_output(file, separator='\t'):
    for line in file:
        yield line.rstrip().split(separator, 1)

def main(separator='\t'):
    # input comes from STDIN (standard input)
    data = read_mapper_output(sys.stdin, separator=separator)
    # groupby groups multiple word-count pairs by word,
    # and creates an iterator that returns consecutive keys and their group:
    #   current_word - string containing a word (the key)
    #   group - iterator yielding all ["<current_word>", "<count>"] items
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for current_word, count in group)
            print('%s%s%d' % (current_word, separator, total_count))
        except ValueError:
            # count was not a number, so silently discard this item
            pass

if __name__ == "__main__":
    main()
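The subtlety that makes this reducer correct is that itertools.groupby only merges consecutive equal keys, which is exactly why the shuffle (sort by key) must happen first. A small self-contained demo with hypothetical word-count pairs:

from itertools import groupby
from operator import itemgetter

pairs = [('cat', '1'), ('dog', '1'), ('cat', '1')]

# Unsorted: 'cat' occurs in two separate runs, so it forms two groups.
# This is what the reducer would see if Hadoop did not sort by key.
print([(k, sum(int(c) for _, c in g)) for k, g in groupby(pairs, itemgetter(0))])
# [('cat', 1), ('dog', 1), ('cat', 1)]

# Sorted first (what the shuffle guarantees): each key forms one group.
print([(k, sum(int(c) for _, c in g)) for k, g in groupby(sorted(pairs), itemgetter(0))])
# [('cat', 2), ('dog', 1)]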