Writing an Hadoop MapReduce Program in Python
In this tutorial I will describe how to write a simple MapReduce program for Hadoop in the Python programming language.
Motivation
Even though the Hadoop framework is written in Java, programs for Hadoop need not be coded in Java but can also be developed in other languages like Python or C++ (the latter since version 0.14.1). However, Hadoop’s documentation and the most prominent Python example on the Hadoop website could make you think that you must translate your Python code using Jython into a Java jar file. Obviously, this is not very convenient and can even be problematic if you depend on Python features not provided by Jython. Another issue with the Jython approach is the overhead of writing your Python program in such a way that it can interact with Hadoop – just have a look at the example in $HADOOP_HOME/src/examples/python/WordCount.py and you will see what I mean.
That said, the ground is now prepared for the purpose of this tutorial: writing a Hadoop MapReduce program in a more Pythonic way, i.e. in a way you should be familiar with.
What we want to do
We will write a simple MapReduce program (see also the MapReduce article on Wikipedia) for Hadoop in Python but without using Jython to translate our code to Java jar files.
Our program will mimic the classic WordCount example, i.e. it reads text files and counts how often words occur. The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.
Prerequisites
You should have a Hadoop cluster up and running because we will get our hands dirty. If you don’t have a cluster yet, the following tutorials might help you build one. The tutorials are tailored to Ubuntu Linux, but the information also applies to other Linux/Unix variants.
- Running Hadoop On Ubuntu Linux (Single-Node Cluster) – How to set up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS)
- Running Hadoop On Ubuntu Linux (Multi-Node Cluster) – How to set up a distributed, multi-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS)
Python MapReduce Code
The “trick” behind the following Python code is that we will use the Hadoop Streaming API (see also the corresponding wiki entry) to pass data between our Map and Reduce code via STDIN (standard input) and STDOUT (standard output). We will simply use Python’s sys.stdin to read input data and print our own output to sys.stdout. That’s all we need to do because Hadoop Streaming will take care of everything else!
Map step: mapper.py
Save the following code in the file /home/hduser/mapper.py. It will read data from STDIN, split it into words, and output a list of lines mapping words to their (intermediate) counts to STDOUT. The Map script will not compute an (intermediate) sum of a word’s occurrences, though. Instead, it will output <word> 1 tuples immediately – even though a specific word might occur multiple times in the input. In our case we let the subsequent Reduce step do the final sum count. Of course, you can change this behavior in your own scripts as you please, but we will keep it like that in this tutorial for didactic reasons. :-)
Make sure the file has execution permission (chmod +x /home/hduser/mapper.py should do the trick) or you will run into problems.
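A minimal mapper along these lines might look like the following sketch (written in Python 2, matching the Hadoop installations of that era; on Python 3 you would use the print() function instead):

```python
#!/usr/bin/env python

import sys

# Input comes from STDIN (standard input).
for line in sys.stdin:
    # Remove leading and trailing whitespace.
    line = line.strip()
    # Split the line into words on whitespace.
    words = line.split()
    for word in words:
        # Write the results to STDOUT (standard output); what we
        # output here will be the input for the Reduce step, i.e.
        # the input for reducer.py.
        # Tab-delimited; the trivial word count is 1.
        print '%s\t%s' % (word, 1)
```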
Reduce step: reducer.py
Save the following code in the file /home/hduser/reducer.py. It will read the results of mapper.py from STDIN (so the output format of mapper.py and the expected input format of reducer.py must match), sum the occurrences of each word to a final count, and then output its results to STDOUT.
Make sure the file has execution permission (chmod +x /home/hduser/reducer.py should do the trick) or you will run into problems.
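A reducer in the same spirit (again a Python 2 sketch; note that it only works because Hadoop sorts the map output by key before it reaches the reducer):

```python
#!/usr/bin/env python

import sys

current_word = None
current_count = 0
word = None

# Input comes from STDIN (the sorted output of mapper.py).
for line in sys.stdin:
    line = line.strip()
    # Parse the input we got from mapper.py.
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore this line.
        continue

    # This if-switch only works because Hadoop sorts the map
    # output by key (here: word) before passing it to the reducer.
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # Write the result for the previous word to STDOUT.
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# Do not forget to output the last word!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
```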
Test your code (cat data | map | sort | reduce)
I recommend testing your mapper.py and reducer.py scripts locally before using them in a MapReduce job. Otherwise your jobs might complete successfully but produce no job result data at all, or not the results you would have expected. If that happens, most likely it was you (or me) who screwed up.
Here are some ideas on how to test the functionality of the Map and Reduce scripts.
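For example, you can simulate the whole pipeline in the shell, using sort to stand in for Hadoop’s shuffle-and-sort phase (the word list below is just an arbitrary test string, and the ebook file name is a placeholder for whichever file you downloaded):

```bash
# Test the mapper on its own
echo "foo foo quux labs foo bar quux" | /home/hduser/mapper.py

# Test the full pipeline; sort -k1,1 simulates Hadoop's
# shuffle-and-sort between the Map and Reduce steps.
# Expected output:
#   bar   1
#   foo   3
#   labs  1
#   quux  2
echo "foo foo quux labs foo bar quux" | /home/hduser/mapper.py | sort -k1,1 | /home/hduser/reducer.py

# You can also feed one of the ebooks we download below through
# the mapper (file name is a placeholder):
cat /tmp/gutenberg/some-ebook.txt | /home/hduser/mapper.py | head
```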
Running the Python Code on Hadoop
Download example input data
We will use three ebooks from Project Gutenberg for this example:
- The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
- The Notebooks of Leonardo Da Vinci
- Ulysses by James Joyce
Download each ebook as a text file in Plain Text UTF-8 encoding and store the files in a local temporary directory of your choice, for example /tmp/gutenberg.
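After downloading, a quick listing should show the three files (the exact names and sizes depend on how you saved the ebooks):

```bash
hduser@ubuntu:~$ ls -l /tmp/gutenberg
# one .txt file per downloaded ebook
```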
Copy local example data to HDFS
Before we run the actual MapReduce job, we must first copy the files from our local file system to Hadoop’s HDFS.
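Assuming the setup from the tutorials above (Hadoop installed in /usr/local/hadoop, running as user hduser), the copy and a quick sanity check look like this:

```bash
# Copy the local directory into HDFS, then list it
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls /user/hduser/gutenberg
```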
Run the MapReduce job
Now that everything is prepared, we can finally run our Python MapReduce job on the Hadoop cluster. As I said above, we leverage the Hadoop Streaming API to pass data between our Map and Reduce code via STDIN and STDOUT.
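The job is submitted via the streaming jar that ships with Hadoop; its exact path and file name depend on your Hadoop version, which the wildcard below is one way to cope with. The -file options ship our scripts to the cluster nodes:

```bash
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar \
    -file /home/hduser/mapper.py  -mapper /home/hduser/mapper.py \
    -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
    -input /user/hduser/gutenberg/* -output /user/hduser/gutenberg-output
```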
If you want to modify some Hadoop settings on the fly, such as increasing the number of Reduce tasks, you can use the -D option:
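For example (note that generic options like -D must precede the streaming-specific options):

```bash
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar \
    -D mapred.reduce.tasks=16 \
    -file /home/hduser/mapper.py  -mapper /home/hduser/mapper.py \
    -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
    -input /user/hduser/gutenberg/* -output /user/hduser/gutenberg-output
```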
The job will read all the files in the HDFS directory /user/hduser/gutenberg, process them, and store the results in the HDFS directory /user/hduser/gutenberg-output. In general, Hadoop will create one output file per reducer; in our case, however, it will only create a single file because the input files are very small.
While the job runs, Hadoop prints progress information to the console – among other things the percentage of completed map and reduce tasks – and reports where the output has been written once the job finishes.
Hadoop also provides a basic web interface with statistics and information about the cluster and its jobs. When the Hadoop cluster is running, open http://localhost:50030/ in a browser and have a look around – you will find the job we just ran listed there.

Check if the result was successfully stored in the HDFS directory /user/hduser/gutenberg-output:
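For example:

```bash
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -ls /user/hduser/gutenberg-output
```

The listing should show a part-00000 file holding the results.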
You can then inspect the contents of the file with the dfs -cat command:
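For example (the output will be long – one line per distinct word):

```bash
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -cat /user/hduser/gutenberg-output/part-00000
```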
Note that the quote signs (") you may see enclosing some words in this output have not been inserted by Hadoop. They are the result of how our Python code splits words: split() only splits on whitespace, so a quote at the beginning of a word in the ebook texts stays attached to it. Just inspect the part-00000 file further to see it for yourself.
Improved Mapper and Reducer code: using Python iterators and generators
The Mapper and Reducer examples above should have given you an idea of how to create your first MapReduce application. The focus was on code simplicity and ease of understanding, particularly for beginners in the Python programming language. In a real-world application, however, you might want to optimize your code by using Python iterators and generators (an even better introduction is available in PDF).
Generally speaking, iterators and generators (functions that create iterators, for example with Python’s yield statement) have the advantage that an element of a sequence is not produced until you actually need it. This can help a lot in terms of computational expensiveness or memory consumption depending on the task at hand.
More precisely, we compute the sum of a word’s occurrences, e.g. ("foo", 4), only if by chance the same word (foo) appears multiple times in succession. In the majority of cases, however, we let Hadoop group the (key, value) pairs between the Map and the Reduce step because Hadoop is more efficient in this regard than our simple Python scripts.
mapper.py
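A generator-based mapper might look like this (again a Python 2 sketch; read_input yields word lists lazily instead of materializing them all up front):

```python
#!/usr/bin/env python
"""A more advanced Mapper, using Python iterators and generators."""

import sys

def read_input(file):
    for line in file:
        # Split the line into words, one list per line, lazily.
        yield line.split()

def main(separator='\t'):
    # Input comes from STDIN (standard input).
    data = read_input(sys.stdin)
    for words in data:
        # Write the results to STDOUT; tab-delimited,
        # the trivial word count is 1.
        for word in words:
            print '%s%s%d' % (word, separator, 1)

if __name__ == "__main__":
    main()
```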
reducer.py
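And a matching reducer built on itertools.groupby, which groups the consecutive, already-sorted (word, count) pairs by word:

```python
#!/usr/bin/env python
"""A more advanced Reducer, using Python iterators and generators."""

from itertools import groupby
from operator import itemgetter
import sys

def read_mapper_output(file, separator='\t'):
    for line in file:
        # Yield [word, count] pairs parsed from mapper.py's output.
        yield line.rstrip().split(separator, 1)

def main(separator='\t'):
    # Input comes from STDIN (the sorted output of mapper.py).
    data = read_mapper_output(sys.stdin, separator=separator)
    # groupby groups consecutive word-count pairs by word and
    # returns the key (current_word) together with an iterator
    # over that word's ["<word>", "<count>"] items.
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for _, count in group)
            print '%s%s%d' % (current_word, separator, total_count)
        except ValueError:
            # count was not a number, so silently discard this item.
            pass

if __name__ == "__main__":
    main()
```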
Related Links
From yours truly:
- Running Hadoop On Ubuntu Linux (Single-Node Cluster)
- Running Hadoop On Ubuntu Linux (Multi-Node Cluster)
Source: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/