Writing an Hadoop MapReduce Program in Python
In this tutorial I will describe how to write a simple MapReduce program for Hadoop in the Python programming language.
Motivation
Even though the Hadoop framework is written in Java, programs for Hadoop need not be written in Java but can also be developed in other languages like Python or C++ (the latter since version 0.14.1). However, Hadoop’s documentation and the most prominent Python example on the Hadoop website could make you think that you must translate your Python code into a Java jar file using Jython. Obviously, this is not very convenient and can even be problematic if you depend on Python features not provided by Jython. Another issue of the Jython approach is the overhead of writing your Python program in such a way that it can interact with Hadoop – just have a look at the example in $HADOOP_HOME/src/examples/python/WordCount.py and you will see what I mean.
That said, the ground is now prepared for the purpose of this tutorial: writing a Hadoop MapReduce program in a more Pythonic way, i.e. in a way that should already feel familiar to you.
What we want to do
We will write a simple MapReduce program (see also the MapReduce article on Wikipedia) for Hadoop in Python but without using Jython to translate our code to Java jar files.
Our program will mimic the WordCount example, i.e. it reads text files and counts how often words occur. The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.
Prerequisites
You should have a Hadoop cluster up and running because we will get our hands dirty. If you don’t have a cluster yet, the following tutorials might help you build one. They are tailored to Ubuntu Linux, but the information also applies to other Linux/Unix variants.
- Running Hadoop On Ubuntu Linux (Single-Node Cluster) – How to set up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS)
- Running Hadoop On Ubuntu Linux (Multi-Node Cluster) – How to set up a distributed, multi-node Hadoop cluster backed by the Hadoop Distributed File System (HDFS)
Python MapReduce Code
The “trick” behind the following Python code is that we will use the Hadoop Streaming API (see also the corresponding wiki entry) to pass data between our Map and Reduce code via STDIN (standard input) and STDOUT (standard output). We will simply use Python’s sys.stdin to read input data and print our own output to sys.stdout. That’s all we need to do because Hadoop Streaming will take care of everything else!
Map step: mapper.py
Save the following code in the file /home/hduser/mapper.py. It will read data from STDIN, split it into words, and output a list of lines mapping words to their (intermediate) counts to STDOUT. The Map script will not compute an (intermediate) sum of a word’s occurrences, though. Instead, it will immediately output "<word> 1" pairs – even though a specific word might occur multiple times in the input. In our case we let the subsequent Reduce step do the final sum count. Of course, you can change this behavior in your own scripts as you please, but we will keep it like that in this tutorial for didactic reasons. :-)
Make sure the file has execution permission (chmod +x /home/hduser/mapper.py should do the trick) or you will run into problems.
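A minimal mapper along these lines might look like the following sketch (written to run under both Python 2 and Python 3):

```python
#!/usr/bin/env python
"""mapper.py: read lines from STDIN, emit one tab-separated (word, 1) pair per word."""

import sys

# Input comes from standard input (STDIN).
for line in sys.stdin:
    # Remove leading/trailing whitespace and split the line into words.
    words = line.strip().split()
    for word in words:
        # Write the results to standard output (STDOUT); what we output
        # here will be the input for the Reduce step. Hadoop Streaming
        # treats everything up to the first tab on a line as the key.
        print('%s\t%s' % (word, 1))
```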
Reduce step: reducer.py
Save the following code in the file /home/hduser/reducer.py. It will read the results of mapper.py from STDIN (so the output format of mapper.py and the expected input format of reducer.py must match), sum the occurrences of each word into a final count, and output the results to STDOUT.
Make sure the file has execution permission (chmod +x /home/hduser/reducer.py should do the trick) or you will run into problems.
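A matching reducer might look like this sketch; it relies on the fact that Hadoop Streaming sorts the map output by key before it reaches the reducer, so all counts for the same word arrive on consecutive lines:

```python
#!/usr/bin/env python
"""reducer.py: read tab-separated (word, count) pairs from STDIN, sum them per word."""

import sys

current_word = None
current_count = 0

# Input comes from STDIN, already sorted by word.
for line in sys.stdin:
    # Parse the pair produced by mapper.py; skip malformed lines.
    parts = line.strip().split('\t', 1)
    if len(parts) != 2:
        continue
    word, count = parts
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore this line.
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word is not None:
            # All lines for current_word have been seen: emit its total.
            print('%s\t%s' % (current_word, current_count))
        current_word = word
        current_count = count

# Don't forget the very last word!
if current_word is not None:
    print('%s\t%s' % (current_word, current_count))
```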
Test your code (cat data | map | sort | reduce)
I recommend testing your mapper.py and reducer.py scripts locally before using them in a MapReduce job. Otherwise your jobs might complete successfully but produce no result data at all, or results other than what you expected. If that happens, most likely it was you (or me) who screwed up.
Here are some ideas on how to test the functionality of the Map and Reduce scripts.
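For example, you can feed the mapper some text by hand and chain the scripts together with sort, which stands in for Hadoop's shuffle-and-sort phase (the sample text and file path below are just placeholders):

```bash
# Test the mapper on its own:
echo "foo foo quux labs foo bar quux" | /home/hduser/mapper.py

# Simulate the whole pipeline: map, sort by key, reduce:
echo "foo foo quux labs foo bar quux" | /home/hduser/mapper.py | sort -k1,1 | /home/hduser/reducer.py

# The same with a real input file:
cat /tmp/gutenberg/some-ebook.txt | /home/hduser/mapper.py | sort -k1,1 | /home/hduser/reducer.py
```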
Running the Python Code on Hadoop
Download example input data
We will use three ebooks from Project Gutenberg for this example:
- The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson
- The Notebooks of Leonardo Da Vinci
- Ulysses by James Joyce
Download each ebook as a text file in Plain Text UTF-8 encoding and store the files in a local temporary directory of your choice, for example /tmp/gutenberg.
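One way to fetch them from the command line is sketched below; the Project Gutenberg ebook numbers and URLs are assumptions and may have changed since this tutorial was written:

```bash
mkdir -p /tmp/gutenberg
cd /tmp/gutenberg
# Ebook numbers (assumed): 20417 = The Outline of Science,
# 5000 = The Notebooks of Leonardo Da Vinci, 4300 = Ulysses.
wget http://www.gutenberg.org/files/20417/20417-8.txt
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/files/4300/4300-8.txt
ls -l /tmp/gutenberg
```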
Copy local example data to HDFS
Before we run the actual MapReduce job, we must first copy the files from our local file system to Hadoop’s HDFS.
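With the hadoop command-line tool this takes two commands; they are run from $HADOOP_HOME and assume the user hduser from the cluster tutorials above:

```bash
# Copy the local directory into HDFS ...
bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hduser/gutenberg

# ... and verify that the files arrived.
bin/hadoop dfs -ls /user/hduser/gutenberg
```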
Run the MapReduce job
Now that everything is prepared, we can finally run our Python MapReduce job on the Hadoop cluster. As I said above, we leverage the Hadoop Streaming API to pass data between our Map and Reduce code via STDIN and STDOUT.
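The invocation looks roughly like the sketch below. The path and file name of the streaming jar vary between Hadoop releases, so adjust them to your installation; the -file options ship the two scripts to the compute nodes so that every map and reduce task can find them:

```bash
bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar \
  -file /home/hduser/mapper.py  -mapper /home/hduser/mapper.py \
  -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
  -input /user/hduser/gutenberg/* -output /user/hduser/gutenberg-output
```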
If you want to modify some Hadoop settings on the fly, such as increasing the number of Reduce tasks, you can use the -D option:
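For example, to ask for 16 reduce tasks (note that -D options must come before the other streaming options, and that Hadoop treats mapred.reduce.tasks as a hint rather than a hard guarantee):

```bash
bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar \
  -D mapred.reduce.tasks=16 \
  -file /home/hduser/mapper.py  -mapper /home/hduser/mapper.py \
  -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py \
  -input /user/hduser/gutenberg/* -output /user/hduser/gutenberg-output
```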
The job will read all the files in the HDFS directory /user/hduser/gutenberg, process them, and store the results in the HDFS directory /user/hduser/gutenberg-output. In general Hadoop will create one output file per reducer; in our case, however, it will only create a single file because the input files are very small.
While the job runs, Hadoop prints progress information to the console, including the percentage of completed map and reduce tasks and a tracking URL for the job.
As that console output indicates, Hadoop also provides a basic web interface for statistics and information. When the Hadoop cluster is running, open http://localhost:50030/ (the JobTracker web UI) in a browser and have a look around.
[Screenshot: Hadoop JobTracker web interface for the job we just ran]
Check that the result was successfully stored in the HDFS directory /user/hduser/gutenberg-output:
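The reducer output ends up in files named part-00000, part-00001, and so on, one per reducer:

```bash
bin/hadoop dfs -ls /user/hduser/gutenberg-output
```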
You can then inspect the contents of the file with the dfs -cat command:
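For example (piping through head keeps the output short):

```bash
bin/hadoop dfs -cat /user/hduser/gutenberg-output/part-00000 | head
```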
Note that any quote signs (") enclosing words in this output have not been inserted by Hadoop. They are the result of how our Python code splits words: it splits on whitespace only, so a quote character at the start of a word in the ebook texts stays attached to it. Just inspect the part-00000 file further to see it for yourself.
Improved Mapper and Reducer code: using Python iterators and generators
The Mapper and Reducer examples above should have given you an idea of how to create your first MapReduce application. The focus was on code simplicity and ease of understanding, particularly for beginners of the Python programming language. In a real-world application, however, you might want to optimize your code by using Python iterators and generators (an even better introduction in PDF).
Generally speaking, iterators and generators (functions that create iterators, for example with Python’s yield statement) have the advantage that an element of a sequence is not produced until you actually need it. Depending on the task at hand, this can save a lot of computation time or memory.
To be precise, we compute the sum of a word’s occurrences, e.g. ("foo", 4), only if the same word (foo) happens to appear multiple times in succession. In the majority of cases, however, we let Hadoop group the (key, value) pairs between the Map and the Reduce step, because Hadoop is more efficient in this regard than our simple Python scripts.
mapper.py
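A generator-based mapper might look like this sketch:

```python
#!/usr/bin/env python
"""A more advanced mapper, using Python iterators and generators."""

import sys

def read_input(file):
    for line in file:
        # Yield the word list for each line lazily, one line at a time.
        yield line.split()

def main(separator='\t'):
    # Input comes from STDIN (standard input).
    data = read_input(sys.stdin)
    for words in data:
        # Write the results to STDOUT (standard output); what we output
        # here will be the input for the Reduce step.
        for word in words:
            print('%s%s%d' % (word, separator, 1))

if __name__ == '__main__':
    main()
```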
reducer.py
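And one possible matching reducer, built around itertools.groupby, which groups the already-sorted input stream by word so that each word's counts are summed in a single pass:

```python
#!/usr/bin/env python
"""A more advanced reducer, using Python iterators and generators."""

import sys
from itertools import groupby
from operator import itemgetter

def read_mapper_output(file, separator='\t'):
    for line in file:
        # Yield (word, count) pairs lazily.
        yield line.rstrip().split(separator, 1)

def main(separator='\t'):
    # Input comes from STDIN (standard input).
    data = read_mapper_output(sys.stdin, separator=separator)
    # groupby groups consecutive pairs by word; because Hadoop sorts
    # the map output by key, all pairs for a given word are adjacent.
    for current_word, group in groupby(data, itemgetter(0)):
        try:
            total_count = sum(int(count) for current_word, count in group)
            print('%s%s%d' % (current_word, separator, total_count))
        except ValueError:
            # count was not a number, so silently discard this item.
            pass

if __name__ == '__main__':
    main()
```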
Related Links
From yours truly:
- Running Hadoop On Ubuntu Linux (Single-Node Cluster)
- Running Hadoop On Ubuntu Linux (Multi-Node Cluster)
Source: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/