Hadoop MapReduce Next Generation - Setting up a Single Node Cluster
Purpose
This document describes how to set up and configure a single-node Hadoop installation so that you can quickly perform simple operations using Hadoop MapReduce and the Hadoop Distributed File System (HDFS).
Prerequisites
Supported Platforms
- GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
Required Software
Required software for Linux includes:
- Java™ must be installed. Recommended Java versions are described at HadoopJavaVersions.
- ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons.
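You can quickly verify both prerequisites before proceeding (a sanity check, assuming a typical Linux environment; the sshd check may vary by distribution):
$ java -version
$ ssh -V
$ pgrep sshd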
Installing Software
If your cluster doesn't have the requisite software, you will need to install it.
For example on Ubuntu Linux:
$ sudo apt-get install ssh
$ sudo apt-get install rsync
Download
To get a Hadoop distribution, download a recent stable release from one of the Apache Download Mirrors.
Prepare to Start the Hadoop Cluster
Unpack the downloaded Hadoop distribution. In the distribution, edit the file etc/hadoop/hadoop-env.sh to define some parameters as follows:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest

# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
Try the following command:
$ bin/hadoop
This will display the usage documentation for the hadoop script.
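As an additional sanity check (an optional step, run from the distribution root), you can also print the version of the unpacked release:
$ bin/hadoop version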
Now you are ready to start your Hadoop cluster in one of the three supported modes:
- Local (Standalone) Mode
- Pseudo-Distributed Mode
- Fully-Distributed Mode
Standalone Operation
By default, Hadoop is configured to run in a non-distributed mode, as a single Java process. This is useful for debugging.
The following example copies files from the unpacked configuration directory to use as input and then finds and displays every match of the given regular expression. Output is written to the given output directory.
$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
$ cat output/*
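If you rerun the example, remove the local output directory first, since MapReduce refuses to write to an output directory that already exists:
$ rm -r output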
Pseudo-Distributed Operation
Hadoop can also be run on a single node in pseudo-distributed mode, where each Hadoop daemon runs in a separate Java process.
Configuration
Use the following:
etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
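To confirm that these settings are picked up, you can query individual keys with the hdfs getconf tool (a quick optional check, run from the distribution root):
$ bin/hdfs getconf -confKey fs.defaultFS
$ bin/hdfs getconf -confKey dfs.replication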
Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost
If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
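If ssh still prompts for a passphrase after this, the usual cause is overly permissive file modes; on most systems restricting the authorized_keys file is sufficient:
$ chmod 0600 ~/.ssh/authorized_keys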
Execution
The following instructions are to run a MapReduce job locally. If you want to execute a job on YARN, see YARN on Single Node.
- Format the filesystem:
$ bin/hdfs namenode -format
- Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
The hadoop daemon log output is written to the $HADOOP_LOG_DIR directory (defaults to $HADOOP_HOME/logs). A quick way to verify that the daemons started is sketched after this list.
- Browse the web interface for the NameNode; by default it is available at:
- NameNode - http://localhost:50070/
- Make the HDFS directories required to execute MapReduce jobs:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
- Copy the input files into the distributed filesystem:
$ bin/hdfs dfs -put etc/hadoop input
- Run some of the examples provided:
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
- Examine the output files:
Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hdfs dfs -get output output
$ cat output/*
or
View the output files on the distributed filesystem:
$ bin/hdfs dfs -cat output/*
- When you're done, stop the daemons with:
$ sbin/stop-dfs.sh
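As mentioned in the start-dfs.sh step above, a quick way to confirm that the daemons are running is the JDK's jps tool (an optional check, assuming jps is on your PATH):
$ jps
The listing should typically include NameNode, DataNode, and SecondaryNameNode processes; if one is missing, check the logs under the log directory mentioned above.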
YARN on Single Node
You can run a MapReduce job on YARN in pseudo-distributed mode by setting a few parameters and additionally running the ResourceManager daemon and NodeManager daemon.
The following instructions assume that steps 1 through 4 of the above instructions have already been executed.
- Configure parameters as follows:
etc/hadoop/mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
- Start ResourceManager daemon and NodeManager daemon:
$ sbin/start-yarn.sh
- Browse the web interface for the ResourceManager; by default it is available at:
- ResourceManager - http://localhost:8088/
- Run a MapReduce job (an example invocation is sketched after this list).
- When you're done, stop the daemons with:
$ sbin/stop-yarn.sh
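For the "Run a MapReduce job" step above, the same examples jar used earlier works unchanged once mapreduce.framework.name is set to yarn; a minimal sketch, assuming the input directory from the earlier HDFS steps is still in place:
$ bin/hdfs dfs -rm -r output
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar grep input output 'dfs[a-z.]+'
$ bin/hdfs dfs -cat output/*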