Installing Hadoop Series — Installing Hadoop
Even though JAVA_HOME is set globally, each user's environment can layer its own settings on top of it, so different users can run different Hadoop versions without conflict. The system-wide settings (e.g. in /etc/profile) look like this:
export JAVA_HOME=/usr/local/java/latest
export JRE_HOME=/usr/local/java/latest/jre
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=/home/hadoop/hadoop-1.0.3
export PATH=$PATH:$HADOOP_HOME/bin
The per-user settings (e.g. in the hadoop user's ~/.bashrc) then set:
export JAVA_HOME=/usr/local/java/latest
export HADOOP_HOME=/home/hadoop/hadoop-1.0.3
export PATH=$PATH:$HADOOP_HOME/bin
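Reload the shell configuration and confirm that the hadoop command resolves (assuming the per-user block above lives in ~/.bashrc; hadoop version is a standard Hadoop 1.x command):
# source ~/.bashrc
# hadoop version
This should print something like: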
Hadoop 1.0.3
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192
Compiled by hortonfo on Tue May 8 20:31:25 UTC 2012
From source with checksum e6b0c1e23dcf76907c5fecb4b832f3be
- Create an input folder under the hadoop directory, as sketched below.
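A minimal sketch of that step, assuming the install path from above and the bundled conf files as sample input:
# cd /home/hadoop/hadoop-1.0.3
# mkdir input
# cp conf/*.xml input
For reference, the top level of the Hadoop directory looks like this: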
build.xml hadoop-client-1.0.3.jar ivy
c++ hadoop-core-1.0.3.jar ivy.xml README.txt
CHANGES.txt hadoop-examples-1.0.3.jar lib sbin
conf hadoop-minicluster-1.0.3.jar libexec share
contrib hadoop-test-1.0.3.jar LICENSE.txt src
docs hadoop-tools-1.0.3.jar logs webapps
Next, edit the three site configuration files under conf/. In conf/core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/namesecondary</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop-1.0.3/tmp/hadoop-${user.name}</value>
</property>
</configuration>
In conf/hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.http.address</name>
<value>0.0.0.0:50070</value>
</property>
<property>
<name>dfs.secondary.http.address</name>
<value>0.0.0.0:28680</value>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:50010</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data</value>
</property>
</configuration>
In conf/mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
<property>
<name>mapred.job.tracker.http.address</name>
<value>0.0.0.0:50030</value>
</property>
<property>
<name>mapred.task.tracker.http.address</name>
<value>0.0.0.0:50060</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data/mapred/local</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/home/hadoop/hadoop-1.0.3/hdfs/data/system</value>
</property>
</configuration>
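Hadoop will create most of the directories referenced above itself, but you can create them up front (a sketch using the paths from the configs above; make sure they are owned by the hadoop user):
# mkdir -p /home/hadoop/hadoop-1.0.3/hdfs/name
# mkdir -p /home/hadoop/hadoop-1.0.3/hdfs/data
# mkdir -p /home/hadoop/hadoop-1.0.3/tmp
Then format the HDFS filesystem: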
# bin/hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
14/07/04 10:08:50 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop-ThinkPad/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.3
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by 'hortonfo' on Tue May 8 20:31:25 UTC 2012
************************************************************/
Re-format filesystem in /home/hadoop/hadoop-1.0.3/hdfs/name ? (Y or N) Y  // note: the Y must be uppercase
14/07/04 10:08:52 INFO util.GSet: VM type = 64-bit
14/07/04 10:08:52 INFO util.GSet: 2% max memory = 17.78 MB
14/07/04 10:08:52 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/04 10:08:52 INFO util.GSet: recommended=2097152, actual=2097152
14/07/04 10:08:52 INFO namenode.FSNamesystem: fsOwner=hadoop
14/07/04 10:08:53 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/04 10:08:53 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/04 10:08:53 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/07/04 10:08:53 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/07/04 10:08:53 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/04 10:08:53 INFO common.Storage: Image file of size 112 saved in 0 seconds.
14/07/04 10:08:53 INFO common.Storage: Storage directory /home/hadoop/hadoop-1.0.3/hdfs/name has been successfully formatted.
14/07/04 10:08:53 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-ThinkPad/127.0.1.1
************************************************************/
9) Start Hadoop
# bin/start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-namenode-hadoop-ThinkPad.out
localhost: starting datanode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-datanode-hadoop-ThinkPad.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-secondarynamenode-hadoop-ThinkPad.out
starting jobtracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-jobtracker-hadoop-ThinkPad.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hadoop-tasktracker-hadoop-ThinkPad.out
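Verify that all five daemons are running with the JDK's jps tool:
# jps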
11860 SecondaryNameNode
11621 DataNode
11944 JobTracker
22170 Jps
11326 NameNode
12175 TaskTracker
http://localhost:50030/ - Hadoop JobTracker status
http://localhost:50060/ - Hadoop TaskTracker status
http://localhost:50070/ - Hadoop DFS (NameNode) status
At this point, pseudo-distributed Hadoop is installed successfully. Now run the bundled WordCount example again, this time in pseudo-distributed mode, to get a feel for the MapReduce workflow.
Note that the job now runs against the DFS filesystem, so its input and output files live there as well.
First create the input directory in DFS:
# bin/hadoop dfs -mkdir input
# bin/hadoop dfs -copyFromLocal conf/* input
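Optionally confirm that the files landed in DFS:
# bin/hadoop dfs -ls input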
# bin/hadoop jar hadoop-examples-1.0.3.jar wordcount input output
14/07/03 21:58:42 INFO mapred.JobClient: Launched map tasks=19
14/07/03 21:58:42 INFO mapred.JobClient: Launched reduce tasks=1
14/07/03 21:58:42 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=66577
14/07/03 21:58:42 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/03 21:58:42 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=119704
14/07/03 21:58:42 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/07/03 21:58:42 INFO mapred.JobClient: Data-local map tasks=19
14/07/03 21:58:42 INFO mapred.JobClient: File Output Format Counters
14/07/03 21:58:42 INFO mapred.JobClient: Bytes Written=15997
After the job completes successfully, the results are saved in the output folder.
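To view them, read the output files directly from DFS (dfs -cat is the standard Hadoop 1.x command; output/* assumes the default part file names):
# bin/hadoop dfs -cat output/*
The tail of the sorted word counts looks like this: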
want 1
when 1
where 2
where, 1
which 17
who 3
will 8
with 5
worker 1
would 7
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" 1
you 1
12) When you are finished with Hadoop, shut down its daemons with the stop-all.sh script:
# bin/stop-all.sh
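Running jps afterwards should show only the Jps process itself, confirming that all daemons have stopped:
# jps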
PS: Standalone mode and pseudo-distributed mode are both intended for development and debugging. Real Hadoop clusters run in the third mode: fully-distributed mode.
Developing MapReduce programs with Eclipse
A roundup of developing Hadoop 2.x Map/Reduce projects in Eclipse:
http://blog.csdn.net/hitwengqi/article/details/8008203#
http://blog.sina.com.cn/s/blog_61ef49250100uvab.html
http://www.cnblogs.com/welbeckxu/category/346329.html