Apache Hadoop Cluster Installation Guide
Introduction:
Software: jdk-8u111-linux-x64.rpm, hadoop-2.8.0.tar.gz
http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz
OS: CentOS 6.8 x64
Host list and hardware:
master.hadoop            CPU:    MEM: 16G   DISK: 100G*
datanode[01:03].hadoop   CPU:    MEM: 8G    DISK: 100G*
I. System Initialization
# master.hadoop
shell > vim /etc/hosts
192.168.1.25 master.hadoop
192.168.1.27 datanode01.hadoop
192.168.1.28 datanode02.hadoop
192.168.1.29 datanode03.hadoop
shell > yum -y install epel-release
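The datanode entries follow a regular pattern (consecutive IPs, zero-padded names), so they can be generated rather than typed out; a minimal sketch, assuming the 192.168.1.27-29 range from the list above:

```shell
# Build the three datanode host entries from the pattern above.
hosts_block=$(for i in 1 2 3; do
  printf '192.168.1.2%d datanode0%d.hadoop\n' "$((i + 6))" "$i"
done)
echo "$hosts_block"
```

Appending `echo "$hosts_block" >> /etc/hosts` keeps the generated list and the file in step.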
shell > yum -y install ansible
shell > ssh-keygen    # generate an SSH key pair
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 root@datanode01.hadoop"
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 root@datanode02.hadoop"
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 root@datanode03.hadoop"
shell > vim /etc/ansible/hosts    # add a datanode group
[datanode]
datanode[01:03].hadoop
shell > ansible datanode -m shell -a 'useradd hadoop && echo hadoop | passwd --stdin hadoop'
shell > ansible datanode -m shell -a "echo '* - nofile 65536' >> /etc/security/limits.conf"
shell > ansible datanode -m copy -a 'src=/etc/hosts dest=/etc/hosts'    # sync the hosts file
shell > ansible datanode -m shell -a '/etc/init.d/iptables stop && chkconfig --del iptables'    # stop the firewall
shell > ansible datanode -m shell -a "sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config"    # disable SELinux
shell > ansible datanode -m shell -a "echo 'vm.swappiness = ' >> /etc/sysctl.conf"    # tune kernel parameters
shell > ansible datanode -m shell -a 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag'    # disable transparent hugepages now
shell > ansible datanode -m shell -a "echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag' >> /etc/rc.local"    # ...and on every boot
shell > ansible datanode -m shell -a 'reboot'
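One caveat with the `echo ... >> /etc/security/limits.conf` style of edit above: re-running it appends the same line again. A guarded append is idempotent; a minimal sketch against a temporary stand-in file rather than the real config:

```shell
limits=$(mktemp)                      # stand-in for /etc/security/limits.conf
line='* - nofile 65536'
# Append only when the exact line is not present yet.
grep -qxF -- "$line" "$limits" || echo "$line" >> "$limits"
grep -qxF -- "$line" "$limits" || echo "$line" >> "$limits"   # second run: no-op
cat "$limits"
```

The same guard works through ansible's shell module, though the `lineinfile` module expresses the intent more directly.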
# The ansible steps above must also be run on master.hadoop itself
II. Time Synchronization
# master.hadoop
shell > /bin/cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
shell > yum -y install ntp
shell > /etc/init.d/ntpd stop && chkconfig --del ntpd
shell > ntpdate us.pool.ntp.org && hwclock -w
shell > vim /etc/ntp.conf
# Allow clients in this subnet to sync time from us
restrict 192.168.1.0 mask 255.255.255.0 nomodify
# Upstream server this host syncs from
server us.pool.ntp.org prefer
# Fall back to the local clock when the upstream server is unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
shell > /etc/init.d/ntpd start
shell > echo -e '\n/usr/sbin/ntpdate us.pool.ntp.org && hwclock -w > /dev/null' >> /etc/rc.local
shell > echo -e '\n/etc/init.d/ntpd start > /dev/null' >> /etc/rc.local
shell > ansible datanode -m shell -a 'yum -y install ntpdate'
shell > ansible datanode -m shell -a '/bin/cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime'
shell > ansible datanode -m shell -a 'ntpdate master.hadoop && hwclock -w'
shell > ansible datanode -m cron -a "name='ntpdate master.hadoop' minute=0 hour=0 job='/usr/sbin/ntpdate master.hadoop && hwclock -w > /dev/null'"
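The `restrict` line above admits any client in 192.168.1.0/24. As a sanity check, subnet membership for a /24 reduces to a dotted-prefix match; a tiny glob-based sketch (real netmask arithmetic would be needed for masks other than /24):

```shell
in_ntp_subnet() {
  # True when the argument falls inside 192.168.1.0/255.255.255.0.
  case "$1" in
    192.168.1.*) return 0 ;;
    *)           return 1 ;;
  esac
}
in_ntp_subnet 192.168.1.27 && echo 'datanode01 allowed'
in_ntp_subnet 10.0.0.5     || echo '10.0.0.5 rejected'
```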
III. Cluster Deployment
# master.hadoop
1. Install the JDK, download and unpack Apache Hadoop, and set up passwordless SSH for the hadoop user between hosts
shell > rpm -ivh /usr/local/src/jdk-8u111-linux-x64.rpm
shell > echo 'export JAVA_HOME=/usr/java/default' >> /etc/profile && source /etc/profile
shell > tar zxf /usr/local/src/hadoop-2.8.0.tar.gz -C /usr/local/
shell > chown -R hadoop.hadoop /usr/local/hadoop-2.8.0
shell > echo -e '\nexport PATH=$PATH:/usr/local/hadoop-2.8.0/bin' >> /etc/profile && source /etc/profile
shell > su - hadoop
hadoop shell > ssh-keygen
hadoop shell > cat .ssh/id_rsa.pub > .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 hadoop@datanode01.hadoop"
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 hadoop@datanode02.hadoop"
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 hadoop@datanode03.hadoop"
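Since the three `ssh-copy-id` invocations differ only in the hostname, a loop keeps them in step with the slave list; a sketch that echoes the commands instead of running them (drop the `echo` once the hosts are reachable):

```shell
# Generate one ssh-copy-id command per datanode.
cmds=$(for h in datanode01 datanode02 datanode03; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 hadoop@${h}.hadoop"
done)
echo "$cmds"
```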
2. Configure Apache Hadoop
# List the slaves, i.e. the hosts that run the DataNode and NodeManager roles
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/slaves
datanode01.hadoop
datanode02.hadoop
datanode03.hadoop
# Edit hadoop-env.sh
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/default
# Edit core-site.xml
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master.hadoop:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///data/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value></value>
    </property>
</configuration>
# Hadoop core configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml
# fs.defaultFS          NameNode IP:PORT; older versions called this fs.default.name
# hadoop.tmp.dir        Hadoop's temp directory; many paths that are not set explicitly default to subdirectories of it (default /tmp, which is wiped on reboot), so it matters!
# fs.trash.interval     Enables trash, 1440 minutes here; default 0 (disabled). Files deleted through the filesystem shell are moved to the trash and purged after 24 hours.
# io.file.buffer.size   Read/write buffer size for stream files, reducing the number of IO operations; default 4096 bytes
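After editing, it is easy to mis-nest a `<property>` block. A quick sed-based extraction confirms a value landed where expected; a sketch run against an inline sample here rather than the real file:

```shell
conf='<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master.hadoop:8020</value>
  </property>
</configuration>'
# Print the <value> on the line following the matching <name>.
fs_default=$(printf '%s\n' "$conf" \
  | sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}')
echo "$fs_default"
```

For anything beyond a spot check, `xmllint --noout core-site.xml` validates the whole file when libxml2 is installed.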
# Edit hdfs-site.xml
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.blocksize</name>
        <value></value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value></value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/dfs/nn</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///data/dfs/sn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/dfs/dn</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value></value>
    </property>
</configuration>
# HDFS configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
# dfs.hosts / dfs.hosts.exclude   Allow or exclude specific DataNodes from connecting to the NameNode
# dfs.blocksize                   Block size, default 134217728 (128M)
# dfs.replication                 Default replication factor, i.e. how many copies of each block are kept
# dfs.namenode.name.dir           Where the NameNode stores its metadata; several comma-separated directories may be given for redundancy!
# dfs.namenode.checkpoint.dir     SecondaryNameNode storage directory; this role merges the NameNode's edit log into the fsimage
# dfs.datanode.data.dir           Where DataNodes store block data; several comma-separated directories are written in round-robin, increasing write throughput (each directory should sit on its own disk)
# dfs.namenode.handler.count      Number of NameNode threads for DataNode communication, default 10; raising it can improve performance at the cost of extra resources
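The two numeric defaults quoted above are easy to re-derive, which helps when choosing non-default values:

```shell
# dfs.blocksize default: 128 MB expressed in bytes.
blocksize=$((128 * 1024 * 1024))
# fs.trash.interval used earlier: 24 hours expressed in minutes.
trash_minutes=$((24 * 60))
echo "blocksize=$blocksize trash_minutes=$trash_minutes"
# blocksize=134217728 trash_minutes=1440
```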
# Edit yarn-site.xml
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master.hadoop</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${yarn.log.dir}/userlogs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
# YARN configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
# yarn.resourcemanager.hostname          ResourceManager host; the various port bindings default to addresses on this host
# yarn.resourcemanager.scheduler.class   Scheduling algorithm: CapacityScheduler (capacity), FairScheduler (fair share), FifoScheduler (first in, first out)
# yarn.nodemanager.log-dirs              NodeManager log directory
# yarn.nodemanager.remote-app-log-dir    Directory (on HDFS) that application logs are aggregated into
# Edit mapred-site.xml
hadoop shell > cat /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml.template > /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master.hadoop:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master.hadoop:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/tmp/hadoop-yarn/staging</value>
    </property>
</configuration>
# MapReduce configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
# mapreduce.framework.name            Run MapReduce on YARN
# yarn.app.mapreduce.am.staging-dir   Staging directory used when submitting jobs; the history directories mapreduce.jobhistory.done-dir and mapreduce.jobhistory.intermediate-done-dir default to paths under it
hadoop shell > exit
3. Deploy the Slaves
shell > ansible datanode -m copy -a 'src=/usr/local/src/jdk-8u111-linux-x64.rpm dest=/usr/local/src/'
shell > yum -y install rsync
shell > ansible datanode -m shell -a 'yum -y install rsync'
shell > ansible datanode -m synchronize -a 'src=/usr/local/hadoop-2.8.0 dest=/usr/local/'    # I first tried the copy module and it was painfully slow; synchronize wraps rsync and is fast!
shell > ansible datanode -m shell -a 'rpm -ivh /usr/local/src/jdk-8u111-linux-x64.rpm'
shell > ansible datanode -m shell -a "echo -e '\nexport JAVA_HOME=/usr/java/default' >> /etc/profile && source /etc/profile"
IV. Starting the Cluster
# master.hadoop
shell > chmod -R a+w /data
shell > ansible datanode -m shell -a 'chmod -R a+w /data'    # /data must be writable, otherwise `hdfs namenode -format` fails
shell > su - hadoop
hadoop shell > hdfs namenode -format    # the filesystem must be formatted before the first start
hadoop shell > sh /usr/local/hadoop-2.8.0/sbin/start-all.sh    # start all services / stop-all.sh stops them
hadoop shell > jps
ResourceManager
Jps
NameNode
SecondaryNameNode
# These are the roles running on master.hadoop
# http://192.168.1.25:50070 # NameNode
# http://192.168.1.25:8088     # ResourceManager
# http://192.168.1.25:19888    # MapReduce JobHistory Server web UI (service RPC on :10020)
# datanode.hadoop
hadoop shell > jps
Jps
DataNode
NodeManager
# These are the roles running on the datanodes
hadoop shell > hdfs dfs -ls
ls: `.': No such file or directory
hadoop shell > hdfs dfs -mkdir /user
hadoop shell > hdfs dfs -mkdir /user/hadoop
hadoop shell > hdfs dfs -ls
# Create a home directory on HDFS for the hadoop user
V. Running an Example
# master.hadoop
hadoop shell > hdfs dfs -put shakespeare.txt    # upload a local file to HDFS
hadoop shell > hdfs dfs -ls
Found 1 items
-rw-r--r--   hadoop supergroup   shakespeare.txt
hadoop shell > hadoop jar /usr/local/hadoop-2.8.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep shakespeare.txt outfile what    # run the bundled grep example: count occurrences of the word "what"
hadoop shell > hdfs dfs -ls
drwxr-xr-x   - hadoop supergroup   outfile
-rw-r--r--   hadoop supergroup   shakespeare.txt
hadoop shell > hdfs dfs -cat outfile/*
2309 what
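The figure reported by the job can be cross-checked on a local copy of the input: `grep -ow` emits one line per whole-word match, which approximates what the example's pattern counts. A sketch against a tiny sample file (shakespeare.txt itself is assumed to live only on the cluster):

```shell
sample=$(mktemp)
printf 'what is done is done\nwhat cannot be undone\n' > "$sample"
# One output line per whole-word match of "what", then count the lines.
count=$(grep -ow 'what' "$sample" | wc -l)
echo "$count"    # 2
rm -f "$sample"
```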
Troubleshooting:
1. bin/hdfs namenode -format    # error while formatting the filesystem
ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /data/dfs/namenode/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:)
# Fix
shell > chmod -R a+w /data
shell > ansible datanode -m shell -a 'chmod -R a+w /data'
2. WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable    # a mysterious but harmless warning