Outline for installing Docker and Hadoop from scratch on an enterprise intranet
Download the Apache projects from http://mirror.bit.edu.cn/apache/
Download the CentOS 7 installation ISO (roughly 7 GB).
Install CentOS 7.
Copy the Packages and repodata directories from the installation disc to the hard disk.
Set up an httpd service and point DocumentRoot in /etc/httpd/conf/httpd.conf at that directory.
service httpd start
If SELinux is enabled, use semanage / chcon / restorecon to keep the web root's security context consistent with /var/www, and check it with ls -Z (a sketch follows below).
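A minimal sketch of that web-root setup, assuming the ISO is mounted at /mnt/cdrom and the repository is served from /var/www/html/yum (both paths are illustrative, not from the original notes):

# copy the repository data off the mounted installation disc
mkdir -p /var/www/html/yum
cp -r /mnt/cdrom/Packages /mnt/cdrom/repodata /var/www/html/yum/
# give a non-default web root the same SELinux context as /var/www, then verify
semanage fcontext -a -t httpd_sys_content_t "/var/www/html/yum(/.*)?"
restorecon -Rv /var/www/html/yum
ls -Z /var/www/html/yum
service httpd start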
Once the web site is up,
write a repo file under /etc/yum.repos.d (the yum.repo example used by the Dockerfile below),
then test yum: yum clean all ; yum makecache
New rpm packages can also be dropped into the Packages directory, but the index database must be rebuilt with createrepo (see the sketch below).
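A hedged sketch of refreshing the local repository after new RPMs are added; the /var/www/html/yum path is the same illustrative path as above:

# rebuild the repodata index after copying new RPMs into Packages/
createrepo --update /var/www/html/yum
# on a client, verify the repo defined in /etc/yum.repos.d
yum clean all
yum makecache
yum repolist enabled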
Download Docker 1.9.
Install it with rpm.
Test it: service docker start
Download cSphere from csphere and study its install script (install it on a CentOS virtual machine that has Internet access, then use docker save / docker load to bring the images into the enterprise network, as sketched below); it is mainly there to make managing Docker easier.
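A minimal docker save / docker load sketch for carrying an image from an Internet-connected host into the intranet; the image name csphere/csphere and the tarball path are illustrative assumptions:

# on the host with Internet access (image name is an assumption)
docker pull csphere/csphere:latest
docker save -o /tmp/csphere.tar csphere/csphere:latest
# copy /tmp/csphere.tar into the intranet, then on the target host
docker load -i /tmp/csphere.tar
docker images | grep csphere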
Build a CentOS Docker image with the script from https://raw.githubusercontent.com/docker/docker/master/contrib/mkimage-yum.sh
Name the image centos (a usage sketch follows).
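A sketch of running mkimage-yum.sh against the local yum repository; the resulting tag (centos:7.2.1511) and the retag to centos7:7.2 expected by the Dockerfile below are assumptions:

chmod +x mkimage-yum.sh
./mkimage-yum.sh -y /etc/yum.conf centos   # builds a minimal image named centos:<version>
docker images | grep centos
# the next Dockerfile starts FROM centos7:7.2, so retag if necessary
docker tag centos:7.2.1511 centos7:7.2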
Based on the centos image, build an image with JDK 8 and sshd, named jdk8:centos7:

FROM centos7:7.2
ADD jdk-8u65-linux-x64.gz /usr/sbin
ENV JAVA_HOME /usr/sbin/jdk1.8.0_65
ENV CLASSPATH /usr/sbin/jdk1.8.0_65/lib/dt.jar:/usr/sbin/jdk1.8.0_65/lib/tools.jar
RUN echo "JAVA_HOME=$JAVA_HOME;export JAVA_HOME;" >>/etc/profile
RUN echo "CLASSPATH=$CLASSPATH:$JAVA_HOME;export CLASSPATH;" >>/etc/profile
RUN echo "PATH=$PATH:$JAVA_HOME/bin;export PATH;" >>/etc/profile
RUN echo "PATH=$PATH:$JAVA_HOME/bin;export PATH;" >>/etc/bashrc
RUN rm -f /etc/yum.repos.d/Cent*
ADD yum.repo /etc/yum.repos.d
RUN yum -y install which openssl wget net-tools openssh-clients openssh-server
RUN systemctl enable sshd.service
RUN /usr/lib/systemd/systemd --system &
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ""
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
RUN ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ""
RUN ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ""
RUN /usr/sbin/sshd
RUN echo root | passwd root --stdin
RUN yum makecache && yum clean all
RUN ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""; cat ~/.ssh/id_rsa.pub >>~/.ssh/authorized_keys
RUN echo "StrictHostKeyChecking no" >>~/.ssh/config
ENTRYPOINT /usr/sbin/sshd;/bin/bash
yum.repo (the repo file added by the Dockerfile above):

[local]
name=local
baseurl=http://XXX.XXX/yum
enabled=1
gpgcheck=0
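A hedged sketch of building and smoke-testing the image above; it assumes the Dockerfile, yum.repo and the JDK tarball sit in one build context, and the test container name jdk8_test is made up:

docker build --rm -t jdk8:centos7 .
# the image has a shell-form ENTRYPOINT, so override it to run java directly
docker run --rm --entrypoint /usr/sbin/jdk1.8.0_65/bin/java jdk8:centos7 -version
docker run -d -t --name jdk8_test jdk8:centos7
# net-tools is installed in the image, so netstat can confirm sshd listens on port 22
docker exec -it jdk8_test netstat -lnt
docker rm -f jdk8_test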
Based on jdk8:centos7, build the hadoop 2.6 image:

FROM jdk8:centos7
# 2.6.x below stands in for the exact patch version, which was not recoverable from the notes
ADD hadoop-2.6.x.tar.gz /home/
RUN ln -s /home/hadoop-2.6.x/ /home/hadoop && cd /home/hadoop
WORKDIR /home/hadoop
# the EXPOSE port list was lost here; the zoo image below exposes the full port set
COPY etc /home/hadoop/etc/hadoop
RUN echo "export PATH=$PATH:$JAVA_HOME/bin:/home/hadoop/sbin:/home/hadoop/bin;">>/etc/profile
RUN echo "export PATH=$PATH:$JAVA_HOME/bin:/home/hadoop/sbin:/home/hadoop/bin;" >>/etc/bashrc
RUN systemctl enable sshd.service
RUN /usr/lib/systemd/systemd --system &
COPY hadoop-config.sh /home/hadoop/libexec
ENTRYPOINT /usr/sbin/sshd;/bin/bash
Test whether a single-node Hadoop starts (if "java not found" appears, manually edit hadoop-config.sh under libexec, as sketched below):
start-dfs.sh ; start-yarn.sh
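A hedged sketch of that manual fix plus a single-node smoke test; the container name hadoop_test and the sed one-liner are illustrative, not from the original notes:

docker build --rm -t hadoop .
docker run -d -t --name hadoop_test hadoop
# force JAVA_HOME near the top of hadoop-config.sh if "java not found" shows up
docker exec -it hadoop_test bash -c \
  "sed -i '1a export JAVA_HOME=/usr/sbin/jdk1.8.0_65' /home/hadoop/libexec/hadoop-config.sh"
docker exec -it hadoop_test /home/hadoop/bin/hdfs namenode -format
docker exec -it hadoop_test /home/hadoop/sbin/start-dfs.sh
docker exec -it hadoop_test /home/hadoop/sbin/start-yarn.sh
docker exec -it hadoop_test /usr/sbin/jdk1.8.0_65/bin/jps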
Dockerfile for the zoo image:
FROM hadoop
ADD zookeeper-3.4.7.tar.gz /home/
EXPOSE 16020 16202 16010 60000 60010 22 7373 7946 9000 50010 50020 50070 50075 50090 50475 8030 8031 8032 8033 8040 8042 8060 8088 50060 2888 2181 3888 8480 10020 19888
RUN echo "export ZOOKEEPER_HOME=/home/zookeeper-3.4.7" >>/etc/profile
RUN echo "export ZOOKEEPER_HOME=/home/zookeeper-3.4.7" >>/etc/bashrc
RUN echo "export PATH=$PATH:$JAVA_HOME/bin:/home/hadoop/sbin:/home/hadoop/bin:/home/zookeeper-3.4.7/bin:/home/zookeeper-3.4.7/conf:/home/hbase-1.0.2/bin" >>/etc/profile
RUN echo "export PATH=$PATH:$JAVA_HOME/bin:/home/hadoop/sbin:/home/hadoop/bin:/home/zookeeper-3.4.7/bin:/home/zookeeper-3.4.7/conf:/home/hbase-1.0.2/bin" >>/etc/bashrc
VOLUME /data/hadoop
COPY zoo/zoo.cfg /home/zookeeper-3.4.7/conf/zoo.cfg
COPY ha_etc/core-site.xml /home/hadoop/etc/hadoop/core-site.xml
COPY ha_etc/hdfs-site.xml /home/hadoop/etc/hadoop/hdfs-site.xml
COPY ha_etc/mapred-site.xml /home/hadoop/etc/hadoop/mapred-site.xml
COPY ha_etc/yarn-site.xml /home/hadoop/etc/hadoop/yarn-site.xml
COPY ha_etc/hosts.allow /data/hadoop/tmp/hosts.allow
COPY ha_etc/slaves_datanode.txt /home/hadoop/etc/hadoop/slaves
RUN mkdir /home/zookeeper-3.4.7/data
ENV HA_ID rm1
ADD hbase-1.0.2-bin.tar.gz /home/
RUN sed -i "s/# export JAVA_HOME=\/usr\/java\/jdk1.6.0\//export JAVA_HOME=\/usr\/sbin\/jdk1.8.0_65/g" /home/hbase-1.0.2/conf/hbase-env.sh
RUN sed -i "s/# export HBASE_MANAGES_ZK=true/export HBASE_MANAGES_ZK=false/g" /home/hbase-1.0.2/conf/hbase-env.sh
RUN echo "export HBASE_MANAGES_ZK=false" >>/etc/profile
RUN echo "export HBASE_MANAGES_ZK=false" >>/etc/bashrc
ENTRYPOINT /usr/sbin/sshd;/bin/bash
The docker script that launches the ZooKeeper/Hadoop containers; running it creates four fixed containers (master, nn1, nn2, master1) plus the slave containers given by argument 1:
#!/bin/bash
# update /etc/hosts
inner_host=127.0.0.1
updateHost()
{
    # read the address currently mapped to this hostname in /etc/hosts
    inner_host=`cat /etc/hosts | grep ${in_url} | awk '{print $1}'`
    if [ "${inner_host}" = "${in_ip}" ];then
        echo "${inner_host} ${in_url} ok"
    else
        if [ "${inner_host}" != "" ];then
            echo " change is ok "
        else
            inner_ip_map="${in_ip} ${in_url}"
            echo ${inner_ip_map} >> /etc/hosts
            if [ $? = 0 ]; then
                echo "${inner_ip_map} to hosts success host is `cat /etc/hosts`"
            fi
            echo "should append"
        fi
    fi
}
# run N slave containers
N=$1 # the number of slave containers (argument 1)
if [ $# = 0 ]
then
    N=1 # default slave count; the original default value was lost
fi

docker build --rm -t zoo .

# delete the old master container and start a new master container
sudo docker rm -f master_hadoop &> /dev/null
echo "start master container..."
# -P publishes all exposed ports; the original's explicit -p host:container mappings are not reproduced here
sudo docker run -d -t --dns 127.0.0.1 -v /etc/hosts:/etc/hosts -P -v /data/hadoop/master:/data/hadoop --name master_hadoop -h master.hantongchao.com -w /root zoo &> /dev/null
ip0=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master_hadoop)
serverid=0
((serverid++))
#zoo
echo $serverid > myid
sudo docker cp myid master_hadoop:/home/zookeeper-3.4.7/data/myid

# delete the old nn1 container and start a new nn1 container
sudo docker rm -f nn1_hadoop &> /dev/null
echo "start nn1 container..."
mkdir /data/hadoop/nn1 &> /dev/null
sudo docker run -d -t --dns 127.0.0.1 -v /etc/hosts:/etc/hosts -e "HA_ID=rm1" -P -v /data/hadoop/nn1:/data/hadoop --name nn1_hadoop -h nn1.hantongchao.com -w /root zoo &> /dev/null
ip1=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" nn1_hadoop)
((serverid++))
echo $serverid > myid
sudo docker cp myid nn1_hadoop:/home/zookeeper-3.4.7/data/myid
#yarn slaves

# delete the old nn2 container and start a new nn2 container
sudo docker rm -f nn2_hadoop &> /dev/null
echo "start nn2 container..."
mkdir /data/hadoop/nn2 &> /dev/null
sudo docker run -d -t --dns 127.0.0.1 -v /etc/hosts:/etc/hosts -P -v /data/hadoop/nn2:/data/hadoop --name nn2_hadoop -h nn2.hantongchao.com -w /root zoo &> /dev/null
ip2=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" nn2_hadoop)
((serverid++))
echo $serverid > myid
sudo docker cp myid nn2_hadoop:/home/zookeeper-3.4.7/data/myid
# get the IP address of master container
FIRST_IP=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master_hadoop)

# delete the old master1 container and start a new master1 container
sudo docker rm -f master1_hadoop &> /dev/null
echo "start master1 container..."
sudo docker run -d -t --dns 127.0.0.1 -v /etc/hosts:/etc/hosts -e "HA_ID=rm2" -P -v /data/hadoop/master1:/data/hadoop --name master1_hadoop -h master1.hantongchao.com -w /root zoo &> /dev/null
ip4=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" master1_hadoop)
((serverid++))
#zoo
echo $serverid > myid
sudo docker cp myid master1_hadoop:/home/zookeeper-3.4.7/data/myid

# delete old slave containers and start new slave containers
i=0
while [ $i -lt $N ]
do
sudo docker rm -f slave_hadoop$i &> /dev/null
echo "start slave_hadoop$i container..."
mkdir /data/hadoop/$i &> /dev/null
sudo docker run -d -t --dns 127.0.0.1 -v /etc/hosts:/etc/hosts -P -v /data/hadoop/$i:/data/hadoop --name slave_hadoop$i -h slave$i.hantongchao.com -e JOIN_IP=$FIRST_IP zoo &> /dev/null
in_ip=$(docker inspect --format="{{.NetworkSettings.IPAddress}}" slave_hadoop$i)
in_url=slave$i.hantongchao.com
((serverid++))
echo $serverid > myid
sudo docker cp myid slave_hadoop$i:/home/zookeeper-3.4.7/data/myid
sudo docker cp ha_etc/slaves_datanode.txt slave_hadoop$i:/home/hadoop/etc/hadoop/slaves
updateHost
((i++))
done
echo $in_ip
in_ip=$ip0
in_url="master.hantongchao.com"
updateHost
#in_url="mycluster"
#updateHost
in_ip=$ip1
in_url="nn1.hantongchao.com"
updateHost
in_ip=$ip2
in_url="nn2.hantongchao.com"
updateHost
in_ip=$ip4
in_url="master1.hantongchao.com"
updateHost

#sudo docker cp ha_etc/slaves_nodemanager.txt master_hadoop:/home/hadoop/etc/hadoop/slaves
#sudo docker cp ha_etc/slaves_nodemanager.txt master1_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp ha_etc/slaves_datanode.txt master_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp ha_etc/slaves_datanode.txt master1_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp ha_etc/slaves_datanode.txt nn1_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp ha_etc/slaves_datanode.txt nn2_hadoop:/home/hadoop/etc/hadoop/slaves

# start ZooKeeper in the master, nn1 and nn2 containers
sudo docker exec -it master_hadoop /home/zookeeper-3.4.7/bin/zkServer.sh start
sudo docker exec -it nn1_hadoop /home/zookeeper-3.4.7/bin/zkServer.sh start
sudo docker exec -it nn2_hadoop /home/zookeeper-3.4.7/bin/zkServer.sh start
sudo docker exec -it master_hadoop /home/zookeeper-3.4.7/bin/zkServer.sh status

echo "journalnode"
sudo docker exec -it master_hadoop /home/hadoop/sbin/hadoop-daemon.sh start journalnode
sudo docker exec -it nn1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start journalnode
sudo docker exec -it nn2_hadoop /home/hadoop/sbin/hadoop-daemon.sh start journalnode
sudo docker exec -it nn1_hadoop bash -c "/home/hadoop/bin/hdfs namenode -format -clusterid mycluster"
#sudo docker exec -it nn1_hadoop scp -r /data/hadoop/tmp/dfs/namedir nn2.hantongchao.com:/data/hadoop/tmp/dfs/
echo namenode -format
#read what
sudo docker exec -it nn1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start namenode
#sudo docker exec -it nn1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start secondarynamenode
#echo nn1 start namenode secondarynamenode
#read what
sudo docker exec -it nn2_hadoop /home/hadoop/bin/hdfs namenode -bootstrapStandby
sudo docker exec -it nn2_hadoop /home/hadoop/sbin/hadoop-daemon.sh start namenode
sudo docker exec -it nn1_hadoop /home/hadoop/bin/hdfs zkfc -formatZK
sudo docker exec -it nn1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start zkfc
sudo docker exec -it nn2_hadoop /home/hadoop/sbin/hadoop-daemon.sh start zkfc
sudo docker exec -it nn1_hadoop /home/hadoop/bin/hdfs haadmin -getServiceState nn1
sudo docker exec -it nn2_hadoop /home/hadoop/bin/hdfs haadmin -getServiceState nn2
sudo docker exec -it master_hadoop bash -c ' /usr/bin/sed -i "s/{HA_ID}/rm1/g" /home/hadoop/etc/hadoop/yarn-site.xml '
sudo docker exec -it master1_hadoop bash -c ' /usr/bin/sed -i "s/{HA_ID}/rm2/g" /home/hadoop/etc/hadoop/yarn-site.xml '

#start-yarn
sudo docker exec -it master_hadoop /home/hadoop/sbin/yarn-daemon.sh start resourcemanager
sudo docker exec -it master1_hadoop /home/hadoop/sbin/yarn-daemon.sh start resourcemanager
sleep 5 # the original sleep duration was lost
sudo docker exec -it master_hadoop /home/hadoop/sbin/yarn-daemon.sh start nodemanager
sudo docker exec -it master1_hadoop /home/hadoop/sbin/yarn-daemon.sh start nodemanager
sudo docker exec -it nn1_hadoop /home/hadoop/sbin/yarn-daemon.sh start nodemanager
sudo docker exec -it nn2_hadoop /home/hadoop/sbin/yarn-daemon.sh start nodemanager
sleep 5 # the original sleep duration was lost
sudo docker exec -it master_hadoop /home/hadoop/sbin/hadoop-daemon.sh start datanode
sudo docker exec -it master1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start datanode
sudo docker exec -it nn1_hadoop /home/hadoop/sbin/hadoop-daemon.sh start datanode
sudo docker exec -it nn2_hadoop /home/hadoop/sbin/hadoop-daemon.sh start datanode
sudo docker exec -it master_hadoop /home/hadoop/sbin/yarn-daemon.sh start proxyserver
sudo docker exec -it master_hadoop /home/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver
echo "nn1_hadoop jps "
docker exec -it nn1_hadoop /usr/sbin/jdk1.8.0_65/bin/jps
echo "nn2_hadoop jps "
docker exec -it nn2_hadoop /usr/sbin/jdk1.8.0_65/bin/jps
echo "master_hadoop jps "
docker exec -it master_hadoop /usr/sbin/jdk1.8.0_65/bin/jps
echo "master1_hadoop jps "
docker exec -it master1_hadoop /usr/sbin/jdk1.8.0_65/bin/jps

i=0
echo $N
while [ $i -lt $N ]
do
sudo docker cp nn1_hadoop:/home/hadoop/etc/hadoop/slaves tmp_slaves_datanode.txt
echo -e "slave$i.hantongchao.com" >>tmp_slaves_datanode.txt
sudo docker cp tmp_slaves_datanode.txt nn1_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp tmp_slaves_datanode.txt nn2_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp tmp_slaves_datanode.txt master_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp tmp_slaves_datanode.txt master1_hadoop:/home/hadoop/etc/hadoop/slaves
sudo docker cp tmp_slaves_datanode.txt nn1_hadoop:/home/hbase-1.0.2/conf/regionservers
sudo docker cp tmp_slaves_datanode.txt nn2_hadoop:/home/hbase-1.0.2/conf/regionservers
sudo docker exec -it slave_hadoop$i /home/hadoop/sbin/yarn-daemon.sh start nodemanager
sudo docker exec -it slave_hadoop$i /home/hadoop/sbin/hadoop-daemon.sh start datanode
echo "slave_hadoop$i jps "
docker exec -it slave_hadoop$i /usr/sbin/jdk1.8.0_65/bin/jps
((i++))
echo $i
done

sudo docker exec -it nn1_hadoop ssh nn2.hantongchao.com ls
sudo docker exec -it nn1_hadoop ssh master1.hantongchao.com ls
sudo docker exec -it nn1_hadoop ssh master.hantongchao.com ls
sudo docker exec -it nn2_hadoop ssh nn1.hantongchao.com ls
sudo docker exec -it nn2_hadoop ssh master1.hantongchao.com ls
sudo docker exec -it nn2_hadoop ssh master.hantongchao.com ls

sudo docker exec -it nn1_hadoop bash
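Once the script has finished, a few standard Hadoop commands make a useful extra health check; this block is an add-on sketch, not part of the original script:

# overall HDFS state and registered datanodes
sudo docker exec -it nn1_hadoop /home/hadoop/bin/hdfs dfsadmin -report
# registered YARN node managers
sudo docker exec -it master_hadoop /home/hadoop/bin/yarn node -list
# trivial write/read against the HA filesystem
sudo docker exec -it nn1_hadoop /home/hadoop/bin/hdfs dfs -mkdir -p /tmp/smoke
sudo docker exec -it nn1_hadoop /home/hadoop/bin/hdfs dfs -ls /tmp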
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://mycluster</value>
</property> <property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/hadoop/tmp/dfs/data</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data/hadoop/tmp/dfs/journal</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>nn1.hantongchao.com:2181,nn2.hantongchao.com:2181,master.hantongchao.com:2181</value>
</property>
<property>
<name>hadoop.proxyuser.spark.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.spark.groups</name>
<value>*</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/data/hadoop/tmp/dfs/name</value>
</property> <!--
<property>
<name>dfs.hosts</name>
<value>/data/hadoop/tmp/hosts.allow</value>
</property>
-->
<property>
<name>dfs.datanode.data.dir</name>
<value>/data/hadoop/tmp/dfs/data</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/data/hadoop/tmp/dfs/namedir</value>
</property> <property>
<name>dfs.data.dir</name>
<value>/data/hadoop/tmp/dfs/hdsfdata</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property> <property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>nn1.hantongchao.com:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>nn1.hantongchao.com:50070</value>
</property> <property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>nn2.hantongchao.com:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>nn2.hantongchao.com:50070</value>
</property> <property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://nn1.hantongchao.com:8485;nn2.hantongchao.com:8485;master.hantongchao.com:8485/mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/data/hadoop/tmp/dfs/journal</value>
</property>
<!-- enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- the class that implements automatic failover -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- fencing methods; multiple methods are separated by newlines, one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>shell(/bin/true)</value>
</property>
<!-- the sshfence fencing method requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- timeout for the sshfence fencing method -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapreduce.map.memory.mb</name>
<value>2046</value>
</property> <property>
<name>mapreduce.reduce.memory.mb</name>
<value>2046</value>
</property> <property>
<name>mapred.child.java.opts</name>
<value>-Xmx1024m</value>
</property>
<property>
<name>mapred.reduce.child.java.opts</name>
<value>-Xmx1024m</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master.hantongchao.com:10020</value>
</property> <property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master.hantongchao.com:19888</value>
</property> <property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/data/hadoop/tmp/mr_history</value>
</property> <property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/data/hadoop/tmp/mr_history</value>
</property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<configuration> <!-- Site specific YARN configuration properties -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property> <property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>2024</value>
</property>
<property> <name>yarn.scheduler.maximum-allocation-mb</name>
<value>8096</value>
</property> <property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<!-- enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- the ResourceManager cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>rm-cluster</value>
</property>
<!-- the ResourceManager ids -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- the hostname of each ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>master.hantongchao.com</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>master1.hantongchao.com</value>
</property> <property>
<name>yarn.resourcemanager.resource-tracker.address.rm1</name>
<value>master.hantongchao.com:8031</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address.rm2</name>
<value>master1.hantongchao.com:8031</value>
</property>
<property>
<name>yarn.resourcemanager.ha.id</name>
<value>{HA_ID}</value>
</property> <property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property> <property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<!-- the ZooKeeper quorum address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>nn1.hantongchao.com:2181,nn2.hantongchao.com:2181,master.hantongchao.com:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
</configuration>
slaves (the contents of slaves_datanode.txt):
master.hantongchao.com
master1.hantongchao.com
nn1.hantongchao.com
nn2.hantongchao.com