Setting up a two-node big data cluster - 1. The HDFS cluster
| IP / hostname | Deployed services | Notes |
| --- | --- | --- |
| 192.168.56.2 / bigdata.lzf | namenode, datanode, NodeManager, hive, presto, mysql, hive-metastore, presto-cli | Master node |
| 192.168.56.3 / bigdata.dn1.lzf | secondarynamenode, ResourceManager, NodeManager, hive, presto, presto-cli | Resource management node |
# Generate a passwordless RSA key pair and authorize it for local login (run as the hadoop user on both nodes):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
1.2.2 Copy node 3's id_rsa.pub over to node 2 as id_rsa.pub_sl
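A minimal sketch of this step, assuming the hadoop user and the hostnames from the table above (the id_rsa.pub_sl filename follows the heading; any name works):
# On bigdata.dn1.lzf (192.168.56.3): push the public key to the master
scp ~/.ssh/id_rsa.pub hadoop@bigdata.lzf:~/.ssh/id_rsa.pub_sl
# On bigdata.lzf (192.168.56.2): append it to the authorized keys
cat ~/.ssh/id_rsa.pub_sl >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys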
In hdfs-site.xml:
<property>
<name>dfs.namenode.http-address</name>
<value>bigdata.lzf:50070</value>
<description>
The address and the base port where the dfs namenode web ui will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>bigdata.dn1.lzf:50090</value>
</property>
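Once start-dfs.sh has been run (shown further below), a quick way to confirm that each web UI is listening on the intended host; curl is assumed to be available, and 200 is the expected status:
curl -s -o /dev/null -w '%{http_code}\n' http://bigdata.lzf:50070      # NameNode UI on the master
curl -s -o /dev/null -w '%{http_code}\n' http://bigdata.dn1.lzf:50090  # SecondaryNameNode UI on node 2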
In yarn-site.xml:
<property>
<name>yarn.resourcemanager.address</name>
<value>bigdata.dn1.lzf:8032</value>
<description>Resource manager address</description>
</property>
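Once YARN is up (start-yarn.sh is normally run on bigdata.dn1.lzf, where the ResourceManager lives), a standard sanity check is to list the registered NodeManagers; per the table above, two should appear:
yarn node -list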
[hadoop@bigdata sbin]$ ./start-dfs.sh
17/07/21 17:28:44 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
Starting namenodes on [bigdata.lzf]
bigdata.lzf: starting namenode, logging to /home/hadoop/hadoop-2.8.0/logs/hadoop-hadoop-namenode-bigdata.lzf.out
bigdata.dn1.lzf: starting datanode, logging to /home/hadoop/hadoop-2.8.0/logs/hadoop-hadoop-datanode-bigdata.dn1.lzf.out
bigdata.lzf: starting datanode, logging to /home/hadoop/hadoop-2.8.0/logs/hadoop-hadoop-datanode-bigdata.lzf.out
Starting secondary namenodes [bigdata.dn1.lzf]
bigdata.dn1.lzf: starting secondarynamenode, logging to /home/hadoop/hadoop-2.8.0/logs/hadoop-hadoop-secondarynamenode-bigdata.dn1.lzf.out
17/07/21 17:29:05 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/07/21 17:29:05 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
5239 NameNode
5373 DataNode
4261 Jps
4167 SecondaryNameNode
17/07/21 17:33:03 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/07/21 17:33:03 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
[hadoop@bigdata sbin]$ hadoop fs -ls /
17/07/21 17:33:16 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/07/21 17:33:16 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2017-07-21 17:33 /tmp
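The tail command below reads /tmp/start-dfs.sh from HDFS, so the file was presumably uploaded first with something like this (the local path matches the install directory in the logs, but the exact command is an assumption):
hadoop fs -put /home/hadoop/hadoop-2.8.0/sbin/start-dfs.sh /tmp/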
17/07/21 17:35:29 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/07/21 17:35:29 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
[hadoop@bigdata ~]$ hadoop fs -tail hdfs://bigdata.lzf:9001/tmp/start-dfs.sh
17/07/21 17:41:18 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/07/21 17:41:18 DEBUG util.NativeCodeLoader: Loaded the native-hadoop library
----------------------------------------
# quorumjournal nodes (if any)

SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)

case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac

#---------------------------------------------------------
# ZK Failover controllers, if auto-HA is enabled

AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$NAMENODES" \
      --script "$bin/hdfs" start zkfc
fi

# eof
17/07/24 11:44:04 INFO mapreduce.Job: Counters: 29
    File System Counters
        FILE: Number of bytes read=604458
        FILE: Number of bytes written=1252547
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1519982
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=12
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=5
    Map-Reduce Framework
        Combine input records=0
        Combine output records=0
        Reduce input groups=0
        Reduce shuffle bytes=0
        Reduce input records=0
        Reduce output records=0
        Spilled Records=0
        Shuffled Maps =0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=0
        Total committed heap usage (bytes)=169222144
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Output Format Counters
        Bytes Written=0
Per testing, the job output is written to the /user/hadoop/output directory; there is no need to create it in advance.
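To inspect the results afterwards, list that directory (path per the note above):
hadoop fs -ls /user/hadoop/output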
# Wipe the old HDFS metadata, data, and temp directories, then recreate them empty:
rm -Rf /home/hadoop/data_hadoop/hdfs/name
rm -Rf /home/hadoop/data_hadoop/hdfs/data
rm -Rf /home/hadoop/data_hadoop/hdfs/tmp
mkdir -p /home/hadoop/data_hadoop/hdfs/name
mkdir -p /home/hadoop/data_hadoop/hdfs/data
mkdir -p /home/hadoop/data_hadoop/tmp
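After wiping the name directory like this, HDFS will not start until the NameNode is re-initialized; a minimal recovery sketch, run as the hadoop user on bigdata.lzf:
# Re-format the NameNode metadata, then bring HDFS back up
hdfs namenode -format
start-dfs.sh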
6.2 The key to separating the namenode and the secondarynamenode
Change the configuration in hdfs-site.xml as follows:
<property>
<name>dfs.namenode.http-address</name>
<value>bigdata.lzf:50070</value>
<description>
The address and the base port where the dfs namenode web ui will listen on.
If the port is 0 then the server will start on a free port.
</description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>bigdata.dn1.lzf:50090</value>
</property>
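A quick way to confirm the values were picked up on each node (hdfs getconf is the same utility start-dfs.sh itself uses):
hdfs getconf -confKey dfs.namenode.http-address            # expect bigdata.lzf:50070
hdfs getconf -confKey dfs.namenode.secondary.http-address  # expect bigdata.dn1.lzf:50090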