Cluster plan:

Hostname   IP             Installed software        Running processes
hadoop     192.168.1.69   jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
hadoop     192.168.1.70   jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
RM01       192.168.1.71   jdk, hadoop               ResourceManager
RM02       192.168.1.72   jdk, hadoop               ResourceManager
DN01       192.168.1.73   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
DN02       192.168.1.74   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
DN03       192.168.1.75   jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain

1.1 Change the hostname

[root@NM03 conf]# vim /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=DN03
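
On CentOS 6 the edit above only takes effect after a reboot. To apply the new hostname to the running system as well, a small sketch reusing the DN03 name configured above:

[root@NM03 conf]# hostname DN03    # sets the hostname for the current session; /etc/sysconfig/network makes it permanent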

1.2 Configure a static IP address

[root@NM03 conf]# vim /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

HWADDR=

TYPE=Ethernet

UUID=3304f091-1872-4c3b-8561-a70533743d88

ONBOOT=yes

NM_CONTROLLED=yes

BOOTPROTO=none

IPADDR=192.168.1.75

NETMASK=255.255.255.0

GATEWAY=192.168.1.1

DNS1=8.8.8.8

IPV6INIT=no

USERCTL=no
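
The interface file is only read when the network service is restarted, so restart it and confirm the address took effect (CentOS 6 init script, matching the style used in this guide):

[root@NM03 conf]# /etc/init.d/network restart

[root@NM03 conf]# ip addr show eth0    # should now show 192.168.1.75/24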

1.3 Edit /etc/hosts to map each node's internal IP address to its hostname

192.168.1.69 hadoop

192.168.1.70 hadoop

192.168.1.71 RM01

192.168.1.72 RM02

192.168.1.73 NM01

192.168.1.74 NM02

192.168.1.75 NM03
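
These entries have to be identical on every node in the cluster. A quick way to check that a mapping resolves, using one of the hostnames defined above:

[root@NM03 conf]# ping -c 2 RM01    # should resolve to 192.168.1.71 and get replies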

1.4 Turn off the firewall and flush its rules, and set NetworkManager to stay off at boot (note: on a minimal server install NetworkManager is usually not installed, in which case there is nothing to do; if it is present, turn it off)

[root@NM03 conf]# iptables -F

[root@NM03 conf]# /etc/init.d/iptables save

iptables: Saving firewall rules to /etc/sysconfig/iptables:[ OK ]

[root@NM03 conf]# vim /etc/selinux/config

SELINUX=disabled # change this value to disabled
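
The heading above also asks for the firewall and NetworkManager to stay off across reboots, and the SELinux change only applies after a reboot. A minimal sketch of the remaining commands on CentOS 6 (skip the NetworkManager line if the service is not installed):

[root@NM03 conf]# chkconfig iptables off          # keep the firewall off after reboot

[root@NM03 conf]# chkconfig NetworkManager off    # keep NetworkManager off at boot, if it is installed

[root@NM03 conf]# setenforce 0                    # switch SELinux to permissive for the current session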

1.5 Add a hadoop group and user, and set up passwordless SSH login

[root@NM03 conf]# groupadd hadoop

[root@NM03 conf]# useradd -g hadoop DN03

[root@NM03 conf]# echo 123456 | passwd --stdin DN03

Changing password for user DN03.

passwd: all authentication tokens updated successfully.

[DN03@NM03 ~]$ ssh DN03@192.168.1.74

DN03@192.168.1.74's password:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). # this is where key-based login fails

Fix (edit sshd_config, as root, on the node being logged into):

[root@NM03 ~]# vim /etc/ssh/sshd_config

HostKey /etc/ssh/ssh_host_rsa_key # find these entries and uncomment them to enable them

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile .ssh/authorized_keys
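
sshd has to be restarted for the changes above to take effect (CentOS 6 init script):

[root@NM03 ~]# /etc/init.d/sshd restart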

[root@NM03 ~]# chown -R DN03:hadoop /home/DN03/.ssh/

[root@NM03 ~]# chown -R DN03:hadoop /home/DN03/

[root@NM03 ~]# chmod 700 /home/DN03/

[root@NM03 ~]# chmod 700 /home/DN03/.ssh/

[root@NM03 ~]# chmod 644 /home/DN03/.ssh/*    # id_rsa, id_rsa.pub, known_hosts

[root@NM03 ~]# chmod 600 /home/DN03/.ssh/id_rsa

[root@NM03 ~]# touch /home/DN03/.ssh/authorized_keys # authorized_keys must be a regular file, not a directory

[root@NM03 ~]# chmod 644 !$

chmod 644 /home/DN03/.ssh/authorized_keys
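
Fixing ownership and permissions is only half of passwordless login; each node also needs a key pair, and the public key has to be appended to authorized_keys on every node it will log in to. A minimal sketch, assuming the DN03 user created above:

[DN03@NM03 ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate a key pair with an empty passphrase

[DN03@NM03 ~]$ ssh-copy-id DN03@192.168.1.74               # append the public key to the remote authorized_keys

[DN03@NM03 ~]$ ssh DN03@192.168.1.74                       # should now log in without a password prompt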

Problem

[root@w1 ~]# hadoop fs -put /etc/profile /profile

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/src/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.

It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.

15/12/09 13:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

15/12/09 13:40:08 WARN retry.RetryInvocationHandler: Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo over w2/192.168.1.75:9000. Not retrying because failovers (15) exceeded maximum allowed (15)

java.net.ConnectException: Call From w1/192.168.1.74 to w2:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:422)

at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)

at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)

at org.apache.hadoop.ipc.Client.call(Client.java:1414)

at org.apache.hadoop.ipc.Client.call(Client.java:1363)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)

at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)

at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)

at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)

at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)

at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)

at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)

at org.apache.hadoop.fs.Globber.glob(Globber.java:248)

at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)

at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)

at org.apache.hadoop.fs.shell.CommandWithDestination.getRemoteDestination(CommandWithDestination.java:113)

at org.apache.hadoop.fs.shell.CopyCommands$Put.processOptions(CopyCommands.java:199)

at org.apache.hadoop.fs.shell.Command.run(Command.java:153)

at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)

at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)

at org.apache.hadoop.fs.FsShell.main(FsShell.java:308)

Caused by: java.net.ConnectException: Connection refused

at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)

at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)

at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)

at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)

at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)

at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)

at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)

at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)

at org.apache.hadoop.ipc.Client.call(Client.java:1381)

... 27 more

put: Call From w1/192.168.1.74 to w2:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
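
The failure means nothing answered on w2:9000, i.e. the NameNode RPC address named in fs.defaultFS is not listening when the client retries. A diagnostic sketch, assuming the w1/w2 hosts shown in the log above:

[root@w2 ~]# jps                          # is a NameNode process running on w2 at all?

[root@w2 ~]# netstat -tlnp | grep 9000    # is anything listening on port 9000, and on which address?

[root@w1 ~]# telnet w2 9000               # can w1 reach that port over the network?

If the NameNode is not running, its log under $HADOOP_HOME/logs usually says why; if it is running but bound to a different address or port, compare fs.defaultFS in core-site.xml with the /etc/hosts mapping for w2.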
