Today I tried setting up Kerberos on a Hadoop 2.x development cluster and ran into a few issues; here are my notes.

Configuring Hadoop security

core-site.xml

<property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
</property>
<property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
</property>

hadoop.security.authentication defaults to simple, which simply trusts the operating-system user reported by the client; here we switch it to kerberos.
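Once the configuration is distributed, you can sanity-check the effective values from any node with hdfs getconf (a quick sketch; the keys are the two set above):

hdfs getconf -confKey hadoop.security.authentication   # should print kerberos
hdfs getconf -confKey hadoop.security.authorization    # should print true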

Configuring HDFS security
hdfs-site.xml
<property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
</property>
<property>
    <name>dfs.https.enable</name>
    <value>false</value>
</property>
<property>
    <name>dfs.namenode.https-address</name>
    <value>dev80.hadoop:50470</value>
</property>
<property>
    <name>dfs.https.port</name>
    <value>50470</value>
</property>
<property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.namenode.kerberos.https.principal</name>
    <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>dev80.hadoop:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-port</name>
    <value>50470</value>
</property>
<property>
    <name>dfs.namenode.secondary.keytab.file</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.namenode.secondary.kerberos.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.namenode.secondary.kerberos.https.principal</name>
    <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
</property>
<property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:1003</value>
</property>
<property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:1007</value>
</property>
<property>
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:1005</value>
</property>
<property>
    <name>dfs.datanode.keytab.file</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.datanode.kerberos.https.principal</name>
    <value>host/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/etc/hadoop.keytab</value>
    <description>
        The Kerberos keytab file with the credentials for the
        HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
    </description>
</property>

dfs.datanode.address is the hostname/IP and port that the DataNode's data transfer server binds to. With security enabled the port must be below 1024 (a privileged port); otherwise the DataNode fails to start with "Cannot start secure cluster without privileged resources".
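Once the secure DataNode is running (see the jsvc setup below), a quick way to confirm the privileged ports were actually bound (a sketch; assumes netstat is available and the ports configured above):

# run as root on a DataNode; both listeners should be on ports below 1024
netstat -tlnp | grep -E ':(1003|1007) '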

The NameNode and Secondary NameNode are both started as the hadoop user.
The DataNode must be started as root through jsvc, but the jsvc bundled with Hadoop 2.x is a 32-bit build, so it has to be downloaded from the Apache Commons Daemon site and rebuilt:
1. wget http://mirror.esocc.com/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
2. cd src/native/unix; ./configure; make
This produces a 64-bit jsvc executable; copy it to $HADOOP_HOME/libexec.
[hadoop@dev80 unix]$ file jsvc
jsvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
3. mvn package
This builds commons-daemon-1.0.15.jar; copy it to $HADOOP_HOME/share/hadoop/hdfs/lib and remove the commons-daemon jar that Hadoop ships with.
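Putting the three steps together, the rebuild looks roughly like this (a sketch; the -src tarball name and HADOOP_HOME path are assumed from the steps above):

export HADOOP_HOME=/usr/local/hadoop/hadoop-2.1.0-beta
tar xzf commons-daemon-1.0.15-src.tar.gz
cd commons-daemon-1.0.15-src/src/native/unix
./configure && make                                    # produces a 64-bit jsvc on a 64-bit host
cp jsvc $HADOOP_HOME/libexec/
cd ../../..
mvn package                                            # builds commons-daemon-1.0.15.jar
rm $HADOOP_HOME/share/hadoop/hdfs/lib/commons-daemon-*.jar    # remove the jar Hadoop ships with
cp target/commons-daemon-1.0.15.jar $HADOOP_HOME/share/hadoop/hdfs/lib/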

Changes in hadoop-env.sh:
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
export JSVC_HOME=/usr/local/hadoop/hadoop-2.1.0-beta/libexec
# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hadoop
# The directory where pid files are stored. /tmp by default
export HADOOP_SECURE_DN_PID_DIR=/usr/local/hadoop
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/data/logs

Distribute the configuration files and jars to the whole cluster.
Start the NameNode as the hadoop user, then switch to root and start the DataNode. The NameNode web UI should now show "Security is ON".


Configuring YARN security
yarn-site.xml
<property>
    <name>yarn.resourcemanager.keytab</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>yarn.resourcemanager.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>yarn.nodemanager.keytab</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>yarn.nodemanager.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
<property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value>hadoop</value>
</property>

The default container executor is DefaultContainerExecutor, which launches containers as the user running the NodeManager. Switching to LinuxContainerExecutor launches them as the user who submitted the application; it relies on a setuid executable to launch and kill containers.
That executable is bin/container-executor, but the one shipped with Hadoop is again a 32-bit build, so it has to be recompiled.
Download the Hadoop 2.x source code and rebuild the native pieces:
mvn package -Pdist,native -DskipTests -Dtar -Dcontainer-executor.conf.dir=/etc
Note: container-executor.conf.dir must be specified explicitly. It is the directory of the configuration file that the setuid executable reads, and it defaults to $HADOOP_HOME/etc/hadoop. Because that directory and every directory above it must be owned by root, we set it to /etc for convenience; otherwise the NodeManager fails with:
Caused by: org.apache.hadoop.util.Shell$ExitCodeException: File /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop must be owned by root, but is owned by 500
at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
at org.apache.hadoop.util.Shell.run(Shell.java:373)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:147)
The default configuration lookup path:
[root@dev80 bin]# strings container-executor | grep etc
../etc/hadoop/container-executor.cfg
which shows the default build loads $HADOOP_HOME/etc/hadoop/container-executor.cfg.
After rebuilding with -Dcontainer-executor.conf.dir=/etc:
[hadoop@dev80 bin]$ strings container-executor | grep etc
/etc/container-executor.cfg

Settings in container-executor.cfg:
yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=499
min.user.id is lowered here because the hadoop account's uid is 500; the usual default of 1000 would refuse to launch its containers.

Copy container-executor to $HADOOP_HOME/bin and fix up ownership and permissions:
chown root:hadoop container-executor /etc/container-executor.cfg
chmod 4750 container-executor
chmod 400 /etc/container-executor.cfg
Sync the configuration files to the whole cluster, then start the ResourceManager and NodeManagers as the hadoop user.
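For reference, the start commands (a sketch using the standard Hadoop 2.x daemon scripts, run on the respective nodes):

# as the hadoop user
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager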

Configuring JobHistory Server security
mapred-site.xml
<property>
    <name>mapreduce.jobhistory.keytab</name>
    <value>/etc/hadoop.keytab</value>
</property>
<property>
    <name>mapreduce.jobhistory.principal</name>
    <value>hadoop/_HOST@DIANPING.COM</value>
</property>
Start the JobHistoryServer:
sbin/mr-jobhistory-daemon.sh start historyserver

Run kinit to obtain a TGT (ticket-granting ticket):
[hadoop@dev80 hadoop]$ kinit -r 24l -k -t /home/hadoop/.keytab hadoop
[hadoop@dev80 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: hadoop@DIANPING.COM
Valid starting Expires Service principal
09/11/13 15:25:34 09/12/13 15:25:34 krbtgt/DIANPING.COM@DIANPING.COM
renew until 09/12/13 15:25:34

Here /tmp/krb5cc_500 is the ticket cache file; 500 is the uid of the hadoop account, and this per-uid cache is what the Kerberos libraries read by default.
You can also point at a specific ticket cache by setting export KRB5CCNAME=/tmp/krb5cc_500, and destroy the cache with kdestroy when you are done.
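A typical client session looks roughly like this (a sketch; it assumes the keytab path shown above and that HDFS security is already on):

export KRB5CCNAME=/tmp/krb5cc_500   # optional: use an explicit ticket cache
kinit -k -t /home/hadoop/.keytab hadoop
klist                               # should list krbtgt/DIANPING.COM@DIANPING.COM
hadoop fs -ls /                     # only succeeds while a valid TGT is in the cache
kdestroy                            # discard the ticket cache when finished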

If there is no local ticket cache, commands fail with an error like:
13/09/11 16:21:35 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

For reference, the principals contained in the keytab:

[hadoop@dev80 hadoop]$ klist -k -t /etc/hadoop.keytab
Keytab name: WRFILE:/etc/hadoop.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
