[Kerberos] How to Kerberize a Hadoop Cluster
Overview
Kerberos is a third-party authentication mechanism in which users and services (known as principals) authenticate each other through a Kerberos server (known as the Key Distribution Center, or KDC). The KDC has three parts:
- A database of principals and their Kerberos passwords
- An Authentication Server (AS) which performs the initial authentication and issues a Ticket Granting Ticket (TGT)
- A Ticket Granting Server (TGS) that issues subsequent service tickets based on the initial TGT
The user principal requests authentication from the AS, which returns a TGT encrypted with the user's Kerberos password. Until the ticket expires, the user can present that TGT to the TGS to request service tickets.
Because cluster resources (hosts or services) cannot supply a password each time to decrypt the TGT, they use a keytab: a special file, exported from the KDC database and stored locally, that contains the resource principal's authentication credentials.
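As a sketch of how a keytab is used in practice: a service principal is typically created on the KDC with a random key and exported to a keytab, which is then copied to the service's host. The principal name and keytab path below are hypothetical, and the commands assume the KDC configured later in this article.

```shell
# On the KDC: create a service principal with a random key (no password),
# then export its key to a keytab file.
kadmin.local -q "addprinc -randkey nn/node1.hadoop.com@HADOOP.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/nn.service.keytab nn/node1.hadoop.com@HADOOP.COM"

# On the service host: obtain a TGT from the keytab instead of a password.
kinit -kt /etc/security/keytabs/nn.service.keytab nn/node1.hadoop.com@HADOOP.COM
klist
```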
Installing and Configuring the KDC
- Enabling Kerberos authentication requires installing the KDC server and the necessary software. The KDC installation command can be executed on any machine:
yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
- Next, install the Kerberos client packages and command-line tools on the other nodes in the cluster:
yum -y install krb5-libs krb5-auth-dialog krb5-workstation
- Edit the realms in the KDC configuration, including the AD (Active Directory) realm.
The krb5.conf file contains the addresses of the KDCs and admin servers, the defaults for the current realm and for Kerberos applications, and the mappings from hostnames to Kerberos realms. It is normally located at /etc/krb5.conf.
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 HADOOP.COM = {
  kdc = node1.hadoop.com
  admin_server = node1.hadoop.com
 }
 AD.COM = {
  kdc = windc.ad.com
  admin_server = windc.ad.com
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
 .ad.com = AD.COM
 ad.com = AD.COM

[capaths]
 AD.COM = {
  HADOOP.COM = .
 }
realms: the kdc and admin_server entries under HADOOP.COM are the address of the host where we installed the KDC; the entries under AD.COM are the address of the Domain Controller host.
domain_realm: provides a translation from domain names or hostnames to Kerberos realm names. Domain names and hostnames must be written in lowercase.
capaths: for cross-realm authentication, a database is needed to construct the authentication paths between different realms. This section defines that database.
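The [domain_realm] mapping can be illustrated with a small sketch (hostnames here are hypothetical): a hostname is matched by suffix, with a leading dot matching any host under that domain.

```shell
# Mirror the [domain_realm] rules from the config above (suffix matching).
map_realm() {
  case "$1" in
    hadoop.com|*.hadoop.com) echo "HADOOP.COM" ;;
    ad.com|*.ad.com)         echo "AD.COM" ;;
    *)                       echo "UNKNOWN" ;;
  esac
}
map_realm node3.hadoop.com   # → HADOOP.COM
map_realm windc.ad.com       # → AD.COM
```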
- Edit kdc.conf, by default located at /var/kerberos/krb5kdc/kdc.conf. It contains KDC configuration information, including the defaults used when issuing Kerberos tickets.
[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
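Note that the des-* and arcfour entries in supported_enctypes are legacy algorithms kept only for compatibility; modern deployments usually keep just the AES types. A quick sketch of filtering the list (the variable simply restates the config line above):

```shell
# The supported_enctypes value from kdc.conf, as a shell variable
enctypes="aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal"

# Keep only the AES entries
aes_only=""
for e in $enctypes; do
  case "$e" in aes*) aes_only="$aes_only$e " ;; esac
done
echo "$aes_only"
```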
Creating the Database and the First Administrator
- Create the Kerberos database. The -s flag stores the database master key in a hidden stash file:
kdb5_util create -s
- Add administrators to the ACL file (edit /var/kerberos/krb5kdc/kadm5.acl). The following entry grants all privileges to any principal with the admin instance in the HADOOP.COM realm:
*/admin@HADOOP.COM *
- Create the first administrator:
kadmin.local -q "addprinc <username>/admin"
- Start the Kerberos services:
service krb5kdc start
service kadmin start
Verifying the Installation
# enter the kadmin shell via the local interface (no Kerberos ticket required)
kadmin.local
kadmin.local: listprincs

# the networked kadmin requires a ticket; without one it fails:
kadmin
kadmin: Client not found in Kerberos database

# obtain a ticket as the administrator, verify it, then retry kadmin:
kinit <username>/admin
klist
kadmin
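klist prints, among other things, the ticket cache location and the default principal. A small sketch of pulling the principal out of klist-style output (the sample output text here is hypothetical):

```shell
# Hypothetical klist output captured into a variable for illustration
out="Ticket cache: FILE:/tmp/krb5cc_0
Default principal: alice/admin@HADOOP.COM"

# Extract the default principal line
princ=$(printf '%s\n' "$out" | sed -n 's/^Default principal: //p')
echo "$princ"   # → alice/admin@HADOOP.COM
```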
Other Kerberos Administration Commands
- kadmin: provides for the maintenance of Kerberos principals, password policies, and service key tables (keytabs)
- kadmind: the administration daemon that the networked kadmin client talks to
- kdb5_util: allows an administrator to perform maintenance procedures on the KDC database
- kdb5_ldap_util: performs similar maintenance when the KDC database is stored in an LDAP back end
- krb5kdc: the KDC daemon itself, which serves AS and TGS requests
- kprop: propagates the KDC database to a replica (slave) KDC
- kpropd: the daemon on the replica KDC that receives the propagated database
- kproplog: displays the contents of the KDC database update log
- ktutil: invokes a command interface from which an administrator can read, write, or edit entries in a keytab or Kerberos V4 srvtab file
- k5srvutil: a wrapper around ktutil for listing, changing, or deleting the keys in a host keytab
- sserver: a simple sample server, useful for testing that authentication works end to end
kadmin and kadmin.local are command-line interfaces to the Kerberos V5 administration system. They provide nearly identical functionality; the difference is that kadmin.local directly accesses the KDC database, while kadmin performs operations using kadmind.
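As a closing note on naming: a Kerberos principal has the form primary/instance@REALM (for ordinary users the instance part is optional). A small sketch pulling the parts apart with shell parameter expansion (the principal string is hypothetical):

```shell
princ="alice/admin@HADOOP.COM"

realm="${princ#*@}"        # part after '@'
left="${princ%@*}"         # part before '@'
primary="${left%%/*}"      # before the first '/'
instance="${left#*/}"      # after the first '/'

echo "$primary $instance $realm"   # → alice admin HADOOP.COM
```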