Kerberos Series: Installing Kerberos
I have been working with Kerberos recently and plan to write a series covering its installation and the Kerberos configuration of common components. This first post covers installing Kerberos itself.
For background on what Kerberos is, a quick web search will turn up plenty of introductions, so I will skip the theory and go straight to the hands-on steps.
Kerberos has a significant pitfall here, so take note:
1. Hostnames must not contain uppercase letters.
2. Hostnames must not contain underscores.
There may be other restrictions I have not run into yet; these two are the ones that bit me, so keep your hostnames as plain as possible when setting up Kerberos.
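The two hostname rules above can be checked before installing anything. A minimal sketch (the function name and messages are my own; the only hard rules taken from the text are "no uppercase" and "no underscore"):

```shell
#!/bin/sh
# Pre-flight check for Kerberos-safe hostnames:
# reject underscores and uppercase letters.
check_hostname() {
  case "$1" in
    *_*)                             echo "bad: underscore in '$1'" ;;
    *[ABCDEFGHIJKLMNOPQRSTUVWXYZ]*) echo "bad: uppercase in '$1'" ;;
    *)                               echo "ok: '$1'" ;;
  esac
}

check_hostname "cluster2-host1"    # ok: 'cluster2-host1'
check_hostname "Cluster2-Host1"    # bad: uppercase in 'Cluster2-Host1'
check_hostname "cluster2_host1"    # bad: underscore in 'cluster2_host1'
```

Running this against `hostname -f` on every node before installing the KDC is cheaper than debugging authentication failures afterwards.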
Other posts in this big-data security series:
https://www.cnblogs.com/bainianminguo/p/12548076.html-----------installing Kerberos
https://www.cnblogs.com/bainianminguo/p/12548334.html-----------Kerberos authentication for Hadoop
https://www.cnblogs.com/bainianminguo/p/12548175.html-----------Kerberos authentication for ZooKeeper
https://www.cnblogs.com/bainianminguo/p/12584732.html-----------Kerberos authentication for Hive
https://www.cnblogs.com/bainianminguo/p/12584880.html-----------Search Guard authentication for Elasticsearch
https://www.cnblogs.com/bainianminguo/p/12639821.html-----------Kerberos authentication for Flink
https://www.cnblogs.com/bainianminguo/p/12639887.html-----------Kerberos authentication for Spark
1. Install the Kerberos packages with yum
yum install krb5-workstation krb5-libs krb5-auth-dialog krb5-server
2. After the command completes, the configuration files krb5.conf and kdc.conf have been generated
[root@cluster2-host1 etc]# ll krb5.conf
-rw-r--r--. 1 root root 641 Sep 13 12:40 krb5.conf
[root@cluster2-host1 etc]# pwd
/etc
[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/kdc.conf
-rw-------. 1 root root 451 Sep 13 12:40 /var/kerberos/krb5kdc/kdc.conf
3. Edit the kdc.conf configuration file
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  max_life = 1d
  max_renewable_life = 7d
 }
Note: the aes256-cts encryption type needs the JCE unlimited-strength policy jars on the Java side, so it has been dropped from supported_enctypes here.
4. Edit the krb5.conf configuration file
[root@cluster2-host1 etc]# cat krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 udp_preference_limit = 1
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOP.COM = {
  kdc = cluster2-host1
  admin_server = cluster2-host1
 }

[domain_realm]
# maps DNS domain names to realms
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
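As a sanity check, the default_realm in [libdefaults] should be one of the realms actually defined in [realms]. A small sketch of that check (check_realm is a hypothetical helper of mine, not part of the krb5 tooling):

```shell
#!/bin/sh
# Verify that default_realm appears as a realm key in the [realms] section.
check_realm() {  # $1 = path to a krb5.conf-style file
  awk '
    /^\[/                 { in_realms = ($0 == "[realms]") }
    /default_realm/       { split($0, a, "="); gsub(/ /, "", a[2]); want = a[2] }
    in_realms && /=[ ]*{/ { gsub(/[ ={]/, ""); defined[$0] = 1 }
    END                   { if (want in defined) exit 0; else exit 1 }
  ' "$1"
}

# Demo on a sample config (written to /tmp, not your real /etc/krb5.conf).
cat > /tmp/krb5-sample.conf <<'EOF'
[libdefaults]
 default_realm = HADOOP.COM
[realms]
 HADOOP.COM = {
  kdc = cluster2-host1
 }
EOF
check_realm /tmp/krb5-sample.conf && echo "default_realm is defined in [realms]"
```

A mismatch between default_realm and [realms] is a common cause of "Cannot find KDC for realm" errors later on.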
5. Create the Kerberos database (I used 123456 for all passwords here)
[root@cluster2-host1 etc]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
Check the generated database files:
[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/
total 24
-rw-------. 1 root root 22 Sep 13 12:40 kadm5.acl
-rw-------. 1 root root 474 Mar 3 03:18 kdc.conf
-rw-------. 1 root root 8192 Mar 3 03:18 principal
-rw-------. 1 root root 8192 Mar 3 03:18 principal.kadm5
-rw-------. 1 root root 0 Mar 3 03:18 principal.kadm5.lock
-rw-------. 1 root root 0 Mar 3 03:18 principal.ok
Add the database administrator principal (I set its password to admin):
[root@cluster2-host1 etc]# /usr/sbin/kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@HADOOP.COM with password.
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.
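One step worth double-checking that this walkthrough skips: kadmind only grants admin/admin full rights if the ACL file named by acl_file in kdc.conf (/var/kerberos/krb5kdc/kadm5.acl) says so. On CentOS the packaged file usually ships a wildcard rule of this shape, but the realm in it must match yours, so verify it reads:

```
*/admin@HADOOP.COM      *
```

This grants every principal with an /admin instance full administrative privileges; the exact default shipped may differ by package version.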
6. Start krb5kdc and kadmin, and enable them at boot
[root@cluster2-host1 etc]# service krb5kdc start
Redirecting to /bin/systemctl start krb5kdc.service
[root@cluster2-host1 etc]# service kadmin start
Redirecting to /bin/systemctl start kadmin.service
[root@cluster2-host1 etc]# chkconfig krb5kdc on
Note: Forwarding request to 'systemctl enable krb5kdc.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/krb5kdc.service to /usr/lib/systemd/system/krb5kdc.service.
[root@cluster2-host1 etc]# chkconfig kadmin on
Note: Forwarding request to 'systemctl enable kadmin.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/kadmin.service to /usr/lib/systemd/system/kadmin.service.
7. Create a principal (password set to 123456)
[root@cluster2-host1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local: list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
kadmin.local: add_principal test/test@HADOOP.COM
WARNING: no policy specified for test/test@HADOOP.COM; defaulting to no policy
Enter password for principal "test/test@HADOOP.COM":
Re-enter password for principal "test/test@HADOOP.COM":
Principal "test/test@HADOOP.COM" created.
kadmin.local: list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
test/test@HADOOP.COM
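The names listed above all follow the Kerberos principal format primary/instance@REALM, where the instance part is optional. A throwaway sketch of that structure (principal_parts is my own illustration, not a krb5 command):

```shell
#!/bin/sh
# Split a principal of the form primary/instance@REALM into its parts.
# The instance may be empty, as in admin@REALM.
principal_parts() {
  p=${1%@*}          # everything before the realm
  realm=${1##*@}     # everything after the last '@'
  primary=${p%%/*}   # everything before the first '/'
  case "$p" in */*) instance=${p#*/} ;; *) instance="" ;; esac
  echo "primary=$primary instance=$instance realm=$realm"
}

principal_parts "test/test@HADOOP.COM"          # primary=test instance=test realm=HADOOP.COM
principal_parts "krbtgt/HADOOP.COM@HADOOP.COM"  # primary=krbtgt instance=HADOOP.COM realm=HADOOP.COM
```

This is why the later kinit commands pass test/test rather than just test: the instance is part of the identity.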
8. Configure the Kerberos client on the other two nodes
[root@cluster2-host3 bin]# yum install krb5-workstation krb5-libs krb5-auth-dialog -y
Copy /etc/krb5.conf from the server so the clients use the same configuration:
[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host2:/etc/krb5.conf
krb5.conf 100% 651 0.6KB/s 00:00
[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host3:/etc/krb5.conf
krb5.conf
9. Verify the setup on a client by authenticating with a username and password
[root@cluster2-host2 bin]# klist test/test
klist: No credentials cache found (filename: test/test)
[root@cluster2-host2 bin]# kinit test/test
Password for test/test@HADOOP.COM:
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:33:58 03/04/2020 03:33:58 krbtgt/HADOOP.COM@HADOOP.COM
renew until 03/10/2020 04:33:58
[root@cluster2-host2 bin]# kdestroy
[root@cluster2-host2 bin]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
10. Authenticate with a keytab
Generate the keytab on the server and copy it to the client:
[root@cluster2-host1 etc]# kadmin.local -q "xst -k /root/test.keytab test/test@HADOOP.COM"
Authenticating as principal root/admin@HADOOP.COM with password.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/test.keytab.
[root@cluster2-host1 etc]# scp /root/test.keytab root@cluster2-host2:/root/
Log in on the client with the keytab:
[root@cluster2-host2 bin]# kinit -kt /root/test.keytab test/test
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:38:00 03/04/2020 03:38:00 krbtgt/HADOOP.COM@HADOOP.COM
renew until 03/10/2020 04:38:00
Note: after the keytab is exported this way, the username/password login shown earlier stops working. This is because xst (ktadd) re-randomizes the principal's key by default, which invalidates the old password; if you need the password to keep working, export from kadmin.local with the -norandkey option instead.
That completes the Kerberos installation and configuration.