I've been working with Kerberos recently and plan to write a series covering the installation of Kerberos and the Kerberos configuration of commonly used components. This is the first post: installing Kerberos.

As for what Kerberos actually is, you can look it up online; I won't introduce it here and will get straight to the practical steps.

There are a couple of big pitfalls with Kerberos that you must watch out for:

1. Hostnames must not contain uppercase letters

2. Hostnames must not contain underscores

I'm not sure whether there are other restrictions; these are just the ones I ran into, so when configuring Kerberos keep your hostnames as plain as possible and avoid anything unusual. A quick check is sketched right after this list.
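As a quick sanity check (my own addition, not part of the original walkthrough), you can inspect the hostname on every node before installing anything; the grep pattern below simply flags uppercase letters and underscores:

hostname
hostname | grep -E '[A-Z_]' && echo "WARNING: hostname contains an uppercase letter or underscore"

If the warning fires, fix the hostname (for example with hostnamectl set-hostname) before going any further.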

Other articles in the big data security series

https://www.cnblogs.com/bainianminguo/p/12548076.html-----------installing Kerberos

https://www.cnblogs.com/bainianminguo/p/12548334.html-----------Kerberos authentication for Hadoop

https://www.cnblogs.com/bainianminguo/p/12548175.html-----------Kerberos authentication for ZooKeeper

https://www.cnblogs.com/bainianminguo/p/12584732.html-----------Kerberos authentication for Hive

https://www.cnblogs.com/bainianminguo/p/12584880.html-----------Search Guard authentication for Elasticsearch

https://www.cnblogs.com/bainianminguo/p/12639821.html-----------Kerberos authentication for Flink

https://www.cnblogs.com/bainianminguo/p/12639887.html-----------Kerberos authentication for Spark

1. Install Kerberos via yum

yum install krb5-workstation krb5-libs krb5-auth-dialog krb5-server
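To confirm that the packages are in place (a simple verification step I've added, not shown in the original), list the installed krb5 packages:

rpm -qa | grep krb5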

  

2. After the command finishes, the Kerberos configuration files krb5.conf and kdc.conf are generated

[root@cluster2-host1 etc]# ll krb5.conf
-rw-r--r--. 1 root root 641 Sep 13 12:40 krb5.conf
[root@cluster2-host1 etc]# pwd
/etc

  

[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/kdc.conf
-rw-------. 1 root root 451 Sep 13 12:40 /var/kerberos/krb5kdc/kdc.conf

  

3. Edit the kdc.conf configuration file

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  max_life = 1d
  max_renewable_life = 7d
 }

  

Note: the aes256-cts:normal encryption type needs extra JAR support on the Java side (the JCE unlimited-strength policy files), so it can simply be dropped from the list.

4. Edit the krb5.conf configuration file

[root@cluster2-host1 etc]# cat krb5.conf
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = HADOOP.COM
 udp_preference_limit = 1
 #default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 HADOOP.COM = {
  kdc = cluster2-host1
  admin_server = cluster2-host1
 }

[domain_realm]
# this section maps domain names to realms
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM
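If your hosts sit under a real DNS domain, you would normally also fill in the [domain_realm] section; the snippet below is a hypothetical example that assumes the machines live under a domain called hadoop.com (the original setup leaves the section commented out and relies on default_realm instead):

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM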

  

5. Create the Kerberos database; all of the passwords I set here are 123456

[root@cluster2-host1 etc]# kdb5_util create -s -r HADOOP.COM
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
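If you ever need to start over (wrong realm name, forgotten master password), the database can be wiped and re-created with standard kdb5_util commands; this is just a recovery option, not something the original post does:

kdb5_util destroy -r HADOOP.COM    # asks for confirmation, then deletes the KDC database for the realm
kdb5_util create -s -r HADOOP.COM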

 

Check the generated database files

[root@cluster2-host1 etc]# ll /var/kerberos/krb5kdc/
total 24
-rw-------. 1 root root 22 Sep 13 12:40 kadm5.acl
-rw-------. 1 root root 474 Mar 3 03:18 kdc.conf
-rw-------. 1 root root 8192 Mar 3 03:18 principal
-rw-------. 1 root root 8192 Mar 3 03:18 principal.kadm5
-rw-------. 1 root root 0 Mar 3 03:18 principal.kadm5.lock
-rw-------. 1 root root 0 Mar 3 03:18 principal.ok

  

Add the database administrator principal and set its password to admin

[root@cluster2-host1 etc]# /usr/sbin/kadmin.local -q "addprinc admin/admin"
Authenticating as principal root/admin@HADOOP.COM with password.
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.

  

6. Start krb5kdc and kadmin, and enable them at boot

[root@cluster2-host1 etc]# service krb5kdc start
Redirecting to /bin/systemctl start krb5kdc.service
[root@cluster2-host1 etc]# service kadmin start
Redirecting to /bin/systemctl start kadmin.service
[root@cluster2-host1 etc]# chkconfig krb5kdc on
Note: Forwarding request to 'systemctl enable krb5kdc.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/krb5kdc.service to /usr/lib/systemd/system/krb5kdc.service.
[root@cluster2-host1 etc]# chkconfig kadmin on
Note: Forwarding request to 'systemctl enable kadmin.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/kadmin.service to /usr/lib/systemd/system/kadmin.service.
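A quick way to confirm that both daemons are running and enabled at boot (my own check; it reports the same thing the chkconfig output above implies):

systemctl status krb5kdc kadmin
systemctl is-enabled krb5kdc kadmin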

  

 

7. Create a principal; I set the password to 123456

[root@cluster2-host1 etc]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local: list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
kadmin.local: add_principal test/test@HADOOP.COM
WARNING: no policy specified for test/test@HADOOP.COM; defaulting to no policy
Enter password for principal "test/test@HADOOP.COM":
Re-enter password for principal "test/test@HADOOP.COM":
Principal "test/test@HADOOP.COM" created.
kadmin.local: list_principals
K/M@HADOOP.COM
admin/admin@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
kadmin/cluster2-host1@HADOOP.COM
kiprop/cluster2-host1@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
test/test@HADOOP.COM
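For scripting, the same principal can also be created non-interactively; -pw is a standard addprinc option, and the password here is simply the one used throughout this post:

kadmin.local -q "addprinc -pw 123456 test/test@HADOOP.COM"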

  

8. Install the Kerberos client on the other two nodes

[root@cluster2-host3 bin]# yum install krb5-workstation krb5-libs krb5-auth-dialog -y

  

Make /etc/krb5.conf on the clients identical to the one on the server

[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host2:/etc/krb5.conf
krb5.conf 100% 651 0.6KB/s 00:00
[root@cluster2-host1 etc]# scp /etc/krb5.conf root@cluster2-host3:/etc/krb5.conf
krb5.conf

  

9. Verify the Kerberos setup by authenticating on a client with a username and password

[root@cluster2-host2 bin]# klist test/test
klist: No credentials cache found (filename: test/test)
[root@cluster2-host2 bin]# kinit test/test
Password for test/test@HADOOP.COM:
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:33:58 03/04/2020 03:33:58 krbtgt/HADOOP.COM@HADOOP.COM
renew until 03/10/2020 04:33:58
[root@cluster2-host2 bin]# kdestroy
[root@cluster2-host2 bin]# klist
klist: No credentials cache found (filename: /tmp/krb5cc_0)
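Because renew_lifetime is set to 7d in krb5.conf, an existing ticket can also be renewed before it expires instead of re-entering the password; this is standard kinit behaviour rather than something shown in the original session:

kinit -R    # renews the current TGT; only works while the ticket is still valid and renewable
klist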

  

10. Authenticate with a keytab

Generate the keytab on the server and copy it to the client

[root@cluster2-host1 etc]# kadmin.local -q "xst -k /root/test.keytab test/test@HADOOP.COM"
Authenticating as principal root/admin@HADOOP.COM with password.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/test.keytab.
Entry for principal test/test@HADOOP.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/test.keytab.
[root@cluster2-host1 etc]# scp /root/test.keytab root@cluster2-host2:/root/
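Before (or after) copying the keytab around, you can check which principals and key versions it actually contains; klist -kt is the standard way to do this:

klist -kt /root/test.keytab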

  

Log in on the client using the keytab

[root@cluster2-host2 bin]# kinit -kt /root/test.keytab test/test
[root@cluster2-host2 bin]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test/test@HADOOP.COM

Valid starting       Expires              Service principal
03/03/2020 03:38:00 03/04/2020 03:38:00 krbtgt/HADOOP.COM@HADOOP.COM
renew until 03/10/2020 04:38:00

  

Note the following:

Once the keytab has been exported with xst, you can no longer kinit as that principal with the original password: xst re-randomizes the principal's key by default, which invalidates the old password. See the sketch below for a way to avoid this.
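If you want the password to keep working alongside the keytab, kadmin.local accepts the -norandkey option for xst/ktadd (a standard option not used in the original post); it exports the existing key instead of generating a new random one:

kadmin.local -q "xst -norandkey -k /root/test.keytab test/test@HADOOP.COM"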

At this point, the installation and configuration of Kerberos is complete.
