IBM Developer: Kafka ACLs
Overview
Apache Kafka has supported security features since version 0.9. When Kerberos is enabled, clients must be authorized to access Kafka resources. In this blog, you will learn how to authorize access to Kafka resources using the Kafka console ACL scripts. In addition, when SSL is enabled in Kafka, ACLs (access control lists) can be enabled to authorize access to Kafka resources.
Kafka ACLs are defined in the general format of “Principal P is [Allowed/Denied] Operation O From Host H On Resource R”.
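To make the four parts of that sentence concrete, here is a toy matcher in shell. It is purely illustrative (the principal, host, and topic are sample values reused from later in this article) and is not how Kafka itself evaluates ACLs:

```shell
# Toy illustration of "Principal P is Allowed Operation O From Host H On Resource R".
# NOT Kafka code; it only shows how the parts of an ACL entry combine in a check.
ACL_PRINCIPAL="User:kafkatest"
ACL_PERMISSION="Allow"
ACL_OPERATION="Write"
ACL_HOST="9.30.150.22"
ACL_RESOURCE="Topic:kafka-testtopic"

check_acl() {  # args: principal operation host resource
  if [ "$1" = "$ACL_PRINCIPAL" ] && [ "$2" = "$ACL_OPERATION" ] \
     && [ "$3" = "$ACL_HOST" ] && [ "$4" = "$ACL_RESOURCE" ] \
     && [ "$ACL_PERMISSION" = "Allow" ]; then
    echo "Allowed"
  else
    echo "Denied"
  fi
}

check_acl "User:kafkatest" Write 9.30.150.22 Topic:kafka-testtopic   # Allowed
check_acl "User:someoneelse" Write 9.30.150.22 Topic:kafka-testtopic # Denied
```

A real authorizer also handles Deny entries, wildcards, and super users, which this sketch deliberately omits.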
Kafka resources that can be protected with ACLs are:
- Topic
- Consumer group
- Cluster
The operations applicable to each Kafka resource are listed below:

Kafka resource | Operations
---|---
Topic | CREATE/READ/WRITE/DESCRIBE
Consumer Group | READ
Cluster | CLUSTER_ACTION
Cluster operations (CLUSTER_ACTION) refer to operations necessary for the management of the cluster, like updating broker and partition metadata, changing the leader and the set of in-sync replicas of a partition, and triggering a controlled shutdown.
Kafka Kerberos with ACLs
To enable Kerberos in an IOP 4.2 cluster, follow the steps in Enable Kerberos on IOP 4.2.
After Kerberos is enabled, the following properties are automatically added to custom Kafka broker configuration.
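Those properties are not listed in this article. As a rough illustration only (the exact property names and values should be verified against your own cluster's configuration), the Kerberos-related additions typically look like:

```properties
# Illustrative sketch only; verify the exact properties in your cluster.
listeners=SASL_PLAINTEXT://hostname.abc.com:6667
security.inter.broker.protocol=SASL_PLAINTEXT
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:kafka
```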
Kafka console commands running as super user kafka
By default, only super users have permission to access Kafka resources. The default value of the super.users property is User:kafka.
The Kafka home directory in IOP is located at /usr/iop/current/kafka-broker. The Kafka console scripts referenced in this article are located under /usr/iop/current/kafka-broker.
List Kafka service keytab
[kafka@hostname kafka]# klist -k -t /etc/security/keytabs/kafka.service.keytab
Keytab name: FILE:/etc/security/keytabs/kafka.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
1 06/22/16 13:53:01 kafka/hostname.abc.com@IBM.COM
Perform kinit to obtain and cache the Kerberos ticket
[kafka@hostname kafka]# kinit -f -k -t /etc/security/keytabs/kafka.service.keytab kafka/hostname.abc.com@IBM.COM
Create a topic
[kafka@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --replication-factor 1 --partitions 1 --topic mytopic
Created topic "mytopic".
Run Kafka producer
[kafka@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic mytopic --producer.config producer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^C
[kafka@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic mytopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Sending Message to Kafka topic
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[root@hostname kafka]# cat consumer.properties
security.protocol=SASL_PLAINTEXT
Because we ran the commands as the super user kafka, we had access to the Kafka resources without adding any ACLs.
How to add a new user as a super user?
- Update the super.users property in the "Custom kafka-broker" configuration to add additional users as super users. The value is a semicolon-separated list of entries of the form User:<username>.
- Super users can access resources without adding any ACLs.
- Restart Kafka.
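A sketch of the resulting entry in the "Custom kafka-broker" configuration, assuming the users kafka and kafkatest (substitute your own user names):

```properties
super.users=User:kafka;User:kafkatest
```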
How to add ACLs for new users?
The following example shows how to add ACLs for a new user “kafkatest”.
Create a user kafkatest
[root@hostname kafka]# useradd kafkatest
Note: In the example shown here, the KDC server, the Kafka broker, and the producer/consumer all run on the same machine. If the KDC server is set up on a different node in your environment, copy the keytab files to /etc/security/keytabs on the nodes where the Kafka producer and consumer run.
Create a principal for kafkatest user
[root@hostname kafka]# kadmin.local
Authenticating as principal kafka/admin@IBM.COM with password.
kadmin.local: addprinc "kafkatest"
Create a Kerberos keytab file
kadmin.local: xst -norandkey -k /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Quit from kadmin
kadmin.local: quit
List and cache the kafkatest Kerberos ticket
[kafkatest@hostname kafka]$ klist -k -t /etc/security/keytabs/kafkatest.keytab
Keytab name: FILE:/etc/security/keytabs/kafkatest.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
1 06/22/16 16:24:15 kafkatest@IBM.COM
[kafkatest@hostname kafka]$ kinit -f -k -t /etc/security/keytabs/kafkatest.keytab kafkatest@IBM.COM
Create a topic
[kafkatest@hostname kafka]$ bin/kafka-topics.sh --create --zookeeper hostname.abc.com:2181 --partitions 1 --replication-factor 1 --topic kafka-testtopic
Created topic "kafka-testtopic".
Add write permission for user kafkatest for topic kafka-testtopic:
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Write --authorizer-properties zookeeper.connect=hostname.abc.com:2181
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
Run Kafka producer
[kafkatest@hostname kafka]$ bin/kafka-console-producer.sh --broker-list hostname.abc.com:6667 --topic kafka-testtopic --producer.config producer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^C
[kafkatest@hostname kafka]$ cat producer.properties
security.protocol=SASL_PLAINTEXT
Add read permission for user kafkatest for topic kafka-testtopic and consumer group kafkatestgroup
[kafkatest@hostname kafka]$ bin/kafka-acls.sh --topic kafka-testtopic --add --allow-host 9.30.150.22 --allow-principal User:kafkatest --operation Read --authorizer-properties zookeeper.connect=hostname.abc.com:2181 --group kafkatestgroup
Adding ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Adding ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Topic:kafka-testtopic`:
User:kafkatest has Allow permission for operations: Write from hosts: 9.30.150.22
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Current ACLs for resource `Group:kafkatestgroup`:
User:kafkatest has Allow permission for operations: Read from hosts: 9.30.150.22
Run Kafka consumer
[kafkatest@hostname kafka]$ bin/kafka-console-consumer.sh --new-consumer --topic kafka-testtopic --from-beginning --bootstrap-server hostname.abc.com:6667 --consumer.config consumer.properties
Hi
Writing Data as kafkatest user
Message 1
Message 2
Message 3
^CProcessed a total of 5 messages
[kafkatest@hostname kafka]$ cat consumer.properties
security.protocol=SASL_PLAINTEXT
group.id=kafkatestgroup
Information about the kafka_jaas.conf file:
When Kerberos is enabled in Kafka, this configuration file is passed as a security parameter (-Djava.security.auth.login.config="/usr/iop/current/kafka-broker/conf/kafka_jaas.conf") to the Kafka console scripts.
[root@hostname kafka]# cat /usr/iop/current/kafka-broker/conf/kafka_jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafka/hostname.abc.com@IBM.COM";
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=true
renewTicket=true
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafka.service.keytab"
storeKey=true
useTicketCache=false
serviceName="zookeeper"
principal="kafka/hostname.abc.com@IBM.COM";
};
- The KafkaServer section is used by the Kafka broker and for inter-broker communication, for example during topic creation.
- The KafkaClient section is used when running Kafka producers or consumers. Because KafkaClient uses the ticket cache in this example, we have to run the kinit command to cache the Kerberos ticket before running the Kafka producer and consumer.
- The Client section is used for the ZooKeeper connection; Kafka ACLs are stored in ZooKeeper.
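If a producer or consumer should not depend on a ticket cache populated by kinit, the KafkaClient section can reference a keytab directly instead. A sketch for the kafkatest user created earlier in this article (the keytab path and principal are assumptions based on those examples; adjust for your environment):

```
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/etc/security/keytabs/kafkatest.keytab"
storeKey=true
useTicketCache=false
serviceName="kafka"
principal="kafkatest@IBM.COM";
};
```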
What to do when the SASL username (operating system user name) is different from the principal name?
Generally, the SASL username is the same as the primary name of the Kerberos principal. However, if that is not the case, we need to add the property sasl.kerberos.principal.to.local.rules to the Kafka broker configuration to map the principal name to the user name. In the following example, a mapping from the principal name ambari-qa-bh to the operating system user name ambari-qa is added.
When Kerberos is enabled from Ambari, the principal generated for the user "ambari-qa" has the form ambari-qa-[cluster name]. In the example shown here, the cluster name is "bh", so the principal for the user "ambari-qa" is generated as ambari-qa-bh.
[root@hostname kafka]# klist -k -t /etc/security/keytabs/smokeuser.headless.keytab
Keytab name: FILE:/etc/security/keytabs/smokeuser.headless.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
1 06/22/16 13:53:00 ambari-qa-bh@IBM.COM
For the user ambari-qa, we need to add the following rule:
RULE:[1:$1@$0](ambari-qa-bh@IBM.COM)s/.*/ambari-qa/
- Add sasl.kerberos.principal.to.local.rules in custom Kafka-broker configuration.
- Restart Kafka.
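The rule syntax is sed-like: [1:$1@$0] renders a principal with one component as primary@REALM, the parenthesized pattern must match that rendered string, and the trailing s/.../.../ performs the substitution. The effect can be emulated with standard grep and sed (an illustration only, not the real rule engine):

```shell
# Emulate RULE:[1:$1@$0](ambari-qa-bh@IBM.COM)s/.*/ambari-qa/ with grep and sed.
# Illustration only; this is not the actual Hadoop/Kafka rule implementation.
principal="ambari-qa-bh@IBM.COM"   # [1:$1@$0] renders a one-part principal as primary@REALM
if printf '%s\n' "$principal" | grep -qx 'ambari-qa-bh@IBM\.COM'; then
  # The pattern in parentheses matched the whole string, so apply the substitution.
  mapped=$(printf '%s\n' "$principal" | sed 's/.*/ambari-qa/')
fi
echo "$mapped"   # ambari-qa
```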
More information about the mapping between principal and username can be found in the section auth_to_local in the following article: auth to local
Kafka SSL with ACLs
In this section, we will see how to work with ACLs when SSL is enabled. For information on how to enable SSL in Kafka, follow the steps in the Setup SSL and Enable SSL sections of the Kafka Security blog.
There is an issue in IOP 4.2 when SSL is enabled in Kafka together with ACLs. Follow the steps mentioned in the technote to resolve the issue.
Add the below properties in custom-kafka-broker section to enable authorization with SSL.
- authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
- super.users=User:CN=hostname.ibm.com,OU=iop,O=ibm,L=san jose,ST=california,C=US
Restart the Kafka service from Ambari UI for the changes to take effect.
Note: Add the distinguished name produced by the command below, which is used to generate the key and certificate for the broker, to the list of super users in Kafka. This allows the Kafka broker to access all Kafka resources. As mentioned above, by default only super users have access to all Kafka resources. The output of the command provides the SSL user name, which is used as the value for super.users.
[root@hostname security]# keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: iop
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=iop, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
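Note that keytool echoes the DN with a space after each comma, while the super.users entry shown earlier omits those spaces. The conversion is a simple string rewrite, sketched here with the DN from this example:

```shell
# keytool displays the DN as "CN=..., OU=..." with spaces after the commas,
# while the super.users entry above is written without them.
dn='CN=hostname.ibm.com, OU=iop, O=ibm, L=san jose, ST=california, C=US'
superuser="User:$(printf '%s' "$dn" | sed 's/, /,/g')"
echo "$superuser"   # User:CN=hostname.ibm.com,OU=iop,O=ibm,L=san jose,ST=california,C=US
```

Values containing ordinary spaces, such as L=san jose, are untouched because only the comma-space pairs are rewritten.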
By default, the SSL user name is of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". This can be changed by adding the property principal.builder.class to the Kafka broker configuration in the Ambari UI and setting its value to a class that implements the PrincipalBuilder interface (org.apache.kafka.common.security.auth.PrincipalBuilder).
How to add ACLs for a new SSL user?
Create a topic
[root@hostname kafka]# bin/kafka-topics.sh --create --zookeeper hostname.ibm.com:2181 --replication-factor 1 --partitions 1 --topic ssltopic
Created topic "ssltopic".
Add write permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Write --authorizer-properties zookeeper.connect=hostname.ibm.com:2181
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
The user name provided above is the output of running the command below, which is used to generate the key and certificate for the Kafka client (producer/consumer).
[root@hostname security]# keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: hostname.ibm.com
What is the name of your organizational unit?
[Unknown]: biginsights
What is the name of your organization?
[Unknown]: ibm
What is the name of your City or Locality?
[Unknown]: san jose
What is the name of your State or Province?
[Unknown]: california
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=hostname.ibm.com, OU=biginsights, O=ibm, L=san jose, ST=california, C=US correct?
[no]: yes
Enter key password for <localhost>
(RETURN if same as keystore password):
Run Kafka producer
[root@hostname kafka]# bin/kafka-console-producer.sh --broker-list hostname.ibm.com:6667 --topic ssltopic --producer.config client-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^C
[root@hostname kafka]# cat client-ssl.properties
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
Add read permission for SSL user (CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US) for topic ssltopic and consumer group ssl-group
[root@hostname kafka]# bin/kafka-acls.sh --topic ssltopic --add --allow-host 9.30.150.20 --allow-principal "User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US" --operation Read --authorizer-properties zookeeper.connect=hostname.ibm.com:2181 --group ssl-group
Adding ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Adding ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Current ACLs for resource `Topic:ssltopic`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Write from hosts: 9.30.150.20
Current ACLs for resource `Group:ssl-group`:
User:CN=hostname.ibm.com,OU=biginsights,O=ibm,L=san jose,ST=california,C=US has Allow permission for operations: Read from hosts: 9.30.150.20
Run Kafka consumer
[root@hostname kafka]# bin/kafka-console-consumer.sh --new-consumer --topic ssltopic --from-beginning --bootstrap-server hostname.ibm.com:6667 --consumer.config client-consumer-ssl.properties
Testing Acl with SSl
Message 1
Message 2
^CProcessed a total of 3 messages
[root@hostname kafka]# cat client-consumer-ssl.properties
group.id=ssl-group
security.protocol=SSL
ssl.truststore.location=/etc/kafka/conf/security/kafka.client.truststore.jks
ssl.truststore.password=bigdata
ssl.keystore.location=/etc/kafka/conf/security/kafka.client.keystore.jks
ssl.keystore.password=bigdata
ssl.key.password=bigdata
How to give everyone permission to access a resource when no ACLs are set for the resource?
- Add allow.everyone.if.no.acl.found=true in the “Custom kafka-broker” configuration.
- Restart Kafka
Conclusion:
This blog described how to configure ACLs in Kafka when Kerberos or SSL is enabled in IOP 4.2. For more information, see the Kafka documentation.