Please credit the original source when reposting: http://www.cnblogs.com/xiaodf/

This article demonstrates how to test CDH's Kerberos-based authentication together with Sentry-based authorization.

1. Prepare test data

cat /tmp/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag
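
Note that the LOAD DATA LOCAL INPATH statements in section 3.2 read the file from the host where HiveServer2 runs, so the CSV has to be copied there first. A minimal sketch, assuming the HiveServer2 host vmw208 and the /home/iie/ path used later in this post:

scp /tmp/events.csv root@vmw208:/home/iie/events.csv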

2. Create users
2.1. Create system users
Create the system users on every node in the cluster and set their passwords:

useradd user1
passwd user1
useradd user2
passwd user2
useradd user3
passwd user3
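
Doing this by hand on every node quickly gets tedious. A hedged way to script it across the cluster nodes (the node list and the password below are placeholders, not taken from the original environment):

for host in vmw208 vmw209; do
  for u in user1 user2 user3; do
    ssh root@$host "useradd $u && echo '$u:ChangeMe_123' | chpasswd"
  done
done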

2.2. Create Kerberos principals

kadmin.local -q "addprinc user1"
kadmin.local -q "addprinc user2"
kadmin.local -q "addprinc user3"
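
One way to confirm the principals were created is to list them from the KDC (listprincs is a standard kadmin command):

kadmin.local -q "listprincs" | grep user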

3. Create databases and tables
3.1. Create the databases
admin is the Sentry super administrator; this user was already designated when Sentry permissions were configured.

kinit admin

Connect to HiveServer2 with beeline and run the commands below to create a super-administrator role for the Hive warehouse and grant it to the admin group, so that admin can operate on the whole warehouse:

beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
create role admin_role;
GRANT ALL ON SERVER server1 TO ROLE admin_role;
GRANT ROLE admin_role TO GROUP admin;
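
Optionally, the role and its grants can be verified in the same beeline session; Sentry-enabled Hive supports the following statements (exact output varies by version):

SHOW ROLES;
SHOW GRANT ROLE admin_role;
SHOW ROLE GRANT GROUP admin;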

Create two test databases:

create database db1;
create database db2;

3.2. Create tables
Create a test table in db1 and two in db2, then load the test data into them:

create table db1.table1 (
ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

create table db2.table1 (
ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

create table db2.table2 (
ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

load data local inpath '/home/iie/events.csv' overwrite into table db1.table1;
load data local inpath '/home/iie/events.csv' overwrite into table db2.table1;
load data local inpath '/home/iie/events.csv' overwrite into table db2.table2;
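
A quick sanity check that the loads succeeded (still as admin in the same beeline session):

select count(*) from db1.table1;
select * from db2.table2 limit 3;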

  

4. Grant user permissions
4.1. Grant user1 all privileges on db1

create role user1_role;
GRANT ALL ON DATABASE db1 TO ROLE user1_role;
GRANT ROLE user1_role TO GROUP user1;

4.2. Grant user2 all privileges on db2

create role user2_role;
GRANT ALL ON DATABASE db2 TO ROLE user2_role;
GRANT ROLE user2_role TO GROUP user2;

4.3. Grant user3 SELECT on db2.table1

create role user3_role;
use db2;
GRANT select ON table table1 TO ROLE user3_role;
GRANT ROLE user3_role TO GROUP user3;

5. Test user permissions
5.1. Hive test
5.1.1. The admin user has privileges on the entire Hive warehouse

kinit admin
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
show databases;
5.1.2. user1 has privileges only on db1 and default
kinit user1
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name |
+----------------+--+
| db1 |
| default |
+----------------+--+

5.1.3. user2 has privileges only on db2 and default

kinit user2
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name |
+----------------+--+
| db2 |
| default |
+----------------+--+

5.1.4. user3 has privileges only on db2.table1 and default

kinit user3
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name |
+----------------+--+
| db2 |
| default |
+----------------+--+
0: jdbc:hive2://node0a17:10000/> use db2;
0: jdbc:hive2://node0a17:10000/> show tables;
INFO : OK
+-----------+--+
| tab_name |
+-----------+--+
| table1 |
+-----------+--+

5.2. HDFS test
Once HDFS ACL/Sentry synchronization is configured, HDFS permissions are kept in sync with the Sentry-managed directories (/user/hive/warehouse).
5.2.1. Switch to the hive user and inspect the permissions of the warehouse files
With synchronization enabled, permission changes that Sentry makes to the Hive warehouse are propagated to the corresponding HDFS file ACLs.

[root@vmw208 home]# kinit hive
[root@vmw208 home]# hdfs dfs -getfacl -R /user/hive/warehouse/
# file: /user/hive/warehouse
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db1.db
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db1.db/table1
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db1.db/table1/events.csv
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db2.db
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db2.db/table1
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

# file: /user/hive/warehouse/db2.db/table1/events.csv
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x

  

5.2.2. Switch to user1 and inspect the HDFS files

[root@vmw208 home]# kinit user1
Password for user1@HADOOP.COM:
[root@vmw208 home]# hdfs dfs -ls /user/hive/warehouse/db2.db
ls: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/db2.db":hive:hive:drwxrwx--x
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db2.db/table1/events.csv
cat: Permission denied: user=user1, access=READ, inode="/user/hive/warehouse/db2.db/table1/events.csv":hive:hive:-rwxrwx--x
[root@vmw208 home]# hdfs dfs -ls /user/hive/warehouse/db1.db
Found 1 items
drwxrwx--x+ - hive hive 0 2016-09-29 16:54 /user/hive/warehouse/db1.db/table1
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db1.db/table1/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag

  

5.2.3. Switch to user2 and inspect the HDFS files

[root@vmw208 home]# kinit user2
Password for user2@HADOOP.COM:
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db1.db/table1/events.csv
cat: Permission denied: user=user2, access=READ, inode="/user/hive/warehouse/db1.db/table1/events.csv":hive:hive:-rwxrwx--x
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db2.db/table1/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag

5.3. Spark test
5.3.1. Spark reads a Hive table and prints it to the console
(1) Test as user1

[root@vmw209 xdf]# kinit user1
Password for user1@HADOOP.COM:
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db2 table1
……
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/db2.db/table1":hive:hive:drwxrwx--x
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db1 table1
……
+------------+-------+-------+----------+
| ip|country| client| action|
+------------+-------+-------+----------+
| 10.1.2.3| US|android|createNote|
|10.200.88.99| FR|windows|updateNote|
| 10.1.2.3| US|android|updateNote|
|10.200.88.77| FR| ios|createNote|
| 10.1.4.5| US|windows| updateTag|
+------------+-------+-------+----------+

  

(2) Test as user2

[root@vmw209 xdf]# kinit user2
Password for user2@HADOOP.COM:
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db1 table1
……
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=user2, access=READ_EXECUTE, inode="/user/hive/warehouse/db1.db/table1":hive:hive:drwxrwx--x
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db2 table1
……
+------------+-------+-------+----------+
| ip|country| client| action|
+------------+-------+-------+----------+
| 10.1.2.3| US|android|createNote|
|10.200.88.99| FR|windows|updateNote|
| 10.1.2.3| US|android|updateNote|
|10.200.88.77| FR| ios|createNote|
| 10.1.4.5| US|windows| updateTag|
+------------+-------+-------+----------+

5.3.2. Spark reads a file and writes it into a Hive table
Use the helper program spark.jar to read the local file /home/xdf/events.csv and write its contents into db2.table2.
Test as user2:

kinit user2
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
use db2;
create table table2 (
ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.HCatWriterTest --master local /home/xdf/spark.jar /home/xdf/events.csv db2 table2

Success!
Writing to db1.table1 fails with a permission error:

Exception in thread "main" org.apache.hive.hcatalog.common.HCatException : 2004 : HCatOutputFormat not initialized, setOutput has to be called. Cause : org.apache.hadoop.security.AccessControlException: Permission denied: user=user2, access=WRITE, inode="/user/hive/warehouse/db1.db/table1":hive:hive:drwxrwx--x

The setup above is only suitable for testing: tickets obtained with kinit plus a password have a limited lifetime, so this approach does not work in production. Fortunately, spark-submit provides the corresponding options:

spark-submit
……
--principal # the Kerberos principal of the submitting user
--keytab # the keytab file generated for that principal

Spark access control is enforced through the permissions set on the underlying HDFS/Hive files and directories, so different users hold different privileges. When submitting a Spark job, specify that user's Kerberos principal and keytab. The submit command looks like this:

spark-submit --class iie.hadoop.permission.QueryTable --master yarn-cluster --principal=user1@HADOOP.COM --keytab=/home/user1/user1.keytab /home/user1/spark.jar db1 table1

Here --principal and --keytab must belong to the same user.

Note: --principal and --keytab only take effect when spark-submit runs in yarn-cluster mode.
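
The keytab referenced above has to be exported from the KDC beforehand. On an MIT KDC (the same kadmin.local used in section 2.2) one way to do this is shown below; the output path simply mirrors the /home/user1/user1.keytab used in the command above:

kadmin.local -q "xst -norandkey -k /home/user1/user1.keytab user1"
chown user1:user1 /home/user1/user1.keytab && chmod 600 /home/user1/user1.keytab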

5.4. Kafka test
5.4.1. Authenticate
The kafka user is the super administrator for Kafka authorization:

[root@node10 iie]#kinit -kt /home/iie/kafka.keytab kafka

5.4.2. Create topics
Create topic1 and topic2:

[root@node10 iie]#kafka-topics --zookeeper node11:2181/kafka --create --topic topic1 --partitions 2 --replication-factor 1
[root@node10 iie]#kafka-topics --zookeeper node11:2181/kafka --create --topic topic2 --partitions 2 --replication-factor 1

5.4.3. Grant permissions
Grant user1 read and write access to topic1:

[root@node10 iie]#kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user1 --allow-host node10  --producer --topic topic1 --group console-consumer-9175
[root@node10 iie]#kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user1 --allow-host node10 --consumer --topic topic1 --group console-consumer-9175

Grant user2 read and write access to topic2:

[root@node10 iie]#kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user2 --allow-host node10  --producer --topic topic2 --group console-consumer-9175
[root@node10 iie]#kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user2 --allow-host node10 --consumer --topic topic2 --group console-consumer-9175

5.4.4. List the ACLs

[root@node10 iie]#kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --list
Current ACLs for resource `Topic:topic1`:
User:user1 has Allow permission for operations: Write from hosts: node10
User:user1 has Allow permission for operations: Read from hosts: node10
Current ACLs for resource `Topic:topic2`:
User:user2 has Allow permission for operations: Read from hosts: node10
User:user2 has Allow permission for operations: Write from hosts: node10

5.4.5. Create producer and consumer configuration files
Create consumer.properties:

cat /etc/kafka/conf/consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
group.id=console-consumer-9175

Create producer.properties:

cat /etc/kafka/conf/producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
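
Depending on the Kafka version and packaging, the console clients may also need a JAAS configuration telling them to use the ticket cache populated by kinit. A minimal sketch, assuming a file at /etc/kafka/conf/jaas.conf (the path and the KAFKA_OPTS export are illustrative, not part of the original environment):

cat /etc/kafka/conf/jaas.conf
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true;
};
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/jaas.conf"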

5.4.6. Produce data
Produce data from the command line:

[root@node10 iie]#kinit user1
[root@node10 iie]#kafka-console-producer --broker-list node12:9092 --topic topic1 --producer.config /etc/kafka/conf/producer.properties
123123
123123

5.4.7. Consume data
Consume data from the command line:

[root@node10 iie]#kinit user1
[root@node10 iie]#kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
123123
123123

When a user has no permission on the topic, the consumer reports an error:

[root@node10 iie]# kinit user2
Password for user2@HADOOP.COM:
[root@node10 iie]# kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
[2016-10-12 15:38:01,599] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic1]

5.4.8. Remove permissions
Authenticate as the administrator before removing permissions:

[root@node10 iie]#kinit -kt /home/iie/kafka.keytab kafka

Remove user1's consumer permission on topic1:

[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --remove --allow-principal User:user1 --allow-host node10 --consumer --topic topic1 --group console-consumer-92175
Are you sure you want to remove ACLs:
User:user1 has Allow permission for operations: Read from hosts: node10
User:user1 has Allow permission for operations: Describe from hosts: node10
from resource `Topic:topic1`? (y/n)
y
Are you sure you want to remove ACLs:
User:user1 has Allow permission for operations: Read from hosts: node10
from resource `Group:console-consumer-92175`? (y/n)
y
Current ACLs for resource `Topic:topic1`:
User:user1 has Allow permission for operations: Read from hosts: node10
User:user1 has Allow permission for operations: Describe from hosts: node10
User:user1 has Allow permission for operations: Write from hosts: node10
Current ACLs for resource `Group:console-consumer-92175`:

  

Consuming topic1 as user1 now fails, confirming that the permission has been removed:

[root@node10 iie]# kinit user1
Password for user1@HADOOP.COM:
[root@node10 iie]# kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
[2016-10-12 15:45:11,572] WARN The configuration sasl.mechanism = GSSAPI was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
[2016-10-12 15:45:11,914] WARN Not authorized to read from topic topic1. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2016-10-12 15:45:11,916] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic1]
[2016-10-12 15:45:11,920] WARN Not authorized to read from topic topic1. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2016-10-12 15:45:11,921] ERROR Not authorized to commit to topics [topic1] for group console-consumer-9175 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-10-12 15:45:11,922] WARN Auto offset commit failed for group console-consumer-9175: Not authorized to access topics: [topic1] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-10-12 15:45:11,927] WARN TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.Login)
Processed a total of 0 messages

  
