I. Prepare the environment: create the Kafka topic and HBase table

1. Create a Kafka topic in a Kerberos environment

1.1 Kafka uses the PLAINTEXT protocol by default; in a Kerberos environment the communication protocol must be changed. Add the following to both ${KAFKA_HOME}/config/producer.properties and ${KAFKA_HOME}/config/consumer.properties:

security.protocol=SASL_PLAINTEXT
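The line can be appended to both files in one pass; a minimal sketch, assuming ${KAFKA_HOME} points at the Kafka install (the fallback to a scratch directory is only there so the snippet runs standalone):

```shell
# KAFKA_HOME is assumed to point at the Kafka install; fall back to a
# scratch copy so the sketch can run standalone.
KAFKA_HOME="${KAFKA_HOME:-$(mktemp -d)}"
mkdir -p "$KAFKA_HOME/config"
touch "$KAFKA_HOME/config/producer.properties" "$KAFKA_HOME/config/consumer.properties"

for f in "$KAFKA_HOME/config/producer.properties" "$KAFKA_HOME/config/consumer.properties"; do
  # append the setting only if the file does not define it yet
  grep -q '^security.protocol=' "$f" || echo 'security.protocol=SASL_PLAINTEXT' >> "$f"
done
```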

1.2 Before running any Kafka commands, add the KAFKA_OPTS option to the environment; otherwise Kafka cannot use the keytab:

export KAFKA_OPTS="$KAFKA_OPTS -Djava.security.auth.login.config=/usr/ndp/current/kafka_broker/conf/kafka_jaas.conf"

The content of kafka_jaas.conf is as follows:

cat /usr/ndp/current/kafka_broker/conf/kafka_jaas.conf

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="kafka/hzadg-mammut-platform3.server.163.org@BDMS.163.COM";
};
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useTicketCache=true
  renewTicket=true
  serviceName="kafka";
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="zookeeper"
  principal="kafka/hzadg-mammut-platform3.server.163.org@BDMS.163.COM";
};

1.3 Create a new topic:

bin/kafka-topics.sh --create --zookeeper hzadg-mammut-platform2.server.163.org:2181,hzadg-mammut-platform3.server.163.org:2181 --replication-factor 1 --partitions 1 --topic spark-test

1.4 Start a console producer:

bin/kafka-console-producer.sh  --broker-list hzadg-mammut-platform2.server.163.org:6667,hzadg-mammut-platform3.server.163.org:6667,hzadg-mammut-platform4.server.163.org:6667 --topic spark-test --producer.config ./config/producer.properties

1.5 Test with a console consumer:

bin/kafka-console-consumer.sh --zookeeper hzadg-mammut-platform2.server.163.org:2181,hzadg-mammut-platform3.server.163.org:2181 --bootstrap-server hzadg-mammut-platform2.server.163.org:6667 --topic spark-test --from-beginning --new-consumer  --consumer.config ./config/consumer.properties

2. Create the HBase table

2.1 kinit with the hbase principal first; otherwise the table cannot be created:

kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/hzadg-mammut-platform2.server.163.org@BDMS.163.COM
./bin/hbase shell
> create 'recsys_logs', 'f'

II. Write the Spark code

Write a simple Spark Java program that consumes messages from Kafka and writes each batch's record count to HBase. For details, see the project:

https://github.com/LiShuMing/spark-demos

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.netease.spark.streaming.hbase;

import com.netease.spark.utils.Consts;
import com.netease.spark.utils.JConfig;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class JavaKafkaToHBaseKerberos {
  private final static Logger LOGGER = LoggerFactory.getLogger(JavaKafkaToHBaseKerberos.class);

  private static HConnection connection = null;
  private static HTableInterface table = null;

  public static void openHBase(String tablename) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    synchronized (HConnection.class) {
      if (connection == null)
        connection = HConnectionManager.createConnection(conf);
    }
    synchronized (HTableInterface.class) {
      if (table == null) {
        table = connection.getTable(tablename);
      }
    }
  }

  public static void closeHBase() {
    if (table != null)
      try {
        table.close();
      } catch (IOException e) {
        LOGGER.error("failed to close table", e);
      }
    if (connection != null)
      try {
        connection.close();
      } catch (IOException e) {
        LOGGER.error("failed to close connection", e);
      }
  }

  public static void main(String[] args) throws Exception {
    String hbaseTable = JConfig.getInstance().getProperty(Consts.HBASE_TABLE);
    String kafkaBrokers = JConfig.getInstance().getProperty(Consts.KAFKA_BROKERS);
    String kafkaTopics = JConfig.getInstance().getProperty(Consts.KAFKA_TOPICS);
    String kafkaGroup = JConfig.getInstance().getProperty(Consts.KAFKA_GROUP);

    // open hbase
    try {
      openHBase(hbaseTable);
    } catch (IOException e) {
      LOGGER.error("failed to open the HBase connection", e);
      System.exit(-1);
    }

    SparkConf conf = new SparkConf().setAppName("JavaKafakaToHBase");
    JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(1000));

    Set<String> topicsSet = new HashSet<>(Arrays.asList(kafkaTopics.split(",")));
    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", kafkaBrokers);
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", kafkaGroup);
    kafkaParams.put("auto.offset.reset", "earliest");
    kafkaParams.put("enable.auto.commit", false);
    // this setting is required in a Kerberos environment
    kafkaParams.put("security.protocol", "SASL_PLAINTEXT");

    // Create direct kafka stream with brokers and topics
    final JavaInputDStream<ConsumerRecord<String, String>> stream =
        KafkaUtils.createDirectStream(
            ssc,
            LocationStrategies.PreferConsistent(),
            ConsumerStrategies.<String, String>Subscribe(topicsSet, kafkaParams)
        );

    JavaDStream<String> lines = stream.map(new Function<ConsumerRecord<String, String>, String>() {
      private static final long serialVersionUID = -1801798365843350169L;

      @Override
      public String call(ConsumerRecord<String, String> record) {
        return record.value();
      }
    }).filter(new Function<String, Boolean>() {
      private static final long serialVersionUID = 7786877762996470593L;

      @Override
      public Boolean call(String msg) throws Exception {
        return msg.length() > 0;
      }
    });

    JavaDStream<Long> nums = lines.count();

    nums.foreachRDD(new VoidFunction<JavaRDD<Long>>() {
      private SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd HH:mm:ss");

      @Override
      public void call(JavaRDD<Long> rdd) throws Exception {
        Long num = rdd.take(1).get(0);
        String ts = sdf.format(new Date());
        // row key is the batch timestamp; column f:nums holds the batch count
        Put put = new Put(Bytes.toBytes(ts));
        put.add(Bytes.toBytes("f"), Bytes.toBytes("nums"), Bytes.toBytes(num));
        table.put(put);
      }
    });

    ssc.start();
    ssc.awaitTermination();
    closeHBase();
  }
}
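The table name, broker list, topics, and group id above are read through JConfig; the post does not show the property file, so the key names below are assumptions inferred from the Consts constants (Consts.HBASE_TABLE, Consts.KAFKA_BROKERS, Consts.KAFKA_TOPICS, Consts.KAFKA_GROUP). A plausible configuration might look like:

```properties
# hypothetical key names; check com.netease.spark.utils.Consts for the real ones
hbase.table=recsys_logs
kafka.brokers=hzadg-mammut-platform2.server.163.org:6667,hzadg-mammut-platform3.server.163.org:6667
kafka.topics=spark-test
kafka.group=spark-test-group
```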

III. Build and run on YARN

3.1 Switch to the project directory and build it:

mvn clean package

3.2 Run on Spark

  • Because the executors need to access Kafka, the Kerberos credentials authorized for Kafka must be shipped to the executors;
  • Because HDFS in the cluster is also Kerberized, a keytab that can access HDFS/HBase must be supplied through the spark.yarn.keytab/spark.yarn.principal settings;

Run the following in the project directory:

/usr/ndp/current/spark2_client/bin/spark-submit \
--files ./kafka_client_jaas.conf,./kafka.service.keytab \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./kafka_client_jaas.conf" \
--driver-java-options "-Djava.security.auth.login.config=./kafka_client_jaas.conf" \
--conf spark.yarn.keytab=/etc/security/keytabs/hbase.service.keytab \
--conf spark.yarn.principal=hbase/hzadg-mammut-platform1.server.163.org@BDMS.163.COM \
--class com.netease.spark.streaming.hbase.JavaKafkaToHBaseKerberos \
--master yarn \
--deploy-mode client \
./target/spark-demo-0.1.0-jar-with-dependencies.jar

The content of kafka_client_jaas.conf is as follows:

cat kafka_client_jaas.conf

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  renewTicket=true
  keyTab="./kafka.service.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="kafka/hzadg-mammut-platform1.server.163.org@BDMS.163.COM";
};

3.3 Results

(The screenshots of the job output in the original post are not reproduced here.)
