1. Producer

1.1. Basic Producer

First, set up the dependencies with Maven. The Kafka version on our server is 2.12-2.3.0 (Scala 2.12, Kafka 2.3.0); the pom.xml file is:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.github.tang</groupId>
    <artifactId>kafka-beginner</artifactId>
    <version>1.0</version>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.3.0</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-simple -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.26</version>
        </dependency>
    </dependencies>
</project>

Then create a Producer:

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerDemo {

    private static String bootstrapServers = "server_xxx:9092";

    public static void main(String[] args) {
        /**
         * create Producer properties
         *
         * Properties are available in the official documentation:
         * https://kafka.apache.org/documentation/#producerconfigs
         */
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // create a producer record
        ProducerRecord<String, String> record =
                new ProducerRecord<String, String>("first_topic", "message from java");

        // send data - asynchronous
        /**
         * send() is asynchronous: the record is not sent immediately.
         * Without waiting, the program would terminate right after send() returns,
         * so the record would never reach the Kafka topic
         * and the consumer would never receive it.
         *
         * Hence we need flush().
         */
        producer.send(record);

        // use flush() to wait until sending completes
        producer.flush();
        producer.close();
    }
}

Running this program, you can see the sent message in the console-consumer CLI.
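As a side note, send() returns a Future<RecordMetadata>, so instead of flush() you can also block on that Future. A minimal sketch, assuming the same producer and record as above (it additionally requires importing RecordMetadata and java.util.concurrent.ExecutionException):

// send data - synchronous: block until the broker has acknowledged the record
try {
    RecordMetadata metadata = producer.send(record).get();
    System.out.println("Written to partition " + metadata.partition() + " at offset " + metadata.offset());
} catch (InterruptedException | ExecutionException e) {
    e.printStackTrace();
}
producer.close();   // close() also flushes any buffered records

Blocking on every send() hurts throughput, so this pattern is only suitable for demos or tests.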

1.2. Producer with Callback()

The Callback() function is executed after every record is sent. For example:

First, instantiate a Logger object:

 // create a logger
final Logger logger = LoggerFactory.getLogger(ProducerDemoCallback.class);

Then use Callback():

/**
 * send data with Callback()
 */
for (int i = 0; i < 10; i++) {
    // create a producer record
    ProducerRecord<String, String> record =
            new ProducerRecord<String, String>("first_topic", "message from java" + Integer.toString(i));

    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            // executes every time a record is successfully sent or an exception is thrown
            if (e == null) {
                // the record was sent successfully
                logger.info("Received new metadata. \n" +
                        "Topic: " + recordMetadata.topic() + "\n" +
                        "Partition: " + recordMetadata.partition() + "\n" +
                        "Offset: " + recordMetadata.offset() + "\n" +
                        "Timestamp: " + recordMetadata.timestamp());
            } else {
                logger.error("Error while producing", e);
            }
        }
    });
}

Part of the output is shown below:

[kafka-producer-network-thread | producer-1] INFO com.github.tang.kafka.tutorial1.ProducerDemoCallback - Received new metadata.
Topic: first_topic
Partition: 2
Offset: 21
Timestamp: 1565501879059
[kafka-producer-network-thread | producer-1] INFO com.github.tang.kafka.tutorial1.ProducerDemoCallback - Received new metadata.
Topic: first_topic
Partition: 2
Offset: 22
Timestamp: 1565501879075

1.3. Sending records with keys

None of the examples above carry a key, so messages are distributed to partitions in a round-robin fashion. Below is a producer example with keys; it simply uses the overloaded ProducerRecord constructor that also takes a key:

String key = "id_" + Integer.toString(i);

ProducerRecord<String, String> record =
        new ProducerRecord<String, String>(topic, key, "message from java" + Integer.toString(i));
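Records that share a key always land in the same partition (as long as the partition count does not change). A minimal sketch to observe this, assuming the producer, topic, and logger from the earlier examples and reusing the Callback pattern from section 1.2, with only three distinct keys:

for (int i = 0; i < 30; i++) {
    String key = "id_" + Integer.toString(i % 3);   // only three distinct keys
    ProducerRecord<String, String> record =
            new ProducerRecord<String, String>(topic, key, "message from java " + Integer.toString(i));

    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e == null) {
                // the same key is always logged with the same partition number
                logger.info("Key: " + key + " -> Partition: " + recordMetadata.partition());
            } else {
                logger.error("Error while producing", e);
            }
        }
    });
}
producer.flush();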

2. Consumer

2.1. Basic Consumer

Below is a basic consumer example:

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class ConsumerDemo {

    private static String bootstrapServers = "server:9092";
    private static String groupId = "my-forth-app";
    private static String topic = "first_topic";

    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(ConsumerDemo.class);

        /**
         * create Consumer properties
         *
         * Properties are available in the official documentation:
         * https://kafka.apache.org/documentation/#consumerconfigs
         */
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // create consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);

        // subscribe consumer to our topic(s)
        consumer.subscribe(Arrays.asList(topic));

        // poll for new data
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                logger.info("Key: " + record.key() + "\t" + "Value: " + record.value() + "\t" +
                        "Topic: " + record.topic() + "\t" + "Partition: " + record.partition());
            }
        }
    }
}

The output shows that, with auto.offset.reset set to earliest, the consumer reads one partition to the end before moving on to the next partition.

2.2. Consumer balancing

As mentioned earlier, consumers in the same consumer group automatically balance the load among themselves. Below, we start one consumer and then start a second one.

The first consumer's log shows that, after the second consumer joins, the partitions are reassigned: the first consumer goes from being responsible for three partitions (0, 1, 2) to only one partition (2).

The second consumer's log shows that, after joining, it becomes responsible for reading from two partitions (0 and 1).
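These reassignments appear in the client logs automatically. If you want to react to them in code, subscribe() also accepts a ConsumerRebalanceListener. A minimal sketch, assuming the consumer and logger from section 2.1 (it additionally requires importing ConsumerRebalanceListener, TopicPartition, and java.util.Collection):

consumer.subscribe(Arrays.asList(topic), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // called before a rebalance takes partitions away from this consumer
        logger.info("Partitions revoked: " + partitions);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // called after a rebalance assigns partitions to this consumer
        logger.info("Partitions assigned: " + partitions);
    }
});

onPartitionsRevoked() is a common place to commit offsets or flush state before the partitions are handed over to another consumer.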

2.3. Consumer with multiple threads

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class ConsumerDemoWithThreads {

    private static Logger logger = LoggerFactory.getLogger(ConsumerDemoWithThreads.class);

    public static void main(String[] args) {
        String bootstrapServers = "server:9092";
        String groupId = "my-fifth-app";
        String topic = "first_topic";

        // latch for dealing with multiple threads
        CountDownLatch latch = new CountDownLatch(1);

        ConsumerRunnable consumerRunnable = new ConsumerRunnable(latch, bootstrapServers, groupId, topic);
        Thread myConsumerThread = new Thread(consumerRunnable);
        myConsumerThread.start();

        // add a shutdown hook
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            logger.info("Caught shutdown hook");
            consumerRunnable.shutdown();
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            logger.info("Application has exited");
        }));

        try {
            latch.await();
        } catch (InterruptedException e) {
            logger.error("Application got interrupted", e);
        } finally {
            logger.info("Application is closing");
        }
    }

    private static class ConsumerRunnable implements Runnable {

        private CountDownLatch latch;
        private KafkaConsumer<String, String> consumer;
        private String bootstrapServers;
        private String topic;
        private String groupId;

        public ConsumerRunnable(CountDownLatch latch,
                                String bootstrapServers,
                                String groupId,
                                String topic) {
            this.latch = latch;
            this.bootstrapServers = bootstrapServers;
            this.topic = topic;
            this.groupId = groupId;
        }

        @Override
        public void run() {
            Properties properties = new Properties();
            properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            consumer = new KafkaConsumer<String, String>(properties);
            consumer.subscribe(Arrays.asList(topic));

            // poll for new data
            try {
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                    for (ConsumerRecord<String, String> record : records) {
                        logger.info("Key: " + record.key() + "\t" + "Value: " + record.value());
                        logger.info("Partition: " + record.partition() + "\t" + "Offset: " + record.offset());
                    }
                }
            } catch (WakeupException e) {
                logger.info("Received shutdown signal!");
            } finally {
                consumer.close();
                // tell our main code we're done with the consumer
                latch.countDown();
            }
        }

        public void shutdown() {
            // wakeup() is a special method that interrupts consumer.poll()
            // by making it throw a WakeupException
            consumer.wakeup();
        }
    }
}

2.4. Consumer with assign and seek

A consumer can use assign() to attach itself to a specific partition of a topic, and then use seek() to read records starting from a given offset. This approach is typically used to replay data or to fetch one specific record.

Based on the previous example, modify part of the run() method as follows:

// assign and seek are mostly used to replay data or fetch a specific message
// (requires importing org.apache.kafka.common.TopicPartition)

// assign
TopicPartition partitionToReadFrom = new TopicPartition(topic, 0);
long offsetToReadFrom = 15L;
consumer.assign(Arrays.asList(partitionToReadFrom));

// seek
consumer.seek(partitionToReadFrom, offsetToReadFrom);

int numberOfMessagesToRead = 5;
boolean keepOnReading = true;
int numberOfMessagesReadSoFar = 0;

// poll for new data
try {
    while (keepOnReading) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            numberOfMessagesReadSoFar += 1;
            logger.info("Key: " + record.key() + "\t" + "Value: " + record.value());
            logger.info("Partition: " + record.partition() + "\t" + "Offset: " + record.offset());
            if (numberOfMessagesReadSoFar >= numberOfMessagesToRead) {
                keepOnReading = false;
                break;
            }
        }
    }
} catch (WakeupException e) {
    logger.info("Received shutdown signal!");
} finally {
    consumer.close();
    // tell our main code we're done with the consumer
    latch.countDown();
}

Note that with this approach there is no need to specify a consumer group.
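If you want to replay a partition from its very beginning rather than from a fixed offset, the consumer also provides seekToBeginning(); a minimal sketch under the same assign-without-group setup:

// replay the assigned partition from its earliest available offset
consumer.assign(Arrays.asList(partitionToReadFrom));
consumer.seekToBeginning(Arrays.asList(partitionToReadFrom));

Similarly, seekToEnd() positions the consumer after the last record, so only newly arriving records are read.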

3. Bidirectional client compatibility

Since Kafka 0.10.2, Kafka clients and Kafka brokers are bidirectionally compatible. This is achieved by versioning the APIs: clients of different versions send different API versions, and the broker can handle requests from different API versions.

In other words:

  • An older client (e.g. version 1.1) can work normally with a newer broker (e.g. version 2.0)
  • A newer client (e.g. version 2.0) can work normally with an older broker (e.g. version 1.1)

The recommendation here is to always use the latest client library version.
