1. Producer

1.1. Basic Producer

First, use Maven to pull in the required dependencies. The Kafka version on our server is 2.12-2.3.0, and the pom.xml file is:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.github.tang</groupId>
    <artifactId>kafka-beginner</artifactId>
    <version>1.0</version>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.3.0</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-simple -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
            <version>1.7.26</version>
        </dependency>
    </dependencies>
</project>

Then create a Producer:

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerDemo {

    private static String bootstrapServers = "server_xxx:9092";

    public static void main(String[] args) {
        /**
         * create Producer properties
         *
         * Properties are available in the official documentation:
         * https://kafka.apache.org/documentation/#producerconfigs
         */
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // create a producer record
        ProducerRecord<String, String> record =
                new ProducerRecord<String, String>("first_topic", "message from java");

        // send data - asynchronous
        /**
         * send() is asynchronous: the record is not sent immediately.
         * Without waiting, the program would terminate right after send() returns,
         * so the record would never reach the Kafka topic and the consumer
         * would not receive it.
         *
         * Hence we need flush().
         */
        producer.send(record);

        // use flush() to wait until sending completes
        producer.flush();
        producer.close();
    }
}

Run this program and you can see the sent message in the console consumer CLI.
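For example, a console consumer attached to the same topic will print it (a sketch of the command; the broker address and topic name follow the values used above and may differ in your environment):

    kafka-console-consumer.sh --bootstrap-server server_xxx:9092 --topic first_topic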

1.2. Producer with Callback()

The Callback() is executed after each record is sent, for example:

First, instantiate a Logger object:

 // create a logger
final Logger logger = LoggerFactory.getLogger(ProducerDemoCallback.class);

Use the Callback():

/**
 * send data with Callback()
 */
for (int i = 0; i < 10; i++) {
    // create a producer record
    ProducerRecord<String, String> record =
            new ProducerRecord<String, String>("first_topic", "message from java" + Integer.toString(i));

    producer.send(record, new Callback() {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            // executes every time a record is successfully sent or an exception is thrown
            if (e == null) {
                // the record was sent successfully
                logger.info("Received new metadata. \n" +
                        "Topic: " + recordMetadata.topic() + "\n" +
                        "Partition: " + recordMetadata.partition() + "\n" +
                        "Offset: " + recordMetadata.offset() + "\n" +
                        "Timestamp: " + recordMetadata.timestamp());
            } else {
                logger.error("Error while producing", e);
            }
        }
    });
}

Part of the output is shown below:

[kafka-producer-network-thread | producer-1] INFO com.github.tang.kafka.tutorial1.ProducerDemoCallback - Received new metadata.
Topic: first_topic
Partition: 2
Offset: 21
Timestamp: 1565501879059
[kafka-producer-network-thread | producer-1] INFO com.github.tang.kafka.tutorial1.ProducerDemoCallback - Received new metadata.
Topic: first_topic
Partition: 2
Offset: 22
Timestamp: 1565501879075

1.3. Sending records with a key

The examples above do not use a key, so messages are distributed to partitions in a round-robin fashion. Below is a producer example with a key, using the overloaded ProducerRecord constructor that takes a key:

 String key = "id_" + Integer.toString(i);

ProducerRecord<String, String> record =
        new ProducerRecord<String, String>(topic, key, "message from java" + Integer.toString(i));
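Putting this together with the callback loop from section 1.2, records that share a key always land on the same partition. The following is a minimal sketch (it reuses the producer and logger defined earlier; the topic name is an assumption):

for (int i = 0; i < 10; i++) {
    String key = "id_" + Integer.toString(i);
    ProducerRecord<String, String> record =
            new ProducerRecord<String, String>("first_topic", key, "message from java" + Integer.toString(i));

    // with a key, the target partition is derived from a hash of the key,
    // so records with the same key always go to the same partition
    producer.send(record, (recordMetadata, e) -> {
        if (e == null) {
            logger.info("Key: " + key + " -> Partition: " + recordMetadata.partition());
        } else {
            logger.error("Error while producing", e);
        }
    });
}
producer.flush();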

2. Consumer

2.1. Basic Consumer

Below is a basic consumer example:

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class ConsumerDemo {

    private static String bootstrapServers = "server:9092";
    private static String groupId = "my-forth-app";
    private static String topic = "first_topic";

    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(ConsumerDemo.class);

        /**
         * create Consumer properties
         *
         * Properties are available in the official documentation:
         * https://kafka.apache.org/documentation/#consumerconfigs
         */
        Properties properties = new Properties();
        properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        // create consumer
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);

        // subscribe consumer to our topic(s)
        consumer.subscribe(Arrays.asList(topic));

        // poll for new data
        while (true) {
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMinutes(100));

            for (ConsumerRecord<String, String> record : records) {
                logger.info("Key: " + record.key() + "\t" + "Value: " + record.value() + "\t" +
                        "Topic: " + record.topic() + "\t" + "Partition: " + record.partition());
            }
        }
    }
}

From the partial output we can see that, with the offset reset set to earliest, the consumer reads one partition to the end before moving on to the next partition.

2.2. Consumer balancing

As mentioned earlier, the consumers within a consumer group automatically balance the load among themselves. Below we start one consumer and then start a second one.

From the first consumer's log we can see that, after the second consumer joins, the first consumer gets its partitions reassigned: it goes from owning three partitions (0, 1, 2) to owning a single partition (2).

From the second consumer's log we can see that, after joining, it takes over reading from two partitions (0 and 1).
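You can also inspect the resulting assignment with the standard consumer-groups tool (a sketch; the broker address and group name are taken from the examples above and may differ in your setup):

    kafka-consumer-groups.sh --bootstrap-server server:9092 --describe --group my-forth-app

The output lists each partition of the topic together with the consumer instance that currently owns it.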

2.3. Multithreaded Consumer

package com.github.tang.kafka.tutorial1;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class ConsumerDemoWithThreads {

    private static Logger logger = LoggerFactory.getLogger(ConsumerDemoWithThreads.class);

    public static void main(String[] args) {
        String bootstrapServers = "server:9092";
        String groupId = "my-fifth-app";
        String topic = "first_topic";

        // latch for dealing with multiple threads
        CountDownLatch latch = new CountDownLatch(1);

        ConsumerRunnable consumerRunnable = new ConsumerRunnable(latch,
                bootstrapServers,
                groupId,
                topic);

        Thread myConsumerThread = new Thread(consumerRunnable);
        myConsumerThread.start();

        // add a shutdown hook
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            logger.info("Caught shutdown hook");
            consumerRunnable.shutdown();
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            logger.info("Application has exited");
        }));

        try {
            latch.await();
        } catch (InterruptedException e) {
            logger.error("Application got interrupted", e);
        } finally {
            logger.info("Application is closing");
        }
    }

    private static class ConsumerRunnable implements Runnable {

        private CountDownLatch latch;
        private KafkaConsumer<String, String> consumer;
        private String bootstrapServers;
        private String topic;
        private String groupId;

        public ConsumerRunnable(CountDownLatch latch,
                                String bootstrapServers,
                                String groupId,
                                String topic) {
            this.latch = latch;
            this.bootstrapServers = bootstrapServers;
            this.topic = topic;
            this.groupId = groupId;
        }

        @Override
        public void run() {
            Properties properties = new Properties();
            properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            consumer = new KafkaConsumer<String, String>(properties);
            consumer.subscribe(Arrays.asList(topic));

            // poll for new data
            try {
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMinutes(100));

                    for (ConsumerRecord<String, String> record : records) {
                        logger.info("Key: " + record.key() + "\t" + "Value: " + record.value());
                        logger.info("Partition: " + record.partition() + "\t" + "Offset: " + record.offset());
                    }
                }
            } catch (WakeupException e) {
                logger.info("Received shutdown signal!");
            } finally {
                consumer.close();
                // tell our main code we're done with the consumer
                latch.countDown();
            }
        }

        public void shutdown() {
            // wakeup() is a special method that interrupts consumer.poll()
            // it makes poll() throw a WakeupException
            consumer.wakeup();
        }
    }
}

2.4. Consumer with Assign and Seek

A consumer can use assign() to take a specific partition of a topic, and then use seek() to read records starting from a given offset. This approach is typically used to replay data or to fetch one specific record.

To implement it, starting from the previous example, modify part of the run() method as follows:

// assign and seek are mostly used to replay data or fetch a specific message
// (requires: import org.apache.kafka.common.TopicPartition;)

// assign
TopicPartition partitionToReadFrom = new TopicPartition(topic, 0);
long offsetToReadFrom = 15L;
consumer.assign(Arrays.asList(partitionToReadFrom));

// seek
consumer.seek(partitionToReadFrom, offsetToReadFrom);

int numberOfMessagesToRead = 5;
boolean keepOnReading = true;
int numberOfMessagesReadSoFar = 0;

// poll for new data
try {
    while (keepOnReading) {
        ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofMinutes(100));

        for (ConsumerRecord<String, String> record : records) {
            numberOfMessagesReadSoFar += 1;
            logger.info("Key: " + record.key() + "\t" + "Value: " + record.value());
            logger.info("Partition: " + record.partition() + "\t" + "Offset: " + record.offset());

            if (numberOfMessagesReadSoFar >= numberOfMessagesToRead) {
                keepOnReading = false;
                break;
            }
        }
    }
} catch (WakeupException e) {
    logger.info("Received shutdown signal!");
} finally {
    consumer.close();
    // tell our main code we're done with the consumer
    latch.countDown();
}

Note that with this approach you do not need to specify a consumer group, and subscribe() is not called; assign() takes its place.
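Concretely, the consumer properties for assign/seek can be trimmed down to something like this (a minimal sketch, reusing the broker address from the earlier examples):

Properties properties = new Properties();
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "server:9092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// no GROUP_ID_CONFIG and no auto.offset.reset are required here:
// with assign()/seek() the consumer is not part of a group
// and the start offset is set explicitly
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(properties);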

3. Bidirectional client compatibility

Since Kafka 0.10.2, Kafka clients and Kafka brokers are bidirectionally compatible. This is achieved by versioning the API: clients of different versions send requests with different API versions, and the broker can handle requests from different API versions.

In other words:

  • an older client (e.g. version 1.1) can talk to a newer broker (e.g. version 2.0) without problems
  • a newer client (e.g. version 2.0) can talk to an older broker (e.g. version 1.1) without problems

The recommendation, therefore, is to always use the latest client library version.
