[Azure Event Hubs] Issue with the azure-spring-cloud-stream-binder-eventhubs client component: messages observed to arrive out of order in practice
Problem Description
Following the official Azure documentation (Send events to a specific partition: https://docs.azure.cn/zh-cn/event-hubs/event-hubs-availability-and-consistency?tabs=java#send-events-to-a-specific-partition), the project uses the component "azure-spring-cloud-stream-binder-eventhubs" to connect to Event Hubs and to send and consume message events. On the sending side, a for loop sends messages that carry a sequence number starting at 0, and each message header specifies a "Partition Key"; messages with the same partition key are routed to the same partition, which is supposed to guarantee their order.
However, when these messages are consumed on the consumer side, the sequence numbers printed in the log do not increase from 0. So which is it: did the sender already emit the messages out of order, were they stored out of order after reaching Event Hubs, or does the way the consumer processes them merely make the printed output look out of order?
Sender code:
public void testPushMessages(int mcount, String partitionKey) {
    for (int i = 0; i < mcount; i++) {
        // Messages sharing the same partition key are expected to land on the same partition.
        source.output().send(MessageBuilder.withPayload(partitionKey + mcount + i)
                .setHeaderIfAbsent(AzureHeaders.PARTITION_KEY, partitionKey)
                .build());
    }
}
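For context, the sender is exercised by invoking this method with a message count and a partition key; a call might look like the following (the argument values are assumptions chosen for illustration, not taken from the original test):

// Hypothetical invocation: 30 messages sharing the partition key "testKey5" (values assumed).
testPushMessages(30, "testKey5");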
Consumer code:
@StreamListener(Sink.INPUT)
public void onEvent(String message, @Header(AzureHeaders.CHECKPOINTER) Checkpointer checkpointer,
                    @Header(AzureHeaders.RAW_PARTITION_ID) String rawPartitionId,
                    @Header(AzureHeaders.PARTITION_KEY) String partitionKey) {
    // Manually checkpoint the message, then log the payload and its partition information.
    checkpointer.success()
            .doOnSuccess(s -> log.info("Message '{}' successfully check pointed.rawPartitionId={},partitionKey={}", message, rawPartitionId, partitionKey))
            .doOnError(s -> log.error("Checkpoint message got exception."))
            .subscribe();
}
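Manual checkpointing as used above (reading the Checkpointer header and calling checkpointer.success()) normally goes together with the binder's manual checkpoint mode. A minimal configuration sketch, assuming the Azure Event Hubs binder's property names and the legacy `input` binding of the Sink interface:

# Assumed sketch: switch the legacy Sink binding to manual checkpointing.
spring:
  cloud:
    stream:
      eventhub:
        bindings:
          input:
            consumer:
              checkpoint-mode: MANUAL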
Below is the printed log:
......,"data":"Message 'testKey4testMessage1' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage29' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage27' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage26' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage25' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage28' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage14' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage13' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage15' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage5' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage7' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage20' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage19' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage18' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage0' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage9' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage12' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
......,"data":"Message 'testKey5testMessage8' successfully check pointed.rawPartitionId=1,partition<*****>","xcptn":""}
The log shows that the messages were indeed all delivered to the same partition (rawPartitionId=1), but judging by the sequence numbers in the payloads they are out of order.
Problem Analysis
This behavior is tied to the fixedDelay setting, which specifies the fixed delay of the default poller, a periodic trigger; the earlier code sends and receives messages driven by this poller using the Send-based approach. The latest SDK no longer uses this method, so the test should be repeated using the new SDK's way of sending data.
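For reference, fixedDelay is the standard Spring Cloud Stream default-poller property; a minimal configuration sketch is shown below (the values are assumptions chosen only for illustration):

# Assumed illustration: the default poller fires every second and emits one message per poll.
spring:
  cloud:
    stream:
      poller:
        fixed-delay: 1000
        max-messages-per-poll: 1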
You can refer to the documentation for the new SDK: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-samples/azure-spring-cloud-sample-eventhubs-binder
The SDK version is:
<dependency>
    <groupId>com.azure.spring</groupId>
    <artifactId>azure-spring-cloud-stream-binder-eventhubs</artifactId>
    <version>2.4.0</version>
</dependency>
After following the official sample, a Supplier function is used to send messages instead of Send. Repeated tests show that, with a partition key specified, messages are sent in order and are also consumed in order. The test code and results follow.
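The Supplier/Consumer beans below are wired to Event Hubs through function bindings; a configuration sketch in the spirit of the linked sample might look as follows (all bracketed values are placeholders, and the exact property names should be checked against the sample for version 2.4.0):

# Assumed sketch modeled on the azure-spring-cloud-sample-eventhubs-binder sample; values are placeholders.
spring:
  cloud:
    azure:
      eventhub:
        connection-string: [eventhub-namespace-connection-string]
        checkpoint-storage-account: [storage-account]
        checkpoint-access-key: [storage-account-access-key]
        checkpoint-container: [storage-container]
    stream:
      function:
        definition: consume;supply
      bindings:
        consume-in-0:
          destination: [eventhub-name]
          group: [consumer-group]
        supply-out-0:
          destination: [eventhub-name]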
Sender code:
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

package com.azure.spring.sample.eventhubs.binder;

import com.azure.spring.integration.core.EventHubHeaders;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

import java.util.function.Supplier;

import static com.azure.spring.integration.core.EventHubHeaders.SEQUENCE_NUMBER;

@Configuration
public class EventProducerConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(EventProducerConfiguration.class);

    private int i = 0;

    @Bean
    public Supplier<Message<String>> supply() {
        return () -> {
            //LOGGER.info("Sending message, sequence " + i);
            String partitionKey = "info";
            LOGGER.info("Send message " + MessageBuilder.withPayload("hello world, " + i)
                    .setHeaderIfAbsent(EventHubHeaders.PARTITION_KEY, partitionKey).build());
            return MessageBuilder.withPayload("hello world, " + i++)
                    .setHeaderIfAbsent(EventHubHeaders.PARTITION_KEY, partitionKey).build();
        };
    }
}
Consumer code:
package com.ywt.demoEventhub;

import com.azure.spring.integration.core.EventHubHeaders;
import com.azure.spring.integration.core.api.reactor.Checkpointer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;

import java.util.function.Consumer;

import static com.azure.spring.integration.core.AzureHeaders.CHECKPOINTER;

@Configuration
public class EventConsume {

    private static final Logger LOGGER = LoggerFactory.getLogger(EventConsume.class);

    @Bean
    public Consumer<Message<String>> consume() {
        return message -> {
            Checkpointer checkpointer = (Checkpointer) message.getHeaders().get(CHECKPOINTER);
            LOGGER.info("New message received: '{}', partition key: {}, sequence number: {}, offset: {}, enqueued time: {}",
                    message.getPayload(),
                    message.getHeaders().get(EventHubHeaders.PARTITION_KEY),
                    message.getHeaders().get(EventHubHeaders.SEQUENCE_NUMBER),
                    message.getHeaders().get(EventHubHeaders.OFFSET),
                    message.getHeaders().get(EventHubHeaders.ENQUEUED_TIME)
            );
            checkpointer.success()
                    .doOnSuccess(success -> LOGGER.info("Message '{}' successfully checkpointed number '{}' ", message.getPayload(), message.getHeaders().get(EventHubHeaders.CHECKPOINTER)))
                    .doOnError(error -> LOGGER.error("Exception found", error))
                    .subscribe();
        };
    }
}
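To check ordering on the consumer side without relying on how log lines interleave, one option is to track the last SEQUENCE_NUMBER seen per partition key and flag any regression. This is only an illustrative sketch built on the headers already logged above; the helper class and its names are not part of the sample:

// Hypothetical helper (not part of the sample): remembers the last Event Hubs sequence number
// seen for each partition key and reports whether a newly received number is non-decreasing.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OrderingChecker {

    private final Map<String, Long> lastSequencePerKey = new ConcurrentHashMap<>();

    // Returns true when the sequence number is not lower than the previous one for this key.
    public boolean isInOrder(String partitionKey, long sequenceNumber) {
        Long previous = lastSequencePerKey.put(partitionKey, sequenceNumber);
        return previous == null || sequenceNumber >= previous;
    }
}

Inside the consume() lambda, this could be fed with the PARTITION_KEY and SEQUENCE_NUMBER header values that are already being logged.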
Producer log output
Consumer log output
References
Azure Spring Cloud Stream Binder for Event Hubs Code Sample shared library for Java: https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/spring/azure-spring-boot-samples/azure-spring-cloud-sample-eventhubs-binder
How to create a Spring Cloud Stream Binder application with Azure Event Hubs - Add sample code to implement basic event hub functionality: https://docs.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-azure-event-hub#add-sample-code-to-implement-basic-event-hub-functionality
[END]