Spring Boot + Kafka Integration
The Spring Boot version used here is 2.0.4.
First, add the spring-kafka dependency in Maven:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
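No version is given here: with spring-boot-starter-parent (or the Spring Boot BOM), dependency management resolves the spring-kafka version that matches Boot 2.0.4. If you ever need to pin a different release, the managed spring-kafka.version property can be overridden, for example (the version number below is only an illustration):
<properties>
    <!-- Optional override; normally the Boot-managed version is the right one. -->
    <spring-kafka.version>2.1.8.RELEASE</spring-kafka.version>
</properties>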
Configure the producer in application-dev.properties:
#=============== producer =======================
spring.kafka.producer.bootstrap-servers=127.0.0.1:9092,127.0.0.1:9093
spring.kafka.producer.retries=1
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.properties.max.request.size=2097152
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
The producer sends messages to Kafka:
@Component
@Slf4j
public class KafkaSender {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    // Send a message to the testTopic topic
    public void send(String body) {
        kafkaTemplate.send("testTopic", body);
        log.info("Message sent, content: " + body);
    }
}
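KafkaTemplate.send() is asynchronous and returns a ListenableFuture, so the log line above only means the record was handed to the producer. As an optional extension (not part of the original example), a callback can report the partition and offset on success or the error on failure; this sketch assumes it sits inside the same KafkaSender class:
    // Optional variant (sketch): react to the asynchronous send result.
    public void sendWithCallback(String body) {
        kafkaTemplate.send("testTopic", body).addCallback(
                result -> log.info("Sent to partition " + result.getRecordMetadata().partition()
                        + ", offset " + result.getRecordMetadata().offset()),
                ex -> log.error("Send failed: " + ex.getMessage(), ex));
    }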
Configure the consumer:
#=============== consumer =======================
spring.kafka.consumer.bootstrap-servers=127.0.0.1:9092,127.0.0.1:9093
spring.kafka.consumer.group-id=0
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
#======= set consumer max fetch bytes to 2*1024*1024 =============
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
The consumer listens for messages on the testTopic topic (wrapped here in a KafkaReceiver component so the snippet is complete; the class name is just illustrative):
@Component
@Slf4j
public class KafkaReceiver {
    @KafkaListener(topics = { "testTopic" })
    public void listen(ConsumerRecord<?, ?> record) {
        log.info("Received message: " + record.value());
    }
}
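To check the round trip end to end, a small startup runner (a hypothetical addition, not in the original article) can push one message through the sender; the listener above should then log it, assuming a broker is reachable at the configured addresses:

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaSmokeTestConfig {

    // Send one test message on startup; the @KafkaListener above should log it shortly afterwards.
    @Bean
    public CommandLineRunner kafkaSmokeTest(KafkaSender kafkaSender) {
        return args -> kafkaSender.send("hello kafka");
    }
}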
During the integration, Spring Boot exposes most of the commonly used Kafka settings as dedicated configuration properties, but less common ones have to be passed through spring.kafka.consumer.properties.*, for example max.partition.fetch.bytes, which caps the amount of data returned from a single partition per fetch request.
Add the extra Kafka property in application.properties:
# Cap the data fetched per partition at 2 MB (2*1024*1024); the Kafka default is 1 MB
spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
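The same thing can also be done in code instead of properties. A possible sketch (KafkaConsumerConfig is a hypothetical class name) builds the consumer factory from the auto-configured KafkaProperties and adds the extra entry; it is typed ConsumerFactory<Object, Object> so it matches the generic signature that Spring Boot's listener-container auto-configuration expects:

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<Object, Object> consumerFactory(KafkaProperties kafkaProperties) {
        // Start from the properties Spring Boot already resolved from the application properties
        Map<String, Object> props = kafkaProperties.buildConsumerProperties();
        // Equivalent of spring.kafka.consumer.properties.max.partition.fetch.bytes=2097152
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 2 * 1024 * 1024);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}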
For more options, see the full Kafka property reference:
# APACHE KAFKA (KafkaProperties)
spring.kafka.admin.client-id= # ID to pass to the server when making requests. Used for server-side logging.
spring.kafka.admin.fail-fast=false # Whether to fail fast if the broker is not available on startup.
spring.kafka.admin.properties.*= # Additional admin-specific properties used to configure the client.
spring.kafka.admin.ssl.key-password= # Password of the private key in the key store file.
spring.kafka.admin.ssl.keystore-location= # Location of the key store file.
spring.kafka.admin.ssl.keystore-password= # Store password for the key store file.
spring.kafka.admin.ssl.keystore-type= # Type of the key store.
spring.kafka.admin.ssl.protocol= # SSL protocol to use.
spring.kafka.admin.ssl.truststore-location= # Location of the trust store file.
spring.kafka.admin.ssl.truststore-password= # Store password for the trust store file.
spring.kafka.admin.ssl.truststore-type= # Type of the trust store.
spring.kafka.bootstrap-servers= # Comma-delimited list of host:port pairs to use for establishing the initial connection to the Kafka cluster.
spring.kafka.client-id= # ID to pass to the server when making requests. Used for server-side logging.
spring.kafka.consumer.auto-commit-interval= # Frequency with which the consumer offsets are auto-committed to Kafka if 'enable.auto.commit' is set to true.
spring.kafka.consumer.auto-offset-reset= # What to do when there is no initial offset in Kafka or if the current offset no longer exists on the server.
spring.kafka.consumer.bootstrap-servers= # Comma-delimited list of host:port pairs to use for establishing the initial connection to the Kafka cluster.
spring.kafka.consumer.client-id= # ID to pass to the server when making requests. Used for server-side logging.
spring.kafka.consumer.enable-auto-commit= # Whether the consumer's offset is periodically committed in the background.
spring.kafka.consumer.fetch-max-wait= # Maximum amount of time the server blocks before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by "fetch.min.bytes".
spring.kafka.consumer.fetch-min-size= # Minimum amount of data, in bytes, the server should return for a fetch request.
spring.kafka.consumer.group-id= # Unique string that identifies the consumer group to which this consumer belongs.
spring.kafka.consumer.heartbeat-interval= # Expected time between heartbeats to the consumer coordinator.
spring.kafka.consumer.key-deserializer= # Deserializer class for keys.
spring.kafka.consumer.max-poll-records= # Maximum number of records returned in a single call to poll().
spring.kafka.consumer.properties.*= # Additional consumer-specific properties used to configure the client.
spring.kafka.consumer.ssl.key-password= # Password of the private key in the key store file.
spring.kafka.consumer.ssl.keystore-location= # Location of the key store file.
spring.kafka.consumer.ssl.keystore-password= # Store password for the key store file.
spring.kafka.consumer.ssl.keystore-type= # Type of the key store.
spring.kafka.consumer.ssl.protocol= # SSL protocol to use.
spring.kafka.consumer.ssl.truststore-location= # Location of the trust store file.
spring.kafka.consumer.ssl.truststore-password= # Store password for the trust store file.
spring.kafka.consumer.ssl.truststore-type= # Type of the trust store.
spring.kafka.consumer.value-deserializer= # Deserializer class for values.
spring.kafka.jaas.control-flag=required # Control flag for login configuration.
spring.kafka.jaas.enabled=false # Whether to enable JAAS configuration.
spring.kafka.jaas.login-module=com.sun.security.auth.module.Krb5LoginModule # Login module.
spring.kafka.jaas.options= # Additional JAAS options.
spring.kafka.listener.ack-count= # Number of records between offset commits when ackMode is "COUNT" or "COUNT_TIME".
spring.kafka.listener.ack-mode= # Listener AckMode. See the spring-kafka documentation.
spring.kafka.listener.ack-time= # Time between offset commits when ackMode is "TIME" or "COUNT_TIME".
spring.kafka.listener.client-id= # Prefix for the listener's consumer client.id property.
spring.kafka.listener.concurrency= # Number of threads to run in the listener containers.
spring.kafka.listener.idle-event-interval= # Time between publishing idle consumer events (no data received).
spring.kafka.listener.log-container-config= # Whether to log the container configuration during initialization (INFO level).
spring.kafka.listener.monitor-interval= # Time between checks for non-responsive consumers. If a duration suffix is not specified, seconds will be used.
spring.kafka.listener.no-poll-threshold= # Multiplier applied to "pollTimeout" to determine if a consumer is non-responsive.
spring.kafka.listener.poll-timeout= # Timeout to use when polling the consumer.
spring.kafka.listener.type=single # Listener type.
spring.kafka.producer.acks= # Number of acknowledgments the producer requires the leader to have received before considering a request complete.
spring.kafka.producer.batch-size= # Default batch size in bytes.
spring.kafka.producer.bootstrap-servers= # Comma-delimited list of host:port pairs to use for establishing the initial connection to the Kafka cluster.
spring.kafka.producer.buffer-memory= # Total bytes of memory the producer can use to buffer records waiting to be sent to the server.
spring.kafka.producer.client-id= # ID to pass to the server when making requests. Used for server-side logging.
spring.kafka.producer.compression-type= # Compression type for all data generated by the producer.
spring.kafka.producer.key-serializer= # Serializer class for keys.
spring.kafka.producer.properties.*= # Additional producer-specific properties used to configure the client.
spring.kafka.producer.retries= # When greater than zero, enables retrying of failed sends.
spring.kafka.producer.ssl.key-password= # Password of the private key in the key store file.
spring.kafka.producer.ssl.keystore-location= # Location of the key store file.
spring.kafka.producer.ssl.keystore-password= # Store password for the key store file.
spring.kafka.producer.ssl.keystore-type= # Type of the key store.
spring.kafka.producer.ssl.protocol= # SSL protocol to use.
spring.kafka.producer.ssl.truststore-location= # Location of the trust store file.
spring.kafka.producer.ssl.truststore-password= # Store password for the trust store file.
spring.kafka.producer.ssl.truststore-type= # Type of the trust store.
spring.kafka.producer.transaction-id-prefix= # When non empty, enables transaction support for producer.
spring.kafka.producer.value-serializer= # Serializer class for values.
spring.kafka.properties.*= # Additional properties, common to producers and consumers, used to configure the client.
spring.kafka.ssl.key-password= # Password of the private key in the key store file.
spring.kafka.ssl.keystore-location= # Location of the key store file.
spring.kafka.ssl.keystore-password= # Store password for the key store file.
spring.kafka.ssl.keystore-type= # Type of the key store.
spring.kafka.ssl.protocol= # SSL protocol to use.
spring.kafka.ssl.truststore-location= # Location of the trust store file.
spring.kafka.ssl.truststore-password= # Store password for the trust store file.
spring.kafka.ssl.truststore-type= # Type of the trust store.
spring.kafka.template.default-topic= # Default topic to which messages are sent.
Official documentation:
https://docs.spring.io/spring-boot/docs/2.0.4.RELEASE/reference/htmlsingle/#common-application-properties