4.1. Using Spring for Apache Kafka

This section offers detailed explanations of the various concerns that impact using Spring for Apache Kafka. For a quick but less detailed introduction, see Quick Tour for the Impatient.

4.1.1. Configuring Topics

If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic @Bean for each topic to the application context. The following example shows how to do so:

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("thing1", 10, (short) 2);
}

@Bean
public NewTopic topic2() {
    return new NewTopic("thing2", 10, (short) 2);
}

By default, if the broker is not available, a message is logged, but the context continues to load.

You can programmatically invoke the admin’s initialize() method to try again later.

If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true.

The context then fails to initialize.
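
As a sketch, the admin bean from the earlier example could be configured this way to treat a missing broker as fatal (the broker address here is illustrative); alternatively, initialize() can be called later to retry topic creation:

@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    KafkaAdmin admin = new KafkaAdmin(configs);
    // Fail context initialization if the broker cannot be contacted
    admin.setFatalIfBrokerNotAvailable(true);
    return admin;
}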

If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if an existing topic is found to have fewer partitions than NewTopic.numPartitions.

For more advanced features, such as assigning partitions to replicas, you can use the AdminClient directly.

The following example shows how to do so:

@Autowired
private KafkaAdmin admin;

...

    AdminClient client = AdminClient.create(admin.getConfig());
    ...
    client.close();

4.1.2. Sending Messages

This section covers how to send messages.

Using KafkaTemplate

This section covers how to use KafkaTemplate to send messages.

Overview

The KafkaTemplate wraps a producer and provides convenience methods to send data to Kafka topics.

The following listing shows the relevant methods from KafkaTemplate:

ListenableFuture<SendResult<K, V>> sendDefault(V data);

ListenableFuture<SendResult<K, V>> sendDefault(K key, V data);

ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, K key, V data);

ListenableFuture<SendResult<K, V>> sendDefault(Integer partition, Long timestamp, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, V data);

ListenableFuture<SendResult<K, V>> send(String topic, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, K key, V data);

ListenableFuture<SendResult<K, V>> send(String topic, Integer partition, Long timestamp, K key, V data);

ListenableFuture<SendResult<K, V>> send(ProducerRecord<K, V> record);

ListenableFuture<SendResult<K, V>> send(Message<?> message);

Map<MetricName, ? extends Metric> metrics();

List<PartitionInfo> partitionsFor(String topic);

<T> T execute(ProducerCallback<K, V, T> callback);

// Flush the producer.

void flush();

interface ProducerCallback<K, V, T> {

    T doInKafka(Producer<K, V> producer);

}

See the Javadoc for more detail.

The sendDefault API requires that a default topic has been provided to the template.

The API takes in a timestamp as a parameter and stores this timestamp in the record.

How the user-provided timestamp is stored depends on the timestamp type configured on the Kafka topic.

If the topic is configured to use CREATE_TIME, the user-specified timestamp is recorded (or generated if not specified).

If the topic is configured to use LOG_APPEND_TIME, the user-specified timestamp is ignored and the broker adds in the local broker time.
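
Assuming a KafkaTemplate<Integer, String> with a default topic set (the topic name, key, and value here are illustrative), the following sketch sends a record with an explicit timestamp:

template.setDefaultTopic("thing1");
// Partition 0, an explicit creation-time timestamp, key 42, value "hello"
template.sendDefault(0, System.currentTimeMillis(), 42, "hello");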

The metrics and partitionsFor methods delegate to the same methods on the underlying Producer.

The execute method provides direct access to the underlying Producer.
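
For example, a minimal sketch of execute() that works with the raw Producer directly (the topic, key, and value are illustrative):

template.execute(producer -> {
    // Direct access to the underlying Kafka Producer
    producer.send(new ProducerRecord<>("thing1", 42, "direct"));
    return null;
});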

To use the template, you can configure a producer factory and provide it in the template’s constructor. The following example shows how to do so:

@Bean
public ProducerFactory<Integer, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    // The key serializer must match the Integer key type of the factory
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // See https://kafka.apache.org/documentation/#producerconfigs for more properties
    return props;
}

@Bean
public KafkaTemplate<Integer, String> kafkaTemplate() {
    return new KafkaTemplate<Integer, String>(producerFactory());
}

You can also configure the template by using standard <bean/> definitions.

Then, to use the template, you can invoke one of its methods.

When you use the methods with a Message<?> parameter, the topic, partition, and key information is provided in a message header that includes the following items:

KafkaHeaders.TOPIC

KafkaHeaders.PARTITION_ID

KafkaHeaders.MESSAGE_KEY

KafkaHeaders.TIMESTAMP

The message payload is the data.
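
The following sketch (topic, key, partition, and payload are illustrative) builds such a Message<?> with Spring's MessageBuilder and sends it:

Message<String> message = MessageBuilder
        .withPayload("the data")
        .setHeader(KafkaHeaders.TOPIC, "thing1")
        .setHeader(KafkaHeaders.MESSAGE_KEY, 42)
        .setHeader(KafkaHeaders.PARTITION_ID, 0)
        .build();
template.send(message);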

Optionally, you can configure the KafkaTemplate with a ProducerListener to get an asynchronous callback with the results of the send (success or failure) instead of waiting for the Future to complete.

The following listing shows the definition of the ProducerListener interface:

public interface ProducerListener<K, V> {

    void onSuccess(String topic, Integer partition, K key, V value, RecordMetadata recordMetadata);

    void onError(String topic, Integer partition, K key, V value, Exception exception);

    boolean isInterestedInSuccess();

}

By default, the template is configured with a LoggingProducerListener, which logs errors and does nothing when the send is successful.

onSuccess is called only if isInterestedInSuccess returns true.

For convenience, the abstract ProducerListenerAdapter is provided in case you want to implement only one of the methods. It returns false for isInterestedInSuccess.
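
As a sketch (the class name and logging are illustrative), a listener interested only in failures can extend ProducerListenerAdapter and be registered on the template:

public class FailureLoggingProducerListener extends ProducerListenerAdapter<Integer, String> {

    @Override
    public void onError(String topic, Integer partition, Integer key, String value, Exception exception) {
        // isInterestedInSuccess() already returns false, so only failed sends arrive here
        System.err.println("Send to topic " + topic + " failed: " + exception.getMessage());
    }

}

template.setProducerListener(new FailureLoggingProducerListener());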

Notice that the send methods return a ListenableFuture. You can register a callback with the listener to receive the result of the send asynchronously. The following example shows how to do so:

ListenableFuture<SendResult<Integer, String>> future = template.send("thing1", "something");
future.addCallback(new ListenableFutureCallback<SendResult<Integer, String>>() {

    @Override
    public void onSuccess(SendResult<Integer, String> result) {
        ...
    }

    @Override
    public void onFailure(Throwable ex) {
        ...
    }

});

SendResult has two properties, a ProducerRecord and RecordMetadata.

See the Kafka API documentation for information about those objects.

If you wish to block the sending thread to await the result, you can invoke the future’s get() method.

You may wish to invoke flush() before waiting or, for convenience, the template has a constructor with an autoFlush parameter that causes the template to flush() on each send.

Note, however, that flushing likely significantly reduces performance.
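
For example, a minimal sketch of a template created with autoFlush enabled, building on the producerFactory() bean defined earlier (the bean name is illustrative):

@Bean
public KafkaTemplate<Integer, String> autoFlushKafkaTemplate() {
    // The second constructor argument enables flush() after every send
    return new KafkaTemplate<>(producerFactory(), true);
}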

Examples

This section shows examples of sending messages to Kafka:

Non Blocking (Async)

public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);

    ListenableFuture<SendResult<String, String>> future = template.send(record);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

        @Override
        public void onSuccess(SendResult<String, String> result) {
            handleSuccess(data);
        }

        @Override
        public void onFailure(Throwable ex) {
            handleFailure(data, record, ex);
        }

    });
}
Blocking (Sync)
public void sendToKafka(final MyOutputData data) {
    final ProducerRecord<String, String> record = createRecord(data);

    try {
        template.send(record).get(10, TimeUnit.SECONDS);
        handleSuccess(data);
    }
    catch (ExecutionException e) {
        handleFailure(data, record, e.getCause());
    }
    catch (TimeoutException | InterruptedException e) {
        handleFailure(data, record, e);
    }
}
