[Repost] Welcome to the di-kafkameter wiki!
https://github.com/rollno748/di-kafkameter/wiki#producer-elements
Introduction
DI-Kafkameter is a JMeter plugin that allows you to test and measure the performance of Apache Kafka.
Components
DI-Kafkameter comprises two components:
- Producer Component
  - Kafka Producer Config
  - Kafka Producer Sampler
- Consumer Component
  - Kafka Consumer Config
  - Kafka Consumer Sampler
Producer Component (Publish a Message to a Topic)
To publish a message to a Kafka topic, you need to add the Producer components to the test plan.
- The Kafka Producer Config holds the connection information, including the security settings and other properties required to talk to the broker.
- The Kafka Producer Sampler sends messages to the topic using the connection established by the Config element.

Right click on Test Plan -> Add -> Config Element -> Kafka Producer Config
- Provide a Variable name to export the connection object (this will be referenced in the Sampler element).
- Provide the Kafka connection configs (a comma-separated list of brokers).
- Provide a Client ID (make it unique, to identify where you are sending messages from).
- Select the right security mechanism to connect to the brokers (this depends entirely on how security is configured on the Kafka cluster).
- For JAAS security, add the key and value below to the Additional Properties:
  Config key: sasl.jaas.config
  Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";
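
For reference, the sketch below shows roughly what such a configuration amounts to with the plain kafka-clients Java API. This is a hedged illustration, not the plugin's actual implementation: the class name, broker addresses, client ID, and SASL settings are all placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerConfigSketch {
    public static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        // Comma-separated broker list, as entered in the Config element (placeholder hosts)
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        // Unique Client ID identifying where the messages are sent from
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "jmeter-producer-1");
        // String serializers, matching the plugin defaults in the table below
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // JAAS/SASL security, mirroring the Additional Properties entry above
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"<USERNAME>\" password=\"<PASSWORD>\";");
        return new KafkaProducer<>(props);
    }
}
```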

Right click on Test Plan -> Add -> Sampler -> Kafka Producer Sampler
- Use the same Variable name that was defined in the Config element.
- Define the topic name to send the message to (case sensitive).
- Kafka Message - the message to be pushed to the topic.
- Partition String (optional) - posts the message to a particular partition, given the partition number.
- Message Headers (optional) - adds headers to the messages being pushed (more than one header is supported); see the sketch after this list.
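
These sampler fields map naturally onto a ProducerRecord in the kafka-clients API. A minimal sketch, assuming the standard Java client; the topic name, partition number, and header values here are illustrative:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerSamplerSketch {
    public static void send(KafkaProducer<String, String> producer) throws Exception {
        // Topic name is case sensitive; partition 0 stands in for the optional Partition String
        ProducerRecord<String, String> record =
                new ProducerRecord<>("my-topic", 0, null, "hello from JMeter");
        // Optional message headers; more than one header is supported
        record.headers().add("traceId", "abc-123".getBytes(StandardCharsets.UTF_8));
        record.headers().add("source", "jmeter".getBytes(StandardCharsets.UTF_8));
        // Block on the send so a result (topic/partition/offset) is available to report
        RecordMetadata meta = producer.send(record).get();
        System.out.printf("sent to %s-%d@%d%n", meta.topic(), meta.partition(), meta.offset());
    }
}
```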
Consumer Component (Read a Message from a Topic)
To consume/read a message from a Kafka topic, you need to add the Consumer components to the test plan.
- The Kafka Consumer Config holds the connection information, including the security settings and other properties required to talk to the broker.
- The Kafka Consumer Sampler reads messages from the topic using the connection established by the Config element.

Right click on Test Plan -> Add -> Config Element -> Kafka Consumer Config
- Provide a Variable name to export the connection object (this will be referenced in the Sampler element).
- Provide the Kafka connection configs (a comma-separated list of brokers).
- Provide a Group ID (to define the consumer group your consumer belongs to).
- Define the topic name to read messages from (case sensitive).
- No Of Messages to Poll - the number of messages to read within a single request (defaults to 1).
- Auto Commit - marks the offset as read once the message is consumed.
- Select the right security mechanism to connect to the brokers (this depends entirely on how security is configured on the Kafka cluster).
- For JAAS security, add the key and value below to the Additional Properties:
  Config key: sasl.jaas.config
  Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";
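
Again for reference, a minimal sketch of an equivalent consumer setup with the plain kafka-clients Java API; the group ID, topic, and security values are placeholders, and the plugin's internal wiring may differ:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerConfigSketch {
    public static KafkaConsumer<String, String> buildConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        // Group ID defines which consumer group this consumer belongs to
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "jmeter-consumer-group");
        // Auto Commit: true marks offsets as read automatically once messages are consumed
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Same JAAS/SASL entries as for the producer
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"<USERNAME>\" password=\"<PASSWORD>\";");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // topic is case sensitive
        return consumer;
    }
}
```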

Right click on Test Plan -> Add -> Sampler -> Kafka Consumer Sampler
- Use the same Variable name that was defined in the Config element.
- Poll Timeout - the polling timeout for the consumer to read from the topic (defaults to 100 ms).
- Commit Type - the commit type (Sync/Async); see the sketch after this list.
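
In kafka-clients terms, Poll Timeout and Commit Type correspond to the poll duration and the choice between commitSync and commitAsync. A minimal sketch, assuming the standard Java client:

```java
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSamplerSketch {
    public static void readOnce(KafkaConsumer<String, String> consumer, boolean syncCommit) {
        // Poll Timeout: how long the consumer waits for records (100 ms default)
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d key=%s value=%s%n",
                    record.offset(), record.key(), record.value());
        }
        // Commit Type: Sync blocks until the broker acknowledges the offsets; Async does not
        if (syncCommit) {
            consumer.commitSync();
        } else {
            consumer.commitAsync();
        }
    }
}
```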
Producer Properties
Supported Producer properties that can be added to the Additional Properties field.
| Property | Available Options | Default |
|---|---|---|
| acks | [0, 1, -1] | 1 |
| batch.size | positive integer | 16384 |
| bootstrap.servers | comma-separated host:port pairs | localhost:9092 |
| buffer.memory | positive long | 33554432 |
| client.id | string | "" |
| compression.type | [none, gzip, snappy, lz4, zstd] | none |
| connections.max.idle.ms | positive long | 540000 |
| delivery.timeout.ms | positive long | 120000 |
| enable.idempotence | [true, false] | false |
| interceptor.classes | fully-qualified class names | [] |
| key.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer |
| linger.ms | non-negative integer | 0 |
| max.block.ms | non-negative long | 60000 |
| max.in.flight.requests.per.connection | positive integer | 5 |
| max.request.size | positive integer | 1048576 |
| metadata.fetch.timeout.ms | positive long | 60000 |
| metadata.max.age.ms | positive long | 300000 |
| partitioner.class | fully-qualified class name | org.apache.kafka.clients.producer.internals.DefaultPartitioner |
| receive.buffer.bytes | positive integer | 32768 |
| reconnect.backoff.ms | non-negative long | 50 |
| request.timeout.ms | positive integer | 30000 |
| retries | non-negative integer | 0 |
| sasl.jaas.config | string | null |
| sasl.kerberos.kinit.cmd | string | /usr/bin/kinit |
| sasl.kerberos.min.time.before.relogin | positive long | 60000 |
| sasl.kerberos.service.name | string | null |
| sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI |
| security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT |
| sender.flush.timeout.ms | non-negative long | 0 |
| send.buffer.bytes | positive integer | 131072 |
| value.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer |
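
For example, a throughput-oriented test might override a few of the defaults above. The values below are illustrative only, expressed with the kafka-clients ProducerConfig constants for clarity; in the plugin you would enter the same keys and values in the Additional Properties field:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ThroughputTuningSketch {
    // Illustrative overrides of the defaults in the table above; values are examples only
    public static Properties throughputOverrides() {
        Properties props = new Properties();
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // default: none
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");     // default: 16384
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");         // default: 0
        return props;
    }
}
```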
Consumer Properties
Supported Consumer properties that can be added to the Additional Properties field.
| Property | Available Options | Default |
|---|---|---|
| auto.commit.interval.ms | positive integer | 5000 |
| auto.offset.reset | [earliest, latest, none] | latest |
| bootstrap.servers | comma-separated host:port pairs | localhost:9092 |
| check.crcs | [true, false] | true |
| client.id | string | "" |
| connections.max.idle.ms | positive long | 540000 |
| enable.auto.commit | [true, false] | true |
| exclude.internal.topics | [true, false] | true |
| fetch.max.bytes | positive long | 52428800 |
| fetch.max.wait.ms | non-negative integer | 500 |
| fetch.min.bytes | non-negative integer | 1 |
| group.id | string | "" |
| heartbeat.interval.ms | positive integer | 3000 |
| interceptor.classes | fully-qualified class names | [] |
| isolation.level | [read_uncommitted, read_committed] | read_uncommitted |
| key.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer |
| max.partition.fetch.bytes | positive integer | 1048576 |
| max.poll.interval.ms | positive long | 300000 |
| max.poll.records | positive integer | 500 |
| metadata.max.age.ms | positive long | 300000 |
| metadata.fetch.timeout.ms | positive long | 60000 |
| receive.buffer.bytes | positive integer | 32768 |
| reconnect.backoff.ms | non-negative long | 50 |
| request.timeout.ms | positive integer | 30000 |
| retry.backoff.ms | non-negative long | 100 |
| sasl.jaas.config | string | null |
| sasl.kerberos.kinit.cmd | string | /usr/bin/kinit |
| sasl.kerberos.min.time.before.relogin | positive long | 60000 |
| sasl.kerberos.service.name | string | null |
| sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI |
| security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT |
| send.buffer.bytes | positive integer | 131072 |
| session.timeout.ms | positive integer | 10000 |
| value.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer |
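
Similarly, a consumer test that replays a topic from the beginning might override a few of these defaults. The values are again illustrative only; in the plugin you would enter the same keys and values in the Additional Properties field:

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerTuningSketch {
    // Illustrative overrides of the defaults in the table above; values are examples only
    public static Properties replayOverrides() {
        Properties props = new Properties();
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // default: latest
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");       // default: 500
        props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "250");      // default: 500
        return props;
    }
}
```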