https://github.com/rollno748/di-kafkameter/wiki#producer-elements

Introduction

DI-Kafkameter is a JMeter plugin that allows you to test and measure the performance of Apache Kafka.

Components

The DI-Kafkameter comprises two components:

  • Producer Component
    1. Kafka Producer Config
    2. Kafka Producer Sampler
  • Consumer Component
    1. Kafka Consumer Config
    2. Kafka Consumer Sampler

Producer Component (Publish a Message to a Topic)

To publish/send a message to a Kafka topic, you need to add the Producer components to the test plan.

  • The Kafka Producer Config is responsible for holding the connection information, which includes the security and other properties required to talk to the broker.
  • The Kafka Producer Sampler sends messages to the topic using the connection established by the Config element.

Right-click on Test Plan -> Add -> Config Element -> Kafka Producer Config

Provide a Variable name to export the connection object (it will be used in the Sampler element)

Provide the Kafka connection configs (a comma-separated list of brokers, e.g. broker1:9092,broker2:9092)

Provide a Client ID (make it unique, to identify where the messages are sent from)

Select the right security mechanism to connect to the brokers (this depends entirely on how security is configured on the Kafka cluster)

For JAAS security, you need to add the following key and value to the Additional Properties:
Config key: sasl.jaas.config
Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";
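
For example, a SCRAM-over-TLS setup would typically pair that JAAS entry with the matching protocol and mechanism keys in Additional Properties (the values below are illustrative; match them to how your cluster is secured):

    security.protocol   SASL_SSL
    sasl.mechanism      SCRAM-SHA-256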

Right-click on Test Plan -> Add -> Sampler -> Kafka Producer Sampler

Use the same Variable name that was defined in the Config element

Define the topic name to which you want to send the message (case sensitive)

Kafka Message - The original message to be pushed to the topic

Partition String (Optional) - Posts messages to a particular partition, identified by its partition number

Message Headers (Optional) - Adds headers to the messages being pushed (supports more than one header)
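
Under the hood, the sampler builds a standard Kafka ProducerRecord from these fields. As a rough illustration, here is a minimal plain-Java sketch of the equivalent call using the stock kafka-clients API (the broker address, client ID, topic, partition, and header values are assumptions for the example, not plugin defaults):

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.internals.RecordHeader;

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("client.id", "jmeter-producer-1");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic names are case sensitive; partition 0 stands in for the optional Partition String
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("my-topic", 0, null, "hello kafka");
                // One optional header, mirroring the sampler's Message Headers field
                record.headers().add(new RecordHeader("source", "jmeter".getBytes(StandardCharsets.UTF_8)));
                producer.send(record);
            }
        }
    }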

Consumer Component (Read a Message from a Topic)

To consume/read a message from a Kafka topic, you need to add the Consumer components to the test plan.

  • The Kafka Consumer Config is responsible for holding the connection information, which includes the security and other properties required to talk to the broker.
  • The Kafka Consumer Sampler reads messages from the topic using the connection established by the Config element.

Right-click on Test Plan -> Add -> Config Element -> Kafka Consumer Config

Provide a Variable name to export the connection object (it will be used in the Sampler element)

Provide the Kafka connection configs (a comma-separated list of brokers, e.g. broker1:9092,broker2:9092)

Provide a Group ID (make it unique, to define the consumer group your consumer belongs to)

Define the topic name from which you want to read messages (case sensitive)

No Of Messages to Poll - Defines the number of messages to read within a single request (defaults to 1)

Select the right security mechanism to connect to the brokers (this depends entirely on how security is configured on the Kafka cluster)

Auto Commit - Marks the message as read by committing the offset automatically once it is consumed

For JAAS security, you need to add the following key and value to the Additional Properties (paired with security.protocol and sasl.mechanism entries, as shown in the Producer section):
Config key: sasl.jaas.config
Config value: org.apache.kafka.common.security.scram.ScramLoginModule required username="<USERNAME>" password="<PASSWORD>";

Right-click on Test Plan -> Add -> Sampler -> Kafka Consumer Sampler

Use the same Variable name that was defined in the Config element

Poll timeout - Sets how long the consumer waits for records when polling the topic (defaults to 100 ms)

Commit Type - Defines how offsets are committed (Sync/Async)
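
Taken together, the consumer config and sampler behave much like this minimal plain-Java sketch built on the stock kafka-clients API (the broker address, group ID, and topic are assumptions for the example, not plugin defaults):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "jmeter-consumer-group");
            props.put("enable.auto.commit", "false"); // commit manually, as the Commit Type option implies
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // topic is case sensitive
                // 100 ms poll timeout, matching the sampler's default
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitSync(); // the "Sync" commit type; commitAsync() would be the "Async" variant
            }
        }
    }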

Producer Properties

Supported Producer properties that can be added to the Additional Properties field.

Property | Available Options | Default
acks | [0, 1, -1] | 1
batch.size | positive integer | 16384
bootstrap.servers | comma-separated host:port pairs | localhost:9092
buffer.memory | positive long | 33554432
client.id | string | ""
compression.type | [none, gzip, snappy, lz4, zstd] | none
connections.max.idle.ms | positive long | 540000
delivery.timeout.ms | positive long | 120000
enable.idempotence | [true, false] | false
interceptor.classes | fully-qualified class names | []
key.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer
linger.ms | non-negative integer | 0
max.block.ms | non-negative long | 60000
max.in.flight.requests.per.connection | positive integer | 5
max.request.size | positive integer | 1048576
metadata.fetch.timeout.ms | positive long | 60000
metadata.max.age.ms | positive long | 300000
partitioner.class | fully-qualified class name | org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes | positive integer | 32768
reconnect.backoff.ms | non-negative long | 50
request.timeout.ms | positive integer | 30000
retries | non-negative integer | 0
sasl.jaas.config | string | null
sasl.kerberos.kinit.cmd | string | /usr/bin/kinit
sasl.kerberos.min.time.before.relogin | positive long | 60000
sasl.kerberos.service.name | string | null
sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI
security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT
sender.flush.timeout.ms | non-negative long | 0
send.buffer.bytes | positive integer | 131072
value.serializer | fully-qualified class name | org.apache.kafka.common.serialization.StringSerializer
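
As an example of how these entries are used, a test that favors throughput over latency might add the following to the Additional Properties field (the values are illustrative starting points, not recommendations):

    batch.size         65536
    linger.ms          50
    compression.type   lz4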

Consumer Properties

Supported Consumer properties that can be added to the Additional Properties field.

Property | Available Options | Default
auto.commit.interval.ms | positive integer | 5000
auto.offset.reset | [earliest, latest, none] | latest
bootstrap.servers | comma-separated host:port pairs | localhost:9092
check.crcs | [true, false] | true
client.id | string | ""
connections.max.idle.ms | positive long | 540000
enable.auto.commit | [true, false] | true
exclude.internal.topics | [true, false] | true
fetch.max.bytes | positive long | 52428800
fetch.max.wait.ms | non-negative integer | 500
fetch.min.bytes | non-negative integer | 1
group.id | string | ""
heartbeat.interval.ms | positive integer | 3000
interceptor.classes | fully-qualified class names | []
isolation.level | [read_uncommitted, read_committed] | read_uncommitted
key.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes | positive integer | 1048576
max.poll.interval.ms | positive long | 300000
max.poll.records | positive integer | 500
metadata.fetch.timeout.ms | positive long | 60000
metadata.max.age.ms | positive long | 300000
receive.buffer.bytes | positive integer | 32768
reconnect.backoff.ms | non-negative long | 50
request.timeout.ms | positive integer | 30000
retry.backoff.ms | non-negative long | 100
sasl.jaas.config | string | null
sasl.kerberos.kinit.cmd | string | /usr/bin/kinit
sasl.kerberos.min.time.before.relogin | positive long | 60000
sasl.kerberos.service.name | string | null
sasl.mechanism | [GSSAPI, PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] | GSSAPI
security.protocol | [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] | PLAINTEXT
send.buffer.bytes | positive integer | 131072
session.timeout.ms | positive integer | 10000
value.deserializer | fully-qualified class name | org.apache.kafka.common.serialization.StringDeserializer
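
For instance, to re-read a topic from the beginning and pull larger batches per poll, entries like these could be added to the Additional Properties field (values are illustrative):

    auto.offset.reset   earliest
    max.poll.records    100
    fetch.min.bytes     1024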
