Simulating load tests by capping the virtual machines at different bandwidths

---------Kafka stress testing--------------
-----1. The company's production Kafka cluster: 500 GB disk per broker, 3 brokers in total, logs retained for 7 days.
        1.1 Version: 1.1.0

-----2. Stress-testing Kafka.
        2.1 Use the perf tools that ship with Kafka: bin/kafka-producer-perf-test.sh
            Parameter reference:
                --num-records  : total number of messages to send
                --record-size  : size of each message, in bytes
                --throughput   : messages to send per second; set to -1 to disable throttling and measure the producer's maximum throughput
                --producer-props: bootstrap.servers=s201:9092,s202:9092,s203:9092
                --print-metrics : print all metrics at the end of the run
            Consumer perf command: bin/kafka-consumer-perf-test.sh
                --broker-list : list of broker nodes
                --fetch-size  : amount of data fetched per request
                --messages    : total number of messages to consume
                --threads     : number of processing threads, default 10
            Consume the topic: kafka-console-consumer.sh --topic test_producer3 --bootstrap-server s201:9092,s202:9092,s203:9092 --group aa --from-beginning
            Inspect the consumer group: kafka-consumer-groups.sh --bootstrap-server s201:9092,s202:9092,s203:9092 --group aa --describe
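
            The scenarios below assume topics with the required partition/replica layout already exist. They can be created with kafka-topics.sh, or programmatically; a minimal AdminClient sketch (the topic names and partition/replica counts are the ones tested below, everything else is an assumption):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Arrays;
import java.util.Properties;

public class CreateTestTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "s201:9092,s202:9092,s203:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions / replication factor 2, and 1 partition / replication factor 1,
            // matching scenarios 2.2.1.1 and 2.2.1.3 below.
            admin.createTopics(Arrays.asList(
                    new NewTopic("test_producer3", 3, (short) 2),
                    new NewTopic("test_producer1_500", 1, (short) 1)))
                 .all().get(); // block until the brokers acknowledge
        }
    }
}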
        2.2 Simulated scenarios:

            2.2.1 Simulating a 4 Mbit/s link (ideal: 512 KB/s)

                2.2.1.1 Topic: 3 partitions, replication factor 2; send 100k records at 1,000 records/s
                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 1000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: producer-topic-metrics:record-error-total:{client-id=producer-1, topic=test_producer3} : 32080.000 (failed sends: 32,080)
                            producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 67920.000 (successful sends: 67,920)
                            3310 records sent, 660.2 records/sec (0.64 MB/sec), 30282.4 ms avg latency, 31107.0 max latency.
                            100000 records sent, 766.442099 records/sec (0.75 MB/sec), 17785.39 ms avg latency, 32738.00 ms max latency, 29960 ms 50th, 30679 ms 95th, 31056 ms 99th, 32285 ms 99.9th.

                            Of the 100k records submitted, 32,080 failed with send timeouts. Peak throughput was 0.75 MB/s, average write latency 17,785.39 ms, maximum latency 32,738 ms.
                            Conclusion: on a 4 Mbit/s link the cluster cannot sustain 1,000 records/s; the ceiling is a little over 700 records per second.

                          
                    Consumer command: bin/kafka-consumer-perf-test.sh --new-consumer --topic test_producer3 --broker-list s201:9092,s202:9092,s203:9092 --fetch-size 10000 --messages 100000
                    Result (columns: start time, end time, total MB consumed, MB consumed per second, total messages consumed, messages consumed per second, rebalance time in ms, fetch time in ms, fetch MB/s, fetch messages/s):
                    start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
                    2021-12-30 23:00:11, 2021-12-30 23:02:50, 66.3281, 0.4187, 67920, 428.7446, 10, 158406, 0.4187, 428.771
                    (Only the 67,920 successfully produced records were available to consume.)

                          
                2.2.1.2 Topic: 3 partitions, replication factor 2; send 100k records at 500 records/s
                    Command: bin/kafka-producer-perf-test.sh --topic test_producer3_500 --num-records 100000 --record-size 1024 --throughput 500 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3_500}  : 100000.000
                            100000 records sent, 459.041979 records/sec (0.45 MB/sec), 5634.44 ms avg latency, 18407.00 ms max latency, 4029 ms 50th, 15754 ms 95th, 18142 ms 99th, 18313 ms 99.9th.
                            All 100k records were written successfully: throughput 0.45 MB/s, average latency 5,634.44 ms, maximum latency 18,407 ms.
                
                2.2.1.3 Topic: 1 partition, replication factor 1; send 100k records at 500 records/s
                    Command: bin/kafka-producer-perf-test.sh --topic test_producer1_500 --num-records 100000 --record-size 1024 --throughput 500 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: 50000 records sent, 465.826936 records/sec (0.45 MB/sec), 3798.95 ms avg latency, 7406.00 ms max latency, 3802 ms 50th, 6999 ms 95th, 7305 ms 99th, 7379 ms 99.9th.

            2.2.2 Simulating a 10 Mbit/s link (ideal: 1.25 MB/s)

                2.2.2.1 Topic: 3 partitions, replication factor 2; send 100k records at 1,000 records/s
                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 1000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics

                    Result: producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 100000.000
                            100000 records sent, 999.880014 records/sec (0.98 MB/sec), 19.48 ms avg latency, 262.00 ms max latency, 11 ms 50th, 54 ms 95th, 61 ms 99th, 123 ms 99.9th.
                            All 100k records were written successfully; throughput 0.98 MB/s.

                2.2.2.2 Topic: 3 partitions, replication factor 2; send 100k records at 2,000 records/s
                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 2000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics

                    Result: producer-topic-metrics:record-error-total:{client-id=producer-1, topic=test_producer3} : 5340.000
                            producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 94660.000
                            100000 records sent, 1203.876482 records/sec (1.18 MB/sec), 15891.99 ms avg latency, 30390.00 ms max latency, 12657 ms 50th, 30055 ms 95th, 30279 ms 99th, 30358 ms 99.9th.
                            Only 94,660 records were written successfully (5,340 failed); maximum throughput 1.18 MB/s. (Retested at 1,300 records/s, all 100k were written.)

                          
                    Consumer command: bin/kafka-consumer-perf-test.sh --new-consumer --topic test_producer3 --broker-list s201:9092,s202:9092,s203:9092 --fetch-size 10000 --messages 100000
                    start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
                    2021-12-30 23:55:17:093, 2021-12-30 23:56:47:476, 97.6592, 1.0805, 100003, 1106.4359, 32, 90351, 1.0809, 1106.8278
            
            2.2.3 Simulating a 100 Mbit/s link (ideal: 12.5 MB/s)

                2.2.3.1 Topic: 3 partitions, replication factor 2; send 100k records at 1,000 records/s
                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 1000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 100000.000
                    100000 records sent, 999.960002 records/sec (0.98 MB/sec), 2.78 ms avg latency, 335.00 ms max latency, 3 ms 50th, 4 ms 95th, 14 ms 99th, 127 ms 99.9th.
                    All 100k records were written and the worst latency was only 335 ms, so there is still headroom: keep raising the per-second target until sends start timing out, and that point is the maximum. One way to automate that probe is sketched below.
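
                    A rough sketch of the probe, assuming the TestKafkaProducer2 class listed at the end of this post is on the classpath (the step values are arbitrary):

public class FindMaxThroughput {
    public static void main(String[] args) throws Exception {
        // Step the target rate upward; between runs, check the printed
        // record-error-total metric for send failures/timeouts.
        for (int target : new int[]{1000, 2000, 5000, 10000, 12000}) {
            System.out.println("=== target " + target + " records/sec ===");
            TestKafkaProducer2.main(new String[]{
                    "--topic", "test_producer3",
                    "--num-records", "100000",
                    "--record-size", "1024",
                    "--throughput", String.valueOf(target),
                    "--producer-props", "bootstrap.servers=s201:9092,s202:9092,s203:9092",
                    "--print-metrics"});
        }
    }
}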
                    
                    
                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 2000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 100000.000
                    100000 records sent, 1999.880007 records/sec (1.95 MB/sec), 1.78 ms avg latency, 114.00 ms max latency, 1 ms 50th, 4 ms 95th, 6 ms 99th, 29 ms 99.9th.
                    Again all records succeeded, with a maximum latency of only 114 ms.
                    
                    With 100 Mbit/s of bandwidth the ideal network throughput is 12.5 MB/s = 12,800 KB/s; since each test record is 1 KB, the ideal maximum is 12,800 records per second.
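
                    The same arithmetic covers all three bandwidth tiers; a quick helper (plain Java, using the same 1 MB = 1024 KB convention as the note above):

public class IdealThroughput {
    /** Ideal records/sec on an mbit-megabit link with recordBytes-sized records. */
    static long idealRecordsPerSec(int mbit, int recordBytes) {
        long bytesPerSec = (long) mbit * 1024 * 1024 / 8; // Mbit/s -> bytes/s
        return bytesPerSec / recordBytes;
    }

    public static void main(String[] args) {
        for (int mbit : new int[]{4, 10, 100}) {
            // prints 512, 1280 and 12800 -- the 512 KB/s, 1.25 MB/s and
            // 12,800 records/s figures used in the write-up
            System.out.printf("%3d Mbit/s -> %5d records/s at 1 KB each%n",
                    mbit, idealRecordsPerSec(mbit, 1024));
        }
    }
}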
                    

                    Producer command: bin/kafka-producer-perf-test.sh --topic test_producer3 --num-records 100000 --record-size 1024 --throughput 10000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 --print-metrics
                    Result: producer-topic-metrics:record-send-total:{client-id=producer-1, topic=test_producer3}  : 100000.000
                    100000 records sent, 9998.000400 records/sec (9.76 MB/sec), 5.34 ms avg latency, 117.00 ms max latency, 2 ms 50th, 31 ms 95th, 65 ms 99th, 85 ms 99.9th.
                    All 100k records were written successfully; throughput 9.76 MB/s.
            
                    Consumer command: bin/kafka-consumer-perf-test.sh --new-consumer --topic test_producer3 --broker-list s201:9092,s202:9092,s203:9092 --fetch-size 20000 --messages 500000
                    start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
                    2021-12-31 00:21:25:976, 2021-12-31 00:22:19:900, 488.2900, 9.0552, 500009, 9272.4761, 31, 53893, 9.0604, 9277.8097

--------------- Sending messages to Kafka from IDEA

---------1. 4 Mbit/s bandwidth, topic with 1 partition. Arguments: --topic test_producer --num-records 100000 --record-size 2048 --throughput 500 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 request.timeout.ms=62000 --print-metrics
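
The request.timeout.ms=62000 override matters on the throttled link: the 4 Mbit runs above showed send latencies around 30 s, which would trip the producer's 30,000 ms default and is presumably what produced the earlier timeout errors. Passed via --producer-props it lands in the producer config; set directly in code it would look like this (a sketch only; the class and method names are made up):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class TimeoutProps {
    static KafkaProducer<byte[], byte[]> buildProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "s201:9092,s202:9092,s203:9092");
        // Default request.timeout.ms is 30000; with ~30 s send latencies on the
        // 4 Mbit link, requests would expire mid-flight, so give them headroom.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "62000");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        return new KafkaProducer<>(props);
    }
}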

-------------- Launch two instances of the Java class with identical arguments. Result: the bandwidth is split evenly.

Writing to the same topic at the same time, each producer's throughput is halved, to 0.23 MB/s.

----------2. 3 partitions, replication factor 2. Arguments: --topic test_idea_producer --num-records 100000 --record-size 1024 --throughput 1000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 request.timeout.ms=62000 --print-metrics

The partition leaders are spread evenly across the machines.
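
Leader placement can be checked without the shell tools; a minimal AdminClient sketch (same brokers assumed, class name made up):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Collections;
import java.util.Properties;

public class ShowLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "s201:9092,s202:9092,s203:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Print "topic-partition leader=<node>" for every partition of the topic.
            admin.describeTopics(Collections.singletonList("test_idea_producer"))
                 .all().get()
                 .forEach((name, desc) -> desc.partitions().forEach(p ->
                         System.out.printf("%s-%d leader=%s%n", name, p.partition(), p.leader())));
        }
    }
}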

Conclusion: throughput is more than double that of the single-partition topic (more partitions raise throughput).

Note: how the partition leaders are distributed is another factor that affects throughput.

The test below uses the same 3 partitions but a different leader layout: --topic test_producer3_500 --num-records 100000 --record-size 2048 --throughput -1 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 request.timeout.ms=62000 --print-metrics

Broker s203 was the leader for partitions 1 and 2, concentrating the I/O on that machine; the run averaged only 0.67 MB/s.

The layout was then fixed with Kafka's partition reassignment tool: https://blog.csdn.net/cuichunchi/article/details/120930445

Re-running the test with the same arguments showed a clear throughput improvement.

--------------Conclusion: after reassignment, throughput matched the earlier tests.

Test-driver code (lifted from the Kafka source):


package com.kafkaTestJava;

import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.ArgumentParserException;
import net.sourceforge.argparse4j.inf.MutuallyExclusiveGroup;
import net.sourceforge.argparse4j.inf.Namespace;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.utils.Exit;
import org.apache.kafka.common.utils.Utils;
import org.apache.kafka.tools.ThroughputThrottler;
import org.apache.kafka.tools.ToolsUtils;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;

import static net.sourceforge.argparse4j.impl.Arguments.store;
import static net.sourceforge.argparse4j.impl.Arguments.storeTrue;

/**
 * Kafka producer stress test, lifted from the ProducerPerformance tool in the Kafka source.
 *
 * @author CUICHUNCHI
 * @date 2022/1/5
 * @version 1.0
 */
public class TestKafkaProducer2 {

    public static void main(String[] args) throws Exception {
        ArgumentParser parser = argParser();

        try {
            Namespace res = parser.parseArgs(args);

            /* parse args */
            String topicName = res.getString("topic");
            long numRecords = res.getLong("numRecords");
            Integer recordSize = res.getInt("recordSize");
            int throughput = res.getInt("throughput");
            List<String> producerProps = res.getList("producerConfig");
            String producerConfig = res.getString("producerConfigFile");
            String payloadFilePath = res.getString("payloadFile");
            String transactionalId = res.getString("transactionalId");
            boolean shouldPrintMetrics = res.getBoolean("printMetrics");
            long transactionDurationMs = res.getLong("transactionDurationMs");
            boolean transactionsEnabled = 0 < transactionDurationMs;

            // since default value gets printed with the help text, we are escaping \n there and replacing it with correct value here.
            String payloadDelimiter = res.getString("payloadDelimiter").equals("\\n") ? "\n" : res.getString("payloadDelimiter");

            if (producerProps == null && producerConfig == null) {
                throw new ArgumentParserException("Either --producer-props or --producer.config must be specified.", parser);
            }

            List<byte[]> payloadByteList = new ArrayList<>();
            if (payloadFilePath != null) {
                Path path = Paths.get(payloadFilePath);
                System.out.println("Reading payloads from: " + path.toAbsolutePath());
                if (Files.notExists(path) || Files.size(path) == 0) {
                    throw new IllegalArgumentException("File does not exist or empty file provided.");
                }
                String[] payloadList = new String(Files.readAllBytes(path), StandardCharsets.UTF_8).split(payloadDelimiter);
                System.out.println("Number of messages read: " + payloadList.length);
                for (String payload : payloadList) {
                    payloadByteList.add(payload.getBytes(StandardCharsets.UTF_8));
                }
            }

            Properties props = new Properties();
            if (producerConfig != null) {
                props.putAll(Utils.loadProps(producerConfig));
            }
            if (producerProps != null)
                for (String prop : producerProps) {
                    String[] pieces = prop.split("=");
                    if (pieces.length != 2)
                        throw new IllegalArgumentException("Invalid property: " + prop);
                    props.put(pieces[0], pieces[1]);
                }

            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.ByteArraySerializer");
            if (transactionsEnabled)
                props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, transactionalId);

            KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);

            if (transactionsEnabled)
                producer.initTransactions();

            /* setup perf test */
            byte[] payload = null;
            Random random = new Random(0);
            if (recordSize != null) {
                payload = new byte[recordSize];
                for (int i = 0; i < payload.length; ++i)
                    payload[i] = (byte) (random.nextInt(26) + 65);
            }
            ProducerRecord<byte[], byte[]> record;
            Stats stats = new Stats(numRecords, 5000);
            long startMs = System.currentTimeMillis();

            ThroughputThrottler throttler = new ThroughputThrottler(throughput, startMs);

            int currentTransactionSize = 0;
            long transactionStartTime = 0;
            for (long i = 0; i < numRecords; i++) {
                if (transactionsEnabled && currentTransactionSize == 0) {
                    producer.beginTransaction();
                    transactionStartTime = System.currentTimeMillis();
                }

                if (payloadFilePath != null) {
                    payload = payloadByteList.get(random.nextInt(payloadByteList.size()));
                }
                record = new ProducerRecord<>(topicName, payload);

                long sendStartMs = System.currentTimeMillis();
                Callback cb = stats.nextCompletion(sendStartMs, payload.length, stats);
                producer.send(record, cb);

                currentTransactionSize++;
                if (transactionsEnabled && transactionDurationMs <= (sendStartMs - transactionStartTime)) {
                    producer.commitTransaction();
                    currentTransactionSize = 0;
                }

                if (throttler.shouldThrottle(i, sendStartMs)) {
                    throttler.throttle();
                }
            }

            if (transactionsEnabled && currentTransactionSize != 0)
                producer.commitTransaction();

            if (!shouldPrintMetrics) {
                producer.close();

                /* print final results */
                stats.printTotal();
            } else {
                // Make sure all messages are sent before printing out the stats and the metrics
                // We need to do this in a different branch for now since tests/kafkatest/sanity_checks/test_performance_services.py
                // expects this class to work with older versions of the client jar that don't support flush().
                producer.flush();

                /* print final results */
                stats.printTotal();

                /* print out metrics */
                ToolsUtils.printMetrics(producer.metrics());
                producer.close();
            }
        } catch (ArgumentParserException e) {
            if (args.length == 0) {
                parser.printHelp();
                Exit.exit(0);
            } else {
                parser.handleError(e);
                Exit.exit(1);
            }
        }
    }

    /** Get the command-line argument parser. */
    private static ArgumentParser argParser() {
        ArgumentParser parser = ArgumentParsers
            .newArgumentParser("producer-performance")
            .defaultHelp(true)
            .description("This tool is used to verify the producer performance.");

        MutuallyExclusiveGroup payloadOptions = parser
            .addMutuallyExclusiveGroup()
            .required(true)
            .description("either --record-size or --payload-file must be specified but not both.");

        parser.addArgument("--topic")
            .action(store())
            .required(true)
            .type(String.class)
            .metavar("TOPIC")
            .help("produce messages to this topic");

        parser.addArgument("--num-records")
            .action(store())
            .required(true)
            .type(Long.class)
            .metavar("NUM-RECORDS")
            .dest("numRecords")
            .help("number of messages to produce");

        payloadOptions.addArgument("--record-size")
            .action(store())
            .required(false)
            .type(Integer.class)
            .metavar("RECORD-SIZE")
            .dest("recordSize")
            .help("message size in bytes. Note that you must provide exactly one of --record-size or --payload-file.");

        payloadOptions.addArgument("--payload-file")
            .action(store())
            .required(false)
            .type(String.class)
            .metavar("PAYLOAD-FILE")
            .dest("payloadFile")
            .help("file to read the message payloads from. This works only for UTF-8 encoded text files. " +
                  "Payloads will be read from this file and a payload will be randomly selected when sending messages. " +
                  "Note that you must provide exactly one of --record-size or --payload-file.");

        parser.addArgument("--payload-delimiter")
            .action(store())
            .required(false)
            .type(String.class)
            .metavar("PAYLOAD-DELIMITER")
            .dest("payloadDelimiter")
            .setDefault("\\n")
            .help("provides delimiter to be used when --payload-file is provided. " +
                  "Defaults to new line. " +
                  "Note that this parameter will be ignored if --payload-file is not provided.");

        parser.addArgument("--throughput")
            .action(store())
            .required(true)
            .type(Integer.class)
            .metavar("THROUGHPUT")
            .help("throttle maximum message throughput to *approximately* THROUGHPUT messages/sec. Set this to -1 to disable throttling.");

        parser.addArgument("--producer-props")
            .nargs("+")
            .required(false)
            .metavar("PROP-NAME=PROP-VALUE")
            .type(String.class)
            .dest("producerConfig")
            .help("kafka producer related configuration properties like bootstrap.servers,client.id etc. " +
                  "These configs take precedence over those passed via --producer.config.");

        parser.addArgument("--producer.config")
            .action(store())
            .required(false)
            .type(String.class)
            .metavar("CONFIG-FILE")
            .dest("producerConfigFile")
            .help("producer config properties file.");

        parser.addArgument("--print-metrics")
            .action(storeTrue())
            .type(Boolean.class)
            .metavar("PRINT-METRICS")
            .dest("printMetrics")
            .help("print out metrics at the end of the test.");

        parser.addArgument("--transactional-id")
            .action(store())
            .required(false)
            .type(String.class)
            .metavar("TRANSACTIONAL-ID")
            .dest("transactionalId")
            .setDefault("performance-producer-default-transactional-id")
            .help("The transactionalId to use if transaction-duration-ms is > 0. Useful when testing the performance of concurrent transactions.");

        parser.addArgument("--transaction-duration-ms")
            .action(store())
            .required(false)
            .type(Long.class)
            .metavar("TRANSACTION-DURATION")
            .dest("transactionDurationMs")
            .setDefault(0L)
            .help("The max age of each transaction. The commitTransaction will be called after this time has elapsed. Transactions are only enabled if this value is positive.");

        return parser;
    }

    private static class Stats {
        private long start;
        private long windowStart;
        private int[] latencies;
        private int sampling;
        private int iteration;
        private int index;
        private long count;
        private long bytes;
        private int maxLatency;
        private long totalLatency;
        private long windowCount;
        private int windowMaxLatency;
        private long windowTotalLatency;
        private long windowBytes;
        private long reportingInterval;

        public Stats(long numRecords, int reportingInterval) {
            this.start = System.currentTimeMillis();
            this.windowStart = System.currentTimeMillis();
            this.iteration = 0;
            // sample at most ~500k latencies to bound memory use
            this.sampling = (int) (numRecords / Math.min(numRecords, 500000));
            this.latencies = new int[(int) (numRecords / this.sampling) + 1];
            this.index = 0;
            this.maxLatency = 0;
            this.totalLatency = 0;
            this.windowCount = 0;
            this.windowMaxLatency = 0;
            this.windowTotalLatency = 0;
            this.windowBytes = 0;
            this.reportingInterval = reportingInterval;
        }

        public void record(int iter, int latency, int bytes, long time) {
            this.count++;
            this.bytes += bytes;
            this.totalLatency += latency;
            this.maxLatency = Math.max(this.maxLatency, latency);
            this.windowCount++;
            this.windowBytes += bytes;
            this.windowTotalLatency += latency;
            this.windowMaxLatency = Math.max(windowMaxLatency, latency);
            if (iter % this.sampling == 0) {
                this.latencies[index] = latency;
                this.index++;
            }
            /* maybe report the recent perf */
            if (time - windowStart >= reportingInterval) {
                printWindow();
                newWindow();
            }
        }

        public Callback nextCompletion(long start, int bytes, Stats stats) {
            Callback cb = new PerfCallback(this.iteration, start, bytes, stats);
            this.iteration++;
            return cb;
        }

        public void printWindow() {
            long elapsed = System.currentTimeMillis() - windowStart;
            double recsPerSec = 1000.0 * windowCount / (double) elapsed;
            double mbPerSec = 1000.0 * this.windowBytes / (double) elapsed / (1024.0 * 1024.0);
            System.out.printf("%d records sent, %.1f records/sec (%.2f MB/sec), %.1f ms avg latency, %.1f ms max latency.%n",
                              windowCount,
                              recsPerSec,
                              mbPerSec,
                              windowTotalLatency / (double) windowCount,
                              (double) windowMaxLatency);
        }

        public void newWindow() {
            this.windowStart = System.currentTimeMillis();
            this.windowCount = 0;
            this.windowMaxLatency = 0;
            this.windowTotalLatency = 0;
            this.windowBytes = 0;
        }

        public void printTotal() {
            long elapsed = System.currentTimeMillis() - start;
            double recsPerSec = 1000.0 * count / (double) elapsed;
            double mbPerSec = 1000.0 * this.bytes / (double) elapsed / (1024.0 * 1024.0);
            int[] percs = percentiles(this.latencies, index, 0.5, 0.95, 0.99, 0.999);
            System.out.printf("%d records sent, %f records/sec (%.2f MB/sec), %.2f ms avg latency, %.2f ms max latency, %d ms 50th, %d ms 95th, %d ms 99th, %d ms 99.9th.%n",
                              count,
                              recsPerSec,
                              mbPerSec,
                              totalLatency / (double) count,
                              (double) maxLatency,
                              percs[0],
                              percs[1],
                              percs[2],
                              percs[3]);
        }

        private static int[] percentiles(int[] latencies, int count, double... percentiles) {
            int size = Math.min(count, latencies.length);
            Arrays.sort(latencies, 0, size);
            int[] values = new int[percentiles.length];
            for (int i = 0; i < percentiles.length; i++) {
                int index = (int) (percentiles[i] * size);
                values[i] = latencies[index];
            }
            return values;
        }
    }

    private static final class PerfCallback implements Callback {
        private final long start;
        private final int iteration;
        private final int bytes;
        private final Stats stats;

        public PerfCallback(int iter, long start, int bytes, Stats stats) {
            this.start = start;
            this.stats = stats;
            this.iteration = iter;
            this.bytes = bytes;
        }

        public void onCompletion(RecordMetadata metadata, Exception exception) {
            long now = System.currentTimeMillis();
            int latency = (int) (now - start);
            this.stats.record(iteration, latency, bytes, now);
            if (exception != null)
                exception.printStackTrace();
        }
    }
}
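
To run the class from IDEA, put the same flags in Program arguments, e.g.:

    --topic test_idea_producer --num-records 100000 --record-size 1024 --throughput 1000 --producer-props bootstrap.servers=s201:9092,s202:9092,s203:9092 request.timeout.ms=62000 --print-metrics

Note the dependencies: besides kafka-clients, the ThroughputThrottler/ToolsUtils imports come from Kafka's tools artifact, and the argument parsing needs argparse4j (exact artifact names and versions depend on your build; treat this as an assumption and match your cluster's client version).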