php-kafka
1. Environment requirements
The extension supports both PHP 5 and PHP 7.
The extension requires librdkafka >= 0.8 for basic functionality, and >= 0.9 for the high-level consumer.
2. Installing librdkafka
git clone https://github.com/edenhill/librdkafka.git
cd librdkafka
./configure
make
sudo make install
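librdkafka is installed as a shared library (typically under /usr/local/lib); if PHP later fails to load it, refreshing the dynamic linker cache usually resolves the problem:

# refresh the shared-library cache so librdkafka.so can be found at runtime
sudo ldconfig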
3. Installing php-rdkafka
sudo pecl install rdkafka
#Add the following line to your php.ini file:
extension=rdkafka.so
Restart the web server and check phpinfo(); if rdkafka shows up, the extension is installed.
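The installation can also be double-checked from the command line (assuming the CLI uses the same php.ini), which prints the extension's build information:

php --ri rdkafka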
4. Usage
Producer:
<?php
// Create a producer and point it at the local broker
$rk = new RdKafka\Producer();
$rk->setLogLevel(LOG_DEBUG);
$rk->addBrokers("127.0.0.1");

// Produce 10 messages to the "test" topic, letting librdkafka pick the partition
$topic = $rk->newTopic("test");
for ($i = 0; $i < 10; $i++) {
    $topic->produce(RD_KAFKA_PARTITION_UA, 0, "Message $i");
    $rk->poll(0);
}

// Wait for any outstanding messages to be delivered before exiting
while ($rk->getOutQLen() > 0) {
    $rk->poll(50);
}
?>
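A common variation is producing keyed messages, so that records with the same key always land in the same partition; the key is passed as the fourth argument of produce(). A minimal sketch, where the topic name "demo" and the payload structure are only illustrative:

<?php
// Sketch: keyed producer (assumes the same local broker as above)
$rk = new RdKafka\Producer();
$rk->addBrokers("127.0.0.1");
$topic = $rk->newTopic("demo");

$payload = json_encode(["user_id" => 42, "action" => "login"]);
// RD_KAFKA_PARTITION_UA lets librdkafka choose the partition based on the key
$topic->produce(RD_KAFKA_PARTITION_UA, 0, $payload, "user-42");
$rk->poll(0);

// flush the internal queue before exiting
while ($rk->getOutQLen() > 0) {
    $rk->poll(50);
}
?>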
Low-level consumer:
<?php
$conf = new RdKafka\Conf();
// Set the group id. This is required when storing offsets on the broker
$conf->set('group.id', 'myConsumerGroup');

$rk = new RdKafka\Consumer($conf);
$rk->addBrokers("127.0.0.1");

$topicConf = new RdKafka\TopicConf();
$topicConf->set('auto.commit.interval.ms', 100);

// Set the offset store method to 'file'
$topicConf->set('offset.store.method', 'file');
$topicConf->set('offset.store.path', sys_get_temp_dir());

// Alternatively, set the offset store method to 'broker'
// $topicConf->set('offset.store.method', 'broker');

// Set where to start consuming messages when there is no initial offset in
// the offset store or the desired offset is out of range.
// 'smallest': start from the beginning
$topicConf->set('auto.offset.reset', 'smallest');

$topic = $rk->newTopic("test", $topicConf);

// Start consuming partition 0 from the last stored offset
$topic->consumeStart(0, RD_KAFKA_OFFSET_STORED);

while (true) {
    // Wait up to 120 seconds for the next message on partition 0
    $message = $topic->consume(0, 120*1000);
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            var_dump($message);
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:
            echo "No more messages; will wait for more\n";
            break;
        case RD_KAFKA_RESP_ERR__TIMED_OUT:
            echo "Timed out\n";
            break;
        default:
            throw new \Exception($message->errstr(), $message->err);
    }
}
?>
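If the low-level consumer needs to read several partitions (or topics) at once, php-rdkafka also exposes a queue-based API via consumeQueueStart() and RdKafka\Queue::consume(). The following is only a sketch: the topic name, partition numbers, and timeout are assumptions, and the null check covers the case where the queue read times out:

<?php
// Sketch: low-level consumption of two partitions through one queue
$conf = new RdKafka\Conf();
$conf->set('group.id', 'myConsumerGroup');

$rk = new RdKafka\Consumer($conf);
$rk->addBrokers("127.0.0.1");

$queue = $rk->newQueue();
$topic = $rk->newTopic("test");

// route both partitions into the same queue
$topic->consumeQueueStart(0, RD_KAFKA_OFFSET_STORED, $queue);
$topic->consumeQueueStart(1, RD_KAFKA_OFFSET_STORED, $queue);

while (true) {
    $message = $queue->consume(1000);   // wait up to 1s for the next message
    if ($message === null) {
        continue;                       // timed out, keep polling
    }
    if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        var_dump($message->payload);
    }
}
?>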
High-level consumer:
<?php
$conf = new RdKafka\Conf();

// Set a rebalance callback to log partition assignments (optional)
$conf->setRebalanceCb(function (RdKafka\KafkaConsumer $kafka, $err, array $partitions = null) {
    switch ($err) {
        case RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS:
            echo "Assign: ";
            var_dump($partitions);
            $kafka->assign($partitions);
            break;
        case RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS:
            echo "Revoke: ";
            var_dump($partitions);
            $kafka->assign(NULL);
            break;
        default:
            throw new \Exception($err);
    }
});

// Configure the group.id. All consumers with the same group.id will consume
// different partitions.
$conf->set('group.id', 'myConsumerGroup');

// Initial list of Kafka brokers
$conf->set('metadata.broker.list', '127.0.0.1');

$topicConf = new RdKafka\TopicConf();

// Set where to start consuming messages when there is no initial offset in
// the offset store or the desired offset is out of range.
// 'smallest': start from the beginning
$topicConf->set('auto.offset.reset', 'smallest');

// Set the configuration to use for subscribed/assigned topics
$conf->setDefaultTopicConf($topicConf);

$consumer = new RdKafka\KafkaConsumer($conf);

// Subscribe to topic 'test'
$consumer->subscribe(['test']);

echo "Waiting for partition assignment... (may take some time when\n";
echo "quickly re-joining the group after leaving it)\n";

while (true) {
    $message = $consumer->consume(120*1000);
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            var_dump($message);
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:
            echo "No more messages; will wait for more\n";
            break;
        case RD_KAFKA_RESP_ERR__TIMED_OUT:
            echo "Timed out\n";
            break;
        default:
            throw new \Exception($message->errstr(), $message->err);
    }
}
?>
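When automatic offset commits are not desired (for example, committing only after a message has been fully processed), offsets can be committed explicitly with KafkaConsumer::commit(). A minimal sketch, reusing the same broker, group, and topic names as above, with auto-commit switched off:

<?php
// Sketch: high-level consumer with manual offset commits
$conf = new RdKafka\Conf();
$conf->set('group.id', 'myConsumerGroup');
$conf->set('metadata.broker.list', '127.0.0.1');
$conf->set('enable.auto.commit', 'false');   // disable automatic commits

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['test']);

while (true) {
    $message = $consumer->consume(120*1000);
    if ($message->err === RD_KAFKA_RESP_ERR_NO_ERROR) {
        // process the message, then commit its offset synchronously
        var_dump($message->payload);
        $consumer->commit($message);
    }
}
?>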