Kafka Ecosystem
http://kafka.apache.org/documentation/#ecosystem
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
- Created by Jay Kreps, last modified by Ray Chiang on January 04, 2019
Here is a list of tools we have been told about that integrate with Kafka outside the main distribution. We haven't tried them all, so they may not work!
Clients, of course, are listed separately here.
Kafka Connect
Kafka has a built-in framework called Kafka Connect for writing sources and sinks that either continuously ingest data into Kafka or continuously deliver data from Kafka into external systems. The connectors themselves for different applications or data systems are federated and maintained separately from the main code base. You can find a list of available connectors at the Kafka Connect Hub.
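To make the source/sink model concrete, the sketch below creates a connector by posting its configuration to a Connect worker's REST API. It is only an illustration: it assumes a worker already running at localhost:8083 and uses the FileStreamSourceConnector bundled with Apache Kafka; the file path and topic name are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateFileSourceConnector {
    public static void main(String[] args) throws Exception {
        // Connector config: read lines from a local file and publish them to a topic.
        // Worker URL, file path and topic name are assumptions for illustration.
        String body = "{"
            + "\"name\": \"local-file-source\","
            + "\"config\": {"
            + "  \"connector.class\": \"org.apache.kafka.connect.file.FileStreamSourceConnector\","
            + "  \"tasks.max\": \"1\","
            + "  \"file\": \"/var/log/app.log\","
            + "  \"topic\": \"app-log-lines\""
            + "}}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same POST to /connectors works for any connector plugin installed on the worker; only the name and config map change.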
Distributions & Packaging
- Confluent Platform - http://confluent.io/product/. Downloads - http://confluent.io/downloads/.
- Cloudera Kafka source (0.11.0) https://github.com/cloudera/kafka/tree/cdh5-1.0.1_3.1.0 and release http://archive.cloudera.com/kafka/parcels/3.1.0/
- Hortonworks Kafka source and release http://hortonworks.com/hadoop/kafka/
- Stratio Kafka source for ubuntu http://repository.stratio.com/sds/1.1/ubuntu/13.10/binary/ and for RHEL http://repository.stratio.com/sds/1.1/RHEL/
- IBM Event Streams - https://www.ibm.com/cloud/event-streams - Apache Kafka on premises and in the public cloud
- Strimzi - http://strimzi.io/ - Apache Kafka Operator for Kubernetes and Openshift. Downloads and Helm Chart - https://github.com/strimzi/strimzi-kafka-operator/releases/latest
- TIBCO Messaging - Apache Kafka Distribution - https://www.tibco.com/products/apache-kafka Downloads - https://www.tibco.com/products/tibco-messaging/downloads
Stream Processing
- Kafka Streams - the built-in stream processing library of the Apache Kafka project (a minimal topology sketch appears after this list)
- Kafka Streams Ecosystem:
- Complex Event Processing (CEP): https://github.com/fhussonnois/kafkastreams-cep.
- Storm - A stream-processing framework.
- Samza - A YARN-based stream processing framework.
- Storm Spout - Consume messages from Kafka and emit as Storm tuples
- Kafka-Storm - Kafka 0.8, Storm 0.9, Avro integration
- SparkStreaming - Kafka receiver supports Kafka 0.8 and above
- Flink - Apache Flink has an integration with Kafka
- IBM Streams - A stream processing framework with Kafka source and sink to consume and produce Kafka messages
- Spring Cloud Stream - a framework for building event-driven microservices; Spring Cloud Data Flow - a cloud-native orchestration service for Spring Cloud Stream applications
- Apache Apex - Stream processing framework with connectors for Kafka as source and sink.
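As noted in the Kafka Streams entry above, the library ships with Apache Kafka itself. Below is a minimal topology sketch using its Java DSL; it assumes a broker at localhost:9092, and the application id and the topic names input-lines and output-lines are placeholders chosen for illustration.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");    // also the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Topology: read from "input-lines", upper-case each value, write to "output-lines".
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("input-lines");
        lines.mapValues(value -> value.toUpperCase()).to("output-lines");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```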
Hadoop Integration
- Confluent HDFS Connector - A sink connector for the Kafka Connect framework for writing data from Kafka to Hadoop HDFS
- Camus - LinkedIn's Kafka=>HDFS pipeline. This one is used for all data at LinkedIn, and works great.
- Kafka Hadoop Loader - A different take on Hadoop loading functionality from what is included in the main distribution.
- Flume - Contains Kafka source (consumer) and sink (producer)
- KaBoom - A high-performance HDFS data loader
Database Integration
- Confluent JDBC Connector - A source connector for the Kafka Connect framework for writing data from RDBMS (e.g. MySQL) to Kafka
- Oracle Golden Gate Connector - Source connector that collects CDC operations via Golden Gate and writes them to Kafka
Search and Query
- ElasticSearch - This project, Kafka Standalone Consumer, reads messages from Kafka, processes them, and indexes them in ElasticSearch. There are also several Kafka Connect connectors for ElasticSearch.
- Presto - The Presto Kafka connector allows you to query Kafka in SQL using Presto.
- Hive - Hive SerDe that allows querying Kafka (Avro only for now) using Hive SQL
Management Consoles
- Kafka Manager - A tool for managing Apache Kafka.
- kafkat - Simplified command-line administration for Kafka brokers.
- Kafka Web Console - Displays information about your Kafka cluster including which nodes are up and what topics they host data for.
- Kafka Offset Monitor - Displays the state of all consumers and how far behind the head of the stream they are (a sketch of how this lag can be computed appears after this list).
- Capillary - Displays the state and deltas of Kafka-based Apache Storm topologies. Supports Kafka >= 0.8. It also provides an API for fetching this information for monitoring purposes.
- Doctor Kafka - Service for cluster auto healing and workload balancing.
- Cruise Control - Fully automate the dynamic workload rebalance and self-healing of a Kafka cluster.
- Burrow - Monitoring companion that provides consumer lag checking as a service without the need for specifying thresholds.
- Chaperone - An audit system that monitors the completeness and latency of data streams.
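Several of the consoles above (Kafka Offset Monitor, Burrow) revolve around consumer lag: the gap between a group's committed offset and the log-end offset of each partition. The sketch below shows one way to compute that figure with the standard Java clients; the broker address and group name are placeholder assumptions.

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ConsumerLag {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092";   // assumed broker address
        String groupId = "my-consumer-group";  // assumed consumer group name

        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", bootstrap);

        try (AdminClient admin = AdminClient.create(adminProps)) {
            // Committed offsets for every partition the group consumes.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets(groupId)
                     .partitionsToOffsetAndMetadata()
                     .get();

            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", bootstrap);
            consumerProps.put("key.deserializer", ByteArrayDeserializer.class.getName());
            consumerProps.put("value.deserializer", ByteArrayDeserializer.class.getName());

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps)) {
                // Log-end offsets ("head of the stream") for the same partitions.
                Map<TopicPartition, Long> ends = consumer.endOffsets(committed.keySet());
                committed.forEach((tp, offset) -> {
                    long lag = ends.get(tp) - offset.offset();
                    System.out.printf("%s lag=%d%n", tp, lag);
                });
            }
        }
    }
}
```

The monitoring tools listed here track this same number continuously and alert on it, rather than sampling it once as this sketch does.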
AWS Integration
- Automated AWS deployment
- Kafka -> S3 Mirroring tool from Pinterest.
- Alternative Kafka->S3 Mirroring tool
Logging
- syslog (1M)
- syslog producer: A producer that supports both raw data and protobuf with metadata for deep analytics usage.
- syslog-ng (https://syslog-ng.org/) is one of the most widely used open source log collection tools, capable of filtering, classifying, and parsing log data and forwarding it to a wide variety of destinations. Kafka is a first-class destination in the syslog-ng tool; details on the integration can be found at https://czanik.blogs.balabit.com/2015/11/kafka-and-syslog-ng/.
- klogd - A Python syslog publisher
- klogd2 - A Java syslog publisher
- Tail2Kafka - A simple log tailing utility
- Fluentd plugin - Integration with Fluentd
- Remote log viewer
- LogStash integration - Integration with LogStash and Fluentd
- Syslog Collector written in Go
- Klogger - A simple proxy service for Kafka.
- fuse-kafka: A file system logging agent based on Kafka
- omkafka: Another syslog integration, this one written in C and using the librdkafka library
- logkafka - Collect logs and send lines to Apache Kafka
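Most of the log shippers listed above reduce to the same pattern: read lines from somewhere and publish each one with a Kafka producer. A bare-bones sketch of that pattern follows, assuming a broker at localhost:9092 and a placeholder topic named logs; lines are read from standard input (e.g. piped from tail -F).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class LogLineProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        String topic = "logs";  // placeholder topic name
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            // Forward each line piped in (e.g. from `tail -F /var/log/app.log`) as one record.
            while ((line = in.readLine()) != null) {
                producer.send(new ProducerRecord<>(topic, line));
            }
        }
    }
}
```

The real tools add what this sketch omits: file rotation handling, batching and back-pressure, and structured parsing before the send.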
Flume - Kafka plugins
- Flume Kafka Plugin - Integration with Flume
- Kafka as a sink and source in Flume - Integration with Flume
Metrics
- Mozilla Metrics Service - A Kafka and Protocol Buffers based metrics and logging system
- Ganglia Integration
- SPM for Kafka
- Coda Hale Metric Reporter to Kafka
- kafka-dropwizard-reporter - Register built-in Kafka client and stream metrics to Dropwizard Metrics
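For context on what reporters like kafka-dropwizard-reporter forward, every Java Kafka client exposes its built-in metrics through a metrics() map. The sketch below simply dumps those metrics to stdout; the broker address is an assumption and no records are actually sent.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class PrintClientMetrics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Every Kafka client exposes its internal metrics via metrics();
            // external reporters read these and push them to another registry.
            producer.metrics().forEach((name, metric) ->
                System.out.printf("%s.%s = %s%n", name.group(), name.name(), metric.metricValue()));
        }
    }
}
```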
Packaging and Deployment
- RPM packaging
- Debian packaging - https://github.com/tomdz/kafka-deb-packaging
- Puppet Integration
- Dropwizard packaging
Kafka Camel Integration
Misc.
- Kafka Websocket - A proxy that interoperates with websockets for delivering Kafka data to browsers.
- KafkaCat - A native, command-line producer and consumer.
- Kafka Mirror - An alternative to the built-in mirroring tool
- Ruby Demo App
- Apache Camel Integration
- Infobright integration
- Riemann Consumer of Metrics
- stormkafkamom - A curses-based tool that displays the state of Apache Storm-based Kafka consumers (Kafka 0.7 only).
- uReplicator - Provides the ability to replicate across Kafka clusters in other data centers
- Mirus - A tool for distributed, high-volume replication between Apache Kafka clusters based on Kafka Connect