Kafka Connect REST API
1. Get Connect worker information
curl -s http://127.0.0.1:8083/ | jq
$ curl -s http://127.0.0.1:8083/ | jq
{
  "version": "2.1.0",
  "commit": "809be928f1ae004e",
  "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
}
2. List all connector plugins installed on the Connect worker
curl -s http://127.0.0.1:8083/connector-plugins | jq
$ curl -s http://127.0.0.1:8083/connector-plugins | jq
[
  {
    "class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "type": "sink",
    "version": "5.2.1"
  },
  {
    "class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.debezium.connector.mongodb.MongoDbConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.mysql.MySqlConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.oracle.OracleConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.postgresql.PostgresConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.1.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.1.0"
  }
]
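If only the class names are needed, the array can be filtered with jq; a small convenience on top of the API, not a separate endpoint:
# Print just the fully-qualified class of every installed plugin
curl -s http://127.0.0.1:8083/connector-plugins | jq '.[].class'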
3. Get a connector's tasks and their configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/tasks | jq
$ curl -s localhost:8083/connectors/inventory-connector/tasks | jq
[
  {
    "id": {
      "connector": "inventory-connector",
      "task": 0
    },
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.user": "root",
      "database.server.id": "",
      "tasks.max": "1",
      "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
      "database.history.kafka.topic": "dbhistory.inventory",
      "database.server.name": "127.0.0.1",
      "database.port": "3306",
      "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
      "database.hostname": "127.0.0.1",
      "database.password": "root",
      "name": "inventory-connector",
      "database.whitelist": "inventory"
    }
  }
]
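A single failed task can be restarted without touching the rest of the connector. A minimal sketch against task id 0 from the output above; the endpoint returns an empty body on success:
# Restart only task 0 of inventory-connector
curl -s -X POST http://127.0.0.1:8083/connectors/inventory-connector/tasks/0/restart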
4. Get connector status
curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq
$ curl -s localhost:8083/connectors/inventory-connector/status | jq
{
  "name": "inventory-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "127.0.0.1:8083"
    }
  ],
  "type": "source"
}
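For scripted health checks it is handy to pull out just the state field; a sketch:
# Prints RUNNING / PAUSED / FAILED etc. without JSON quoting
curl -s localhost:8083/connectors/inventory-connector/status | jq -r '.connector.state'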
5. Get connector configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/config | jq
$ curl -s localhost:8083/connectors/inventory-connector/config | jq
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.user": "root",
  "database.server.id": "",
  "tasks.max": "1",
  "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
  "database.history.kafka.topic": "dbhistory.inventory",
  "database.server.name": "127.0.0.1",
  "database.port": "3306",
  "database.hostname": "127.0.0.1",
  "database.password": "root",
  "name": "inventory-connector",
  "database.whitelist": "inventory"
}
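A configuration can also be checked against a plugin's validators before anything is created or updated, via the validate endpoint. A sketch, assuming a FileStreamSourceConnector is installed; the topic and file values here are placeholders:
# error_count of 0 means the config passed all validators
curl -s -X PUT -H "Content-Type: application/json" --data \
'{"connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", "tasks.max": "1", "topic": "demo", "file": "demo-file.txt"}' \
http://127.0.0.1:8083/connector-plugins/org.apache.kafka.connect.file.FileStreamSourceConnector/config/validate | jq '.error_count'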
6. Pause a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/pause
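The pause call returns no body and takes effect asynchronously; the status endpoint from step 4 confirms the transition:
# Connector and tasks should report PAUSED shortly after the call
curl -s http://127.0.0.1:8083/connectors/inventory-connector/status | jq '.connector.state'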
7. Resume a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/resume
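Strictly speaking, resume only undoes a pause. Forcing a restart of a running connector is a separate endpoint (it restarts the Connector instance itself, not its tasks):
curl -s -X POST http://127.0.0.1:8083/connectors/<connector-name>/restart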
8. Delete a connector
curl -s -X DELETE http://127.0.0.1:8083/connectors/<connector-name>
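Deletion can be verified by listing what is still deployed on the worker:
# Returns a JSON array of the remaining connector names
curl -s http://127.0.0.1:8083/connectors | jq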
9. Create a new connector (using HdfsSinkConnector as an example)
curl -s -X POST -H "Content-Type: application/json" --data \
'{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "flush.size": "10",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "hive.integration": true,
    "hive.database": "inventory",
    "hive.metastore.uris": "thrift://127.0.0.1:9083",
    "schema.compatibility": "BACKWARD"
  }
}' \
http://127.0.0.1:8083/connectors | jq
$ curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink | jq
{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "flush.size": "10",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "name": "hdfs-hive-sink"
  },
  "tasks": [
    {
      "connector": "hdfs-hive-sink",
      "task": 0
    }
  ],
  "type": "sink"
}
10. Update a connector's configuration (using FileStreamSourceConnector as an example)
curl -s -X PUT -H "Content-Type: application/json" --data \
'{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "key.converter.schemas.enable": "true",
  "file": "demo-file.txt",
  "tasks.max": "2",
  "value.converter.schemas.enable": "true",
  "name": "file-stream-demo-distributed",
  "topic": "demo-2-distributed",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter"
}' \
http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq
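The same PUT also creates the connector if it does not yet exist. To see how many tasks actually came up after the update (note that FileStreamSourceConnector may still run a single task even with tasks.max=2, since one file cannot be split):
# Prints the number of tasks currently assigned
curl -s http://127.0.0.1:8083/connectors/file-stream-demo-distributed/tasks | jq length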