Kafka Connect REST API
1. Get Connect worker information
curl -s http://127.0.0.1:8083/ | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/ | jq
{
  "version": "2.1.0",
  "commit": "809be928f1ae004e",
  "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
}
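Since GET / answers as soon as the worker's REST server is up, it also works as a simple readiness probe before scripting against the API. A minimal sketch, assuming the same 8083 port as above (the retry interval is arbitrary):

until curl -sf http://127.0.0.1:8083/ > /dev/null; do
  echo "waiting for Connect worker..."
  sleep 1
done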
2. List the connector plugins installed on the Connect worker
curl -s http://127.0.0.1:8083/connector-plugins | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/connector-plugins | jq
[
  {
    "class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "type": "sink",
    "version": "5.2.1"
  },
  {
    "class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.debezium.connector.mongodb.MongoDbConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.mysql.MySqlConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.oracle.OracleConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.postgresql.PostgresConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.1.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.1.0"
  }
]
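Note that /connector-plugins lists the plugin classes installed on the worker, not running connector instances. To see the connectors that have actually been created, query /connectors instead; with the inventory-connector from the following steps deployed, it should return something like:

curl -s http://127.0.0.1:8083/connectors | jq
[
  "inventory-connector"
]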
3. Get a connector's tasks and their configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/tasks | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/tasks | jq
[
  {
    "id": {
      "connector": "inventory-connector",
      "task": 0
    },
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.user": "root",
      "database.server.id": "",
      "tasks.max": "",
      "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
      "database.history.kafka.topic": "dbhistory.inventory",
      "database.server.name": "127.0.0.1",
      "database.port": "",
      "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
      "database.hostname": "127.0.0.1",
      "database.password": "root",
      "name": "inventory-connector",
      "database.whitelist": "inventory"
    }
  }
]
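A single task can also be queried on its own. Task ids are zero-based, so the first (and here only) task of this connector is task 0:

curl -s http://127.0.0.1:8083/connectors/inventory-connector/tasks/0/status | jq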
4. Get connector status
curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/status | jq
{
  "name": "inventory-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "127.0.0.1:8083"
    }
  ],
  "type": "source"
}
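For scripting or monitoring, jq can strip the response down to just the state fields (RUNNING, PAUSED, and FAILED are the values you will normally see):

curl -s http://127.0.0.1:8083/connectors/inventory-connector/status \
  | jq '{connector: .connector.state, tasks: [.tasks[].state]}'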
5. Get connector configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/config | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/config | jq
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.user": "root",
  "database.server.id": "",
  "tasks.max": "",
  "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
  "database.history.kafka.topic": "dbhistory.inventory",
  "database.server.name": "127.0.0.1",
  "database.port": "",
  "database.hostname": "127.0.0.1",
  "database.password": "root",
  "name": "inventory-connector",
  "database.whitelist": "inventory"
}
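Since GET /config returns exactly the JSON that PUT /config accepts (see step 10), the two can be piped together to patch one setting in place. A sketch, where some.key is a hypothetical setting standing in for whatever you want to change:

curl -s http://127.0.0.1:8083/connectors/inventory-connector/config \
  | jq '."some.key" = "new-value"' \
  | curl -s -X PUT -H "Content-Type: application/json" --data @- \
      http://127.0.0.1:8083/connectors/inventory-connector/config | jq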
6. Pause a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/pause
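A successful pause returns HTTP 202 with an empty body, so confirm it through the status endpoint from step 4; the connector state should read "PAUSED":

curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq '.connector.state'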
7. Resume a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/resume
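Resume only undoes a pause. To actually restart a connector (for example after it enters the FAILED state), use the separate restart endpoint, which takes POST rather than PUT:

curl -s -X POST http://127.0.0.1:8083/connectors/<connector-name>/restart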
8. Delete a connector
curl -s -X DELETE http://127.0.0.1:8083/connectors/<connector-name>
9. Create a new connector (using HdfsSinkConnector as an example)
curl -s -X POST -H "Content-Type: application/json" --data \
'{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "flush.size": "10",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "hive.integration": true,
    "hive.database": "inventory",
    "hive.metastore.uris": "thrift://127.0.0.1:9083",
    "schema.compatibility": "BACKWARD"
  }
}' \
http://127.0.0.1:8083/connectors | jq
lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink | jq
{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "flush.size": "10",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "name": "hdfs-hive-sink"
  },
  "tasks": [
    {
      "connector": "hdfs-hive-sink",
      "task": 0
    }
  ],
  "type": "sink"
}
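Before creating a connector, a config can also be validated against its plugin without creating anything; the validate endpoint reports an error count and per-field messages. A sketch reusing part of the config above (the last path segment is the plugin class name):

curl -s -X PUT -H "Content-Type: application/json" --data \
'{"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector", "topics": "127.0.0.1.inventory.customers"}' \
http://127.0.0.1:8083/connector-plugins/HdfsSinkConnector/config/validate | jq '.error_count'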
10. Update a connector's configuration (using FileStreamSourceConnector as an example)
curl -s -X PUT -H "Content-Type: application/json" --data \
'{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "key.converter.schemas.enable": "true",
  "file": "demo-file.txt",
  "tasks.max": "2",
  "value.converter.schemas.enable": "true",
  "name": "file-stream-demo-distributed",
  "topic": "demo-2-distributed",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter"
}' \
http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq
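If a task ends up FAILED after a config change, it can be restarted individually without bouncing the whole connector; again, task ids are zero-based:

curl -s -X POST http://127.0.0.1:8083/connectors/file-stream-demo-distributed/tasks/0/restart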