Kafka Connect REST API
1. Get Connect Worker information
curl -s http://127.0.0.1:8083/ | jq
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s http://127.0.0.1:8083/ | jq
{
  "version": "2.1.0",
  "commit": "809be928f1ae004e",
  "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
}
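The root endpoint also doubles as a simple liveness probe. A minimal sketch, assuming the worker listens on 127.0.0.1:8083 (adjust to your deployment):

# Poll until the Connect worker responds; -f makes curl fail on HTTP errors
until curl -sf http://127.0.0.1:8083/ > /dev/null; do
  echo "waiting for Connect worker..."
  sleep 1
done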
2. List the connector plugins installed on the Connect Worker
curl -s http://127.0.0.1:8083/connector-plugins | jq
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s http://127.0.0.1:8083/connector-plugins | jq
[
  {
    "class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "type": "sink",
    "version": "5.2.1"
  },
  {
    "class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.debezium.connector.mongodb.MongoDbConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.mysql.MySqlConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.oracle.OracleConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.postgresql.PostgresConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.1.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.1.0"
  }
]
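Note that /connector-plugins lists the plugin classes installed on the worker, not the connector instances that have been created. The deployed instances (whose names the endpoints below take) come from a separate endpoint:

curl -s http://127.0.0.1:8083/connectors | jq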
3. Get a Connector's tasks and their configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/tasks | jq
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s localhost:8083/connectors/inventory-connector/tasks | jq
[
  {
    "id": {
      "connector": "inventory-connector",
      "task": 0
    },
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.user": "root",
      "database.server.id": "",
      "tasks.max": "1",
      "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
      "database.history.kafka.topic": "dbhistory.inventory",
      "database.server.name": "127.0.0.1",
      "database.port": "3306",
      "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
      "database.hostname": "127.0.0.1",
      "database.password": "root",
      "name": "inventory-connector",
      "database.whitelist": "inventory"
    }
  }
]
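If a single task fails, it can be restarted in place without bouncing the whole connector; the task id is the one reported above:

curl -s -X POST http://127.0.0.1:8083/connectors/inventory-connector/tasks/0/restart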
4. Get Connector status
curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s localhost:8083/connectors/inventory-connector/status | jq
{
  "name": "inventory-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "127.0.0.1:8083"
    }
  ],
  "type": "source"
}
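For scripting, jq can extract just the states. A small sketch against the same connector:

# Print the connector state, then one line per task
curl -s http://127.0.0.1:8083/connectors/inventory-connector/status \
  | jq -r '.connector.state, (.tasks[] | "task \(.id): \(.state)")'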
5. Get Connector configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/config | jq
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s localhost:8083/connectors/inventory-connector/config | jq
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.user": "root",
  "database.server.id": "",
  "tasks.max": "1",
  "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
  "database.history.kafka.topic": "dbhistory.inventory",
  "database.server.name": "127.0.0.1",
  "database.port": "3306",
  "database.hostname": "127.0.0.1",
  "database.password": "root",
  "name": "inventory-connector",
  "database.whitelist": "inventory"
}
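Because this endpoint returns exactly the map that PUT /connectors/<connector-name>/config accepts (see step 10), a config can be fetched, patched, and resubmitted in one pipeline. A sketch, assuming jq is installed; the tasks.max override is purely illustrative:

# Fetch the current config, override one key, and PUT it back (--data @- reads the body from stdin)
curl -s http://127.0.0.1:8083/connectors/inventory-connector/config \
  | jq '. + {"tasks.max": "1"}' \
  | curl -s -X PUT -H "Content-Type: application/json" --data @- \
        http://127.0.0.1:8083/connectors/inventory-connector/config | jq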
6. Pause a Connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/pause
7. Resume a Connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/resume
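Note that resume only undoes a pause. To force an actual restart, for example after a failure, Kafka Connect exposes a separate endpoint:

curl -s -X POST http://127.0.0.1:8083/connectors/<connector-name>/restart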
8. Delete a Connector
curl -s -X DELETE http://127.0.0.1:8083/connectors/<connector-name>
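During development it can be handy to tear down every connector on a worker in one go. A minimal sketch, assuming jq is available; this is destructive, so use with care:

# Enumerate all deployed connectors and delete them one by one
for c in $(curl -s http://127.0.0.1:8083/connectors | jq -r '.[]'); do
  curl -s -X DELETE "http://127.0.0.1:8083/connectors/$c"
done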
9. Create a new Connector (using HdfsSinkConnector as an example)
curl -s -X POST -H "Content-Type: application/json" --data \
'{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "flush.size": "10",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "hive.integration": true,
    "hive.database": "inventory",
    "hive.metastore.uris": "thrift://127.0.0.1:9083",
    "schema.compatibility": "BACKWARD"
  }
}' \
http://127.0.0.1:8083/connectors | jq
Verify the new connector was created:
lenmom@M1701:~/workspace/software/kafka_2.12-2.1.0/logs$ curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink | jq
{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "flush.size": "10",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "name": "hdfs-hive-sink"
  },
  "tasks": [
    {
      "connector": "hdfs-hive-sink",
      "task": 0
    }
  ],
  "type": "sink"
}
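A config can also be checked against the plugin's schema before it is submitted, via the validate endpoint (the path takes the plugin's class name, the body the would-be config; error_count in the response should be 0). A sketch reusing part of the config above:

curl -s -X PUT -H "Content-Type: application/json" --data \
'{
  "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
  "topics": "127.0.0.1.inventory.customers",
  "hdfs.url": "hdfs://127.0.0.1:9000/inventory"
}' \
http://127.0.0.1:8083/connector-plugins/HdfsSinkConnector/config/validate | jq '.error_count'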
10. Update a Connector's configuration (using FileStreamSourceConnector as an example)
curl -s -X PUT -H "Content-Type: application/json" --data \
'{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "key.converter.schemas.enable": "true",
  "file": "demo-file.txt",
  "tasks.max": "2",
  "value.converter.schemas.enable": "true",
  "name": "file-stream-demo-distributed",
  "topic": "demo-2-distributed",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter"
}' \
http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq
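Unlike POST /connectors, PUT /connectors/<connector-name>/config is idempotent: it updates the connector if it exists and creates it otherwise, which makes it convenient for deployment scripts. To confirm the change took effect:

curl -s http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq '."tasks.max"'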