1. Get Connect worker information
curl -s http://127.0.0.1:8083/ | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/ | jq
{
  "version": "2.1.0",
  "commit": "809be928f1ae004e",
  "kafka_cluster_id": "NGQRxNZMSY6Q53ktQABHsQ"
}
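
To see which connector instances are currently running on this worker (as opposed to which plugins are installed, covered next), query the /connectors endpoint; on a fresh worker this returns an empty array []:

curl -s http://127.0.0.1:8083/connectors | jq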

2. List the connector plugins installed on the Connect worker
curl -s http://127.0.0.1:8083/connector-plugins | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/connector-plugins | jq
[
  {
    "class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "type": "sink",
    "version": "5.2.1"
  },
  {
    "class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
    "type": "source",
    "version": "2.1.0"
  },
  {
    "class": "io.debezium.connector.mongodb.MongoDbConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.mysql.MySqlConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.oracle.OracleConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.postgresql.PostgresConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "type": "source",
    "version": "0.9.4.Final"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "type": "sink",
    "version": "2.1.0"
  },
  {
    "class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "type": "source",
    "version": "2.1.0"
  }
]
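
Before creating a connector, you can optionally validate a candidate configuration against one of these plugins via the config/validate endpoint. A minimal sketch (the MySQL settings here are placeholder values, not a complete working config):

curl -s -X PUT -H "Content-Type: application/json" \
--data '{"connector.class": "io.debezium.connector.mysql.MySqlConnector", "database.hostname": "127.0.0.1", "database.port": "3306"}' \
http://127.0.0.1:8083/connector-plugins/io.debezium.connector.mysql.MySqlConnector/config/validate | jq '.error_count'

A non-zero error_count indicates missing or invalid settings.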

3. Get a connector's tasks and their configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/tasks | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/tasks | jq
[
  {
    "id": {
      "connector": "inventory-connector",
      "task": 0
    },
    "config": {
      "connector.class": "io.debezium.connector.mysql.MySqlConnector",
      "database.user": "root",
      "database.server.id": "",
      "tasks.max": "",
      "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
      "database.history.kafka.topic": "dbhistory.inventory",
      "database.server.name": "127.0.0.1",
      "database.port": "",
      "task.class": "io.debezium.connector.mysql.MySqlConnectorTask",
      "database.hostname": "127.0.0.1",
      "database.password": "root",
      "name": "inventory-connector",
      "database.whitelist": "inventory"
    }
  }
]
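
If a single task fails, it can be restarted by its id without bouncing the whole connector; task 0 is the only task here:

curl -s -X POST http://127.0.0.1:8083/connectors/inventory-connector/tasks/0/restart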

4. Get connector status
curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/status | jq
{
  "name": "inventory-connector",
  "connector": {
    "state": "RUNNING",
    "worker_id": "127.0.0.1:8083"
  },
  "tasks": [
    {
      "state": "RUNNING",
      "id": 0,
      "worker_id": "127.0.0.1:8083"
    }
  ],
  "type": "source"
}
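
For monitoring scripts, jq can reduce this response to just the state fields, for example:

curl -s http://127.0.0.1:8083/connectors/inventory-connector/status | jq '{connector: .connector.state, tasks: [.tasks[].state]}'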

5. Get connector configuration
curl -s http://127.0.0.1:8083/connectors/<connector-name>/config | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s localhost:8083/connectors/inventory-connector/config | jq
{
  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.user": "root",
  "database.server.id": "",
  "tasks.max": "",
  "database.history.kafka.bootstrap.servers": "127.0.0.1:9092",
  "database.history.kafka.topic": "dbhistory.inventory",
  "database.server.name": "127.0.0.1",
  "database.port": "",
  "database.hostname": "127.0.0.1",
  "database.password": "root",
  "name": "inventory-connector",
  "database.whitelist": "inventory"
}

6. Pause a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/pause
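
A successful pause returns HTTP 202 Accepted with an empty body, so confirm the result via the status endpoint from step 4; .connector.state should now read "PAUSED":

curl -s http://127.0.0.1:8083/connectors/<connector-name>/status | jq '.connector.state'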

7. Resume a connector
curl -s -X PUT http://127.0.0.1:8083/connectors/<connector-name>/resume
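
Note that resume only undoes a pause. To restart a connector (for example, one in the FAILED state), Kafka Connect exposes a separate restart endpoint:

curl -s -X POST http://127.0.0.1:8083/connectors/<connector-name>/restart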

8. Delete a connector
curl -s -X DELETE http://127.0.0.1:8083/connectors/<connector-name>
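
A successful delete returns HTTP 204 No Content; curl's -w flag can surface the status code to confirm it:

curl -s -o /dev/null -w "%{http_code}\n" -X DELETE http://127.0.0.1:8083/connectors/<connector-name>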

9. Create a new connector (using HdfsSinkConnector as an example)
curl -s -X POST -H "Content-Type: application/json" --data \
'{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "flush.size": "10",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "hive.integration": true,
    "hive.database": "inventory",
    "hive.metastore.uris": "thrift://127.0.0.1:9083",
    "schema.compatibility": "BACKWARD"
  }
}' \
http://127.0.0.1:8083/connectors | jq

lenmom@M1701:~/workspace/software/kafka_2.-2.1./logs$ curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink | jq
{
  "name": "hdfs-hive-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "format.class": "io.confluent.connect.hdfs.string.StringFormat",
    "flush.size": "10",
    "tasks.max": "1",
    "topics": "127.0.0.1.inventory.customers",
    "hdfs.url": "hdfs://127.0.0.1:9000/inventory",
    "name": "hdfs-hive-sink"
  },
  "tasks": [
    {
      "connector": "hdfs-hive-sink",
      "task": 0
    }
  ],
  "type": "sink"
}
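
At this point the sink should be up; checking its status as in step 4 should show the connector and its single task in the RUNNING state:

curl -s http://127.0.0.1:8083/connectors/hdfs-hive-sink/status | jq '.connector.state, .tasks[].state'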

10. Update connector configuration (using FileStreamSourceConnector as an example)
curl -s -X PUT -H "Content-Type: application/json" --data \
'{
  "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "key.converter.schemas.enable": "true",
  "file": "demo-file.txt",
  "tasks.max": "2",
  "value.converter.schemas.enable": "true",
  "name": "file-stream-demo-distributed",
  "topic": "demo-2-distributed",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter"
}' \
http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq
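
PUT on /connectors/<connector-name>/config creates the connector if it does not exist and updates it otherwise, so the same call serves both purposes; reading the config back confirms the update took effect:

curl -s http://127.0.0.1:8083/connectors/file-stream-demo-distributed/config | jq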
