Kafka Connect REST Interface
Since Kafka Connect is intended to be run as a service, it also supports a REST API for managing connectors. By default this service runs on port 8083. When executed in distributed mode, the REST API will be the primary interface to the cluster. You can make requests to any cluster member; the REST API automatically forwards requests if required.
Although in standalone mode you can submit a connector just by passing its configuration on the command line, the standalone worker also runs the REST interface. This is useful for getting status information, adding and removing connectors without stopping the process, and more.
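For example, a minimal Python sketch that lists the active connectors might look like the following (the worker address localhost:8083 is illustrative, and the helper name is not part of the API):

import json
import urllib.request

# Base URL of a Connect worker. Any worker in a distributed cluster will do,
# since the REST API forwards requests to the leader when necessary.
CONNECT_URL = "http://localhost:8083"

def list_connectors():
    """Return the names of all active connectors as a Python list."""
    req = urllib.request.Request(
        CONNECT_URL + "/connectors",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(list_connectors())  # e.g. ["my-jdbc-source", "my-hdfs-sink"]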
Currently the top-level resources are connectors and connector-plugins. The sub-resources under connectors list configuration settings and tasks, and the sub-resource under connector-plugins provides configuration validation and recommendation.
Note that if you try to create, update, or delete a resource under connectors, which may require the request to be forwarded to the leader, Connect will return status code 409 while a worker group rebalance is in progress, because the leader may change during the rebalance.
Content Types
Currently the REST API only supports application/json as both the request and response entity content type. Your requests should specify the expected content type of the response via the HTTP Accept header:
Accept: application/json
and should specify the content type of the request entity (if one is included) via the Content-Type header:
Content-Type: application/json
Statuses & Errors
The REST API will return standards-compliant HTTP statuses. Clients should check the HTTP status, especially before attempting to parse and use response entities. Currently the API does not use redirects (statuses in the 300 range), but the use of these codes is reserved for future use so clients should handle them.
When possible, all endpoints will use a standard error message format for all errors (status codes in the 400 or 500 range). For example, a request entity that omits a required field may generate the following response:
HTTP/1.1 422 Unprocessable Entity
Content-Type: application/json

{
    "error_code": 422,
    "message": "config may not be empty"
}
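Because a 409 may come back transiently while a rebalance is in progress, a client will often retry those requests and surface the standard error body for everything else. A rough Python sketch of such a client helper is shown below; the function name request_with_retry and the worker address are assumptions for illustration, not part of the API:

import json
import time
import urllib.error
import urllib.request

CONNECT_URL = "http://localhost:8083"  # illustrative worker address

def request_with_retry(method, path, body=None, retries=5, backoff_s=2.0):
    """Call the Connect REST API, retrying on 409 (rebalance in progress)."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    headers = {"Accept": "application/json"}
    if data is not None:
        headers["Content-Type"] = "application/json"
    for attempt in range(retries):
        req = urllib.request.Request(CONNECT_URL + path, data=data,
                                     headers=headers, method=method)
        try:
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
                return json.loads(payload) if payload else None
        except urllib.error.HTTPError as err:
            if err.code == 409 and attempt < retries - 1:
                time.sleep(backoff_s)  # rebalance in progress; wait and retry
                continue
            # Other failures use the standard {"error_code": ..., "message": ...} body.
            detail = json.loads(err.read().decode("utf-8"))
            raise RuntimeError(f"{detail['error_code']}: {detail['message']}") from err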
Connectors
GET /connectors
Get a list of active connectors
Response JSON Object:
- connectors (array) -- List of connector names
Example request:
GET /connectors HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:
HTTP/1.1 200 OK
Content-Type: application/json

["my-jdbc-source", "my-hdfs-sink"]
POST /connectors

Create a new connector, returning the current connector info if successful. Returns 409 (Conflict) if a rebalance is in process.

Request JSON Object:
- name (string) -- Name of the connector to create
- config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
- name (string) -- Name of the created connector
- config (map) -- Configuration parameters for the connector.
- tasks (array) -- List of active tasks generated by the connector
- tasks[i].connector (string) -- The name of the connector the task belongs to
- tasks[i].task (int) -- Task ID within the connector.
Example request:
POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
}

Example response:
HTTP/1.1 201 Created
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}
GET /connectors/(string:name)
Get information about the connector.
Response JSON Object:
- name (string) -- Name of the created connector
- config (map) -- Configuration parameters for the connector.
- tasks (array) -- List of active tasks generated by the connector
- tasks[i].connector (string) -- The name of the connector the task belongs to
- tasks[i].task (int) -- Task ID within the connector.
Example request:
GET /connectors/hdfs-sink-connector HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:
HTTP/1.1 200 OK
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}
GET /connectors/(string:name)/config
Get the configuration for the connector.
Response JSON Object:
- config (map) -- Configuration parameters for the connector.
Example request:
GET /connectors/hdfs-sink-connector/config HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:
HTTP/1.1 200 OK
Content-Type: application/json

{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
PUT /connectors/(string:name)/config

Create a new connector using the given configuration, or update the configuration for an existing connector. Returns information about the connector after the change has been made. Returns 409 (Conflict) if a rebalance is in process.

Request JSON Object:
- config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
- name (string) -- Name of the created connector
- config (map) -- Configuration parameters for the connector.
- tasks (array) -- List of active tasks generated by the connector
- tasks[i].connector (string) -- The name of the connector the task belongs to
- tasks[i].task (int) -- Task ID within the connector.
Example request:
PUT /connectors/hdfs-sink-connector/config HTTP/1.1
Host: connect.example.com
Accept: application/json

{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}

Example response:
HTTP/1.1 201 Created
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}

Note that in this example the return status, 201 Created, indicates that the connector was created. In the case of a configuration update the status would have been 200 OK.
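Because this endpoint creates the connector when it does not exist and updates it otherwise, it is a convenient way to deploy connector configurations idempotently. A hedged sketch, reusing the hypothetical request_with_retry helper from the Statuses & Errors section:

def upsert_connector(name, config):
    """Create or update a connector; the same call works for both cases."""
    # PUT /connectors/<name>/config takes only the config map as its body.
    return request_with_retry("PUT", f"/connectors/{name}/config", body=config)

hdfs_config = {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "10",
    "topics": "test-topic",
    "hdfs.url": "hdfs://fakehost:9000",
    "flush.size": "100",
}
info = upsert_connector("hdfs-sink-connector", hdfs_config)
print(info["tasks"])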
GET /connectors/(string:name)/status
Get current status of the connector, including whether it is running, failed or paused, which worker it is assigned to, error information if it has failed, and the state of all its tasks.
Response JSON Object:
- name (string) -- The name of the connector.
- connector (map) -- The map containing connector status.
- tasks[i] (map) -- The map containing the task status.
Example request:
GET /connectors/hdfs-sink-connector/status HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK

{
"name": "hdfs-sink-connector",
"connector": {
"state": "RUNNING",
"worker_id": "fakehost:8083"
},
"tasks":
[
{
"id": 0,
"state": "RUNNING",
"worker_id": "fakehost:8083"
},
{
"id": 1,
"state": "FAILED",
"worker_id": "fakehost:8083",
"trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
}
]
}
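A common pattern is to poll this endpoint after a deployment and react to any FAILED task, for example by restarting it through the task restart endpoint described in the Tasks section below. A rough sketch, again reusing the hypothetical request_with_retry helper:

def report_failed_tasks(name):
    """Print the stack trace of any FAILED task and return their IDs."""
    status = request_with_retry("GET", f"/connectors/{name}/status")
    failed = [t for t in status["tasks"] if t["state"] == "FAILED"]
    for task in failed:
        print(f"task {task['id']} on {task['worker_id']} failed:")
        print(task.get("trace", "<no trace available>"))
    return [t["id"] for t in failed]

for task_id in report_failed_tasks("hdfs-sink-connector"):
    # POST .../tasks/<id>/restart restarts just that task (see the Tasks section below).
    request_with_retry("POST", f"/connectors/hdfs-sink-connector/tasks/{task_id}/restart")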
POST /connectors/(string:name)/restart

Restart the connector and its tasks. Returns 409 (Conflict) if a rebalance is in process.

Example request:
POST /connectors/hdfs-sink-connector/restart HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK
PUT /connectors/(string:name)/pause

Pause the connector and its tasks, which stops message processing until the connector is resumed. This call is asynchronous and the tasks will not transition to the PAUSED state at the same time.

Example request:
PUT /connectors/hdfs-sink-connector/pause HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 202 Accepted
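Since the pause takes effect asynchronously, a deployment script may want to block until every task actually reports PAUSED. A hedged sketch, reusing the same hypothetical helper:

import time

def pause_and_wait(name, timeout_s=60.0, poll_interval_s=2.0):
    """Pause a connector and block until all of its tasks report PAUSED."""
    request_with_retry("PUT", f"/connectors/{name}/pause")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = request_with_retry("GET", f"/connectors/{name}/status")
        if all(t["state"] == "PAUSED" for t in status["tasks"]):
            return True
        time.sleep(poll_interval_s)
    return False  # tasks did not all pause within the timeout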
PUT /connectors/(string:name)/resume

Resume a paused connector or do nothing if the connector is not paused. This call is asynchronous and the tasks will not transition to the RUNNING state at the same time.

Example request:
PUT /connectors/hdfs-sink-connector/resume HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 202 Accepted
DELETE /connectors/(string:name)/

Delete a connector, halting all tasks and deleting its configuration. Returns 409 (Conflict) if a rebalance is in process.

Example request:
DELETE /connectors/hdfs-sink-connector HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 204 No Content
Tasks
GET /connectors/(string:name)/tasks
Get a list of tasks currently running for the connector.
Response JSON Object:
- tasks (array) -- List of active task configs that have been created by the connector
- tasks[i].id (string) -- The ID of task
- tasks[i].id.connector (string) -- The name of the connector the task belongs to
- tasks[i].id.task (int) -- Task ID within the connector.
- tasks[i].config (map) -- Configuration parameters for the task
Example request:
GET /connectors/hdfs-sink-connector/tasks HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK

[
{
"task.class": "io.confluent.connect.hdfs.HdfsSinkTask",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
{
"task.class": "io.confluent.connect.hdfs.HdfsSinkTask",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
]
GET /connectors/(string:name)/tasks/(int:taskid)/status
Get a task's status.
Example request:
GET /connectors/hdfs-sink-connector/tasks/1/status HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK

{"state": "RUNNING", "id": 1, "worker_id": "192.168.86.101:8083"}
POST /connectors/(string:name)/tasks/(int:taskid)/restart
Restart an individual task.
Example request:

POST /connectors/hdfs-sink-connector/tasks/1/restart HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK
Connector Plugins
GET /connector-plugins/
Return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means it is possible to see inconsistent results, especially during a rolling upgrade if you add new connector jars (a sketch for cross-checking workers follows the example response below).
Response JSON Object:
- class (string) -- The connector class name.
Example request:
GET /connector-plugins/ HTTP/1.1
Host: connect.example.com

Example response:
HTTP/1.1 200 OK

[
{
"class": "io.confluent.connect.hdfs.HdfsSinkConnector"
},
{
"class": "io.confluent.connect.jdbc.JdbcSourceConnector"
}
]
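One way to cross-check this during a rolling upgrade is to query each worker directly and compare the plugin lists. A sketch follows; the list of worker URLs is an assumption you would supply yourself:

import json
import urllib.request

# Hypothetical list of all Connect worker URLs in the cluster.
WORKER_URLS = ["http://worker1:8083", "http://worker2:8083", "http://worker3:8083"]

def plugins_on(worker_url):
    """Return the set of connector plugin classes installed on a single worker."""
    with urllib.request.urlopen(worker_url + "/connector-plugins/") as resp:
        return {entry["class"] for entry in json.loads(resp.read().decode("utf-8"))}

plugin_sets = {url: plugins_on(url) for url in WORKER_URLS}
common = set.intersection(*plugin_sets.values())
for url, plugins in plugin_sets.items():
    extra = plugins - common
    if extra:
        print(f"{url} has plugins not present on every worker: {sorted(extra)}")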
PUT /connector-plugins/(string:name)/config/validate

Validate the provided configuration values against the configuration definition. This API performs per-config validation and returns suggested values and error messages.

Request JSON Object:
- config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
- name (string) -- The class name of the connector plugin.
- error_count (int) -- The total number of errors encountered during configuration validation.
- groups (array) -- The list of groups used in configuration definitions.
- configs[i].definition (map) -- The definition for a config in the connector plugin, which includes the name, type, importance, etc.
- configs[i].value (map) -- The current value for a config, which includes the name, value, recommended values, etc.
Example request:
PUT /connector-plugins/FileStreamSinkConnector/config/validate/ HTTP/1.1
Host: connect.example.com
Accept: application/json

{
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"tasks.max": "1",
"topics": "test-topic"
}

Example response:

HTTP/1.1 200 OK
{
"name": "FileStreamSinkConnector",
"error_count": 1,
"groups": [
"Common"
],
"configs": [
{
"definition": {
"name": "topics",
"type": "LIST",
"required": false,
"default_value": "",
"importance": "HIGH",
"documentation": "",
"group": "Common",
"width": "LONG",
"display_name": "Topics",
"dependents": [],
"order": 4
},
"value": {
"name": "topics",
"value": "test-topic",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "file",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Destination filename.",
"group": null,
"width": "NONE",
"display_name": "file",
"dependents": [],
"order": -1
},
"value": {
"name": "file",
"value": null,
"recommended_values": [],
"errors": [
"Missing required configuration \"file\" which has no default value."
],
"visible": true
}
},
{
"definition": {
"name": "name",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Globally unique name to use for this connector.",
"group": "Common",
"width": "MEDIUM",
"display_name": "Connector name",
"dependents": [],
"order": 1
},
"value": {
"name": "name",
"value": "test",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "tasks.max",
"type": "INT",
"required": false,
"default_value": "1",
"importance": "HIGH",
"documentation": "Maximum number of tasks to use for this connector.",
"group": "Common",
"width": "SHORT",
"display_name": "Tasks max",
"dependents": [],
"order": 3
},
"value": {
"name": "tasks.max",
"value": "1",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "connector.class",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use \"FileStreamSink\" or \"FileStreamSinkConnector\" to make the configuration a bit shorter",
"group": "Common",
"width": "LONG",
"display_name": "Connector class",
"dependents": [],
"order": 2
},
"value": {
"name": "connector.class",
"value": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"recommended_values": [],
"errors": [],
"visible": true
}
}
]
}
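To act on a validation result programmatically, a script can collect the per-field errors before attempting to create the connector. A minimal hedged sketch, reusing the hypothetical request_with_retry helper from earlier:

def validate_config(plugin_class, config):
    """Return a list of (field, error message) pairs reported by the validate endpoint."""
    result = request_with_retry(
        "PUT", f"/connector-plugins/{plugin_class}/config/validate", body=config
    )
    errors = []
    for entry in result["configs"]:
        field = entry["value"]["name"]
        for message in entry["value"]["errors"]:
            errors.append((field, message))
    return errors

problems = validate_config("FileStreamSinkConnector", {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "test-topic",
})
for field, message in problems:
    print(f"{field}: {message}")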
Reference: https://docs.confluent.io/current/connect/references/restapi.html