Since Kafka Connect is intended to be run as a service, it also supports a REST API for managing connectors. By default this service runs on port 8083. When executed in distributed mode, the REST API will be the primary interface to the cluster. You can make requests to any cluster member; the REST API automatically forwards requests if required.

Although standalone mode can be used simply by submitting a connector configuration on the command line, it also runs the REST interface. This is useful for getting status information, adding and removing connectors without stopping the process, and more.

Currently the top level resources are connectors and connector-plugins. The sub-resources under connectors list configuration settings and tasks, and the sub-resource under connector-plugins provides configuration validation and recommendation.

Note that if you try to modify, update, or delete a resource under connectors, which may require the request to be forwarded to the leader, Connect will return status code 409 while the worker group rebalance is in progress, since the leader may change during the rebalance.
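
For clients this means that write requests may need to be retried while a rebalance is underway. The following sketch (Python with the third-party requests library; the worker address and connector details are hypothetical) retries a request as long as the worker answers with 409:

import time
import requests

CONNECT_URL = "http://connect.example.com:8083"  # hypothetical worker address

def create_connector_with_retry(connector_config, retries=5, backoff_s=2):
    """POST a new connector, retrying while the worker group is rebalancing (HTTP 409)."""
    for _ in range(retries):
        resp = requests.post(
            f"{CONNECT_URL}/connectors",
            json=connector_config,
            headers={"Content-Type": "application/json", "Accept": "application/json"},
        )
        if resp.status_code != 409:
            return resp  # caller still checks for 201 / 4xx / 5xx as usual
        time.sleep(backoff_s)  # the leader may change during the rebalance; wait and retry
    return resp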

Content Types

Currently the REST API only supports application/json as both the request and response entity content type. Your requests should specify the expected content type of the response via the HTTP Accept header:

Accept: application/json

and should specify the content type of the request entity (if one is included) via the Content-Type header:

Content-Type: application/json

Statuses & Errors

The REST API will return standards-compliant HTTP statuses. Clients should check the HTTP status, especially before attempting to parse and use response entities. Currently the API does not use redirects (statuses in the 300 range), but these codes are reserved for future use, so clients should handle them.

When possible, all endpoints will use a standard error message format for all errors (status codes in the 400 or 500 range). For example, a request entity that omits a required field may generate the following response:

HTTP/1.1 422 Unprocessable Entity
Content-Type: application/json

{
"error_code": 422,
"message": "config may not be empty"
}
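
As a sketch of client-side handling (Python with the requests library; the URL is hypothetical), a caller should check the status code before using the response entity, and fall back to the standard error fields when a request fails:

import requests

resp = requests.get(
    "http://connect.example.com:8083/connectors",
    headers={"Accept": "application/json"},
)

if 200 <= resp.status_code < 300:
    connectors = resp.json()   # e.g. ["my-jdbc-source", "my-hdfs-sink"]
elif 300 <= resp.status_code < 400:
    raise RuntimeError("redirects are reserved for future use; handle or follow them explicitly")
else:
    err = resp.json()          # standard error format: error_code and message
    print(f"request failed ({err['error_code']}): {err['message']}")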

Connectors

GET /connectors

Get a list of active connectors

Response JSON Object:
 
  • connectors (array) -- List of connector names

Example request:

GET /connectors HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:

HTTP/1.1 200 OK
Content-Type: application/json

["my-jdbc-source", "my-hdfs-sink"]
POST /connectors

Create a new connector, returning the current connector info if successful. Returns 409 (Conflict) if a rebalance is in progress.

Request JSON Object:
 
  • name (string) -- Name of the connector to create
  • config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
 
  • name (string) -- Name of the created connector
  • config (map) -- Configuration parameters for the connector.
  • tasks (array) -- List of active tasks generated by the connector
  • tasks[i].connector (string) -- The name of the connector the task belongs to
  • tasks[i].task (int) -- Task ID within the connector.

Example request:

POST /connectors HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
}

Example response:

HTTP/1.1 201 Created
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}
GET /connectors/(string:name)

Get information about the connector.

Response JSON Object:
 
  • name (string) -- Name of the created connector
  • config (map) -- Configuration parameters for the connector.
  • tasks (array) -- List of active tasks generated by the connector
  • tasks[i].connector (string) -- The name of the connector the task belongs to
  • tasks[i].task (int) -- Task ID within the connector.

Example request:

GET /connectors/hdfs-sink-connector HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:

HTTP/1.1 200 OK
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}
GET /connectors/(string:name)/config

Get the configuration for the connector.

Response JSON Object:
 
  • config (map) -- Configuration parameters for the connector.

Example request:

GET /connectors/hdfs-sink-connector/config HTTP/1.1
Host: connect.example.com
Accept: application/json

Example response:

HTTP/1.1 200 OK
Content-Type: application/json

{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
PUT /connectors/(string:name)/config

Create a new connector using the given configuration, or update the configuration for an existing connector. Returns information about the connector after the change has been made. Returns 409 (Conflict) if a rebalance is in progress.

Request JSON Object:
 
  • config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
 
  • name (string) -- Name of the created connector
  • config (map) -- Configuration parameters for the connector.
  • tasks (array) -- List of active tasks generated by the connector
  • tasks[i].connector (string) -- The name of the connector the task belongs to
  • tasks[i].task (int) -- Task ID within the connector.

Example request:

PUT /connectors/hdfs-sink-connector/config HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json

{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}

Example response:

HTTP/1.1 201 Created
Content-Type: application/json

{
"name": "hdfs-sink-connector",
"config": {
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"tasks.max": "10",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
"tasks": [
{ "connector": "hdfs-sink-connector", "task": 1 },
{ "connector": "hdfs-sink-connector", "task": 2 },
{ "connector": "hdfs-sink-connector", "task": 3 }
]
}

Note that in this example the return status indicates that the connector was Created. In the case of a configuration update the status would have been 200 OK.
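
Because this endpoint either creates the connector or updates it in place, it lends itself to idempotent, declarative deployments. A rough sketch of an "upsert" helper (Python with the requests library; the worker address and connector configuration are hypothetical):

import requests

CONNECT_URL = "http://connect.example.com:8083"  # hypothetical worker address

def upsert_connector(name, config):
    """Create the connector if it does not exist, otherwise update its configuration."""
    resp = requests.put(
        f"{CONNECT_URL}/connectors/{name}/config",
        json=config,
        headers={"Content-Type": "application/json", "Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.status_code == 201, resp.json()   # True if created (201), False if updated (200)

created, info = upsert_connector("hdfs-sink-connector", {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "10",
    "topics": "test-topic",
})
print("created" if created else "updated", [t["task"] for t in info["tasks"]])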

GET /connectors/(string:name)/status

Get current status of the connector, including whether it is running, failed or paused, which worker it is assigned to, error information if it has failed, and the state of all its tasks.

Response JSON Object:
 
  • name (string) -- The name of the connector.
  • connector (map) -- The map containing connector status.
  • tasks[i] (map) -- The map containing the task status.

Example request:

GET /connectors/hdfs-sink-connector/status HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK

{
"name": "hdfs-sink-connector",
"connector": {
"state": "RUNNING",
"worker_id": "fakehost:8083"
},
"tasks":
[
{
"id": 0,
"state": "RUNNING",
"worker_id": "fakehost:8083"
},
{
"id": 1,
"state": "FAILED",
"worker_id": "fakehost:8083",
"trace": "org.apache.kafka.common.errors.RecordTooLargeException\n"
}
]
}
POST /connectors/(string:name)/restart

Restart the connector and its tasks. Returns 409 (Conflict) if a rebalance is in progress.

Example request:

POST /connectors/hdfs-sink-connector/restart HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK
PUT /connectors/(string:name)/pause

Pause the connector and its tasks, which stops message processing until the connector is resumed. This call is asynchronous and the tasks will not transition to the PAUSED state at the same time.

Example request:

PUT /connectors/hdfs-sink-connector/pause HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 202 Accepted
PUT /connectors/(string:name)/resume

Resume a paused connector or do nothing if the connector is not paused. This call is asynchronous and the tasks will not transition to the RUNNING state at the same time.

Example request:

PUT /connectors/hdfs-sink-connector/resume HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 202 Accepted
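
Since pause and resume only return 202 Accepted and the tasks move to the target state asynchronously, a client that needs to know when the transition has completed must poll the status endpoint. A minimal sketch (Python with the requests library; the worker address and connector name are hypothetical):

import time
import requests

CONNECT_URL = "http://connect.example.com:8083"  # hypothetical worker address

def pause_and_wait(name, timeout_s=30):
    """Pause a connector and poll /status until every task reports PAUSED."""
    requests.put(f"{CONNECT_URL}/connectors/{name}/pause").raise_for_status()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{CONNECT_URL}/connectors/{name}/status").json()
        if all(task["state"] == "PAUSED" for task in status["tasks"]):
            return True
        time.sleep(1)   # 202 Accepted only means the request was received; keep polling
    return False
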
DELETE /connectors/(string:name)/

Delete a connector, halting all tasks and deleting its configuration. Returns 409 (Conflict) if a rebalance is in progress.

Example request:

DELETE /connectors/hdfs-sink-connector HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 204 No Content

Tasks

GET /connectors/(string:name)/tasks

Get a list of tasks currently running for the connector.

Response JSON Object:
 
  • tasks (array) -- List of active task configs that have been created by the connector
  • tasks[i].id (string) -- The ID of the task
  • tasks[i].id.connector (string) -- The name of the connector the task belongs to
  • tasks[i].id.task (int) -- Task ID within the connector.
  • tasks[i].config (map) -- Configuration parameters for the task

Example request:

GET /connectors/hdfs-sink-connector/tasks HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK

[
{
"task.class": "io.confluent.connect.hdfs.HdfsSinkTask",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
},
{
"task.class": "io.confluent.connect.hdfs.HdfsSinkTask",
"topics": "test-topic",
"hdfs.url": "hdfs://fakehost:9000",
"hadoop.conf.dir": "/opt/hadoop/conf",
"hadoop.home": "/opt/hadoop",
"flush.size": "100",
"rotate.interval.ms": "1000"
}
]
GET /connectors/(string:name)/tasks/(int:taskid)/status

Get a task's status.

Example request:

GET /connectors/hdfs-sink-connector/tasks/1/status HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK

{"state":"RUNNING","id":1,"worker_id":"192.168.86.101:8083"}
POST /connectors/(string:name)/tasks/(int:taskid)/restart

Restart an individual task.

Example request:

POST /connectors/hdfs-sink-connector/tasks/1/restart HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK
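
A common use of the per-task endpoints is to restart only the tasks that have failed, using the trace from the status output to see what went wrong. A sketch under the same assumptions as the earlier examples (Python with the requests library; names are hypothetical):

import requests

CONNECT_URL = "http://connect.example.com:8083"  # hypothetical worker address

def restart_failed_tasks(name):
    """Restart every FAILED task of the given connector and return the restarted task IDs."""
    status = requests.get(f"{CONNECT_URL}/connectors/{name}/status").json()
    restarted = []
    for task in status["tasks"]:
        if task["state"] == "FAILED":
            print(f"task {task['id']} failed on {task['worker_id']}: {task.get('trace', '')}")
            requests.post(
                f"{CONNECT_URL}/connectors/{name}/tasks/{task['id']}/restart"
            ).raise_for_status()
            restarted.append(task["id"])
    return restarted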

Connector Plugins

GET /connector-plugins/

Return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means it is possible to see inconsistent results, especially during a rolling upgrade if you add new connector jars.

Response JSON Object:
 
  • class (string) -- The connector class name.

Example request:

GET /connector-plugins/ HTTP/1.1
Host: connect.example.com

Example response:

HTTP/1.1 200 OK

[
{
"class": "io.confluent.connect.hdfs.HdfsSinkConnector"
},
{
"class": "io.confluent.connect.jdbc.JdbcSourceConnector"
}
]
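
Because each worker only reports the plugins installed locally, the lists can differ across workers, for example in the middle of a rolling upgrade. A rough sketch that compares the plugin lists of several workers (Python with the requests library; the worker addresses are hypothetical):

import requests

# hypothetical addresses of the Connect workers in one cluster
WORKERS = ["http://connect1.example.com:8083", "http://connect2.example.com:8083"]

def plugin_classes(worker):
    resp = requests.get(f"{worker}/connector-plugins/")
    resp.raise_for_status()
    return {plugin["class"] for plugin in resp.json()}

lists = {worker: plugin_classes(worker) for worker in WORKERS}
common = set.intersection(*lists.values())
for worker, classes in lists.items():
    extra = classes - common
    if extra:
        print(f"{worker} has plugins missing on other workers: {sorted(extra)}")
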
PUT /connector-plugins/(string:name)/config/validate

Validate the provided configuration values against the configuration definition. This API performs per-config validation, returning suggested values and error messages during validation.

Request JSON Object:
 
  • config (map) -- Configuration parameters for the connector. All values should be strings.
Response JSON Object:
 
  • name (string) -- The class name of the connector plugin.
  • error_count (int) -- The total number of errors encountered during configuration validation.
  • groups (array) -- The list of groups used in configuration definitions.
  • configs[i].definition (map) -- The definition for a config in the connector plugin, which includes the name, type, importance, etc.
  • configs[i].value (map) -- The current value for a config, which includes the name, value, recommended values, etc.

Example request:

PUT /connector-plugins/FileStreamSinkConnector/config/validate/ HTTP/1.1
Host: connect.example.com
Content-Type: application/json
Accept: application/json

{
"connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"tasks.max": "1",
"topics": "test-topic"
}

Example response:

HTTP/1.1 200 OK

{
"name": "FileStreamSinkConnector",
"error_count": 1,
"groups": [
"Common"
],
"configs": [
{
"definition": {
"name": "topics",
"type": "LIST",
"required": false,
"default_value": "",
"importance": "HIGH",
"documentation": "",
"group": "Common",
"width": "LONG",
"display_name": "Topics",
"dependents": [],
"order": 4
},
"value": {
"name": "topics",
"value": "test-topic",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "file",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Destination filename.",
"group": null,
"width": "NONE",
"display_name": "file",
"dependents": [],
"order": -1
},
"value": {
"name": "file",
"value": null,
"recommended_values": [],
"errors": [
"Missing required configuration \"file\" which has no default value."
],
"visible": true
}
},
{
"definition": {
"name": "name",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Globally unique name to use for this connector.",
"group": "Common",
"width": "MEDIUM",
"display_name": "Connector name",
"dependents": [],
"order": 1
},
"value": {
"name": "name",
"value": "test",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "tasks.max",
"type": "INT",
"required": false,
"default_value": "1",
"importance": "HIGH",
"documentation": "Maximum number of tasks to use for this connector.",
"group": "Common",
"width": "SHORT",
"display_name": "Tasks max",
"dependents": [],
"order": 3
},
"value": {
"name": "tasks.max",
"value": "1",
"recommended_values": [],
"errors": [],
"visible": true
}
},
{
"definition": {
"name": "connector.class",
"type": "STRING",
"required": true,
"default_value": "",
"importance": "HIGH",
"documentation": "Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use \"FileStreamSink\" or \"FileStreamSinkConnector\" to make the configuration a bit shorter",
"group": "Common",
"width": "LONG",
"display_name": "Connector class",
"dependents": [],
"order": 2
},
"value": {
"name": "connector.class",
"value": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"recommended_values": [],
"errors": [],
"visible": true
}
}
]
}
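
In practice a client often just needs to know whether the configuration is valid and, if not, which fields to fix. A small sketch (Python with the requests library; the worker address is hypothetical) that submits a configuration for validation and prints the per-field errors:

import requests

CONNECT_URL = "http://connect.example.com:8083"  # hypothetical worker address

config = {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "topics": "test-topic",
}

resp = requests.put(
    f"{CONNECT_URL}/connector-plugins/FileStreamSinkConnector/config/validate",
    json=config,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
)
result = resp.json()

if result["error_count"] > 0:
    for cfg in result["configs"]:
        for error in cfg["value"]["errors"]:
            print(f"{cfg['value']['name']}: {error}")   # e.g. missing required "file"
else:
    print("configuration is valid")
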
Reference: https://docs.confluent.io/current/connect/references/restapi.html
