Experiment environment

Everything runs on a local virtual machine:

Debezium is deployed with Docker

PostgreSQL and Kafka run directly on the host

1 PostgreSQL

1.1 Configuration

Set the postgres user's password to 123.

Following the Debezium example, create database postgres, schema inventory, and table customers.
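A minimal sketch of that setup (the postgres database exists by default; the column layout here is an assumption, inferred from the Debezium example and the change-event schema shown in section 4.3):

-- assumed table layout, matching the fields seen in the change events
create schema inventory;
create table inventory.customers (
    id         serial primary key,
    first_name varchar(255) not null,
    last_name  varchar(255) not null,
    email      varchar(255) not null unique
);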

Because the postgres user already has the replication privilege, it can be used directly.

Edit postgresql.conf:

listen_addresses = '*'          # make sure the container can reach the server
shared_preload_libraries = ''   # use the default pgoutput plugin
wal_level = logical

Restart PostgreSQL as the postgres user:

pg_ctl restart

1.2 Test

show wal_level;
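If the restart took effect, it should return logical:

 wal_level
-----------
 logical
(1 row)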

2 Kafka

2.1 Startup

See my earlier post, 单节点kafka部署笔记 (single-node Kafka deployment notes).
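For reference, a single-node KRaft startup boils down to roughly the following (assuming Kafka 3.x; see that post for details):

# generate a cluster id, format the storage directory, then start the broker
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
bin/kafka-server-start.sh -daemon config/kraft/server.properties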

2.2 Configuration

Edit config/kraft/server.properties under the Kafka directory so the Connect container can reach the broker (172.17.0.1 is the docker0 bridge gateway, i.e. the host as seen from containers):

listeners=PLAINTEXT://:9092,CONTROLLER://:9093
advertised.listeners=PLAINTEXT://172.17.0.1:9092

There is no need to create topics manually; Connect creates its internal topics automatically on startup.

If you do create them by hand, cleanup.policy must be set to compact, otherwise Connect will error out and stop.

bin/kafka-topics.sh --create --topic debezium --config cleanup.policy=compact --bootstrap-server 172.17.0.1:9092
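You can check the policy afterwards with --describe:

bin/kafka-topics.sh --describe --topic debezium --bootstrap-server 172.17.0.1:9092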

2.3 Test

List all topics:

bin/kafka-topics.sh --bootstrap-server 172.17.0.1:9092 --list
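Once Connect (section 3) is running, the list should contain at least its three internal topics, and after the connector's first snapshot the change-event topic as well:

debezium_configs
debezium_offsets
debezium_statuses
dbserver1.inventory.customers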

3 Starting the connector

3.1 Startup

Pull the Docker image and start it; BOOTSTRAP_SERVERS tells Connect where Kafka is:

docker pull debezium/connect
docker run -d --name connect -p 8083:8083 -e GROUP_ID=1 -e CONFIG_STORAGE_TOPIC=debezium_configs -e OFFSET_STORAGE_TOPIC=debezium_offsets -e STATUS_STORAGE_TOPIC=debezium_statuses -e BOOTSTRAP_SERVERS=172.17.0.1:9092 debezium/connect:latest
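Once the container is up, the Connect REST API should answer on port 8083 with its version information:

curl -s 172.17.0.1:8083/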

3.2 Configuration

By default every event embeds its full schema (the stock JsonConverter with schemas.enable=true), which makes messages very long; disabling the embedded schema produces much more compact JSON.

Edit /kafka/config/connect-standalone.properties inside the container:

key.converter.schemas.enable=false
value.converter.schemas.enable=false

You can copy the file out with docker cp, edit it, and copy it back, or bind-mount the config file directly, as sketched below.
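For example, with docker cp (the container is named connect in the run command above):

docker cp connect:/kafka/config/connect-standalone.properties .
# edit the two schemas.enable properties locally, then put the file back
docker cp connect-standalone.properties connect:/kafka/config/
docker restart connect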

3.3 Create the connector

Write the request body into pgsql-inventory-connector.json; database.hostname points the connector at PostgreSQL:

{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "172.17.0.1",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "123",
    "database.dbname": "postgres",
    "topic.prefix": "dbserver1",
    "table.include.list": "inventory.customers"
  }
}

Create

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" 172.17.0.1:8083/connectors/ -d @pgsql-inventory-connector.json

Delete

curl -i -X DELETE 172.17.0.1:8083/connectors/inventory-connector/

Query

curl -i -X GET -H "Accept:application/json" 172.17.0.1:8083/connectors/inventory-connector
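The status sub-resource shows whether the connector and its tasks are RUNNING:

curl -i -X GET -H "Accept:application/json" 172.17.0.1:8083/connectors/inventory-connector/status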

Restart

curl -X POST -H "Accept:application/json" 172.17.0.1:8083/connectors/inventory-connector/restart

4 Test

After PostgreSQL, Kafka, and Connect have all started:

4.1 Consume from Kafka

bin/kafka-console-consumer.sh --topic dbserver1.inventory.customers --from-beginning --bootstrap-server 172.17.0.1:9092

4.2 Modify data in PostgreSQL

insert into inventory.customers values (1005,'aA','bB','aAbB@home.com');
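To also see update and delete events (op "u" and "d"), follow up with, for example:

update inventory.customers set email = 'aAbB@work.com' where id = 1005;
delete from inventory.customers where id = 1005;

Note that with the default REPLICA IDENTITY, the before field of update/delete events carries only the primary key; run ALTER TABLE inventory.customers REPLICA IDENTITY FULL if you need the complete old row.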

4.3 Result in Kafka

Default format (JSON with the embedded schema)

{"schema":{"type":"struct","fields":[{"type":"struct","fields":[{"type":"int32","optional":false,"default":0,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"dbserver1.inventory.customers.Value","field":"before"},{"type":"struct","fields":[{"type":"int32","optional":false,"default":0,"field":"id"},{"type":"string","optional":false,"field":"first_name"},{"type":"string","optional":false,"field":"last_name"},{"type":"string","optional":false,"field":"email"}],"optional":true,"name":"dbserver1.inventory.customers.Value","field":"after"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"version"},{"type":"string","optional":false,"field":"connector"},{"type":"string","optional":false,"field":"name"},{"type":"int64","optional":false,"field":"ts_ms"},{"type":"string","optional":true,"name":"io.debezium.data.Enum","version":1,"parameters":{"allowed":"true,last,false,incremental"},"default":"false","field":"snapshot"},{"type":"string","optional":false,"field":"db"},{"type":"string","optional":true,"field":"sequence"},{"type":"string","optional":false,"field":"schema"},{"type":"string","optional":false,"field":"table"},{"type":"int64","optional":true,"field":"txId"},{"type":"int64","optional":true,"field":"lsn"},{"type":"int64","optional":true,"field":"xmin"}],"optional":false,"name":"io.debezium.connector.postgresql.Source","field":"source"},{"type":"string","optional":false,"field":"op"},{"type":"int64","optional":true,"field":"ts_ms"},{"type":"struct","fields":[{"type":"string","optional":false,"field":"id"},{"type":"int64","optional":false,"field":"total_order"},{"type":"int64","optional":false,"field":"data_collection_order"}],"optional":true,"name":"event.block","version":1,"field":"transaction"}],"optional":false,"name":"dbserver1.inventory.customers.Envelope","version":1},"payload":{"before":null,"after":{"id":1005,"first_name":"aA","last_name":"bB","email":"aAbB@home.com"},"source":{"version":"2.2.0.Alpha3","connector":"postgresql","name":"dbserver1","ts_ms":1687946054175,"snapshot":"false","db":"postgres","sequence":"[\"34244288\",\"34244576\"]","schema":"inventory","table":"customers","txId":758,"lsn":34244576,"xmin":null},"op":"c","ts_ms":1687946054536,"transaction":null}}

With schemas.enable=false

Instantly much more concise:

{"before":null,"after":{"id":1005,"first_name":"aA","last_name":"bB","email":"aAbB@home.com"},"source":{"version":"2.2.0.Alpha3","connector":"postgresql","name":"dbserver1","ts_ms":1688112418157,"snapshot":"false","db":"postgres","sequence":"[\"85230368\",\"85230656\"]","schema":"inventory","table":"customers","txId":1637,"lsn":85230656,"xmin":null},"op":"c","ts_ms":1688112418467,"transaction":null}
