Syncing MySQL Data to Kafka in Real Time with Maxwell
I. Software environment
OS: CentOS release 6.5 (Final)
Java: JDK 1.8
ZooKeeper: zookeeper-3.4.11
Kafka: kafka_2.11-1.1.0.tgz
Maxwell: maxwell-1.16.0.tar.gz
Note: disable the firewall on every machine, and after starting each service verify that the machines can reach one another with telnet <ip> <port>.
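On CentOS 6 the firewall is the iptables service; a minimal sketch of the shutdown and connectivity check (the IP and port below are examples from this setup):
service iptables stop        # stop the firewall now
chkconfig iptables off       # keep it off across reboots
telnet 192.168.72.130 2181   # verify each machine can reach the others' service ports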
II. Deployment
1. Install the JDK and append the following environment variables (e.g. to /etc/profile):
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
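After sourcing the profile, a quick sanity check that the JDK is on the PATH:
source /etc/profile
java -version   # should report 1.8.0_181, matching JAVA_HOME above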
2. Install Maven
Reference: https://www.cnblogs.com/wcwen1990/p/7227278.html
3. Install ZooKeeper
1) Download and unpack:
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
tar zxvf zookeeper-3.4.11.tar.gz
mv zookeeper-3.4.11 /usr/local/zookeeper
2) Set the environment variables
Append the following to the end of /etc/profile:
# ZooKeeper Env
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Run source /etc/profile to make the variables take effect.
3) Rename the configuration file
On first use, rename zoo_sample.cfg under $ZOOKEEPER_HOME/conf to zoo.cfg:
mv $ZOOKEEPER_HOME/conf/zoo_sample.cfg $ZOOKEEPER_HOME/conf/zoo.cfg
4) Standalone mode: edit the configuration file
Create the directories /usr/local/zookeeper/data and /usr/local/zookeeper/logs (see the commands after the config below), then set the following in zoo.cfg:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
clientPort=2181
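The two directories referenced by dataDir and dataLogDir above can be created with:
mkdir -p /usr/local/zookeeper/data /usr/local/zookeeper/logs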
5) Start ZooKeeper
# bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
6) Verify the ZooKeeper service
# telnet chavin.king 2181
Trying 192.168.72.130...
Connected to chavin.king.
Escape character is '^]'.
stat
Zookeeper version: 3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0, built on 11/01/2017 18:06 GMT
Clients:
/192.168.72.130:44054[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x1a4
Mode: standalone
Node count: 147
Connection closed by foreign host.
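The same four-letter-word check can be scripted non-interactively, assuming netcat is installed:
echo stat | nc chavin.king 2181   # "Mode: standalone" confirms a healthy single node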
4. Install zkui
git clone https://github.com/DeemOpen/zkui.git
cd zkui
mvn clean install
Edit the defaults in the configuration file:
#vim config.cfg
serverPort=9090 # zkui listening port
zkServer=192.168.72.130:2181 # point at the ZooKeeper instance set up above
sessionTimeout=300000
Run the program in the background.
The 2.0-SNAPSHOT version varies by release; check the target directory for the jar actually produced:
nohup java -jar target/zkui-2.0-SNAPSHOT-jar-with-dependencies.jar &
Open in a browser:
http://chavin.king:9090/
5. Install Kafka
wget http://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
mkdir -p /usr/local/kafka   # tar -C requires the target directory to exist
tar -zxvf kafka_2.11-1.1.0.tgz -C /usr/local/kafka
cd /usr/local/kafka/kafka_2.11-1.1.0   # the commands below run from the extracted directory
mkdir -p /usr/local/kafka/data-logs
Edit the configuration file:
vim config/server.properties
log.dirs=/usr/local/kafka/data-logs
zookeeper.connect=chavin.king:2181
Start Kafka (-daemon already backgrounds the process, so no trailing & is needed):
bin/kafka-server-start.sh -daemon config/server.properties
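A quick way to confirm the broker came up: it registers as a Java process named Kafka, and the server log should end with a "started" line:
jps                        # look for a process named Kafka
tail -n 5 logs/server.log  # should show the KafkaServer "started" message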
Create a topic
bin/kafka-topics.sh --create --zookeeper chavin.king:2181 --replication-factor 1 --partitions 1 --topic maxwell
List all topics
bin/kafka-topics.sh --list --zookeeper chavin.king:2181
Start a console producer
bin/kafka-console-producer.sh --broker-list chavin.king:9092 --topic maxwell
Start a console consumer
bin/kafka-console-consumer.sh --zookeeper chavin.king:2181 --topic maxwell --from-beginning
or
bin/kafka-console-consumer.sh --bootstrap-server chavin.king:9092 --from-beginning --topic maxwell
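To check the partition and replica assignment of the new topic, the same tool offers a describe mode:
bin/kafka-topics.sh --describe --zookeeper chavin.king:2181 --topic maxwell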
6. Enable the MySQL binlog. Maxwell requires row-based replication (binlog_format = row). The current config:
more /etc/my.cnf
[client]
default_character_set = utf8
[mysqld]
basedir = /usr/local/mysql-5.6.24
datadir = /usr/local/mysql-5.6.24/data
port = 3306
#skip-grant-tables
character_set_server = utf8
log_error = /usr/local/mysql-5.6.24/data/mysql.err
binlog_format = row
log-bin = /usr/local/mysql-5.6.24/logs/mysql-bin
sync_binlog = 2
max_binlog_size = 16M
expire_logs_days = 10
server_id = 1
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
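Restart MySQL after changing my.cnf. Maxwell also needs an account with replication privileges and a maxwell schema for its own bookkeeping; the startup command below authenticates as canal/canal, which can be created with something like this sketch (grants follow Maxwell's quick-start; adjust host and password as needed):
mysql -uroot -p <<'SQL'
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT ALL ON maxwell.* TO 'canal'@'%';  -- Maxwell stores its binlog position and schema state here
GRANT SELECT, REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
SQL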
7. Install Maxwell
wget https://github.com/zendesk/maxwell/releases/download/v1.16.0/maxwell-1.16.0.tar.gz
mkdir -p /usr/local/maxwell   # tar -C requires the target directory to exist
tar -zxvf maxwell-1.16.0.tar.gz -C /usr/local/maxwell
Start Maxwell from the extracted directory:
nohup bin/maxwell --user='canal' --password='canal' --host='chavin.king' --producer=kafka --kafka.bootstrap.servers=chavin.king:9092 > maxwell.log &
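To verify the binlog connection independently of Kafka, Maxwell can also be run with its stdout producer; each change event is printed to the console as a JSON line:
bin/maxwell --user='canal' --password='canal' --host='chavin.king' --producer=stdout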
8. Write a Kafka consumer program (the project needs the kafka-clients dependency, version 1.1.0 to match the broker)
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Arrays;
import java.util.Properties;

public class KafkaTest {
    public static void main(String[] args) {
        String topicName = "maxwell";
        String groupID = "example-group";

        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.72.130:9092");
        props.put("group.id", groupID);
        // start from the earliest offset when the group has no committed offset
        props.put("auto.offset.reset", "earliest");
        // StringDeserializer reads "deserializer.encoding" (not "serializer.encoding")
        props.put("deserializer.encoding", "utf-8");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList(topicName));
        try {
            while (true) {
                // poll with a 1s timeout; each record value is one Maxwell JSON event
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset = %d, key = %s, value = %s%n",
                            record.offset(), record.key(), record.value());
            }
        } finally {
            consumer.close();
        }
    }
}
Run the consumer above from IntelliJ IDEA.
9. Test
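The consumer output below corresponds to repeated inserts of the classic DEPT demo rows into the chavin database; the statement that produced them would have looked something like this (reconstructed from the events, not the original script):
mysql -ucanal -pcanal chavin <<'SQL'
INSERT INTO dept (deptno, dname, loc) VALUES
  (10, 'ACCOUNTING', 'NEW YORK'),
  (20, 'RESEARCH',   'DALLAS'),
  (30, 'SALES',      'CHICAGO'),
  (40, 'OPERATIONS', 'BOSTON');
SQL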
offset = 3428, key = {"database":"chavin","table":"dept","_uuid":"0b195622-e7c7-4cf6-8203-5576752f9024"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2276,"data":{"deptno":10,"dname":"ACCOUNTING","loc":"NEW YORK"}}
offset = 3429, key = {"database":"chavin","table":"dept","_uuid":"333b98e3-a597-47fc-95ad-6e59ee0dadf6"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2277,"data":{"deptno":20,"dname":"RESEARCH","loc":"DALLAS"}}
offset = 3430, key = {"database":"chavin","table":"dept","_uuid":"cf9fa656-ed13-4cb0-b909-d1218e402e96"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2278,"data":{"deptno":30,"dname":"SALES","loc":"CHICAGO"}}
offset = 3431, key = {"database":"chavin","table":"dept","_uuid":"7f2f683a-39bc-498b-9a4e-920697b3da18"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2279,"data":{"deptno":40,"dname":"OPERATIONS","loc":"BOSTON"}}
offset = 3432, key = {"database":"chavin","table":"dept","_uuid":"ef639cd1-9206-4145-8608-372bbaaaa14a"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2280,"data":{"deptno":10,"dname":"ACCOUNTING","loc":"NEW YORK"}}
offset = 3433, key = {"database":"chavin","table":"dept","_uuid":"ebdf15ad-7149-4ac4-b567-627dd910182c"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2281,"data":{"deptno":20,"dname":"RESEARCH","loc":"DALLAS"}}
offset = 3434, key = {"database":"chavin","table":"dept","_uuid":"1bc667f4-15f0-438c-8139-6f1cbe8b4db3"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2282,"data":{"deptno":30,"dname":"SALES","loc":"CHICAGO"}}
offset = 3435, key = {"database":"chavin","table":"dept","_uuid":"1613b695-284a-49e3-9793-74fb2cf8dc5b"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2283,"data":{"deptno":40,"dname":"OPERATIONS","loc":"BOSTON"}}
offset = 3436, key = {"database":"chavin","table":"dept","_uuid":"f72a800c-92cc-4494-9438-bc61c58b5cb9"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2284,"data":{"deptno":10,"dname":"ACCOUNTING","loc":"NEW YORK"}}
offset = 3437, key = {"database":"chavin","table":"dept","_uuid":"9887d144-d75d-46f8-96ba-ad7c3adf45fd"}, value = {"database":"chavin","table":"dept","type":"insert","ts":1499944326,"xid":121,"xoffset":2285,"data":{"deptno":20,"dname":"RESEARCH","loc":"DALLAS"}}
At this point, changes made in MySQL flow through Maxwell into Kafka and out to the consumer in real time.