redis-rdb-cli

A tool that can parse, filter, split, and merge rdb files, and analyze memory usage offline. It can also sync data between two Redis servers and allows users to define their own sink service to migrate Redis data elsewhere.
Chat with author
Contact the author
Binary release
Runtime requirement
jdk 1.8+
Install
$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h
Compile requirement
jdk 1.8+
maven-3.3.1+
Compile & run
$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h
Run in docker
# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V
Build native image via graalvm in docker
$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V
Windows Environment Variables
Add /path/to/redis-rdb-cli/bin to Path environment variable
Usage
Redis mass insertion
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe
Convert rdb to dump format
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof
Convert rdb to json format
$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json
Count the number of keys in an rdb
$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv
Find top 50 largest keys
$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50
Diff rdb
$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff
Convert rdb to RESP
$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof
Sync data between two Redis servers
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Sync single redis to redis cluster
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0
Handle infinite loop in rst command
# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r
Downgrade migration
# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r
Handle big key in migration
# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis cluster
$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r
or simply use the following command without nodes-30001.conf
$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r
Backup remote rdb
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb
Backup remote rdb and convert the source db to a destination db
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3
Filter rdb
$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string
Split rdb via cluster's nodes.conf
$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0
Merge multiple rdb files into one
$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash
Split an aof-use-rdb-preamble file into an rdb file and an aof file
$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof
Other parameters
More configurable parameters can be found in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Filter
The rct, rdt and rmt commands support data filtering by type, db and key regular expression (Java style). The rst command only supports data filtering by db.
For example:
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0
Monitor redis server
# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s 'redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd' -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, login grafana use `admin`, `admin` and check monitor result.

Difference between rmt and rst
- When rmt starts, the source redis first performs a BGSAVE and generates a snapshot rdb file. The rmt command migrates this snapshot file to the target redis and terminates once the process is done.
- rst migrates not only the snapshot rdb file but also the incremental data from the source redis, so rst never terminates unless you type CTRL+C. rst only supports db filtering; for more details please refer to Limitation of migration.
Dashboard
Since v0.1.9, rct -f mem supports showing its result in a grafana dashboard like the following:
If you want to turn it on, you MUST install docker and docker-compose first; for the installation please refer to the docker documentation.
Then run the following command:
$ cd /path/to/redis-rdb-cli/dashboard

# start
$ docker-compose up -d

# stop
$ docker-compose down

Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Then change parameter metric_gateway from none to influxdb.
Open http://localhost:3000 to check the rct -f mem's result.
If you deploy this tool on multiple instances, you need to change the metric_instance parameter to make sure it is unique between instances.
Redis 6
Redis 6 SSL
- use openssl to generate a keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
If the source redis and the target redis use the same keystore, then configure the following parameters: set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12, and set source_keystore_pass and target_keystore_pass.
After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example:
$ rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
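The keystore parameters above live in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf. A sketch of the resulting entries (the password values are placeholders; your actual defaults may differ):

```properties
source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=your-keystore-password
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=your-keystore-password
```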
Redis 6 ACL
- use the following URI to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0
The user MUST have the +@all permission to handle commands.
Hack rmt
Rmt threading model
The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to a remote redis.
migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1
The most important parameter is migrate_threads=4; this means we use the following threading model to migrate data.
single redis ----> single redis
+--------------+ +----------+ thread 1 +--------------+
| | +----| Endpoint |-------------------| |
| | | +----------+ | |
| | | | |
| | | +----------+ thread 2 | |
| | |----| Endpoint |-------------------| |
| | | +----------+ | |
| Source Redis |----| | Target Redis |
| | | +----------+ thread 3 | |
| | |----| Endpoint |-------------------| |
| | | +----------+ | |
| | | | |
| | | +----------+ thread 4 | |
| | +----| Endpoint |-------------------| |
+--------------+ +----------+ +--------------+
single redis ----> redis cluster
+--------------+ +----------+ thread 1 +--------------+
| | +----| Endpoints|-------------------| |
| | | +----------+ | |
| | | | |
| | | +----------+ thread 2 | |
| | |----| Endpoints|-------------------| |
| | | +----------+ | |
| Source Redis |----| | Redis cluster|
| | | +----------+ thread 3 | |
| | |----| Endpoints|-------------------| |
| | | +----------+ | |
| | | | |
| | | +----------+ thread 4 | |
| | +----| Endpoints|-------------------| |
+--------------+ +----------+ +--------------+
The difference between cluster migration and single migration is Endpoint versus Endpoints. In cluster migration, Endpoints contains multiple Endpoint instances, one pointing to every master instance in the cluster. For example, in a redis cluster with 3 masters and 3 replicas, if migrate_threads=4, then we have 3 * 4 = 12 connections to master instances.
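The connection arithmetic above can be sketched as follows (illustrative only; the function name is not from the tool):

```python
def cluster_connections(masters: int, migrate_threads: int) -> int:
    """Each migration thread owns an Endpoints object holding one
    Endpoint (socket) per master in the target cluster."""
    return masters * migrate_threads

# 3-master / 3-replica cluster with migrate_threads=4
print(cluster_connections(3, 4))  # prints 12
```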
Migration performance
The following 3 parameters affect migration performance
migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
- migrate_batch_size: By default we use redis pipeline to migrate data to the remote. migrate_batch_size is the pipeline batch size. If migrate_batch_size=1, the pipeline degrades to sending a single command and waiting for its response from the remote.
- migrate_retries: migrate_retries=1 means that if a socket error occurs, we recreate a new socket and retry sending the failed command to the target redis up to migrate_retries times.
- migrate_flush: migrate_flush=yes means we write every single command to the socket and then invoke SocketOutputStream.flush() immediately. If migrate_flush=no, we invoke SocketOutputStream.flush() only after every 64KB written to the socket. Note that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes.
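How these settings interact can be modeled with a small sketch (a toy model only, not the tool's actual implementation; class and field names are illustrative):

```python
class PipelineModel:
    """Toy model of migrate_batch_size / migrate_flush behavior."""

    def __init__(self, batch_size=4096, flush=True, buffer_limit=64 * 1024):
        self.batch_size = batch_size      # migrate_batch_size
        self.flush_per_command = flush    # migrate_flush=yes/no
        self.buffer_limit = buffer_limit  # 64KB threshold when flush=no
        self.buffered = 0                 # bytes not yet flushed to the socket
        self.pending = 0                  # pipelined commands awaiting replies
        self.flushes = 0                  # how many times flush() happened

    def send(self, command: bytes) -> None:
        self.buffered += len(command)
        self.pending += 1
        # flush after every command, or only when the buffer fills up
        if self.flush_per_command or self.buffered >= self.buffer_limit:
            self.flushes += 1
            self.buffered = 0
        # once a full batch is pipelined, read all pending replies
        if self.pending >= self.batch_size:
            self.pending = 0


pipe = PipelineModel(batch_size=2, flush=False)
for _ in range(5):
    pipe.send(b"*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n")
print(pipe.flushes)  # small commands with flush=no: prints 0
```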
Migration principle
+---------------+ +-------------------+ restore +---------------+
| | | redis dump format |---------------->| |
| | |-------------------| restore | |
| | convert | redis dump format |---------------->| |
| Dump rdb |------------>|-------------------| restore | Target Redis |
| | | redis dump format |---------------->| |
| | |-------------------| restore | |
| | | redis dump format |---------------->| |
+---------------+ +-------------------+ +---------------+
Limitation of migration
- We use the cluster's nodes.conf to migrate data to a cluster, and we do not handle MOVED and ASK redirection. Therefore a limitation of cluster migration is that the cluster MUST be in a stable state during the migration: the cluster MUST have no migrating or importing slots and no slave-to-master switches.
- If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH, SWAPDB, MOVE, FLUSHALL, FLUSHDB, MULTI, EXEC, SCRIPT FLUSH, SCRIPT LOAD, EVAL, EVALSHA. The following commands RPOPLPUSH, SDIFFSTORE, SINTERSTORE, SMOVE, ZINTERSTORE, ZUNIONSTORE, DEL, UNLINK, RENAME, RENAMENX, PFMERGE, PFCOUNT, MSETNX, BRPOPLPUSH, BITOP, MSET, COPY, BLMOVE, LMOVE, ZDIFFSTORE, GEOSEARCHSTORE are ONLY SUPPORTED WHEN THEIR KEYS ARE IN THE SAME SLOT (e.g. del {user}:1 {user}:2).
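The same-slot requirement follows the standard redis cluster key hash-slot rule: CRC16 of the key (or of its hash tag, the part between `{` and `}`) modulo 16384. A minimal sketch of that rule:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant), the checksum redis cluster uses."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
        crc &= 0xFFFF
    return crc

def key_hash_slot(key: bytes) -> int:
    """Map a key to one of 16384 slots; honor {hash tags} if present."""
    s = key.find(b'{')
    if s != -1:
        e = key.find(b'}', s + 1)
        if e != -1 and e != s + 1:  # non-empty tag between braces
            key = key[s + 1:e]
    return crc16(key) % 16384

# both keys hash only the tag "user", so they land in the same slot
print(key_hash_slot(b"{user}:1") == key_hash_slot(b"{user}:2"))  # prints True
```

This is why del {user}:1 {user}:2 works: the shared hash tag forces both keys into one slot, so a single cluster node can serve the multi-key command.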
Hack ret
What the ret command does
The ret command allows users to define their own sink service, for example to sink redis data to mysql or mongodb. The ret command uses the Java SPI extension mechanism to do this job.
How to implement a sink service
User should follow the steps below to implement a sink service.
- create a java project with the following maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        <!--
        <dependency>
            other dependencies
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
- implement the SinkService interface
public class YourSinkService implements SinkService {
    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
- register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService
# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService
your.package.YourSinkService
- package and deploy
$ mvn clean install
$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
- debug your sink service
public static void main(String[] args) throws Exception {
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}
How to implement a formatter service
- create YourFormatterService extending AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {
    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
- register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService
# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService
your.package.YourFormatterService
- package and deploy
$ mvn clean install
$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json
Contributors
- Baoyi Chen
- Jintao Zhang
- Maz Ahmadi
- Anish Karandikar
- Air
- Raghu Nandan B S
- Special thanks to Kater Technologies
Consulting
Commercial support for redis-rdb-cli is available. The following services are currently available:
- Onsite consulting. $10,000 per day
- Onsite training. $10,000 per day
You may also contact Baoyi Chen directly, mail to chen.bao.yi@gmail.com.
Supported by 宁文君
27 January 2023 was a sad day: I lost my mother 宁文君. She encouraged and supported me in developing this tool. Every time a company used this tool, she got as excited as a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P., and may God bless her.
Supported by IntelliJ IDEA
IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.