redis-rdb-cli

A tool that can parse, filter, split, and merge rdb files and analyze memory usage offline. It can also sync data between two redis instances and allows users to define their own sink service to migrate redis data elsewhere.
Chat with author
Contact the author
Binary release
Runtime requirement
jdk 1.8+
Install
$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h
Compile requirement
jdk 1.8+
maven-3.3.1+
Compile & run
$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h
Run in docker
# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V
Build native image via graalvm in docker
$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V
Windows Environment Variables
Add /path/to/redis-rdb-cli/bin to Path environment variable
Usage
Redis mass insertion
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe
Convert rdb to dump format
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof
Convert rdb to json format
$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json
Number of keys in rdb
$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv
Find top 50 largest keys
$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50
Diff rdb
$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff
Convert rdb to RESP
$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof
Sync with 2 redis
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Sync single redis to redis cluster
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0
Handle infinite loop in rst command
# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r
Downgrade migration
# Migrate data from redis-7 to redis-6
# For details about dump_rdb_version please see the comments in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r
Handle big key in migration
# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis cluster
$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r
Or simply use the following command without nodes-30001.conf
$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r
Backup remote rdb
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb
Backup remote rdb and convert db to dest db
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3
Filter rdb
$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string
Split rdb via cluster's nodes.conf
$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0
Merge multiple rdb files into one
$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash
Cut an aof-use-rdb-preamble file into an rdb file and an aof file
$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof
Other parameters
More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Filter
The rct, rdt and rmt commands support data filtering by type, db and key RegEx (Java style). The rst command only supports filtering by db.
For example:
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0
Monitor redis server
# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd -n sentinel
#
# step4
# open url `http://localhost:3000/d/monitor/monitor`, log in to grafana with `admin`/`admin` and check the monitor result.

Difference between rmt and rst
- When rmt starts, the source redis first does a BGSAVE and generates a snapshot rdb file. rmt migrates this snapshot file to the target redis and terminates once the process is done.
- rst migrates not only the snapshot rdb file but also the incremental data from the source redis, so rst never terminates unless you type CTRL+C. rst only supports the db filter; for more details please refer to Limitation of migration.
Dashboard
Since v0.1.9, rct -f mem supports showing the result in a grafana dashboard like the following:
If you want to turn it on, you MUST install docker and docker-compose first; for the installation please refer to docker.
Then run the following command:
$ cd /path/to/redis-rdb-cli/dashboard

# start
$ docker-compose up -d

# stop
$ docker-compose down

Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Then change parameter metric_gateway from none to influxdb.
Open http://localhost:3000 to check the result of rct -f mem.
If you deploy this tool on multiple instances, you need to change the metric_instance parameter to make sure it is unique between instances.
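For example, a second instance might use a configuration like the following in redis-rdb-cli.conf (a minimal sketch; the instance name is illustrative, not a required value):

metric_gateway=influxdb
metric_instance=instance_2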
Redis 6
Redis 6 SSL
- use openssl to generate keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
If the source redis and the target redis use the same keystore, configure the following parameters:
set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12, and set source_keystore_pass and target_keystore_pass.
After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example: rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
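A minimal sketch of what these settings could look like in redis-rdb-cli.conf (the path comes from the openssl step above; the passwords are placeholders, not real values):

source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=your-export-password
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=your-export-password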
Redis 6 ACL
- Use the following URI to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0
The user MUST have +@all permission to handle commands.
Hack rmt
Rmt threading model
The rmt command uses the following 4 parameters (in redis-rdb-cli.conf) to migrate data to the remote redis.
migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1
The most important parameter is migrate_threads=4. This means we use the following threading model to migrate data.
single redis ----> single redis
+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Target Redis |
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoint |-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoint |-------------------|              |
+--------------+         +----------+                   +--------------+
single redis ----> redis cluster
+--------------+         +----------+     thread 1      +--------------+
|              |    +----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 2      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
| Source Redis |----|                                   | Redis cluster|
|              |    |    +----------+     thread 3      |              |
|              |    |----| Endpoints|-------------------|              |
|              |    |    +----------+                   |              |
|              |    |                                   |              |
|              |    |    +----------+     thread 4      |              |
|              |    +----| Endpoints|-------------------|              |
+--------------+         +----------+                   +--------------+
The difference between cluster migration and single-instance migration is Endpoint vs Endpoints. In cluster migration, Endpoints contains multiple Endpoint instances, one pointing to each master in the cluster. For example:
In a redis cluster with 3 masters and 3 replicas, if migrate_threads=4 then we have 3 * 4 = 12 connections to the master instances.
Migration performance
The following 3 parameters affect migration performance
migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
- migrate_batch_size: By default we use a redis pipeline to migrate data to the remote redis. migrate_batch_size is the pipeline batch size. If migrate_batch_size=1 the pipeline degenerates into sending one command at a time and waiting for the response from the remote.
- migrate_retries: migrate_retries=1 means that if a socket error occurs, we recreate a new socket and retry sending the failed command to the target redis up to migrate_retries times.
- migrate_flush: migrate_flush=yes means we write every single command to the socket and then invoke SocketOutputStream.flush() immediately. If migrate_flush=no we invoke SocketOutputStream.flush() only after every 64KB is written to the socket (see the sketch below). Note that this parameter also affects migrate_retries: migrate_retries only takes effect when migrate_flush=yes.
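To make the migrate_flush behaviour concrete, here is a minimal Java sketch (not the tool's actual implementation; class and field names are illustrative) of the two flushing policies described above:

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FlushPolicySketch {

    private final OutputStream out;         // socket output stream wrapped in a buffer
    private final boolean flushPerCommand;  // corresponds to migrate_flush=yes

    public FlushPolicySketch(OutputStream socketOut, boolean flushPerCommand) {
        // the 64KB buffer mirrors the "flush every 64KB" behaviour of migrate_flush=no
        this.out = new BufferedOutputStream(socketOut, 64 * 1024);
        this.flushPerCommand = flushPerCommand;
    }

    public void write(byte[] restoreCommand) throws IOException {
        out.write(restoreCommand);
        if (flushPerCommand) {
            out.flush(); // migrate_flush=yes: every command hits the wire immediately
        }
        // migrate_flush=no: data reaches the socket only when the 64KB buffer fills
    }
}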
Migration principle
+---------------+             +-------------------+     restore     +---------------+
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|     restore     |               |
|               |   convert   | redis dump format |---------------->|               |
|   Dump rdb    |------------>|-------------------|     restore     |  Target Redis |
|               |             | redis dump format |---------------->|               |
|               |             |-------------------|     restore     |               |
|               |             | redis dump format |---------------->|               |
+---------------+             +-------------------+                 +---------------+
Limitation of migration
- We use the cluster's nodes.conf to migrate data to a cluster. Because we do not handle MOVED or ASK redirection, the limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no migrating or importing slots and no slave-to-master switch.
- If you use rst to migrate data to a cluster, the following commands are not supported: PUBLISH, SWAPDB, MOVE, FLUSHALL, FLUSHDB, MULTI, EXEC, SCRIPT FLUSH, SCRIPT LOAD, EVAL, EVALSHA. The following commands RPOPLPUSH, SDIFFSTORE, SINTERSTORE, SMOVE, ZINTERSTORE, ZUNIONSTORE, DEL, UNLINK, RENAME, RENAMENX, PFMERGE, PFCOUNT, MSETNX, BRPOPLPUSH, BITOP, MSET, COPY, BLMOVE, LMOVE, ZDIFFSTORE, GEOSEARCHSTORE are ONLY SUPPORTED WHEN ALL OF THE COMMAND'S KEYS ARE IN THE SAME SLOT (e.g. del {user}:1 {user}:2, as sketched below).
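The same-slot requirement can be satisfied with hash tags: Redis Cluster hashes only the substring inside the first {...} of a key, so keys sharing a tag always map to the same slot. A minimal Java sketch of that rule (class and method names are illustrative, not part of the tool):

import java.util.Arrays;

public class HashTagSketch {

    // Returns the part of the key that Redis Cluster actually hashes.
    static String hashTag(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                return key.substring(open + 1, close); // non-empty tag: only this part is hashed
            }
        }
        return key; // no usable tag: the whole key is hashed
    }

    // Keys sharing the same hash tag are guaranteed to land in the same slot.
    static boolean sameSlotByTag(String... keys) {
        return Arrays.stream(keys).map(HashTagSketch::hashTag).distinct().count() == 1;
    }

    public static void main(String[] args) {
        System.out.println(sameSlotByTag("{user}:1", "{user}:2")); // true
        System.out.println(sameSlotByTag("user:1", "user:2"));     // false: not guaranteed
    }
}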
Hack ret
What the ret command does
The ret command allows users to define their own sink service, for example sinking redis data to mysql or mongodb. The ret command uses the Java SPI extension mechanism to do this job.
How to implement a sink service
Users should follow the steps below to implement a sink service.
- Create a java project using the following maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        <!--
        <dependency>
            other dependencies
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
- Implement the SinkService interface
public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
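As a hedged illustration of where real sink logic would go, the following hypothetical sink simply counts events and logs progress via slf4j, mirroring the interface methods shown above (the class name, sink name, and logging are illustrative only):

import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.moilioncircle.redis.rdb.cli.api.sink.SinkService;
import com.moilioncircle.redis.replicator.Replicator;
import com.moilioncircle.redis.replicator.event.Event;

public class CountingSinkService implements SinkService {

    private static final Logger logger = LoggerFactory.getLogger(CountingSinkService.class);
    private final AtomicLong counter = new AtomicLong();

    @Override
    public String sink() {
        // the name passed on the command line: ret ... -n counting-sink
        return "counting-sink";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external config here (e.g. JDBC URL, batch size)
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        long n = counter.incrementAndGet();
        if (n % 10000 == 0) {
            logger.info("processed {} events, last event type: {}", n, event.getClass().getSimpleName());
        }
    }
}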
- register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService
# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService
your.package.YourSinkService
- package and deploy
$ mvn clean install
$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
- debug your sink service
public static void main(String[] args) throws Exception {
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}
How to implement a formatter service
- Create a YourFormatterService that extends AbstractFormatterService
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
- register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService
# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService
your.package.YourFormatterService
- package and deploy
$ mvn clean install
$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json
Contributors
- Baoyi Chen
- Jintao Zhang
- Maz Ahmadi
- Anish Karandikar
- Air
- Raghu Nandan B S
- Special thanks to Kater Technologies
Consulting
Commercial support for redis-rdb-cli is available. The following services are currently available:
- Onsite consulting. $10,000 per day
- Onsite training. $10,000 per day
You may also contact Baoyi Chen directly, mail to chen.bao.yi@gmail.com.
Supported by 宁文君
27 January 2023 was a sad day on which I lost my mother 宁文君. She was encouraging and supporting me in developing this tool. Every time a company used this tool, she got excited like a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P., and may God bless her.
Supported by IntelliJ IDEA
IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.