redis-rdb-cli
A tool that can parse, filter, split, and merge rdb files and analyze memory usage offline. It can also sync data between two Redis instances and allows users to define their own sink service to migrate Redis data elsewhere.
Chat with author
Contact the author
Binary release
Runtime requirement
jdk 1.8+
Install
$ wget https://github.com/leonchen83/redis-rdb-cli/releases/download/${version}/redis-rdb-cli-release.zip
$ unzip redis-rdb-cli-release.zip
$ cd ./redis-rdb-cli/bin
$ ./rct -h
Compile requirement
jdk 1.8+
maven-3.3.1+
Compile & run
$ git clone https://github.com/leonchen83/redis-rdb-cli.git
$ cd redis-rdb-cli
$ mvn clean install -Dmaven.test.skip=true
$ cd target/redis-rdb-cli-release/redis-rdb-cli/bin
$ ./rct -h
Run in docker
# run with jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest
$ rct -V

# run without jvm
$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest-native
$ rct -V
Build native image via graalvm in docker
$ docker build -m 8g -f DockerfileNative -t redisrdbcli:redis-rdb-cli .
$ docker run -it redisrdbcli:redis-rdb-cli bash
bash-5.1# rct -V
Windows Environment Variables
Add `/path/to/redis-rdb-cli/bin` to the `Path` environment variable.
Usage
Redis mass insertion
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r
$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe
Convert rdb to dump format
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof
Convert rdb to json format
$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json
Count the number of keys in rdb
$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv
Find top 50 largest keys
$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50
Diff rdb
$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff
$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff
$ diff /path/to/dump1.diff /path/to/dump2.diff
Convert rdb to RESP
$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof
Sync data between 2 redis
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Sync single redis to redis cluster
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0
Handle infinite loop in rst command
# set client-output-buffer-limit in source redis
$ redis-cli config set client-output-buffer-limit "slave 0 0 0"
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r
Downgrade migration
# Migrate data from redis-7 to redis-6
# About dump_rdb_version please see comment in redis-rdb-cli.conf
$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r
Handle big key in migration
# set proto-max-bulk-len in target redis
$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb

# set Xms Xmx in redis-rdb-cli node
$ export JAVA_TOOL_OPTIONS="-Xms8g -Xmx8g"

# execute migration
$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r
Migrate rdb to remote redis cluster
$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r
or simply use the following command without nodes-30001.conf
$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r
Backup remote rdb
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb
Backup remote rdb and convert db to dest db
$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3
Filter rdb
$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string
Split rdb via cluster's nodes.conf
$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0
Merge multi rdb to one
$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash
Split an aof-use-rdb-preamble file into an rdb file and an aof file
$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof
Other parameters
More configurable parameters can be modified in /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf
Filter
The `rct`, `rdt` and `rmt` commands support data filtering by `type`, `db` and `key` regular expressions (Java style). The `rst` command only supports filtering by `db`.
For example:
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash
$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list
$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0
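The key filter takes a Java-style regular expression. The example below is only illustrative and assumes the key option is `-k`/`--key` as listed in `rct -h`; please confirm the exact flag against your build's help output:
$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -k "user:.*"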
Monitor redis server
# step1
# open file `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`
# change the property `metric_gateway` from `none` to `influxdb`
#
# step2
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d
#
# step3
$ rmonitor -s redis://127.0.0.1:6379 -n standalone
$ rmonitor -s redis://127.0.0.1:30001 -n cluster
$ rmonitor -s redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster&authUser=usr&authPassword=pwd -n sentinel
#
# step4
# open the url `http://localhost:3000/d/monitor/monitor`, log in to grafana with `admin`/`admin` and check the monitoring result.
Difference between rmt and rst
- When `rmt` starts, the source redis first performs a `BGSAVE` and generates a snapshot rdb file. The `rmt` command migrates this snapshot file to the target redis; after this process is done, `rmt` terminates.
- `rst` migrates not only the snapshot rdb file but also the incremental data from the source redis, so `rst` never terminates unless you press `CTRL+C`. `rst` only supports the `db` filter; for more details please refer to Limitation of migration.
Dashboard
Since `v0.1.9`, `rct -f mem` supports showing the result in a grafana dashboard like the following:

If you want to turn it on, you MUST install `docker` and `docker-compose` first; for the installation please refer to docker.

Then run the following commands:
# start
$ cd /path/to/redis-rdb-cli/dashboard
$ docker-compose up -d

# stop
$ docker-compose down
Then edit /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf and change the parameter metric_gateway from none to influxdb.

Open http://localhost:3000 to check the result of rct -f mem.

If you deploy this tool on multiple instances, you also need to change the parameter metric_instance to make sure it is unique between instances.
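For example, the relevant lines in redis-rdb-cli.conf would then look like the following (the instance name is only an illustrative value):

metric_gateway=influxdb
metric_instance=instance-1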
Redis 6
Redis 6 SSL
- use openssl to generate keystore
$ cd /path/to/redis-6.0-rc1
$ ./utils/gen-test-certs.sh
$ cd tests/tls
$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12
If the source redis and the target redis use the same keystore, configure the following parameters: set source_keystore_path and target_keystore_path to point to /path/to/redis-6.0-rc1/tests/tls/redis.p12, and set source_keystore_pass and target_keystore_pass.

After configuring the ssl parameters, use rediss://host:port in your command to enable ssl, for example:

$ rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0
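An illustrative excerpt of the corresponding lines in redis-rdb-cli.conf might look like this (the password value is a placeholder, not a real credential):

source_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
source_keystore_pass=your-keystore-pass
target_keystore_path=/path/to/redis-6.0-rc1/tests/tls/redis.p12
target_keystore_pass=your-keystore-pass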
Redis 6 ACL
- use the following URI to enable redis ACL support
$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0
The `user` MUST have the `+@all` permission to handle commands.
Hack rmt
Rmt threading model
The `rmt` command uses the following 4 parameters (in `redis-rdb-cli.conf`) to migrate data to a remote redis.
migrate_batch_size=4096
migrate_threads=4
migrate_flush=yes
migrate_retries=1
The most important parameter is `migrate_threads=4`. This means we use the following threading model to migrate data.
single redis ----> single redis
+--------------+         +----------+     thread 1     +--------------+
|              |    +----| Endpoint |------------------|              |
|              |    |    +----------+                  |              |
|              |    |                                  |              |
|              |    |    +----------+     thread 2     |              |
|              |    |----| Endpoint |------------------|              |
|              |    |    +----------+                  |              |
| Source Redis |----|                                  | Target Redis |
|              |    |    +----------+     thread 3     |              |
|              |    |----| Endpoint |------------------|              |
|              |    |    +----------+                  |              |
|              |    |                                  |              |
|              |    |    +----------+     thread 4     |              |
|              |    +----| Endpoint |------------------|              |
+--------------+         +----------+                  +--------------+
single redis ----> redis cluster
+--------------+         +----------+     thread 1     +--------------+
|              |    +----| Endpoints|------------------|              |
|              |    |    +----------+                  |              |
|              |    |                                  |              |
|              |    |    +----------+     thread 2     |              |
|              |    |----| Endpoints|------------------|              |
|              |    |    +----------+                  |              |
| Source Redis |----|                                  | Redis cluster|
|              |    |    +----------+     thread 3     |              |
|              |    |----| Endpoints|------------------|              |
|              |    |    +----------+                  |              |
|              |    |                                  |              |
|              |    |    +----------+     thread 4     |              |
|              |    +----| Endpoints|------------------|              |
+--------------+         +----------+                  +--------------+
The difference between cluster migration and single-instance migration is `Endpoint` versus `Endpoints`. In cluster migration, `Endpoints` contains multiple `Endpoint` instances, one pointing to every `master` in the cluster. For example, with a redis cluster of 3 masters and 3 replicas and `migrate_threads=4`, we have `3 * 4 = 12` connections to the `master` instances.
Migration performance
The following 3 parameters affect the migration performance:
migrate_batch_size=4096
migrate_retries=1
migrate_flush=yes
- `migrate_batch_size`: By default we use a redis `pipeline` to migrate data to the remote redis; `migrate_batch_size` is the `pipeline` batch size. If `migrate_batch_size=1`, the `pipeline` degrades to sending a single command and waiting for its response from the remote.
- `migrate_retries`: `migrate_retries=1` means that if a socket error occurs, we recreate a new socket and retry sending the failed command to the target redis up to `migrate_retries` times.
- `migrate_flush`: `migrate_flush=yes` means we write every single command to the socket and then invoke `SocketOutputStream.flush()` immediately. If `migrate_flush=no`, we invoke `SocketOutputStream.flush()` only after every 64KB written to the socket. Note that this parameter also affects `migrate_retries`: `migrate_retries` only takes effect when `migrate_flush=yes` (a sketch of how these parameters interact is shown below).
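The following is a minimal, hedged sketch (assumed class and method names, not the tool's actual code) of how `migrate_batch_size`, `migrate_flush` and `migrate_retries` could interact in a pipelined writer:

import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.List;

// Hedged sketch only, NOT redis-rdb-cli's actual implementation.
public class PipelineSketch {

    static final int MIGRATE_BATCH_SIZE = 4096; // pipeline batch size
    static final boolean MIGRATE_FLUSH = true;  // migrate_flush=yes
    static final int MIGRATE_RETRIES = 1;       // retries after a socket error

    private final String host;
    private final int port;
    private Socket socket;
    private OutputStream out;

    PipelineSketch(String host, int port) throws IOException {
        this.host = host;
        this.port = port;
        connect();
    }

    private void connect() throws IOException {
        socket = new Socket(host, port);
        // buffer roughly 64KB so that migrate_flush=no only flushes per 64KB
        out = new BufferedOutputStream(socket.getOutputStream(), 64 * 1024);
    }

    // Send a list of already RESP-encoded commands as pipelined batches.
    void send(List<byte[]> commands) throws IOException {
        int sent = 0;
        for (byte[] command : commands) {
            int attempts = 0;
            while (true) {
                try {
                    out.write(command);
                    if (MIGRATE_FLUSH) out.flush(); // flush every single command
                    break;
                } catch (IOException e) {
                    // on a socket error, recreate the socket and retry the failed command
                    if (++attempts > MIGRATE_RETRIES) throw e;
                    connect();
                }
            }
            // end of one pipeline batch: make sure everything is on the wire
            if (++sent % MIGRATE_BATCH_SIZE == 0) out.flush();
        }
        out.flush();
        // the real tool also reads and verifies the replies per Endpoint, omitted here
    }
}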
Migration principle
+---------------+             +-------------------+    restore    +---------------+
|               |             | redis dump format |-------------->|               |
|               |             |-------------------|    restore    |               |
|               |   convert   | redis dump format |-------------->|               |
|   Dump rdb    |------------>|-------------------|    restore    |  Target Redis |
|               |             | redis dump format |-------------->|               |
|               |             |-------------------|    restore    |               |
|               |             | redis dump format |-------------->|               |
+---------------+             +-------------------+               +---------------+
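As a hedged illustration of the principle above (the helper class is illustrative and not taken from the tool's source), each entry in redis dump format is eventually wrapped into a RESP-encoded `RESTORE` command and sent to the target redis:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hedged sketch: builds a RESP-encoded RESTORE command
// (RESTORE key ttl serialized-value REPLACE) from a dump-format payload.
public class RestoreCommandSketch {

    static byte[] restore(byte[] key, long ttlMillis, byte[] dumpPayload) throws IOException {
        byte[][] args = {
                "RESTORE".getBytes(StandardCharsets.US_ASCII),
                key,
                Long.toString(ttlMillis).getBytes(StandardCharsets.US_ASCII),
                dumpPayload,
                "REPLACE".getBytes(StandardCharsets.US_ASCII)
        };
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        // RESP array header: *<number of arguments>\r\n
        buf.write(("*" + args.length + "\r\n").getBytes(StandardCharsets.US_ASCII));
        for (byte[] arg : args) {
            // each argument is a bulk string: $<length>\r\n<bytes>\r\n
            buf.write(("$" + arg.length + "\r\n").getBytes(StandardCharsets.US_ASCII));
            buf.write(arg);
            buf.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        }
        return buf.toByteArray();
    }
}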
Limitation of migration
- We use the cluster's `nodes.conf` to migrate data to a cluster. Because we don't handle `MOVED` and `ASK` redirection, the limitation of cluster migration is that the cluster MUST be in a stable state during the migration. This means the cluster MUST have no `migrating` or `importing` slots and no slave-to-master switches.
- If using `rst` to migrate data to a cluster, the following commands are not supported: `PUBLISH,SWAPDB,MOVE,FLUSHALL,FLUSHDB,MULTI,EXEC,SCRIPT FLUSH,SCRIPT LOAD,EVAL,EVALSHA`. The following commands `RPOPLPUSH,SDIFFSTORE,SINTERSTORE,SMOVE,ZINTERSTORE,ZUNIONSTORE,DEL,UNLINK,RENAME,RENAMENX,PFMERGE,PFCOUNT,MSETNX,BRPOPLPUSH,BITOP,MSET,COPY,BLMOVE,LMOVE,ZDIFFSTORE,GEOSEARCHSTORE` are ONLY SUPPORTED WHEN THEIR KEYS ARE IN THE SAME SLOT (e.g. `del {user}:1 {user}:2`).
Hack ret
What the ret command does
The `ret` command allows users to define their own sink service, for example sinking redis data to `mysql` or `mongodb`. The `ret` command uses the Java SPI extension mechanism to do this job.
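As a hedged sketch of that mechanism (the class below is illustrative and not part of redis-rdb-cli), a sink service registered under META-INF/services can be resolved by its sink() name with the standard Java ServiceLoader:

import java.util.ServiceLoader;

import com.moilioncircle.redis.rdb.cli.api.sink.SinkService;

// Illustrative only: how a registered SinkService could be looked up by name.
public class SinkLoaderSketch {

    public static SinkService load(String name) {
        for (SinkService service : ServiceLoader.load(SinkService.class)) {
            // `ret -n your-sink-service` matches against SinkService.sink()
            if (service.sink().equals(name)) return service;
        }
        throw new IllegalArgumentException("no sink service named " + name);
    }
}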
How to implement a sink service
Users should follow the steps below to implement a sink service.
- create a java project using maven pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.your.company</groupId>
    <artifactId>your-sink-service</artifactId>
    <version>1.0.0</version>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-rdb-cli-api</artifactId>
            <version>1.8.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>com.moilioncircle</groupId>
            <artifactId>redis-replicator</artifactId>
            <version>[3.6.4, )</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.7.25</version>
            <scope>provided</scope>
        </dependency>
        <!--
        <dependency>
            other dependencies
        </dependency>
        -->
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.1.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.1</version>
                <configuration>
                    <source>${maven.compiler.source}</source>
                    <target>${maven.compiler.target}</target>
                    <encoding>${project.build.sourceEncoding}</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
- implement the `SinkService` interface
public class YourSinkService implements SinkService {

    @Override
    public String sink() {
        return "your-sink-service";
    }

    @Override
    public void init(File config) throws IOException {
        // parse your external sink config
    }

    @Override
    public void onEvent(Replicator replicator, Event event) {
        // your sink business
    }
}
- register this service using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.sink.SinkService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.sink.SinkService
# add following content in com.moilioncircle.redis.rdb.cli.api.sink.SinkService
your.package.YourSinkService
- package and deploy
$ mvn clean install

$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your sink service
$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service
- debug your sink service
public static void main(String[] args) throws Exception {
    Replicator replicator = new RedisReplicator("redis://127.0.0.1:6379");
    // close the replicator gracefully when the JVM exits
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        Replicators.closeQuietly(replicator);
    }));
    // rethrow replication errors so they are visible while debugging
    replicator.addExceptionListener((rep, tx, e) -> {
        throw new RuntimeException(tx.getMessage(), tx);
    });
    SinkService sink = new YourSinkService();
    sink.init(new File("/path/to/your-sink.conf"));
    // dispatch events to your sink asynchronously with 4 worker threads
    replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));
    replicator.open();
}
How to implement a formatter service
- create `YourFormatterService` extending `AbstractFormatterService`
public class YourFormatterService extends AbstractFormatterService {

    @Override
    public String format() {
        return "test";
    }

    @Override
    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {
        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);
        getEscaper().encode(key, getOutputStream());
        getEscaper().encode(val, getOutputStream());
        getOutputStream().write('\n');
        return context;
    }
}
- register this formatter using Java SPI
# create com.moilioncircle.redis.rdb.cli.api.format.FormatterService file in src/main/resources/META-INF/services/
|-src
|____main
| |____resources
| | |____META-INF
| | | |____services
| | | | |____com.moilioncircle.redis.rdb.cli.api.format.FormatterService
# add following content in com.moilioncircle.redis.rdb.cli.api.format.FormatterService
your.package.YourFormatterService
- package and deploy
$ mvn clean install

$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib
- run your formatter service
$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json
Contributors
- Baoyi Chen
- Jintao Zhang
- Maz Ahmadi
- Anish Karandikar
- Air
- Raghu Nandan B S
- Special thanks to Kater Technologies
Consulting
Commercial support for redis-rdb-cli
is available. The following services are currently available:
- Onsite consulting. $10,000 per day
- Onsite training. $10,000 per day
You may also contact Baoyi Chen
directly, mail to chen.bao.yi@gmail.com.
Supported by 宁文君
27 January 2023 was a sad day: I lost my mother, 宁文君. She encouraged and supported me in developing this tool. Every time a company used it, she got excited like a child and encouraged me to keep going. Without her I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P. and may God bless her.
Supported by IntelliJ IDEA
IntelliJ IDEA is a Java integrated development environment (IDE) for developing computer software.
It is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,
and in a proprietary commercial edition. Both can be used for commercial development.