[Repost] Scaling out TiKV nodes with TiUP on TiDB 4.0.4
https://blog.csdn.net/mchdba/article/details/108896766
Environment: CentOS 7, TiDB 4.0.4, tiup v1.0.8
Goal: add two TiKV nodes, 172.21.210.37 and 172.21.210.38
Approach: initialize the two servers and set up passwordless SSH → edit the config file → run the scale-out command → restart Grafana
1. Initialize the servers and set up SSH trust
```shell
# 1. Synchronize the clocks on the new servers (NTP/chrony)
# 2. Set up SSH trust from the control machine
ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
```
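Before moving on, it is worth verifying that key-based login actually works, since every later tiup step depends on it. A minimal sketch (not part of tiup; the host list and helper names are illustrative) that uses `BatchMode=yes` so ssh fails fast instead of prompting for a password:

```python
# check_ssh.py -- hypothetical helper: verify passwordless SSH to the new hosts
import subprocess

NEW_HOSTS = ["172.21.210.37", "172.21.210.38"]  # the two new TiKV hosts

def ssh_check_cmd(host, user="root"):
    # BatchMode=yes makes ssh exit non-zero instead of prompting for a
    # password, so a zero exit status proves key-based login works
    return ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=5",
            f"{user}@{host}", "true"]

def check_hosts(hosts):
    # returns the subset of hosts that are NOT reachable without a password
    failed = []
    for h in hosts:
        if subprocess.run(ssh_check_cmd(h), capture_output=True).returncode != 0:
            failed.append(h)
    return failed
```

Running `check_hosts(NEW_HOSTS)` should return an empty list once both `ssh-copy-id` commands have succeeded.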
2. Edit the configuration file
```shell
tiup cluster list                        # list the current cluster names
tiup cluster edit-config <cluster-name>  # review the cluster config and copy the matching settings
vi scale-out.yaml
```

```yaml
tikv_servers:
  - host: 172.21.210.37
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
  - host: 172.21.210.38
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
```
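Before running the scale-out, it can be worth sanity-checking the new topology against the hosts already in the cluster, since tiup aborts on conflicts and warns "Please confirm there is no port/directory conflicts in same host". A minimal sketch (not part of tiup; the triples below are illustrative):

```python
# Hypothetical pre-flight check for the "no port/directory conflicts
# in same host" warning that tiup prints before scaling out.

def find_conflicts(existing, new):
    """existing/new: iterables of (host, port, deploy_dir) triples.
    Returns the entries in `new` whose port or deploy_dir is already
    taken on the same host (including duplicates inside `new`)."""
    ports = {(h, p) for h, p, _ in existing}
    dirs = {(h, d) for h, _, d in existing}
    conflicts = []
    for h, p, d in new:
        if (h, p) in ports or (h, d) in dirs:
            conflicts.append((h, p, d))
        ports.add((h, p))
        dirs.add((h, d))
    return conflicts

# existing TiKV instances (as seen in `tiup cluster display`),
# new ones taken from scale-out.yaml
existing = [("172.21.210.34", 20160, "/data1/tidb-deploy/tikv-20160"),
            ("172.21.210.35", 20160, "/data1/tidb-deploy/tikv-20160")]
new = [("172.21.210.37", 20160, "/data1/tidb-deploy/tikv-20160"),
       ("172.21.210.38", 20160, "/data1/tidb-deploy/tikv-20160")]
```

Here `find_conflicts(existing, new)` returns an empty list: the new entries reuse the same port and directories, but on hosts of their own, which is exactly the layout in scale-out.yaml above.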
3. Run the scale-out command
This assumes the user running the command already has SSH trust established with the new machines. If not, pass -p to enter the new machines' password interactively, or -i to point at a private key file.

```shell
tiup cluster scale-out <cluster-name> scale-out.yaml
```

The expected output ends with `Scaled cluster <cluster-name> out successfully`, which indicates the scale-out succeeded:

```
[root@host-172-21-210-32 tidb_config]# tiup cluster scale-out tidb scale-out.yaml
Starting component `cluster`: scale-out tidb scale-out.yaml
Please confirm your topology:
TiDB Cluster: tidb
TiDB Version: v4.0.4
Type  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  172.21.210.37  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
tikv  172.21.210.38  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
  - Download tikv:v4.0.4 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=172.21.210.38, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.38
+ [ Serial ] - RootSSH: user=root, host=172.21.210.37, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.39
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.34
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.35
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.36
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.38
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
  - Copy blackbox_exporter -> 172.21.210.37 ... ? Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... ? Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy node_exporter -> 172.21.210.37 ... ? CopyComponent: component=node_exporter, version=v0.17.0, remote=172.21.210.37:/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... ? MonitoredConfig: cluster=tidb, user=tidb, node_exporter_port=9100, blackbox_export...
  - Copy node_exporter -> 172.21.210.38 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.37, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.38, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
        Starting instance pd 172.21.210.33:2379
        Starting instance pd 172.21.210.32:2379
        Start pd 172.21.210.33:2379 success
        Start pd 172.21.210.32:2379 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component node_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component blackbox_exporter
        Starting instance 172.21.210.33
        Start 172.21.210.33 success
Starting component tikv
        Starting instance tikv 172.21.210.35:20160
        Starting instance tikv 172.21.210.34:20160
        Starting instance tikv 172.21.210.39:20160
        Starting instance tikv 172.21.210.36:20160
        Start tikv 172.21.210.39:20160 success
        Start tikv 172.21.210.34:20160 success
        Start tikv 172.21.210.35:20160 success
        Start tikv 172.21.210.36:20160 success
Starting component node_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component blackbox_exporter
        Starting instance 172.21.210.35
        Start 172.21.210.35 success
Starting component node_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component blackbox_exporter
        Starting instance 172.21.210.34
        Start 172.21.210.34 success
Starting component node_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component blackbox_exporter
        Starting instance 172.21.210.39
        Start 172.21.210.39 success
Starting component node_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component blackbox_exporter
        Starting instance 172.21.210.36
        Start 172.21.210.36 success
Starting component tidb
        Starting instance tidb 172.21.210.33:4000
        Starting instance tidb 172.21.210.32:4000
        Start tidb 172.21.210.32:4000 success
        Start tidb 172.21.210.33:4000 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component grafana
        Starting instance grafana 172.21.210.32:3000
        Start grafana 172.21.210.32:3000 success
Starting component alertmanager
        Starting instance alertmanager 172.21.210.32:9093
        Start alertmanager 172.21.210.32:9093 success
Checking service state of pd
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
        172.21.210.33     Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.34     Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.35     Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36     Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
        172.21.210.39     Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33     Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32     Active: active (running) since Sat 2020-10-17 02:25:27 CST; 2 weeks 5 days ago
Checking service state of grafana
        172.21.210.32     Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.38
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - save meta
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component tikv
        Starting instance tikv 172.21.210.38:20160
        Starting instance tikv 172.21.210.37:20160
        Start tikv 172.21.210.37:20160 success
        Start tikv 172.21.210.38:20160 success
Starting component node_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component blackbox_exporter
        Starting instance 172.21.210.37
        Start 172.21.210.37 success
Starting component node_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Starting component blackbox_exporter
        Starting instance 172.21.210.38
        Start 172.21.210.38 success
Checking service state of tikv
        172.21.210.37     Active: active (running) since Thu 2020-11-05 11:33:46 CST; 3s ago
        172.21.210.38     Active: active (running) since Thu 2020-11-05 11:33:46 CST; 2s ago
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/alertmanager-9093.service, deploy_dir=/data1/tidb-deploy/alertmanager-9093, data_dir=[/data1/tidb-data/alertmanager-9093], log_dir=/data1/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.36, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.37, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.35, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/prometheus-9090.service, deploy_dir=/data1/tidb-deploy/prometheus-9090, data_dir=[/data1/tidb-data/prometheus-9090], log_dir=/data1/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.34, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/grafana-3000.service, deploy_dir=/data1/tidb-deploy/grafana-3000, data_dir=[], log_dir=/data1/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.38, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.39, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - ClusterOperate: operation=RestartOperation, options={Roles:[prometheus] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component prometheus
        Stopping instance 172.21.210.32
        Stop prometheus 172.21.210.32:9090 success
Starting component prometheus
        Starting instance prometheus 172.21.210.32:9090
        Start prometheus 172.21.210.32:9090 success
Starting component node_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Starting component blackbox_exporter
        Starting instance 172.21.210.32
        Start 172.21.210.32 success
Checking service state of pd
        172.21.210.33     Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
Checking service state of tikv
        172.21.210.35     Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.39     Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
        172.21.210.34     Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
        172.21.210.36     Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
Checking service state of tidb
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
        172.21.210.33     Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
        172.21.210.32     Active: active (running) since Thu 2020-11-05 11:33:53 CST; 2s ago
Checking service state of grafana
        172.21.210.32     Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
        172.21.210.32     Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster `tidb` out successfully
```
4. Check the cluster status and restart Grafana
```shell
# check the cluster status
tiup cluster display <cluster-name>
# restart Grafana
tiup cluster restart tidb -R grafana
```
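To confirm the two new stores actually joined, one can scan the `tiup cluster display` output for the new hosts. A rough sketch — the helper is hypothetical, and the column layout (ID, Role, Host, Ports, OS/Arch, Status, ...) is assumed from the 4.0-era display format and may differ between tiup versions:

```python
# Hypothetical helper: scan `tiup cluster display` output and report
# whether each new TiKV host shows an "Up" status column.

def new_tikv_up(display_output, hosts=("172.21.210.37", "172.21.210.38")):
    status = {}
    for line in display_output.splitlines():
        cols = line.split()
        if "tikv" not in cols:
            continue  # not a TiKV row
        for h in hosts:
            if h in cols:  # exact match on the Host column
                status[h] = "Up" in cols
    return status
```

Feeding it the captured stdout of `tiup cluster display tidb` should yield `{"172.21.210.37": True, "172.21.210.38": True}` once both new stores are up.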