[Repost] Scaling out TiKV nodes in TiDB 4.0.4 with TiUP
https://blog.csdn.net/mchdba/article/details/108896766
Environment: CentOS 7, TiDB 4.0.4, TiUP v1.0.8
Goal: add two TiKV nodes, 172.21.210.37 and 172.21.210.38.
Approach: initialize the two servers and set up passwordless SSH → edit the scale-out configuration file → run the scale-out command → restart Grafana.
1. Initialize the servers and set up passwordless SSH
1) Synchronize the clocks on the new hosts.
2) Set up passwordless SSH from the control machine to the new hosts:

ssh-copy-id root@172.21.210.37
ssh-copy-id root@172.21.210.38
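If the control machine has no SSH key yet, or the new hosts have no time-sync service running, a minimal preparation sketch could look like the following. This is an assumption for CentOS 7 (chrony as the NTP client), not part of the original post; adapt it to your environment:

# on the control machine: generate a key pair only if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# on each new TiKV host: install and start an NTP client
yum install -y chrony
systemctl enable chronyd
systemctl start chronyd
chronyc tracking          # confirm the clock is actually synchronized

# back on the control machine: confirm passwordless login works after ssh-copy-id
ssh root@172.21.210.37 date
ssh root@172.21.210.38 date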
2. Edit the scale-out configuration file
tiup cluster list                         # list the names of the existing clusters
tiup cluster edit-config <cluster-name>   # review the cluster configuration and copy the relevant settings

vi scale-out.yaml

tikv_servers:
  - host: 172.21.210.37
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
  - host: 172.21.210.38
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    arch: amd64
    os: linux
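Before writing scale-out.yaml it is worth confirming which ports and directories the existing instances already use, so the new nodes do not conflict with anything on the same hosts (the scale-out prompt later warns about exactly this). A small check, using the cluster name `tidb` that appears later in this post:

tiup cluster display tidb                 # every instance with its host, ports and deploy/data directories
tiup cluster display tidb | grep tikv     # narrow the output to the TiKV instances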
3. Run the scale-out command
This assumes the user running the command already has mutual SSH trust with the new machines. If that is not the case, use -p to enter the password of the new machines interactively, or -i to point at a private key file.

tiup cluster scale-out <cluster-name> scale-out.yaml

The expected output ends with "Scaled cluster <cluster-name> out successfully", which indicates the scale-out succeeded.

[root@host-172-21-210-32 tidb_config]# tiup cluster scale-out tidb scale-out.yaml
Starting component `cluster`: scale-out tidb scale-out.yaml
Please confirm your topology:
TiDB Cluster: tidb
TiDB Version: v4.0.4
Type  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  172.21.210.37  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
tikv  172.21.210.38  20160/20180  linux/x86_64  /data1/tidb-deploy/tikv-20160,/data1/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb/ssh/id_rsa.pub
  - Download tikv:v4.0.4 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=172.21.210.38, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.38
+ [ Serial ] - RootSSH: user=root, host=172.21.210.37, port=22, key=/root/.ssh/id_rsa
+ [ Serial ] - EnvInit: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy','/data1/tidb-data'
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.39
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.34
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.33
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.35
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.36
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.32
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.38
+ [ Serial ] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - Mkdir: host=172.21.210.38, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
+ [ Serial ] - Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/tikv-20160','/data1/tidb-deploy/tikv-20160/log','/data1/tidb-deploy/tikv-20160/bin','/data1/tidb-deploy/tikv-20160/conf','/data1/tidb-deploy/tikv-20160/scripts'
  - Copy blackbox_exporter -> 172.21.210.37 ... ? Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... ? Mkdir: host=172.21.210.37, directories='/data1/tidb-deploy/monitor-9100','/data1/t...
  - Copy node_exporter -> 172.21.210.37 ... ? CopyComponent: component=node_exporter, version=v0.17.0, remote=172.21.210.37:/data1/t...
  - Copy blackbox_exporter -> 172.21.210.37 ... ? MonitoredConfig: cluster=tidb, user=tidb, node_exporter_port=9100, blackbox_export...
  - Copy node_exporter -> 172.21.210.38 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.37, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ScaleConfig: cluster=tidb, user=tidb, host=172.21.210.38, service=tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
  Starting instance pd 172.21.210.33:2379
  Starting instance pd 172.21.210.32:2379
  Start pd 172.21.210.33:2379 success
  Start pd 172.21.210.32:2379 success
Starting component node_exporter
  Starting instance 172.21.210.32
  Start 172.21.210.32 success
Starting component blackbox_exporter
  Starting instance 172.21.210.32
  Start 172.21.210.32 success
Starting component node_exporter
  Starting instance 172.21.210.33
  Start 172.21.210.33 success
Starting component blackbox_exporter
  Starting instance 172.21.210.33
  Start 172.21.210.33 success
Starting component tikv
  Starting instance tikv 172.21.210.35:20160
  Starting instance tikv 172.21.210.34:20160
  Starting instance tikv 172.21.210.39:20160
  Starting instance tikv 172.21.210.36:20160
  Start tikv 172.21.210.39:20160 success
  Start tikv 172.21.210.34:20160 success
  Start tikv 172.21.210.35:20160 success
  Start tikv 172.21.210.36:20160 success
Starting component node_exporter
  Starting instance 172.21.210.35
  Start 172.21.210.35 success
Starting component blackbox_exporter
  Starting instance 172.21.210.35
  Start 172.21.210.35 success
Starting component node_exporter
  Starting instance 172.21.210.34
  Start 172.21.210.34 success
Starting component blackbox_exporter
  Starting instance 172.21.210.34
  Start 172.21.210.34 success
Starting component node_exporter
  Starting instance 172.21.210.39
  Start 172.21.210.39 success
Starting component blackbox_exporter
  Starting instance 172.21.210.39
  Start 172.21.210.39 success
Starting component node_exporter
  Starting instance 172.21.210.36
  Start 172.21.210.36 success
Starting component blackbox_exporter
  Starting instance 172.21.210.36
  Start 172.21.210.36 success
Starting component tidb
  Starting instance tidb 172.21.210.33:4000
  Starting instance tidb 172.21.210.32:4000
  Start tidb 172.21.210.32:4000 success
  Start tidb 172.21.210.33:4000 success
Starting component prometheus
  Starting instance prometheus 172.21.210.32:9090
  Start prometheus 172.21.210.32:9090 success
Starting component grafana
  Starting instance grafana 172.21.210.32:3000
  Start grafana 172.21.210.32:3000 success
Starting component alertmanager
  Starting instance alertmanager 172.21.210.32:9093
  Start alertmanager 172.21.210.32:9093 success
Checking service state of pd
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
  172.21.210.33   Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
Checking service state of tikv
  172.21.210.34   Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
  172.21.210.35   Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
  172.21.210.36   Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
  172.21.210.39   Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
Checking service state of tidb
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
  172.21.210.33   Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
  172.21.210.32   Active: active (running) since Sat 2020-10-17 02:25:27 CST; 2 weeks 5 days ago
Checking service state of grafana
  172.21.210.32   Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.38
+ [Parallel] - UserSSH: user=tidb, host=172.21.210.37
+ [ Serial ] - save meta
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component tikv
  Starting instance tikv 172.21.210.38:20160
  Starting instance tikv 172.21.210.37:20160
  Start tikv 172.21.210.37:20160 success
  Start tikv 172.21.210.38:20160 success
Starting component node_exporter
  Starting instance 172.21.210.37
  Start 172.21.210.37 success
Starting component blackbox_exporter
  Starting instance 172.21.210.37
  Start 172.21.210.37 success
Starting component node_exporter
  Starting instance 172.21.210.38
  Start 172.21.210.38 success
Starting component blackbox_exporter
  Starting instance 172.21.210.38
  Start 172.21.210.38 success
Checking service state of tikv
  172.21.210.37   Active: active (running) since Thu 2020-11-05 11:33:46 CST; 3s ago
  172.21.210.38   Active: active (running) since Thu 2020-11-05 11:33:46 CST; 2s ago
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/alertmanager-9093.service, deploy_dir=/data1/tidb-deploy/alertmanager-9093, data_dir=[/data1/tidb-data/alertmanager-9093], log_dir=/data1/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.36, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.37, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tidb-4000.service, deploy_dir=/data1/tidb-deploy/tidb-4000, data_dir=[], log_dir=/data1/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.35, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/prometheus-9090.service, deploy_dir=/data1/tidb-deploy/prometheus-9090, data_dir=[/data1/tidb-data/prometheus-9090], log_dir=/data1/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.34, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.32, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/grafana-3000.service, deploy_dir=/data1/tidb-deploy/grafana-3000, data_dir=[], log_dir=/data1/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.38, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.33, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/pd-2379.service, deploy_dir=/data1/tidb-deploy/pd-2379, data_dir=[/data1/tidb-data/pd-2379], log_dir=/data1/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - InitConfig: cluster=tidb, user=tidb, host=172.21.210.39, path=/root/.tiup/storage/cluster/clusters/tidb/config-cache/tikv-20160.service, deploy_dir=/data1/tidb-deploy/tikv-20160, data_dir=[/data1/tidb-data/tikv-20160], log_dir=/data1/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb/config-cache
+ [ Serial ] - ClusterOperate: operation=RestartOperation, options={Roles:[prometheus] Nodes:[] Force:false SSHTimeout:0 OptTimeout:120 APITimeout:0 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Stopping component prometheus
  Stopping instance 172.21.210.32
  Stop prometheus 172.21.210.32:9090 success
Starting component prometheus
  Starting instance prometheus 172.21.210.32:9090
  Start prometheus 172.21.210.32:9090 success
Starting component node_exporter
  Starting instance 172.21.210.32
  Start 172.21.210.32 success
Starting component blackbox_exporter
  Starting instance 172.21.210.32
  Start 172.21.210.32 success
Checking service state of pd
  172.21.210.33   Active: active (running) since Fri 2020-10-16 22:50:22 CST; 2 weeks 5 days ago
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:50:31 CST; 2 weeks 5 days ago
Checking service state of tikv
  172.21.210.35   Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
  172.21.210.39   Active: active (running) since Fri 2020-10-16 23:34:13 CST; 2 weeks 5 days ago
  172.21.210.34   Active: active (running) since Fri 2020-10-16 22:50:19 CST; 2 weeks 5 days ago
  172.21.210.36   Active: active (running) since Sat 2020-10-17 02:25:23 CST; 2 weeks 5 days ago
Checking service state of tidb
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:50:49 CST; 2 weeks 5 days ago
  172.21.210.33   Active: active (running) since Fri 2020-10-16 22:50:40 CST; 2 weeks 5 days ago
Checking service state of prometheus
  172.21.210.32   Active: active (running) since Thu 2020-11-05 11:33:53 CST; 2s ago
Checking service state of grafana
  172.21.210.32   Active: active (running) since Fri 2020-10-16 23:55:07 CST; 2 weeks 5 days ago
Checking service state of alertmanager
  172.21.210.32   Active: active (running) since Fri 2020-10-16 22:51:06 CST; 2 weeks 5 days ago
+ [ Serial ] - UpdateTopology: cluster=tidb
Scaled cluster `tidb` out successfully
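Once the command reports success, the two new stores register with PD and regions are rebalanced onto them gradually. A hedged verification sketch using pd-ctl through TiUP (not part of the original post; depending on the TiUP release the invocation may need a version suffix such as `tiup ctl:v4.0.4 pd`, and the PD address below is this cluster's 172.21.210.32:2379):

# list all TiKV stores known to PD; the two new stores should appear with state "Up"
tiup ctl pd -u http://172.21.210.32:2379 store

# watch leader/region counts slowly converge onto the new stores
tiup ctl pd -u http://172.21.210.32:2379 store | grep -E '"address"|"leader_count"|"region_count"'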
4. Check the cluster status and restart Grafana
# Check the cluster status
tiup cluster display <cluster-name>

# Restart Grafana so the dashboards pick up the new TiKV nodes
tiup cluster restart tidb -R grafana
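Rebalancing progress can also be watched from inside TiDB itself. A minimal sketch, assuming a MySQL client on the control machine and a tidb-server listening on 172.21.210.32:4000 (host, port, and credentials are assumptions; substitute your own):

# one row per TiKV store, including the two new ones, with their leader and region counts
mysql -h 172.21.210.32 -P 4000 -u root -p -e \
  "SELECT STORE_ID, ADDRESS, STORE_STATE_NAME, LEADER_COUNT, REGION_COUNT
   FROM INFORMATION_SCHEMA.TIKV_STORE_STATUS ORDER BY STORE_ID;"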