Installing Kubernetes on Debian 11 with kubeadm
Preface
Node information:
- master1: 192.168.0.33
- node1: 192.168.0.31
- node2: 192.168.0.32
Versions:
- OS: Debian 11 64-bit
- Linux kernel: 5.10
- Docker: 20.10.17
- kubeadm: 1.23.0
- kubectl: 1.23.0
- kubelet: 1.23.0
Docker's daemon.json must contain at least the following entries (note: in mainland China it is also worth adding a domestic registry mirror). A sketch of applying the configuration follows the snippet.
{
"exec-opts": ["native.cgroupdriver=systemd"],
"storage-driver": "overlay2",
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"ip-forward": true
}
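A minimal sketch of applying the configuration above, assuming it has been written to /etc/docker/daemon.json (the Docker default path):
# Restart Docker so the new daemon.json takes effect
systemctl restart docker
# Confirm the systemd cgroup driver is in use
docker info | grep -i "cgroup driver"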
步骤
- Add a China mirror of the apt repository used to install kubeadm, kubelet and kubectl
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt update
- Install kubeadm, kubelet and kubectl (on all three nodes); an optional version-pinning step is sketched after the install command
apt install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
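Optionally (not part of the original write-up), the three packages can be pinned so a routine apt upgrade does not move them to an incompatible version:
# Hold the versions installed above; release later with "apt-mark unhold"
apt-mark hold kubelet kubeadm kubectl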
- Edit the /etc/hosts file on every node (note: in a large cluster it is better to resolve node hostnames via DNS); a sketch for setting each node's hostname to match is shown after the entries below
192.168.0.33 k8s-master1
192.168.0.31 k8s-node1
192.168.0.32 k8s-node2
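The original steps do not show it, but kubeadm registers each node under its hostname, so the hostnames should normally be set to match the entries above; a minimal sketch (run the matching command on the corresponding machine):
hostnamectl set-hostname k8s-master1   # on 192.168.0.33
hostnamectl set-hostname k8s-node1     # on 192.168.0.31
hostnamectl set-hostname k8s-node2     # on 192.168.0.32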
- Initialize the control plane (run on the master node only); an equivalent config-file form is sketched after the command
# --apiserver-advertise-address: IP address the API server advertises and listens on
# --control-plane-endpoint: IP address or DNS name of the control plane
# --pod-network-cidr: CIDR range to allocate pod IPs from
# --service-cidr: virtual IP range for Services
# --token-ttl: token lifetime, 24h by default; 0 means the token never expires (set a finite TTL in production)
kubeadm init \
--apiserver-advertise-address=192.168.0.33 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--control-plane-endpoint 192.168.0.33 \
--service-cidr=10.100.0.0/16 \
--token-ttl 0 \
--pod-network-cidr=10.244.0.0/16
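A sketch of the equivalent config-file form of the flags above (the file name kubeadm-config.yaml is hypothetical; apply it with "kubeadm init --config kubeadm-config.yaml"):
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.33
bootstrapTokens:
- ttl: "0s"                  # 0s = token never expires; use a finite TTL in production
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
controlPlaneEndpoint: "192.168.0.33:6443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.100.0.0/16
  podSubnet: 10.244.0.0/16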
- Follow the instructions printed when initialization completes
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
- On each worker node, run the join command printed at the end of initialization (replace the example below with the command from your own output; regenerating a lost command is sketched after it)
kubeadm join 192.168.0.33:6443 --token mhvreo.pjuproxrzqt28jy4 \
--discovery-token-ca-cert-hash sha256:fb263be999b120d4ba2bd771a34411b5f49d297dc1403c44d1a0cdf9e2af528b
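If the join command was not saved or the token has expired, a fresh one can be printed on the master node:
# Creates a new bootstrap token and prints the full "kubeadm join ..." command
kubeadm token create --print-join-command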
- List the nodes
kubectl get nodes
# The nodes show NotReady because no network plugin is installed yet
- Install the Flannel network plugin (the flanneld binary has to be downloaded from GitHub)
# Download flanneld from https://github.com/flannel-io/flannel#deploying-flannel-manually
# Place the downloaded flanneld binary in /opt/bin and make it executable if it is not already
# Then run the following command; the contents of kube-flannel.yml are in "Appendix -> kube-flannel.yml"
kubectl apply -f kube-flannel.yml
# After installation, run kubectl get nodes again and confirm the status becomes Ready; a quick health check of the Flannel pods is sketched below
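If the nodes stay NotReady, inspecting the Flannel DaemonSet pods (deployed into kube-system by the appendix manifest) usually narrows the problem down:
kubectl get pods -n kube-system -l app=flannel -o wide
kubectl logs -n kube-system -l app=flannel --tail=50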
- Done
Deploy nginx as a test
- Create an nginx deployment (the nginx Docker image can be pulled in advance)
kubectl create deployment nginx --image=nginx
- Expose port 80 of the deployment through a NodePort Service
kubectl expose deployment nginx --port=80 --type=NodePort
- Check the status
kubectl get pods,svc
# NAME READY STATUS RESTARTS AGE
# pod/nginx-85b98978db-dv2h4 1/1 Running 0 26s
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 39m
# service/nginx NodePort 10.100.192.149 <none> 80:31805/TCP 9s
- Open http://192.168.0.33:31805 in a browser; if the nginx welcome page appears, the deployment works (a command-line check is sketched below)
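The same check from the command line (replace 31805 with the NodePort reported by kubectl get svc in your own cluster):
curl -I http://192.168.0.33:31805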
Deploy the Dashboard
- Create the resources; the manifest kube-dashboard.yml is in "Appendix -> kube-dashboard.yml"
kubectl apply -f kube-dashboard.yml
- Change the Service type to NodePort
kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard
- Get the NodePort used for login
kubectl get services/kubernetes-dashboard -n kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
# Example output: 31641; open https://192.168.0.33:31641 in a browser
- Create a service account named admin-user and bind it to the cluster-admin role
kubectl create serviceaccount admin-user -n kubernetes-dashboard
# serviceaccount/admin-user created
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin \
--serviceaccount=kubernetes-dashboard:admin-user
# clusterrolebinding.rbac.authorization.k8s.io/admin-user created
- Get the token and paste it into the login page opened earlier (an alternative one-liner that prints only the token is sketched after the command)
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/^admin-user/{print $1}')
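A sketch of an alternative that prints only the decoded token (assumes Kubernetes 1.23 or older, where a token Secret is still auto-created for the service account):
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get serviceaccount admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode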
Troubleshooting notes
Kubernetes 1.24 no longer supports Docker out of the box (dockershim was removed)
Kubernetes 1.19 only supports Docker up to v19.03
kubeadm init fails with "/proc/sys/net/bridge/bridge-nf-call-iptables does not exist"
# Run the following (a sketch for making the fix persistent across reboots follows)
modprobe br_netfilter
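To keep the fix across reboots, the module and the usual kubeadm bridge/forwarding sysctls can be made persistent (standard prerequisites, not listed in the original steps):
# Load br_netfilter at boot
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Let bridged traffic pass through iptables and enable IP forwarding
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system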
Appendix
kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
#image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
#image: flannelcni/flannel:v0.18.1 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: flannelcni/flannel:v0.18.1 for ppc64le and mips64le (dockerhub limitations may apply)
image: rancher/mirrored-flannelcni-flannel:v0.18.1
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
kube-dashboard.yml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
ports:
- port: 443
targetPort: 8443
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: kubernetes-dashboard
image: kubernetesui/dashboard:v2.6.0
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: dashboard-metrics-scraper
image: kubernetesui/metrics-scraper:v1.0.8
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
摘要:与伙伴一起,共建繁荣开放的GaussDB数据库新生态. 本文分享自华为云社区<华为云新一代分布式数据库GaussDB,给世界一个更优选择>,作者:华为云头条. 6月7日,在华为全球智 ...