1. System Environment

OS version                              Docker version             CPU architecture
CentOS Linux release 7.4.1708 (Core)    Docker version 20.10.12    x86_64

2. Preface

After a Kubernetes cluster has been installed and used for a while, you may need to reinstall it. To cover that need, this article simulates a full reinstallation of a Kubernetes cluster.

Reinstalling a Kubernetes cluster presupposes that a working cluster already exists. For the initial installation and deployment of a Kubernetes (k8s) cluster, see the blog post 《Centos7 安装部署Kubernetes(k8s)集群》: https://www.cnblogs.com/renshengdezheli/p/16686769.html

3. Reinstalling the Kubernetes Cluster

3.1 Environment Overview

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.

Server                      OS version                            CPU architecture  Processes                                                                                             Role
k8scloude1/192.168.110.130  CentOS Linux release 7.4.1708 (Core)  x86_64            docker,kube-apiserver,etcd,kube-scheduler,kube-controller-manager,kubelet,kube-proxy,coredns,calico  k8s master node
k8scloude2/192.168.110.129  CentOS Linux release 7.4.1708 (Core)  x86_64            docker,kubelet,kube-proxy,calico                                                                      k8s worker node
k8scloude3/192.168.110.128  CentOS Linux release 7.4.1708 (Core)  x86_64            docker,kubelet,kube-proxy,calico                                                                      k8s worker node
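
The OS version, Docker version, and CPU architecture listed above can be confirmed on each node with a few standard commands (a small sketch, output omitted):

# Run on every node to verify the environment
cat /etc/redhat-release   # OS release
docker --version          # Docker version
uname -m                  # CPU architecture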

3.2 Deleting All k8s Nodes

kubectl drain safely evicts all pods from a node. The --ignore-daemonsets flag is usually required: kubectl drain automatically cordons the node (marking it SchedulingDisabled), but DaemonSets ignore that mark, so a pod managed by a DaemonSet controller would be recreated on the node right after it was deleted, which would turn into an endless loop. DaemonSet-managed pods are therefore ignored here.
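
Depending on what runs on the node, kubectl drain may need additional flags; by default it refuses to evict pods that use emptyDir volumes or pods not managed by a controller. A hedged sketch of a more forceful drain (not required in this walkthrough):

# Forcefully drain a node that also runs unmanaged pods or pods with emptyDir data
kubectl drain k8scloude3 --ignore-daemonsets --delete-emptydir-data --force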

[root@k8scloude1 ~]# kubectl drain k8scloude3 --ignore-daemonsets
node/k8scloude3 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-wmz4r, kube-system/kube-proxy-84gcx
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-rl2mh
pod/calico-kube-controllers-6b9fbfff44-rl2mh evicted
node/k8scloude3 evicted

k8scloude3 is now SchedulingDisabled:

[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 64m v1.21.0
k8scloude2 Ready <none> 56m v1.21.0
k8scloude3 Ready,SchedulingDisabled <none> 56m v1.21.0
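
As a side note, cordoning is reversible: if a node was drained by mistake and you do not intend to delete it, scheduling can be re-enabled with kubectl uncordon (not needed for this reinstallation):

# Re-enable scheduling on a cordoned node (only if you are keeping it)
kubectl uncordon k8scloude3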

Delete the node k8scloude3:

[root@k8scloude1 ~]# kubectl delete nodes k8scloude3
node "k8scloude3" deleted [root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 65m v1.21.0
k8scloude2 Ready <none> 57m v1.21.0

Perform similar operations on the remaining nodes:

[root@k8scloude1 ~]# kubectl drain k8scloude2 --ignore-daemonsets
node/k8scloude2 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-bbst4, kube-system/kube-proxy-8wf8t
evicting pod kube-system/coredns-545d6fc579-kgmfl
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-nq79f
evicting pod kube-system/coredns-545d6fc579-dln6p
pod/coredns-545d6fc579-dln6p evicted
pod/coredns-545d6fc579-kgmfl evicted
pod/calico-kube-controllers-6b9fbfff44-nq79f evicted
node/k8scloude2 evicted

[root@k8scloude1 ~]# kubectl drain k8scloude1 --ignore-daemonsets
node/k8scloude1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-r57vx, kube-system/kube-proxy-zblkg
evicting pod kube-system/coredns-545d6fc579-tgcl4
evicting pod kube-system/calico-kube-controllers-6b9fbfff44-t9k45
evicting pod kube-system/coredns-545d6fc579-l9g7b
pod/calico-kube-controllers-6b9fbfff44-t9k45 evicted
pod/coredns-545d6fc579-tgcl4 evicted
pod/coredns-545d6fc579-l9g7b evicted
node/k8scloude1 evicted

[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready,SchedulingDisabled control-plane,master 66m v1.21.0
k8scloude2 Ready,SchedulingDisabled <none> 58m v1.21.0

[root@k8scloude1 ~]# kubectl delete nodes k8scloude2
node "k8scloude2" deleted

[root@k8scloude1 ~]# kubectl delete nodes k8scloude1
node "k8scloude1" deleted

At this point, all nodes in the k8s cluster have been deleted:

[root@k8scloude1 ~]# kubectl get nodes
No resources found

3.3 kubeadm Initialization

Now rerun kubeadm init. It fails; the error messages show that the required ports are already in use and the configuration files already exist:

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
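
The ports listed in the errors are still held by the control-plane components and kubelet started during the previous installation. Before resetting, you can confirm what is listening on them; a quick sketch using ss (netstat works just as well):

# Show the processes listening on the ports kubeadm complained about
ss -lntp | grep -E '6443|10250|10257|10259|2379|2380'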

Before re-initializing the k8s cluster, the previous configuration has to be cleared:

[root@k8scloude1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0109 16:17:15.936292 53177 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "k8scloude1" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0109 16:17:17.651795 53177 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
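
As the reset output itself notes, kubeadm reset does not clean the CNI configuration, iptables/IPVS rules, or kubeconfig files. If you want a completely clean slate before re-initializing, these can be cleared by hand; a hedged sketch that just follows the hints printed above (optional, and not done in this walkthrough, where the old kubeconfig is simply overwritten later):

# Optional manual cleanup after kubeadm reset (run as root)
rm -rf /etc/cni/net.d                              # leftover CNI configuration
iptables -F && iptables -t nat -F && iptables -X   # flush iptables rules (adjust to your setup)
ipvsadm --clear                                    # only if kube-proxy used IPVS mode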

Run kubeadm init again:

[root@k8scloude1 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8scloude1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8scloude1 localhost] and IPs [192.168.110.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.004984 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8scloude1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 45wtx2.gfb3j9obk0fz663z
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z \
    --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d

Create the directory and kubeconfig file as instructed:

[root@k8scloude1 ~]# mkdir -p $HOME/.kube

[root@k8scloude1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite '/root/.kube/config'? y

[root@k8scloude1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
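
At this point kubectl on k8scloude1 should already be talking to the new control plane; a quick sanity check (output omitted here):

# Confirm the API server is reachable and the master node is registered
kubectl cluster-info
kubectl get nodes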

3.4 Adding Worker Nodes to the k8s Cluster

Next, join the other two worker nodes to the k8s cluster.
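
The join command below reuses the token and CA cert hash printed by kubeadm init. If that output was not saved, an equivalent join command can be regenerated on the master; a small sketch, assuming kubeadm is still installed on k8scloude1:

# Print a fresh, ready-to-use join command (run on the master node)
kubeadm token create --print-join-command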

Join the k8scloude2 node to the k8s cluster:

# Run the join command on the other two nodes
[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

A worker node also needs its previous configuration cleared before it can rejoin the k8s cluster:

[root@k8scloude2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0109 16:22:12.705575 59352 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Join k8scloude2 to the k8s cluster again; this time it joins successfully:

[root@k8scloude2 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Perform the same steps on k8scloude3:

[root@k8scloude3 ~]# kubeadm reset

[root@k8scloude3 ~]# kubeadm join 192.168.110.130:6443 --token 45wtx2.gfb3j9obk0fz663z --discovery-token-ca-cert-hash sha256:d390e28ef900f9a17483bb2d230b9e5be76920d128eb020d472c21d594aa278d
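
No output is shown for k8scloude3, so as an extra (optional) check on the worker itself you can verify that the kubelet came up after the join:

# On k8scloude3: confirm the kubelet service is running
systemctl is-active kubelet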

Check the status of the k8s cluster nodes:

# All nodes now show Ready status
[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 5m v1.21.0
k8scloude2 Ready <none> 63s v1.21.0
k8scloude3 Ready <none> 33s v1.21.0

3.5 Installing calico

We installed the calico plugin when the cluster was first set up, but after the reinstall calico has not actually been deployed again. Even so, kubectl get nodes already shows every node as Ready: the old CNI configuration is left behind (as the kubeadm reset output warned, /etc/cni/net.d is not cleaned), so the Ready status keeps being reported and never gets refreshed even though no calico pods are running. Calico therefore has to be installed again.
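
The command below assumes the calico.yaml manifest from the original installation is still in the current directory on k8scloude1; kubeadm does not recreate it. If the file is gone, it can be re-fetched, and after applying it the rollout can be watched; a hedged sketch (the manifest URL and Calico version are assumptions and should match what was used originally):

# Re-download the calico manifest only if the local copy is missing (URL/version may differ)
curl -O https://docs.projectcalico.org/manifests/calico.yaml

# After kubectl apply, wait for the calico-node DaemonSet to become ready
kubectl -n kube-system rollout status daemonset/calico-node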

[root@k8scloude1 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Only now is the cluster fully functional:

[root@k8scloude1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8scloude1 Ready control-plane,master 9m11s v1.21.0
k8scloude2 Ready <none> 5m14s v1.21.0
k8scloude3 Ready <none> 4m44s v1.21.0

Note: if the k8s master node was never reset with kubeadm reset and only the worker nodes were reset, calico does not need to be reinstalled:

[root@k8scloude1 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6b9fbfff44-4jzkj 1/1 Running 0 3m16s 10.244.251.193 k8scloude3 <none> <none>
calico-node-bdlgm 1/1 Running 0 3m16s 192.168.110.130 k8scloude1 <none> <none>
calico-node-hx8bk 1/1 Running 0 3m16s 192.168.110.128 k8scloude3 <none> <none>
calico-node-nsbfs 1/1 Running 0 3m16s 192.168.110.129 k8scloude2 <none> <none>
coredns-545d6fc579-7wm95 1/1 Running 0 11m 10.244.158.65 k8scloude1 <none> <none>
coredns-545d6fc579-87q8j 1/1 Running 0 11m 10.244.158.66 k8scloude1 <none> <none>
etcd-k8scloude1 1/1 Running 0 12m 192.168.110.130 k8scloude1 <none> <none>
kube-apiserver-k8scloude1 1/1 Running 0 12m 192.168.110.130 k8scloude1 <none> <none>
kube-controller-manager-k8scloude1 1/1 Running 0 12m 192.168.110.130 k8scloude1 <none> <none>
kube-proxy-599xh 1/1 Running 0 7m48s 192.168.110.128 k8scloude3 <none> <none>
kube-proxy-lpj8z 1/1 Running 0 8m18s 192.168.110.129 k8scloude2 <none> <none>
kube-proxy-zxlk9 1/1 Running 0 11m 192.168.110.130 k8scloude1 <none> <none>
kube-scheduler-k8scloude1 1/1 Running 0 12m 192.168.110.130 k8scloude1 <none> <none>

With that, the k8s cluster has been completely reinstalled!
