Upgrading a K8s Cluster Version
k8s component upgrade order:
upgrade the primary control-plane (master) node → upgrade the other control-plane nodes → upgrade the worker nodes
First, back up etcd on the primary control-plane node and check the current version. To stay compatible, keep the version jump small — kubeadm supports upgrading only one minor release at a time.
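The backup step can be sketched as below. This is a minimal sketch, assuming the default kubeadm etcd certificate paths and a local etcd listening on 127.0.0.1:2379; adjust both for your cluster. etcdctl is only invoked if it is installed.

```shell
# Destination for the snapshot, timestamped so repeated backups don't collide
BACKUP="/var/tmp/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db"

if command -v etcdctl >/dev/null 2>&1; then
  # Take a live snapshot over the etcd client port (paths are kubeadm defaults)
  ETCDCTL_API=3 etcdctl snapshot save "$BACKUP" \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key
else
  echo "etcdctl not found; would save snapshot to $BACKUP"
fi
```

Keep the snapshot somewhere off the node before proceeding, so a failed upgrade can be rolled back.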
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
List the available versions:
[root@master ~]# yum list --showduplicates kubeadm

Installed Packages
kubeadm.x86_64 1.21.0-0 @kubernetes
Available Packages
kubeadm.x86_64 1.6.0-0 kubernetes
kubeadm.x86_64 1.6.1-0 kubernetes
kubeadm.x86_64 1.6.2-0 kubernetes
kubeadm.x86_64 1.6.3-0 kubernetes
kubeadm.x86_64 1.6.4-0 kubernetes
kubeadm.x86_64 1.6.5-0 kubernetes
kubeadm.x86_64 1.6.6-0 kubernetes
kubeadm.x86_64 1.6.7-0 kubernetes
kubeadm.x86_64 1.6.8-0 kubernetes
kubeadm.x86_64 1.6.9-0 kubernetes
kubeadm.x86_64 1.6.10-0 kubernetes
kubeadm.x86_64 1.6.11-0 kubernetes
kubeadm.x86_64 1.6.12-0 kubernetes
kubeadm.x86_64 1.6.13-0 kubernetes
kubeadm.x86_64 1.7.0-0 kubernetes
kubeadm.x86_64 1.7.1-0 kubernetes
kubeadm.x86_64 1.7.2-0 kubernetes
kubeadm.x86_64 1.7.3-1 kubernetes
kubeadm.x86_64 1.7.4-0 kubernetes
kubeadm.x86_64 1.7.5-0 kubernetes
kubeadm.x86_64 1.7.6-1 kubernetes
kubeadm.x86_64 1.7.7-1 kubernetes
kubeadm.x86_64 1.7.8-1 kubernetes
kubeadm.x86_64 1.7.9-0 kubernetes
kubeadm.x86_64 1.7.10-0 kubernetes
kubeadm.x86_64 1.7.11-0 kubernetes
kubeadm.x86_64 1.7.14-0 kubernetes
kubeadm.x86_64 1.7.15-0 kubernetes
kubeadm.x86_64 1.7.16-0 kubernetes
kubeadm.x86_64 1.8.0-0 kubernetes
kubeadm.x86_64 1.8.0-1 kubernetes
kubeadm.x86_64 1.8.1-0 kubernetes
kubeadm.x86_64 1.8.2-0 kubernetes
kubeadm.x86_64 1.8.3-0 kubernetes
kubeadm.x86_64 1.8.4-0 kubernetes
kubeadm.x86_64 1.8.5-0 kubernetes
kubeadm.x86_64 1.8.6-0 kubernetes
kubeadm.x86_64 1.8.7-0 kubernetes
kubeadm.x86_64 1.8.8-0 kubernetes
kubeadm.x86_64 1.8.9-0 kubernetes
kubeadm.x86_64 1.8.10-0 kubernetes
kubeadm.x86_64 1.8.11-0 kubernetes
kubeadm.x86_64 1.8.12-0 kubernetes
kubeadm.x86_64 1.8.13-0 kubernetes
kubeadm.x86_64 1.8.14-0 kubernetes
kubeadm.x86_64 1.8.15-0 kubernetes
kubeadm.x86_64 1.9.0-0 kubernetes
kubeadm.x86_64 1.9.1-0 kubernetes
kubeadm.x86_64 1.9.2-0 kubernetes
kubeadm.x86_64 1.9.3-0 kubernetes
kubeadm.x86_64 1.9.4-0 kubernetes
kubeadm.x86_64 1.9.5-0 kubernetes
kubeadm.x86_64 1.9.6-0 kubernetes
kubeadm.x86_64 1.9.7-0 kubernetes
kubeadm.x86_64 1.9.8-0 kubernetes
kubeadm.x86_64 1.9.9-0 kubernetes
kubeadm.x86_64 1.9.10-0 kubernetes
kubeadm.x86_64 1.9.11-0 kubernetes
kubeadm.x86_64 1.10.0-0 kubernetes
kubeadm.x86_64 1.10.1-0 kubernetes
kubeadm.x86_64 1.10.2-0 kubernetes
kubeadm.x86_64 1.10.3-0 kubernetes
kubeadm.x86_64 1.10.4-0 kubernetes
kubeadm.x86_64 1.10.5-0 kubernetes
kubeadm.x86_64 1.10.6-0 kubernetes
kubeadm.x86_64 1.10.7-0 kubernetes
kubeadm.x86_64 1.10.8-0 kubernetes
kubeadm.x86_64 1.10.9-0 kubernetes
kubeadm.x86_64 1.10.10-0 kubernetes
kubeadm.x86_64 1.10.11-0 kubernetes
kubeadm.x86_64 1.10.12-0 kubernetes
kubeadm.x86_64 1.10.13-0 kubernetes
kubeadm.x86_64 1.11.0-0 kubernetes
kubeadm.x86_64 1.11.1-0 kubernetes
kubeadm.x86_64 1.11.2-0 kubernetes
kubeadm.x86_64 1.11.3-0 kubernetes
kubeadm.x86_64 1.11.4-0 kubernetes
kubeadm.x86_64 1.11.5-0 kubernetes
kubeadm.x86_64 1.11.6-0 kubernetes
kubeadm.x86_64 1.11.7-0 kubernetes
kubeadm.x86_64 1.11.8-0 kubernetes
kubeadm.x86_64 1.11.9-0 kubernetes
kubeadm.x86_64 1.11.10-0 kubernetes
kubeadm.x86_64 1.12.0-0 kubernetes
kubeadm.x86_64 1.12.1-0 kubernetes
kubeadm.x86_64 1.12.2-0 kubernetes
kubeadm.x86_64 1.12.3-0 kubernetes
kubeadm.x86_64 1.12.4-0 kubernetes
kubeadm.x86_64 1.12.5-0 kubernetes
kubeadm.x86_64 1.12.6-0 kubernetes
kubeadm.x86_64 1.12.7-0 kubernetes
kubeadm.x86_64 1.12.8-0 kubernetes
kubeadm.x86_64 1.12.9-0 kubernetes
kubeadm.x86_64 1.12.10-0 kubernetes
kubeadm.x86_64 1.13.0-0 kubernetes
kubeadm.x86_64 1.13.1-0 kubernetes
kubeadm.x86_64 1.13.2-0 kubernetes
kubeadm.x86_64 1.13.3-0 kubernetes
kubeadm.x86_64 1.13.4-0 kubernetes
kubeadm.x86_64 1.13.5-0 kubernetes
kubeadm.x86_64 1.13.6-0 kubernetes
kubeadm.x86_64 1.13.7-0 kubernetes
kubeadm.x86_64 1.13.8-0 kubernetes
kubeadm.x86_64 1.13.9-0 kubernetes
kubeadm.x86_64 1.13.10-0 kubernetes
kubeadm.x86_64 1.13.11-0 kubernetes
kubeadm.x86_64 1.13.12-0 kubernetes
kubeadm.x86_64 1.14.0-0 kubernetes
kubeadm.x86_64 1.14.1-0 kubernetes
kubeadm.x86_64 1.14.2-0 kubernetes
kubeadm.x86_64 1.14.3-0 kubernetes
kubeadm.x86_64 1.14.4-0 kubernetes
kubeadm.x86_64 1.14.5-0 kubernetes
kubeadm.x86_64 1.14.6-0 kubernetes
kubeadm.x86_64 1.14.7-0 kubernetes
kubeadm.x86_64 1.14.8-0 kubernetes
kubeadm.x86_64 1.14.9-0 kubernetes
kubeadm.x86_64 1.14.10-0 kubernetes
kubeadm.x86_64 1.15.0-0 kubernetes
kubeadm.x86_64 1.15.1-0 kubernetes
kubeadm.x86_64 1.15.2-0 kubernetes
kubeadm.x86_64 1.15.3-0 kubernetes
kubeadm.x86_64 1.15.4-0 kubernetes
kubeadm.x86_64 1.15.5-0 kubernetes
kubeadm.x86_64 1.15.6-0 kubernetes
kubeadm.x86_64 1.15.7-0 kubernetes
kubeadm.x86_64 1.15.8-0 kubernetes
kubeadm.x86_64 1.15.9-0 kubernetes
kubeadm.x86_64 1.15.10-0 kubernetes
kubeadm.x86_64 1.15.11-0 kubernetes
kubeadm.x86_64 1.15.12-0 kubernetes
kubeadm.x86_64 1.16.0-0 kubernetes
kubeadm.x86_64 1.16.1-0 kubernetes
kubeadm.x86_64 1.16.2-0 kubernetes
kubeadm.x86_64 1.16.3-0 kubernetes
kubeadm.x86_64 1.16.4-0 kubernetes
kubeadm.x86_64 1.16.5-0 kubernetes
kubeadm.x86_64 1.16.6-0 kubernetes
kubeadm.x86_64 1.16.7-0 kubernetes
kubeadm.x86_64 1.16.8-0 kubernetes
kubeadm.x86_64 1.16.9-0 kubernetes
kubeadm.x86_64 1.16.10-0 kubernetes
kubeadm.x86_64 1.16.11-0 kubernetes
kubeadm.x86_64 1.16.11-1 kubernetes
kubeadm.x86_64 1.16.12-0 kubernetes
kubeadm.x86_64 1.16.13-0 kubernetes
kubeadm.x86_64 1.16.14-0 kubernetes
kubeadm.x86_64 1.16.15-0 kubernetes
kubeadm.x86_64 1.17.0-0 kubernetes
kubeadm.x86_64 1.17.1-0 kubernetes
kubeadm.x86_64 1.17.2-0 kubernetes
kubeadm.x86_64 1.17.3-0 kubernetes
kubeadm.x86_64 1.17.4-0 kubernetes
kubeadm.x86_64 1.17.5-0 kubernetes
kubeadm.x86_64 1.17.6-0 kubernetes
kubeadm.x86_64 1.17.7-0 kubernetes
kubeadm.x86_64 1.17.7-1 kubernetes
kubeadm.x86_64 1.17.8-0 kubernetes
kubeadm.x86_64 1.17.9-0 kubernetes
kubeadm.x86_64 1.17.11-0 kubernetes
kubeadm.x86_64 1.17.12-0 kubernetes
kubeadm.x86_64 1.17.13-0 kubernetes
kubeadm.x86_64 1.17.14-0 kubernetes
kubeadm.x86_64 1.17.15-0 kubernetes
kubeadm.x86_64 1.17.16-0 kubernetes
kubeadm.x86_64 1.17.17-0 kubernetes
kubeadm.x86_64 1.18.0-0 kubernetes
kubeadm.x86_64 1.18.1-0 kubernetes
kubeadm.x86_64 1.18.2-0 kubernetes
kubeadm.x86_64 1.18.3-0 kubernetes
kubeadm.x86_64 1.18.4-0 kubernetes
kubeadm.x86_64 1.18.4-1 kubernetes
kubeadm.x86_64 1.18.5-0 kubernetes
kubeadm.x86_64 1.18.6-0 kubernetes
kubeadm.x86_64 1.18.8-0 kubernetes
kubeadm.x86_64 1.18.9-0 kubernetes
kubeadm.x86_64 1.18.10-0 kubernetes
kubeadm.x86_64 1.18.12-0 kubernetes
kubeadm.x86_64 1.18.13-0 kubernetes
kubeadm.x86_64 1.18.14-0 kubernetes
kubeadm.x86_64 1.18.15-0 kubernetes
kubeadm.x86_64 1.18.16-0 kubernetes
kubeadm.x86_64 1.18.17-0 kubernetes
kubeadm.x86_64 1.18.18-0 kubernetes
kubeadm.x86_64 1.18.19-0 kubernetes
kubeadm.x86_64 1.18.20-0 kubernetes
kubeadm.x86_64 1.19.0-0 kubernetes
kubeadm.x86_64 1.19.1-0 kubernetes
kubeadm.x86_64 1.19.2-0 kubernetes
kubeadm.x86_64 1.19.3-0 kubernetes
kubeadm.x86_64 1.19.4-0 kubernetes
kubeadm.x86_64 1.19.5-0 kubernetes
kubeadm.x86_64 1.19.6-0 kubernetes
kubeadm.x86_64 1.19.7-0 kubernetes
kubeadm.x86_64 1.19.8-0 kubernetes
kubeadm.x86_64 1.19.9-0 kubernetes
kubeadm.x86_64 1.19.10-0 kubernetes
kubeadm.x86_64 1.19.11-0 kubernetes
kubeadm.x86_64 1.19.12-0 kubernetes
kubeadm.x86_64 1.19.13-0 kubernetes
kubeadm.x86_64 1.19.14-0 kubernetes
kubeadm.x86_64 1.19.15-0 kubernetes
kubeadm.x86_64 1.19.16-0 kubernetes
kubeadm.x86_64 1.20.0-0 kubernetes
kubeadm.x86_64 1.20.1-0 kubernetes
kubeadm.x86_64 1.20.2-0 kubernetes
kubeadm.x86_64 1.20.4-0 kubernetes
kubeadm.x86_64 1.20.5-0 kubernetes
kubeadm.x86_64 1.20.6-0 kubernetes
kubeadm.x86_64 1.20.7-0 kubernetes
kubeadm.x86_64 1.20.8-0 kubernetes
kubeadm.x86_64 1.20.9-0 kubernetes
kubeadm.x86_64 1.20.10-0 kubernetes
kubeadm.x86_64 1.20.11-0 kubernetes
kubeadm.x86_64 1.20.12-0 kubernetes
kubeadm.x86_64 1.20.13-0 kubernetes
kubeadm.x86_64 1.20.14-0 kubernetes
kubeadm.x86_64 1.20.15-0 kubernetes
kubeadm.x86_64 1.21.0-0 kubernetes
kubeadm.x86_64 1.21.1-0 kubernetes
kubeadm.x86_64 1.21.2-0 kubernetes
kubeadm.x86_64 1.21.3-0 kubernetes
kubeadm.x86_64 1.21.4-0 kubernetes
kubeadm.x86_64 1.21.5-0 kubernetes
kubeadm.x86_64 1.21.6-0 kubernetes
kubeadm.x86_64 1.21.7-0 kubernetes
kubeadm.x86_64 1.21.8-0 kubernetes
kubeadm.x86_64 1.21.9-0 kubernetes
kubeadm.x86_64 1.21.10-0 kubernetes
kubeadm.x86_64 1.21.11-0 kubernetes
kubeadm.x86_64 1.21.12-0 kubernetes
kubeadm.x86_64 1.21.13-0 kubernetes
kubeadm.x86_64 1.21.14-0 kubernetes
kubeadm.x86_64 1.22.0-0 kubernetes
kubeadm.x86_64 1.22.1-0 kubernetes
kubeadm.x86_64 1.22.2-0 kubernetes
kubeadm.x86_64 1.22.3-0 kubernetes
kubeadm.x86_64 1.22.4-0 kubernetes
kubeadm.x86_64 1.22.5-0 kubernetes
kubeadm.x86_64 1.22.6-0 kubernetes
kubeadm.x86_64 1.22.7-0 kubernetes
kubeadm.x86_64 1.22.8-0 kubernetes
kubeadm.x86_64 1.22.9-0 kubernetes
kubeadm.x86_64 1.22.10-0 kubernetes
kubeadm.x86_64 1.22.11-0 kubernetes
kubeadm.x86_64 1.22.12-0 kubernetes
kubeadm.x86_64 1.23.0-0 kubernetes
kubeadm.x86_64 1.23.1-0 kubernetes
kubeadm.x86_64 1.23.2-0 kubernetes
kubeadm.x86_64 1.23.3-0 kubernetes
kubeadm.x86_64 1.23.4-0 kubernetes
kubeadm.x86_64 1.23.5-0 kubernetes
kubeadm.x86_64 1.23.6-0 kubernetes
kubeadm.x86_64 1.23.7-0 kubernetes
kubeadm.x86_64 1.23.8-0 kubernetes
kubeadm.x86_64 1.23.9-0 kubernetes
kubeadm.x86_64 1.24.0-0 kubernetes
kubeadm.x86_64 1.24.1-0 kubernetes
kubeadm.x86_64 1.24.2-0 kubernetes
kubeadm.x86_64 1.24.3-0 kubernetes
Upgrade kubeadm. The version shown above is 1.21.0; we upgrade to 1.21.1:
yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes

[root@master ~]# yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
No package kubeadm- available.
No package 1.21.1-0 available.
Error: Nothing to do
In this environment the first attempt fails to resolve the package (a stale local repo under /mnt is also unreadable); retrying without --disableexcludes succeeds:
[root@master ~]# yum install -y kubeadm-1.21.1-0
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be upgraded
---> Package kubeadm.x86_64 0:1.21.1-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package        Arch       Version        Repository       Size
================================================================================
Updating:
 kubeadm        x86_64     1.21.1-0       kubernetes       9.5 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 9.5 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm | 9.5 MB 00:00:32
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.21.1-0.x86_64        1/2
  Cleanup    : kubeadm-1.21.0-0.x86_64        2/2
  Verifying  : kubeadm-1.21.1-0.x86_64        1/2
  Verifying  : kubeadm-1.21.0-0.x86_64        2/2

Updated:
  kubeadm.x86_64 0:1.21.1-0

Complete!
Drain the node to be upgraded, evicting its pods:
[root@master ~]# kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-6jk49, kube-system/kube-proxy-4r78x
evicting pod kube-system/coredns-545d6fc579-m2fc6
evicting pod kube-system/coredns-545d6fc579-c9889
pod/coredns-545d6fc579-c9889 evicted
pod/coredns-545d6fc579-m2fc6 evicted
node/master evicted
Check the nodes again; the master is now unschedulable:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready,SchedulingDisabled control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Check whether the cluster can be upgraded and which versions are available:
[root@master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
I0813 16:46:29.019987 37514 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.21 series: v1.21.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.21.0   v1.21.14

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.0    v1.21.14
kube-controller-manager   v1.21.0    v1.21.14
kube-scheduler            v1.21.0    v1.21.14
kube-proxy                v1.21.0    v1.21.14
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
The plan output prints a ready-made 'kubeadm upgrade apply' command for the newest patch release. Trying that version first:
[root@master ~]# kubeadm upgrade apply v1.21.14
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.14"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

        - Specified version to upgrade to "v1.21.14" is higher than the kubeadm version "v1.21.1". Upgrade kubeadm first using the tool you used to install kubeadm

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
The upgrade fails because the target version is higher than the installed kubeadm version; the version you apply must not exceed the kubeadm version. Since we only upgraded kubeadm to 1.21.1, we apply v1.21.1.
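This constraint can be checked in advance. A minimal sketch in plain shell, using GNU `sort -V` for dotted-version comparison (the version numbers below are examples, not read from a live cluster):

```shell
# version_le A B: true if version A <= version B in dotted-numeric order
version_le() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

KUBEADM_VER=1.21.1    # e.g. from: kubeadm version -o short | tr -d v
TARGET=1.21.14        # the version you intend to pass to 'kubeadm upgrade apply'

if version_le "$TARGET" "$KUBEADM_VER"; then
  echo "ok: kubeadm upgrade apply v$TARGET can proceed"
else
  echo "upgrade kubeadm to at least $TARGET first"
fi
```

With the values above this prints the "upgrade kubeadm first" branch, matching the FATAL error kubeadm itself raised.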
[root@master ~]# kubeadm upgrade apply v1.21.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.1"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.1"...
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 49a143d1fd6753374a2970b880b3fe9a
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665603911"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: f188f9c5f1c338f870b0b13f31d3d667
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
  ... (the same hash line repeated while kubeadm waited for the kubelet to restart the component) ...
Static pod: kube-controller-manager-master hash: 7a467309ea170e9a8ab0a38462a67455
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
  ... (the same hash line repeated while kubeadm waited for the kubelet to restart the component) ...
Static pod: kube-scheduler-master hash: f0246658aa97264fd2ea2c6481d65a9d
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
kubeadm reports SUCCESS, but something is still wrong: the upgrade rewrote kube-scheduler.yaml and kube-controller-manager.yaml, and the new manifests disable the components' insecure ports (--port=0), so the health checks behind `kubectl get cs` can no longer reach the scheduler and controller-manager. Check the cluster status:
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
Comment out the --port=0 line in kube-scheduler.yaml and kube-controller-manager.yaml (under /etc/kubernetes/manifests) and restart kubelet; the cluster status returns to Healthy. (Note that --port=0 is the more secure default — only revert it if you rely on the deprecated `kubectl get cs` insecure health probes.)
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
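The manifest fix above can be scripted. The snippet below demonstrates the edit on a hypothetical sample file standing in for the real manifest; on the actual master you would run the same sed against /etc/kubernetes/manifests/kube-scheduler.yaml and kube-controller-manager.yaml, then restart kubelet:

```shell
# Hypothetical sample standing in for /etc/kubernetes/manifests/kube-scheduler.yaml
cat > /tmp/kube-scheduler-sample.yaml <<'EOF'
    - command:
        - kube-scheduler
        - --leader-elect=true
        - --port=0
EOF

# Comment out --port=0 so the insecure healthz port (10251/10252) is served again
sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' /tmp/kube-scheduler-sample.yaml

grep -n 'port=0' /tmp/kube-scheduler-sample.yaml   # the flag line is now commented
# on the real node, finish with: systemctl restart kubelet
```

The kubelet watches the manifests directory, so editing the file is enough to trigger a restart of the static pod; the explicit kubelet restart mirrors what the walkthrough above did.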
Make the master schedulable again:
[root@master ~]# kubectl uncordon master
node/master uncordoned
Then upgrade kubectl and kubelet:
[root@master ~]# yum install -y kubectl-1.21.1-0 kubelet-1.21.1-0 --disableexcludes=kubernetes
Restart kubelet and check the nodes; the master's version is upgraded:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart kubelet
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.1
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
The worker nodes are upgraded with the same sequence of steps: drain, upgrade the packages, restart kubelet, uncordon.
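The worker-node pass can be sketched as a script. The `run` helper only echoes each command while DRY_RUN=1, so the sequence can be reviewed safely before execution; the node name and version are examples, and the kubectl commands are meant to run from the master while the yum/systemctl ones run on the worker:

```shell
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NODE=node1
VER=1.21.1-0

run kubectl drain "$NODE" --ignore-daemonsets                     # from the master: evict pods
run yum install -y "kubeadm-$VER" --disableexcludes=kubernetes    # on the worker
run kubeadm upgrade node                                          # upgrades the local kubelet config
run yum install -y "kubelet-$VER" "kubectl-$VER" --disableexcludes=kubernetes
run systemctl daemon-reload
run systemctl restart kubelet
run kubectl uncordon "$NODE"                                      # from the master: reschedulable
```

Set DRY_RUN=0 (or drop the helper) once the printed sequence looks right for your environment.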