K8s Cluster Version Upgrade

Upgrade order for the k8s components:
upgrade the primary control-plane node → upgrade the other control-plane nodes → upgrade the worker nodes.

First back up etcd on the primary control-plane node and check the current version. To stay within the supported version-skew policy, keep the jump small; kubeadm moves the cluster up at most one minor version per upgrade.
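The etcd backup mentioned above is not shown in the transcript; here is a minimal sketch, assuming the kubeadm default certificate paths and a locally installed etcdctl (adjust the endpoint and paths for your environment):

```shell
# Snapshot etcd before touching anything (kubeadm default paths assumed)
BACKUP_DIR=/var/backups/etcd
mkdir -p "$BACKUP_DIR"
SNAPSHOT="$BACKUP_DIR/etcd-snapshot-$(date +%F-%H%M%S).db"

ETCDCTL_API=3 etcdctl snapshot save "$SNAPSHOT" \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Sanity-check the snapshot that was just written
ETCDCTL_API=3 etcdctl snapshot status "$SNAPSHOT" --write-out=table
```

Keep the snapshot somewhere off the node; if the control-plane upgrade goes wrong, it is the only clean way back.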
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Look up the available version numbers:
[root@master ~]# yum list --showduplicates kubeadm

Installed Packages
kubeadm.x86_64 1.21.0-0 @kubernetes
Available Packages
kubeadm.x86_64 1.6.0-0 kubernetes
kubeadm.x86_64 1.6.1-0 kubernetes
kubeadm.x86_64 1.6.2-0 kubernetes
kubeadm.x86_64 1.6.3-0 kubernetes
kubeadm.x86_64 1.6.4-0 kubernetes
kubeadm.x86_64 1.6.5-0 kubernetes
kubeadm.x86_64 1.6.6-0 kubernetes
kubeadm.x86_64 1.6.7-0 kubernetes
kubeadm.x86_64 1.6.8-0 kubernetes
kubeadm.x86_64 1.6.9-0 kubernetes
kubeadm.x86_64 1.6.10-0 kubernetes
kubeadm.x86_64 1.6.11-0 kubernetes
kubeadm.x86_64 1.6.12-0 kubernetes
kubeadm.x86_64 1.6.13-0 kubernetes
kubeadm.x86_64 1.7.0-0 kubernetes
kubeadm.x86_64 1.7.1-0 kubernetes
kubeadm.x86_64 1.7.2-0 kubernetes
kubeadm.x86_64 1.7.3-1 kubernetes
kubeadm.x86_64 1.7.4-0 kubernetes
kubeadm.x86_64 1.7.5-0 kubernetes
kubeadm.x86_64 1.7.6-1 kubernetes
kubeadm.x86_64 1.7.7-1 kubernetes
kubeadm.x86_64 1.7.8-1 kubernetes
kubeadm.x86_64 1.7.9-0 kubernetes
kubeadm.x86_64 1.7.10-0 kubernetes
kubeadm.x86_64 1.7.11-0 kubernetes
kubeadm.x86_64 1.7.14-0 kubernetes
kubeadm.x86_64 1.7.15-0 kubernetes
kubeadm.x86_64 1.7.16-0 kubernetes
kubeadm.x86_64 1.8.0-0 kubernetes
kubeadm.x86_64 1.8.0-1 kubernetes
kubeadm.x86_64 1.8.1-0 kubernetes
kubeadm.x86_64 1.8.2-0 kubernetes
kubeadm.x86_64 1.8.3-0 kubernetes
kubeadm.x86_64 1.8.4-0 kubernetes
kubeadm.x86_64 1.8.5-0 kubernetes
kubeadm.x86_64 1.8.6-0 kubernetes
kubeadm.x86_64 1.8.7-0 kubernetes
kubeadm.x86_64 1.8.8-0 kubernetes
kubeadm.x86_64 1.8.9-0 kubernetes
kubeadm.x86_64 1.8.10-0 kubernetes
kubeadm.x86_64 1.8.11-0 kubernetes
kubeadm.x86_64 1.8.12-0 kubernetes
kubeadm.x86_64 1.8.13-0 kubernetes
kubeadm.x86_64 1.8.14-0 kubernetes
kubeadm.x86_64 1.8.15-0 kubernetes
kubeadm.x86_64 1.9.0-0 kubernetes
kubeadm.x86_64 1.9.1-0 kubernetes
kubeadm.x86_64 1.9.2-0 kubernetes
kubeadm.x86_64 1.9.3-0 kubernetes
kubeadm.x86_64 1.9.4-0 kubernetes
kubeadm.x86_64 1.9.5-0 kubernetes
kubeadm.x86_64 1.9.6-0 kubernetes
kubeadm.x86_64 1.9.7-0 kubernetes
kubeadm.x86_64 1.9.8-0 kubernetes
kubeadm.x86_64 1.9.9-0 kubernetes
kubeadm.x86_64 1.9.10-0 kubernetes
kubeadm.x86_64 1.9.11-0 kubernetes
kubeadm.x86_64 1.10.0-0 kubernetes
kubeadm.x86_64 1.10.1-0 kubernetes
kubeadm.x86_64 1.10.2-0 kubernetes
kubeadm.x86_64 1.10.3-0 kubernetes
kubeadm.x86_64 1.10.4-0 kubernetes
kubeadm.x86_64 1.10.5-0 kubernetes
kubeadm.x86_64 1.10.6-0 kubernetes
kubeadm.x86_64 1.10.7-0 kubernetes
kubeadm.x86_64 1.10.8-0 kubernetes
kubeadm.x86_64 1.10.9-0 kubernetes
kubeadm.x86_64 1.10.10-0 kubernetes
kubeadm.x86_64 1.10.11-0 kubernetes
kubeadm.x86_64 1.10.12-0 kubernetes
kubeadm.x86_64 1.10.13-0 kubernetes
kubeadm.x86_64 1.11.0-0 kubernetes
kubeadm.x86_64 1.11.1-0 kubernetes
kubeadm.x86_64 1.11.2-0 kubernetes
kubeadm.x86_64 1.11.3-0 kubernetes
kubeadm.x86_64 1.11.4-0 kubernetes
kubeadm.x86_64 1.11.5-0 kubernetes
kubeadm.x86_64 1.11.6-0 kubernetes
kubeadm.x86_64 1.11.7-0 kubernetes
kubeadm.x86_64 1.11.8-0 kubernetes
kubeadm.x86_64 1.11.9-0 kubernetes
kubeadm.x86_64 1.11.10-0 kubernetes
kubeadm.x86_64 1.12.0-0 kubernetes
kubeadm.x86_64 1.12.1-0 kubernetes
kubeadm.x86_64 1.12.2-0 kubernetes
kubeadm.x86_64 1.12.3-0 kubernetes
kubeadm.x86_64 1.12.4-0 kubernetes
kubeadm.x86_64 1.12.5-0 kubernetes
kubeadm.x86_64 1.12.6-0 kubernetes
kubeadm.x86_64 1.12.7-0 kubernetes
kubeadm.x86_64 1.12.8-0 kubernetes
kubeadm.x86_64 1.12.9-0 kubernetes
kubeadm.x86_64 1.12.10-0 kubernetes
kubeadm.x86_64 1.13.0-0 kubernetes
kubeadm.x86_64 1.13.1-0 kubernetes
kubeadm.x86_64 1.13.2-0 kubernetes
kubeadm.x86_64 1.13.3-0 kubernetes
kubeadm.x86_64 1.13.4-0 kubernetes
kubeadm.x86_64 1.13.5-0 kubernetes
kubeadm.x86_64 1.13.6-0 kubernetes
kubeadm.x86_64 1.13.7-0 kubernetes
kubeadm.x86_64 1.13.8-0 kubernetes
kubeadm.x86_64 1.13.9-0 kubernetes
kubeadm.x86_64 1.13.10-0 kubernetes
kubeadm.x86_64 1.13.11-0 kubernetes
kubeadm.x86_64 1.13.12-0 kubernetes
kubeadm.x86_64 1.14.0-0 kubernetes
kubeadm.x86_64 1.14.1-0 kubernetes
kubeadm.x86_64 1.14.2-0 kubernetes
kubeadm.x86_64 1.14.3-0 kubernetes
kubeadm.x86_64 1.14.4-0 kubernetes
kubeadm.x86_64 1.14.5-0 kubernetes
kubeadm.x86_64 1.14.6-0 kubernetes
kubeadm.x86_64 1.14.7-0 kubernetes
kubeadm.x86_64 1.14.8-0 kubernetes
kubeadm.x86_64 1.14.9-0 kubernetes
kubeadm.x86_64 1.14.10-0 kubernetes
kubeadm.x86_64 1.15.0-0 kubernetes
kubeadm.x86_64 1.15.1-0 kubernetes
kubeadm.x86_64 1.15.2-0 kubernetes
kubeadm.x86_64 1.15.3-0 kubernetes
kubeadm.x86_64 1.15.4-0 kubernetes
kubeadm.x86_64 1.15.5-0 kubernetes
kubeadm.x86_64 1.15.6-0 kubernetes
kubeadm.x86_64 1.15.7-0 kubernetes
kubeadm.x86_64 1.15.8-0 kubernetes
kubeadm.x86_64 1.15.9-0 kubernetes
kubeadm.x86_64 1.15.10-0 kubernetes
kubeadm.x86_64 1.15.11-0 kubernetes
kubeadm.x86_64 1.15.12-0 kubernetes
kubeadm.x86_64 1.16.0-0 kubernetes
kubeadm.x86_64 1.16.1-0 kubernetes
kubeadm.x86_64 1.16.2-0 kubernetes
kubeadm.x86_64 1.16.3-0 kubernetes
kubeadm.x86_64 1.16.4-0 kubernetes
kubeadm.x86_64 1.16.5-0 kubernetes
kubeadm.x86_64 1.16.6-0 kubernetes
kubeadm.x86_64 1.16.7-0 kubernetes
kubeadm.x86_64 1.16.8-0 kubernetes
kubeadm.x86_64 1.16.9-0 kubernetes
kubeadm.x86_64 1.16.10-0 kubernetes
kubeadm.x86_64 1.16.11-0 kubernetes
kubeadm.x86_64 1.16.11-1 kubernetes
kubeadm.x86_64 1.16.12-0 kubernetes
kubeadm.x86_64 1.16.13-0 kubernetes
kubeadm.x86_64 1.16.14-0 kubernetes
kubeadm.x86_64 1.16.15-0 kubernetes
kubeadm.x86_64 1.17.0-0 kubernetes
kubeadm.x86_64 1.17.1-0 kubernetes
kubeadm.x86_64 1.17.2-0 kubernetes
kubeadm.x86_64 1.17.3-0 kubernetes
kubeadm.x86_64 1.17.4-0 kubernetes
kubeadm.x86_64 1.17.5-0 kubernetes
kubeadm.x86_64 1.17.6-0 kubernetes
kubeadm.x86_64 1.17.7-0 kubernetes
kubeadm.x86_64 1.17.7-1 kubernetes
kubeadm.x86_64 1.17.8-0 kubernetes
kubeadm.x86_64 1.17.9-0 kubernetes
kubeadm.x86_64 1.17.11-0 kubernetes
kubeadm.x86_64 1.17.12-0 kubernetes
kubeadm.x86_64 1.17.13-0 kubernetes
kubeadm.x86_64 1.17.14-0 kubernetes
kubeadm.x86_64 1.17.15-0 kubernetes
kubeadm.x86_64 1.17.16-0 kubernetes
kubeadm.x86_64 1.17.17-0 kubernetes
kubeadm.x86_64 1.18.0-0 kubernetes
kubeadm.x86_64 1.18.1-0 kubernetes
kubeadm.x86_64 1.18.2-0 kubernetes
kubeadm.x86_64 1.18.3-0 kubernetes
kubeadm.x86_64 1.18.4-0 kubernetes
kubeadm.x86_64 1.18.4-1 kubernetes
kubeadm.x86_64 1.18.5-0 kubernetes
kubeadm.x86_64 1.18.6-0 kubernetes
kubeadm.x86_64 1.18.8-0 kubernetes
kubeadm.x86_64 1.18.9-0 kubernetes
kubeadm.x86_64 1.18.10-0 kubernetes
kubeadm.x86_64 1.18.12-0 kubernetes
kubeadm.x86_64 1.18.13-0 kubernetes
kubeadm.x86_64 1.18.14-0 kubernetes
kubeadm.x86_64 1.18.15-0 kubernetes
kubeadm.x86_64 1.18.16-0 kubernetes
kubeadm.x86_64 1.18.17-0 kubernetes
kubeadm.x86_64 1.18.18-0 kubernetes
kubeadm.x86_64 1.18.19-0 kubernetes
kubeadm.x86_64 1.18.20-0 kubernetes
kubeadm.x86_64 1.19.0-0 kubernetes
kubeadm.x86_64 1.19.1-0 kubernetes
kubeadm.x86_64 1.19.2-0 kubernetes
kubeadm.x86_64 1.19.3-0 kubernetes
kubeadm.x86_64 1.19.4-0 kubernetes
kubeadm.x86_64 1.19.5-0 kubernetes
kubeadm.x86_64 1.19.6-0 kubernetes
kubeadm.x86_64 1.19.7-0 kubernetes
kubeadm.x86_64 1.19.8-0 kubernetes
kubeadm.x86_64 1.19.9-0 kubernetes
kubeadm.x86_64 1.19.10-0 kubernetes
kubeadm.x86_64 1.19.11-0 kubernetes
kubeadm.x86_64 1.19.12-0 kubernetes
kubeadm.x86_64 1.19.13-0 kubernetes
kubeadm.x86_64 1.19.14-0 kubernetes
kubeadm.x86_64 1.19.15-0 kubernetes
kubeadm.x86_64 1.19.16-0 kubernetes
kubeadm.x86_64 1.20.0-0 kubernetes
kubeadm.x86_64 1.20.1-0 kubernetes
kubeadm.x86_64 1.20.2-0 kubernetes
kubeadm.x86_64 1.20.4-0 kubernetes
kubeadm.x86_64 1.20.5-0 kubernetes
kubeadm.x86_64 1.20.6-0 kubernetes
kubeadm.x86_64 1.20.7-0 kubernetes
kubeadm.x86_64 1.20.8-0 kubernetes
kubeadm.x86_64 1.20.9-0 kubernetes
kubeadm.x86_64 1.20.10-0 kubernetes
kubeadm.x86_64 1.20.11-0 kubernetes
kubeadm.x86_64 1.20.12-0 kubernetes
kubeadm.x86_64 1.20.13-0 kubernetes
kubeadm.x86_64 1.20.14-0 kubernetes
kubeadm.x86_64 1.20.15-0 kubernetes
kubeadm.x86_64 1.21.0-0 kubernetes
kubeadm.x86_64 1.21.1-0 kubernetes
kubeadm.x86_64 1.21.2-0 kubernetes
kubeadm.x86_64 1.21.3-0 kubernetes
kubeadm.x86_64 1.21.4-0 kubernetes
kubeadm.x86_64 1.21.5-0 kubernetes
kubeadm.x86_64 1.21.6-0 kubernetes
kubeadm.x86_64 1.21.7-0 kubernetes
kubeadm.x86_64 1.21.8-0 kubernetes
kubeadm.x86_64 1.21.9-0 kubernetes
kubeadm.x86_64 1.21.10-0 kubernetes
kubeadm.x86_64 1.21.11-0 kubernetes
kubeadm.x86_64 1.21.12-0 kubernetes
kubeadm.x86_64 1.21.13-0 kubernetes
kubeadm.x86_64 1.21.14-0 kubernetes
kubeadm.x86_64 1.22.0-0 kubernetes
kubeadm.x86_64 1.22.1-0 kubernetes
kubeadm.x86_64 1.22.2-0 kubernetes
kubeadm.x86_64 1.22.3-0 kubernetes
kubeadm.x86_64 1.22.4-0 kubernetes
kubeadm.x86_64 1.22.5-0 kubernetes
kubeadm.x86_64 1.22.6-0 kubernetes
kubeadm.x86_64 1.22.7-0 kubernetes
kubeadm.x86_64 1.22.8-0 kubernetes
kubeadm.x86_64 1.22.9-0 kubernetes
kubeadm.x86_64 1.22.10-0 kubernetes
kubeadm.x86_64 1.22.11-0 kubernetes
kubeadm.x86_64 1.22.12-0 kubernetes
kubeadm.x86_64 1.23.0-0 kubernetes
kubeadm.x86_64 1.23.1-0 kubernetes
kubeadm.x86_64 1.23.2-0 kubernetes
kubeadm.x86_64 1.23.3-0 kubernetes
kubeadm.x86_64 1.23.4-0 kubernetes
kubeadm.x86_64 1.23.5-0 kubernetes
kubeadm.x86_64 1.23.6-0 kubernetes
kubeadm.x86_64 1.23.7-0 kubernetes
kubeadm.x86_64 1.23.8-0 kubernetes
kubeadm.x86_64 1.23.9-0 kubernetes
kubeadm.x86_64 1.24.0-0 kubernetes
kubeadm.x86_64 1.24.1-0 kubernetes
kubeadm.x86_64 1.24.2-0 kubernetes
kubeadm.x86_64 1.24.3-0 kubernetes
kubeadm is currently 1.21.0; upgrade it to 1.21.1:
[root@master ~]# yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
No package kubeadm- available.
No package 1.21.1-0 available.
Error: Nothing to do
The package name failed to resolve in this environment, so retry without the --disableexcludes flag:
[root@master ~]# yum install -y kubeadm-1.21.1-0
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.ustc.edu.cn
 * extras: mirrors.ustc.edu.cn
 * updates: mirror.lzu.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be updated
---> Package kubeadm.x86_64 0:1.21.1-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch            Version             Repository          Size
================================================================================
Updating:
 kubeadm          x86_64          1.21.1-0            kubernetes         9.5 M

Transaction Summary
================================================================================
Upgrade  1 Package

Total download size: 9.5 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm | 9.5 MB 00:00:32
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.21.1-0.x86_64                                      1/2
  Cleanup    : kubeadm-1.21.0-0.x86_64                                      2/2
  Verifying  : kubeadm-1.21.1-0.x86_64                                      1/2
  Verifying  : kubeadm-1.21.0-0.x86_64                                      2/2

Updated:
  kubeadm.x86_64 0:1.21.1-0

Complete!
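Before planning the cluster upgrade, it is worth confirming the new kubeadm binary (a quick sanity check; the expected strings assume the 1.21.1 install above):

```shell
# Confirm the upgraded kubeadm binary before running 'kubeadm upgrade plan'
kubeadm version -o short    # should print: v1.21.1
rpm -q kubeadm              # should print: kubeadm-1.21.1-0.x86_64
```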
Evict the pods on the node to be upgraded (drain the master):
[root@master ~]# kubectl drain master --ignore-daemonsets
node/master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-6jk49, kube-system/kube-proxy-4r78x
evicting pod kube-system/coredns-545d6fc579-m2fc6
evicting pod kube-system/coredns-545d6fc579-c9889
pod/coredns-545d6fc579-c9889 evicted
pod/coredns-545d6fc579-m2fc6 evicted
node/master evicted
Check the nodes again: the master is now cordoned (SchedulingDisabled):
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready,SchedulingDisabled control-plane,master 9d v1.21.0
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Check whether the cluster can be upgraded and get the available target versions:
[root@master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
I0813 16:46:29.019987 37514 version.go:254] remote version is much newer: v1.24.3; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.21 series: v1.21.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.21.0   v1.21.14

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver v1.21.0 v1.21.14
kube-controller-manager v1.21.0 v1.21.14
kube-scheduler v1.21.0 v1.21.14
kube-proxy v1.21.0 v1.21.14
CoreDNS v1.8.0 v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
The plan output above prints a ready-made upgrade command for the newest patch release. Running it as printed:
[root@master ~]# kubeadm upgrade apply v1.21.14
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.14"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/version] FATAL: the --version argument is invalid due to these errors:

        - Specified version to upgrade to "v1.21.14" is higher than the kubeadm version "v1.21.1". Upgrade kubeadm first using the tool you used to install kubeadm

Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
The upgrade fails because the requested version (v1.21.14) is higher than the installed kubeadm (v1.21.1): the version passed to kubeadm upgrade apply must match the kubeadm version. Apply v1.21.1 instead:
[root@master ~]# kubeadm upgrade apply v1.21.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.1"
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.1"...
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-master hash: 49a143d1fd6753374a2970b880b3fe9a
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests665603911"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: 99d9c6c8dd5e35d7e1fa9c4b3bdca894
Static pod: kube-apiserver-master hash: f188f9c5f1c338f870b0b13f31d3d667
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master hash: 106f403a9b8a3db9e0847819429ddb11
... (the same hash is printed repeatedly while kubeadm waits for the kubelet to restart the component) ...
Static pod: kube-controller-manager-master hash: 7a467309ea170e9a8ab0a38462a67455
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-08-13-16-54-54/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master hash: 63060c298f21d5f414dcdd04f2d5eaa0
... (the same hash is printed repeatedly while kubeadm waits for the kubelet to restart the component) ...
Static pod: kube-scheduler-master hash: f0246658aa97264fd2ea2c6481d65a9d
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
There is still a problem, though: the upgrade rewrote kube-scheduler.yaml and kube-controller-manager.yaml, and because these two components have their insecure ports disabled, scheduler and controller-manager appear unreachable. Check the cluster status:
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0 Healthy {"health":"true"}
Comment out the --port=0 flag in kube-scheduler.yaml and kube-controller-manager.yaml (under /etc/kubernetes/manifests) and restart kubelet; the cluster status then returns to healthy:
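The fix described above can be scripted; a sketch assuming the kubeadm default manifest directory (sed -i edits in place, so the files are copied aside first):

```shell
# Comment out the "--port=0" flag in the two static-pod manifests,
# then restart kubelet so it re-creates the pods from the new manifests.
cd /etc/kubernetes/manifests
cp kube-scheduler.yaml kube-controller-manager.yaml /tmp/

sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' kube-scheduler.yaml
sed -i 's/^\(\s*\)- --port=0/\1# - --port=0/' kube-controller-manager.yaml

systemctl restart kubelet
kubectl get cs    # scheduler and controller-manager should report Healthy
```

Note that kubelet watches the manifests directory, so the edited pods restart on their own once the files change; the kubelet restart just speeds this up.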
[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Uncordon the master so it is schedulable again:
[root@master ~]# kubectl uncordon master
node/master uncordoned
Next, upgrade kubectl and kubelet:
[root@master ~]# yum install -y kubectl-1.21.1-0 kubelet-1.21.1-0 --disableexcludes=kubernetes
After reloading systemd and restarting kubelet, the master shows the new version:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart kubelet
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 9d v1.21.1
node1 Ready <none> 9d v1.21.0
node2 Ready <none> 9d v1.21.0
Upgrading the worker nodes follows the same pattern: drain, upgrade the packages, upgrade the node, restart kubelet, uncordon.
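For reference, here is a sketch of the per-worker procedure, using node1 as the example (drain and uncordon run from the master, everything else on the worker itself):

```shell
# On the master: evict workloads from node1
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data

# On node1: upgrade kubeadm, then apply the node-level upgrade
# (updates the local kubelet configuration for the new version)
yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
kubeadm upgrade node

# On node1: upgrade kubelet and kubectl, then restart kubelet
yum install -y kubelet-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
systemctl daemon-reload
systemctl restart kubelet

# Back on the master: make the node schedulable again
kubectl uncordon node1
```

Repeat node by node so the cluster keeps serving workloads throughout; verify each node reports the new version in kubectl get node before moving to the next.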