1. Basic Concepts

  After the upgrade all containers are restarted, because their pod hashes change.

  You cannot skip minor versions: upgrade one minor version at a time (e.g. v1.11 to v1.12, then v1.12 to v1.13).

2. Upgrading the Master Nodes

  Current version:

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

  List all available kubeadm versions in the repository:

[root@k8s-master01 ~]# yum list kubeadm --showduplicates | sort -r
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
kubeadm.x86_64    1.9.-     kubernetes
kubeadm.x86_64    1.8.-     kubernetes
kubeadm.x86_64    1.7.-     kubernetes
kubeadm.x86_64    1.6.-     kubernetes
kubeadm.x86_64    1.13.-    kubernetes
kubeadm.x86_64    1.12.-    kubernetes
kubeadm.x86_64    1.11.-    kubernetes
kubeadm.x86_64    1.10.-    kubernetes
 * extras: mirrors.aliyun.com
 * base: mirrors.aliyun.com
Available Packages

  Upgrade kubeadm on all Master nodes:

yum install  kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
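
  If you prefer to run the same install from Master01 on the other masters, a minimal sketch (it assumes passwordless ssh to k8s-master02 and k8s-master03, which this post does not set up):

for host in k8s-master02 k8s-master03; do
  ssh root@$host "yum install kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes"
done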

  Check the version again:

[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master02 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master03 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-26T12:54:02Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

  On all Master nodes, change kubernetesVersion in kubeadm-config.yaml to the target version (if your cluster was not deployed following my earlier posts, download the matching images yourself):

[root@k8s-master01 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.3
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
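
  Before pulling you can list exactly which images this config resolves to; a small sketch using the matching kubeadm subcommand (output omitted):

[root@k8s-master01 ~]# kubeadm config images list --config /root/kubeadm-config.yaml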

  Pull the images ahead of time:

[root@k8s-master01 ~]# kubeadm config images pull --config /root/kubeadm-config.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.

  Back up all files under /etc/kubernetes on every node before you continue (do this yourself; it is not automated here).
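
  A minimal backup sketch (the destination under /root is just an example, pick whatever you normally use):

[root@k8s-master01 ~]# cp -a /etc/kubernetes /root/kubernetes-backup-$(date +%F)
# or keep it as a tarball
[root@k8s-master01 ~]# tar czf /root/kubernetes-backup-$(date +%F).tar.gz /etc/kubernetes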

  Upgrade Master01 first; the following steps are all executed on Master01.

  Export and edit Master01's configmap/kubeadm-config:

[root@k8s-master01 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

  The main changes are:

# Change the following parameters to this node's IP
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Change the following parameter to list all etcd cluster members
etcd.local.extraArgs.initial-cluster
# Add this parameter to extraArgs
initial-cluster-state: existing
# Change the following to this node's IP and hostname
peerCertSANs:
- k8s-master01
- 192.168.20.20
serverCertSANs:
- k8s-master01
- 192.168.20.20

  The result should look roughly like this:

apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.20.20
      bindPort:
      controlPlaneEndpoint: 192.168.20.10:
    apiServerCertSANs:
    - k8s-master01
    - k8s-master02
    - k8s-master03
    - k8s-master-lb
    - 192.168.20.20
    - 192.168.20.21
    - 192.168.20.22
    - 192.168.20.10
    apiServerExtraArgs:
      authorization-mode: Node,RBAC
    apiVersion: kubeadm.k8s.io/v1alpha2
    auditPolicy:
      logDir: /var/log/kubernetes/audit
      logMaxAge:
      path: ""
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManagerExtraArgs:
      node-monitor-grace-period: 10s
      pod-eviction-timeout: 10s
    etcd:
      local:
        dataDir: /var/lib/etcd
        extraArgs:
          advertise-client-urls: https://192.168.20.20:2379
          initial-advertise-peer-urls: https://192.168.20.20:2380
          initial-cluster: k8s-master01=https://192.168.20.20:2380,k8s-master02=https://192.168.20.21:2380,k8s-master03=https://192.168.20.22:2380
          listen-client-urls: https://127.0.0.1:2379,https://192.168.20.20:2379
          listen-peer-urls: https://192.168.20.20:2380
          initial-cluster-state: existing
        image: ""
        peerCertSANs:
        - k8s-master01
        - 192.168.20.20
        serverCertSANs:
        - k8s-master01
        - 192.168.20.20
    imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
    kind: MasterConfiguration
    kubeProxy:
      config:
        bindAddress: 0.0.0.0
        clientConnection:
          acceptContentTypes: ""
          burst:
          contentType: application/vnd.kubernetes.protobuf
          kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
          qps:
        clusterCIDR: 172.168.0.0/
        configSyncPeriod: 15m0s
        conntrack:
          max: null
          maxPerCore:
          min:
          tcpCloseWaitTimeout: 1h0m0s
          tcpEstablishedTimeout: 24h0m0s
        enableProfiling: false
        healthzBindAddress: 0.0.0.0:
        hostnameOverride: ""
        iptables:
          masqueradeAll: false
          masqueradeBit:
          minSyncPeriod: 0s
          syncPeriod: 30s
        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: ""
          syncPeriod: 30s
        metricsBindAddress: 127.0.0.1:
        mode: ""
        nodePortAddresses: null
        oomScoreAdj: -
        portRange: ""
        resourceContainer: /kube-proxy
        udpIdleTimeout: 250ms
    kubeletConfiguration:
      baseConfig:
        address: 0.0.0.0
        authentication:
          anonymous:
            enabled: false
          webhook:
            cacheTTL: 2m0s
            enabled: true
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
        authorization:
          mode: Webhook
          webhook:
            cacheAuthorizedTTL: 5m0s
            cacheUnauthorizedTTL: 30s
        cgroupDriver: cgroupfs
        cgroupsPerQOS: true
        clusterDNS:
        - 10.96.0.10
        clusterDomain: cluster.local
        containerLogMaxFiles:
        containerLogMaxSize: 10Mi
        contentType: application/vnd.kubernetes.protobuf
        cpuCFSQuota: true
        cpuManagerPolicy: none
        cpuManagerReconcilePeriod: 10s
        enableControllerAttachDetach: true
        enableDebuggingHandlers: true
        enforceNodeAllocatable:
        - pods
        eventBurst:
        eventRecordQPS:
        evictionHard:
          imagefs.available: %
          memory.available: 100Mi
          nodefs.available: %
          nodefs.inodesFree: %
        evictionPressureTransitionPeriod: 5m0s
        failSwapOn: true
        fileCheckFrequency: 20s
        hairpinMode: promiscuous-bridge
        healthzBindAddress: 127.0.0.1
        healthzPort:
        httpCheckFrequency: 20s
        imageGCHighThresholdPercent:
        imageGCLowThresholdPercent:
        imageMinimumGCAge: 2m0s
        iptablesDropBit:
        iptablesMasqueradeBit:
        kubeAPIBurst:
        kubeAPIQPS:
        makeIPTablesUtilChains: true
        maxOpenFiles:
        maxPods:
        nodeStatusUpdateFrequency: 10s
        oomScoreAdj: -
        podPidsLimit: -
        port:
        registryBurst:
        registryPullQPS:
        resolvConf: /etc/resolv.conf
        rotateCertificates: true
        runtimeRequestTimeout: 2m0s
        serializeImagePulls: true
        staticPodPath: /etc/kubernetes/manifests
        streamingConnectionIdleTimeout: 4h0m0s
        syncFrequency: 1m0s
        volumeStatsAggPeriod: 1m0s
    kubernetesVersion: v1.11.1
    networking:
      dnsDomain: cluster.local
      podSubnet: 172.168.0.0/
      serviceSubnet: 10.96.0.0/
    nodeRegistration: {}
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: --30T07::49Z
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: ""
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
  uid: f4c8386f-f473-11e8-a7c1-000c293bfe27


  Apply the configuration:

[root@k8s-master01 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured

  

  On Master01, check whether the cluster can be upgraded and which version it can be upgraded to:

[root@k8s-master01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 14:16:59.024022 22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:16:59.024143 22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 14:17:09.125120 22267 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 14:17:09.125157 22267 version.go:94] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     5 x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.1.3     1.2.2
Etcd                 3.2.18    3.2.24

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.3

_____________________________________________________________________

  Upgrade Master01:

[root@k8s-master01 ~]# kubeadm upgrade apply v1.12.3
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/etcd.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: e7745286a4575364791f6404a81a2971
Static pod: etcd-k8s-master01 hash: 88da4629a02c29c8e1a6a72ede24f370
[apiclient] Found Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 0s for initial delay
[util/etcd] Attempting to see if all cluster endpoints are available /
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests895910117/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-apiserver-k8s-master01 hash: 8e73c6033a7f7c0ed9de3c9fe358ff20
Static pod: kube-apiserver-k8s-master01 hash: 2434a94351059f81688e5e1c3275bed6
[apiclient] Found Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-controller-manager-k8s-master01 hash: 18c2a56f846a5cbbff74093ebc5b6136
Static pod: kube-controller-manager-k8s-master01 hash: 126c3dd53a5200d93342d93c456ef3ea
[apiclient] Found Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-14-55-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-scheduler-k8s-master01 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master01 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
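
  Before touching the next master it is worth confirming that Master01's control-plane Pods really came back on the new image; a quick sketch (output omitted):

[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide | grep k8s-master01
[root@k8s-master01 ~]# kubectl -n kube-system get pod kube-apiserver-k8s-master01 -o jsonpath='{.spec.containers[0].image}'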

  Upgrade the remaining Master nodes

  The following is executed on Master02:

[root@k8s-master02 ~]# kubectl get configmap -n kube-system kubeadm-config -o yaml > kubeadm-config-cm.yaml

  Make the corresponding changes:

# Change the following parameters to this node's IP
api.advertiseAddress
etcd.local.extraArgs.advertise-client-urls
etcd.local.extraArgs.initial-advertise-peer-urls
etcd.local.extraArgs.listen-client-urls
etcd.local.extraArgs.listen-peer-urls
# Change the following parameter to list all etcd cluster members
etcd.local.extraArgs.initial-cluster
# Add this parameter to extraArgs
initial-cluster-state: existing
# Change the following to this node's IP and hostname
peerCertSANs:
- k8s-master02
- 192.168.20.21
serverCertSANs:
- k8s-master02
- 192.168.20.21
# Update apiEndpoints under ClusterStatus
ClusterStatus: |
apiEndpoints:
k8s-master02:
advertiseAddress: 192.168.20.21

  Add the cri-socket annotation to the k8s-master02 Node object:

[root@k8s-master01 manifests]# kubectl annotate node k8s-master02 kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node/k8s-master02 annotated
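
  If you want to double-check that the annotation landed, a small sketch (output omitted):

[root@k8s-master01 ~]# kubectl get node k8s-master02 -o jsonpath='{.metadata.annotations.kubeadm\.alpha\.kubernetes\.io/cri-socket}'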

  Apply the configuration on Master02:

[root@k8s-master02 ~]# kubectl apply -f kubeadm-config-cm.yaml --force
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
configmap/kubeadm-config configured

  Upgrade Master02 (plan first, then apply):

[root@k8s-master02 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
I1205 ::19.322334 version.go:] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I1205 ::19.322407 version.go:] falling back to the local client version: v1.12.3
[upgrade/versions] Latest stable version: v1.12.3
I1205 ::29.364522 version.go:] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 ::29.364560 version.go:] falling back to the local client version: v1.12.3
[upgrade/versions] Latest version in the v1.11 series: v1.12.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT     AVAILABLE
Kubelet     x v1.11.1   v1.12.3

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.1   v1.12.3
Controller Manager   v1.11.1   v1.12.3
Scheduler            v1.11.1   v1.12.3
Kube Proxy           v1.11.1   v1.12.3
CoreDNS              1.2.      1.2.
Etcd                 3.2.      3.2.

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.12.3

_____________________________________________________________________

[root@k8s-master02 ~]# kubeadm upgrade apply v1.12.3 -f
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.12.3"
[upgrade/versions] Cluster version: v1.11.1
[upgrade/versions] kubeadm version: v1.12.3
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.12.3"...
Static pod: kube-apiserver-k8s-master02 hash: 78dd4fc562855556d31d9bc488493105
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests885285423/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-apiserver-k8s-master02 hash: 6c2acfaa7a090019e60c740068e75eac
[apiclient] Found Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-controller-manager-k8s-master02 hash: 1f165d7dcb7bc7512482d1ee10f5cd46
Static pod: kube-controller-manager-k8s-master02 hash: 26a5d6ca3e6f688f9e684d1e8b894741
[apiclient] Found Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2018-12-05-16-36-19/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s
Static pod: kube-scheduler-k8s-master02 hash: 301c69426b9199b2b4f2ea0f0f7915f4
Static pod: kube-scheduler-k8s-master02 hash: e36d0e66f8da9610f746f242ac8dca22
[apiclient] Found Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master02" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.12.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

  Upgrade the remaining Master nodes (MasterN) the same way.

3. Verifying the Masters

  Check the static Pod images on each master:

[root@k8s-master01 manifests]# grep "image:" *.yaml
etcd.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.
kube-apiserver.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[root@k8s-master02 manifests]# grep "image:" *.yaml
etcd.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.
kube-apiserver.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
[root@k8s-master03 manifests]# grep "image:" *.yaml
etcd.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.
kube-apiserver.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
kube-controller-manager.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3
kube-scheduler.yaml: image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3
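
  Besides the manifests, you can also ask the cluster itself which versions the control-plane Pods are running; a minimal sketch (output omitted):

[root@k8s-master01 ~]# kubectl version --short
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide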

4. Upgrading kubectl and kubelet

  Put Master01 into maintenance mode (cordon and drain) so nothing new is scheduled onto it:

[root@k8s-master01 ~]# kubectl drain k8s-master01 --ignore-daemonsets
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready,SchedulingDisabled master 5d1h v1.11.1
k8s-master02 Ready master 5d1h v1.11.1
k8s-master03 Ready master 5d1h v1.11.1
k8s-node01 Ready <none> 5d1h v1.11.1
k8s-node02 Ready <none> 5d1h v1.11.1
......

  Upgrade kubectl and kubelet on Master01:

[root@k8s-master01 ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes

  Restart kubelet:

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl restart kubelet
[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed -- :: CST; 11s ago
Docs: https://kubernetes.io/docs/
Main PID: (kubelet)
Tasks:
Memory: 43.3M

  Uncordon the node to allow scheduling again:

[root@k8s-master01 ~]# kubectl uncordon k8s-master01

  Check the node status:

[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 5d1h v1.12.3
k8s-master02 Ready master 5d1h v1.11.1
k8s-master03 Ready master 5d1h v1.11.1
k8s-node01 Ready <none> 5d1h v1.11.1
k8s-node02 Ready <none> 5d1h v1.11.1
......

  Upgrade master02 and master03 the same way:

[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 5d1h v1.12.3
k8s-master02 Ready master 5d1h v1.12.3
k8s-master03 Ready master 5d1h v1.12.3
k8s-node01 Ready <none> 5d1h v1.11.1
k8s-node02 Ready <none> 5d1h v1.11.1 
...... 

5. Upgrading the worker nodes

  On each node except the master nodes, upgrade the kubelet config:

[root@k8s-master01 ~]# kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

  Cordon and drain the node first, using the same commands as for the masters above, then install the new packages:
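
  A minimal sketch for k8s-node01, run from any machine with admin kubectl access (the node name is just this cluster's example):

[root@k8s-master01 ~]# kubectl drain k8s-node01 --ignore-daemonsets
# ... upgrade the packages and restart kubelet as shown below, then:
[root@k8s-master01 ~]# kubectl uncordon k8s-node01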

[root@k8s-nodeN ~]# yum install kubeadm-1.12.3-0.x86_64 -y --disableexcludes=kubernetes
[root@k8s-nodeN ~]# yum install kubectl-1.12.3-0.x86_64 kubelet-1.12.3-0.x86_64 -y --disableexcludes=kubernetes

  Restart kubelet:

[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl restart kubelet
[root@k8s-node01 ~]# systemctl status !$
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Thu -- :: CST; 25s ago
Docs: https://kubernetes.io/docs/
Main PID: (kubelet)
Tasks:
Memory: 38.0M
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 5d1h v1.12.3
k8s-master02 Ready master 5d1h v1.12.3
k8s-master03 Ready master 5d1h v1.12.3
k8s-node01 Ready <none> 5d1h v1.12.3
......

  Upgrade the remaining worker nodes the same way.

  Check the final status:

[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 5d2h v1.12.3
k8s-master02 Ready master 5d1h v1.12.3
k8s-master03 Ready master 5d1h v1.12.3
k8s-node01 Ready <none> 5d1h v1.12.3
k8s-node02 Ready <none> 5d1h v1.12.3
......
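
  As a final sanity check you can also look at the control-plane component health and the system Pods; a small sketch (output omitted):

[root@k8s-master01 ~]# kubectl get cs
[root@k8s-master01 ~]# kubectl get pods -n kube-system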

6. Additional Notes

  This upgrade does not cover the network add-on; see the separate post on upgrading Calico.

  If an error occurs during the upgrade you can simply re-run kubeadm upgrade apply, or force the upgrade with kubeadm upgrade apply VERSION -f.
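
  For example, to force the upgrade to re-run on a master that failed half-way (the same form used in the Master02 step above):

[root@k8s-masterN ~]# kubeadm upgrade apply v1.12.3 -f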

  PS: upgrading the cluster also renews the certificate expiry dates, so an upgrade fixes the k8s certificate-expiration problem as well. In production you can upgrade on a regular schedule, which is also the officially recommended approach.
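
  A quick way to confirm the renewed expiry date (a sketch; the path is the default kubeadm location for the apiserver certificate):

[root@k8s-master01 ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate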
