Table of Contents

  • Binary installation: Metrics & Dashboard
  • Binary HA cluster availability verification
  • Key configuration for production k8s clusters
  • Bootstrapping: kubelet startup process
  • Bootstrapping: CSR requests and certificate issuance
  • Bootstrapping: automatic certificate renewal

Binary installation: Metrics & Dashboard

  • Install CoreDNS
  • Install Metrics Server
  • Install dashboard

Install CoreDNS

Install the matching version (recommended)

cd /root/k8s-ha-install/

If you changed the k8s service CIDR, change CoreDNS's service IP to the tenth IP of the new service CIDR. (With the default network the command below replaces 10.96.0.10 with itself and is a no-op; substitute your own tenth IP as the second value.)

sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml
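If you are unsure what the tenth IP of your service CIDR is, it can be derived in shell. This is a minimal sketch; `svc_dns_ip` is a hypothetical helper, not part of the install scripts, and it assumes the CIDR's base address ends in .0 (as with the default 10.96.0.0/12):

```shell
# Hypothetical helper: compute the tenth IP of a service CIDR by adding 10
# to the base address's last octet (only valid when the base ends in .0).
svc_dns_ip() {
  local base="${1%/*}"              # strip the /prefix-length suffix
  local o1 o2 o3 o4
  IFS=. read -r o1 o2 o3 o4 <<< "$base"
  echo "${o1}.${o2}.${o3}.$((o4 + 10))"
}

svc_dns_ip 10.96.0.0/12   # prints 10.96.0.10
```

The result is what you would substitute as the second value of the sed expression above.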

Install CoreDNS

kubectl create -f CoreDNS/coredns.yaml

Install the latest CoreDNS (not recommended)

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Check the status

kubectl get po -n kube-system -l k8s-app=kube-dns

Status

NAME                      READY   STATUS    RESTARTS   AGE
coredns-fb4874468-nr5nx   1/1     Running   0          49s

Force-delete a pod stuck in Terminating

[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                      READY   STATUS        RESTARTS   AGE
coredns-fb4874468-fgs2h   1/1     Terminating   0          6d20h
[root@k8s-master01 ~]# kubectl delete pods coredns-fb4874468-fgs2h --grace-period=0 --force -n kube-system
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "coredns-fb4874468-fgs2h" force deleted
[root@k8s-master01 ~]# kubectl get po -n kube-system -l k8s-app=kube-dns
No resources found in kube-system namespace.

Install Metrics Server

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

Install metrics server

cd /root/k8s-ha-install/metrics-server-0.4.x/

kubectl  create -f .

Wait for metrics server to start, then check the status

kubectl  top node

Node status

NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   263m         13%    1239Mi          66%
k8s-master02   213m         10%    1065Mi          57%
k8s-master03   207m         10%    1050Mi          56%
k8s-node01     89m          4%     514Mi           27%
k8s-node02     158m         7%     493Mi           26%

Check pod status

kubectl  top po -A

Pod status

NAMESPACE     NAME                                      CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-cdd5755b9-4fzg9   3m           18Mi
kube-system   calico-node-8xg62                         26m          60Mi
kube-system   calico-node-dczxz                         24m          60Mi
kube-system   calico-node-gn8ws                         23m          62Mi
kube-system   calico-node-qmwkd                         26m          60Mi
kube-system   calico-node-zfw8n                         25m          59Mi
kube-system   coredns-fb4874468-nr5nx                   3m           10Mi
kube-system   metrics-server-64c6c494dc-9x727           2m           18Mi

Install dashboard

  • Install a specific dashboard version
  • Install the latest dashboard version
  • Log in to the dashboard

The Dashboard displays the cluster's various resources; it can also show Pod logs in real time and run commands inside containers.

Install a specific dashboard version

cd /root/k8s-ha-install/dashboard/

kubectl  create -f .

Install the latest dashboard version

Official GitHub page: https://github.com/kubernetes/dashboard

The latest dashboard version is listed on the official GitHub page

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

Create an admin user

vim admin.yaml
# add the following content
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply it

kubectl apply -f admin.yaml -n kube-system

Log in to the dashboard

Add a startup flag to the Google Chrome shortcut to work around the blocked access caused by the Dashboard's self-signed certificate (Properties -> Shortcut -> Target, paste at the end)

 --test-type --ignore-certificate-errors

Change the dashboard Service to NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

Change type: ClusterIP to type: NodePort

After the change a port number is exposed; check it:

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard

Port number

NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.217.183   <none>        443:31874/TCP   9m37s

Using your instance's port number, the dashboard can be reached from any host running kube-proxy, or via the VIP, at IP:port. Access the Dashboard at https://192.168.232.236:31874 (replace 31874 with your own port) and choose the token login method

View the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
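The command above chains grep and awk to pull the secret name out of the `kubectl get secret` listing. As a quick illustration of that pipeline on a canned line (the sample row below is hypothetical, mimicking the NAME/TYPE/DATA/AGE columns):

```shell
# Sample row mimicking one line of `kubectl get secret` output.
sample='admin-user-token-9c4tz   kubernetes.io/service-account-token   3   10m'

# grep keeps rows whose name contains admin-user; awk prints column 1 (NAME).
echo "$sample" | grep admin-user | awk '{print $1}'   # prints admin-user-token-9c4tz
```

That extracted name is what gets substituted into `kubectl describe secret`.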

Token value

Name:         admin-user-token-9c4tz
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d1f2e528-0ef8-4c6b-a384-a18fbca6bc54

Type:  kubernetes.io/service-account-token

Data
====
ca.crt: 1411 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlNCbEdFa1RQZElhbTBRb29aTTNCTUE1dTJ2enBCeGZxMWJwbmpfZHBXdkEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTljNHR6Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkMWYyZTUyOC0wZWY4LTRjNmItYTM4NC1hMThmYmNhNmJjNTQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.KFH5ed0kJEaU1HSpxkitJxqKJGnSNAWogNSGjGn1wEh7R9zKYkAfNLES6Vl3GU9jvxBCEZW415ZFILr96kpgl_88mD-K-AMgQxKLdpghYDx_CnsLtI6e8rLTNkaPS2Uo3sYAy9U280Niop14Yzuar5FQ3AfSbeXGcF_9Jrgyeh5XWPA0h69Au8pUEOkVdpADmuIaFSqfTnmkOSdGqCgFb_QsUqvjo4ifIxKnN6uW8wfR1s4esWkPq569xhCINaUY6g3rnT1jfVTU2XmrURrKOVok0OfSmtXTKCSs2jliEdmx7qEFTrw2KCPnTfORUtTnmdZ2ZnGGx9Fvf_hGaKk1FQ

Binary HA cluster availability verification

Install busybox

[root@k8s-master01 ~]# cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

Check the status

[root@k8s-master01 ~]# kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          29s

  • Pods must be able to resolve Services
  • Pods must be able to resolve Services across namespaces
  • Every node must be able to reach the kubernetes svc on port 443 and the kube-dns service on port 53
  • Pods must be able to reach each other (within a namespace, across namespaces, and across machines)
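The port checks in the list above can also be scripted without installing telnet, using bash's built-in /dev/tcp pseudo-device. A hedged sketch (`check_port` is a hypothetical helper; the IPs are this guide's cluster defaults):

```shell
# Attempt a TCP connect via /dev/tcp; give up after 2 seconds instead of
# hanging the way a raw telnet would against a blackholed address.
check_port() {
  local ip="$1" port="$2"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
    echo "$ip:$port reachable"
  else
    echo "$ip:$port unreachable"
  fi
}

check_port 10.96.0.1 443   # kubernetes svc
check_port 10.96.0.10 53   # kube-dns svc
```

Run on each node, this covers the third bullet in one loop-friendly function.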

The default kubernetes service after a successful cluster install

[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d22h

Pods must be able to resolve Services

[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Pods must be able to resolve Services across namespaces

[root@k8s-master01 ~]# kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes svc on port 443 and the kube-dns service on port 53 (send input to all sessions)

[root@k8s-master01 ~]# yum install telnet -y

[root@k8s-master01 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.
[root@k8s-master01 ~]# kubectl get svc -n kube-system
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   53m
metrics-server   ClusterIP   10.107.95.145   <none>        443/TCP                  4h33m
[root@k8s-master01 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.
[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server

Pods must be able to reach each other: same namespace, across namespaces, and across machines (cancel sending input to all sessions)

[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME                                      READY   STATUS    RESTARTS        AGE     IP                NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-cdd5755b9-4fzg9   1/1     Running   0               4h59m   192.168.232.131   k8s-node01     <none>           <none>
calico-node-8xg62                         1/1     Running   0               4h59m   192.168.232.129   k8s-master02   <none>           <none>
calico-node-dczxz                         1/1     Running   0               4h59m   192.168.232.131   k8s-node01     <none>           <none>
calico-node-gn8ws                         1/1     Running   0               4h59m   192.168.232.128   k8s-master01   <none>           <none>
calico-node-qmwkd                         1/1     Running   0               4h59m   192.168.232.130   k8s-master03   <none>           <none>
calico-node-zfw8n                         1/1     Running   2 (4h59m ago)   4h59m   192.168.232.132   k8s-node02     <none>           <none>
coredns-fb4874468-fgs2h                   1/1     Running   0               56m     172.25.92.66      k8s-master02   <none>           <none>
metrics-server-64c6c494dc-9x727           1/1     Running   0               4h35m   172.27.14.193     k8s-node02     <none>           <none>

# enter the calico-node pod on k8s-master02
[root@k8s-master01 ~]# kubectl exec -ti calico-node-8xg62 -n kube-system -- bash
Defaulted container "calico-node" out of: calico-node, install-cni (init), flexvol-driver (init)

# ping k8s-master03
[root@k8s-master02 /]# ping 192.168.232.130
PING 192.168.232.130 (192.168.232.130) 56(84) bytes of data.
64 bytes from 192.168.232.130: icmp_seq=1 ttl=64 time=0.416 ms
64 bytes from 192.168.232.130: icmp_seq=2 ttl=64 time=0.240 ms
64 bytes from 192.168.232.130: icmp_seq=3 ttl=64 time=0.191 ms

# exit k8s-master02
[root@k8s-master02 /]# exit
exit
[root@k8s-master01 ~]# kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          22m   172.25.92.69   k8s-master02   <none>           <none>

# from the busybox container on k8s-master02, ping k8s-master03
[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ # ping 192.168.232.130
PING 192.168.232.130 (192.168.232.130): 56 data bytes
64 bytes from 192.168.232.130: seq=0 ttl=63 time=0.329 ms
64 bytes from 192.168.232.130: seq=1 ttl=63 time=0.452 ms
64 bytes from 192.168.232.130: seq=2 ttl=63 time=0.675 ms

Create a deployment with three replicas

[root@k8s-master01 ~]# kubectl create deploy nginx --image=nginx --replicas=3
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/3     3            2           35s

# check which nodes the pods landed on
[root@k8s-master01 ~]# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
busybox                  1/1     Running   0          28m     172.25.92.69     k8s-master02   <none>           <none>
nginx                    1/1     Running   0          2m37s   172.18.195.4     k8s-master03   <none>           <none>
nginx-6799fc88d8-lbhgm   1/1     Running   0          54s     172.25.244.197   k8s-master01   <none>           <none>
nginx-6799fc88d8-nq2gz   1/1     Running   0          54s     172.17.125.1     k8s-node01     <none>           <none>
nginx-6799fc88d8-tzgz8   1/1     Running   0          54s     172.27.14.194    k8s-node02     <none>           <none>

# clean up
[root@k8s-master01 ~]# kubectl delete deploy nginx
deployment.apps "nginx" deleted
[root@k8s-master01 ~]# kubectl delete po busybox

Key configuration for production k8s clusters

  • docker configuration
  • controller-manager configuration
  • kubelet configuration
  • kubelet-conf.yml
  • Installation summary

docker configuration

(send input to all sessions)

vim /etc/docker/daemon.json

{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}

Notes (JSON does not allow comments, so keep these out of the real file): max-concurrent-downloads and max-concurrent-uploads limit the number of concurrent image download/upload threads; log-opts splits each container's log file once it exceeds 300m and keeps at most 2 files; live-restore lets running containers survive a docker restart, which is otherwise needed for config changes to take effect.
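A malformed daemon.json will prevent docker from starting, so it is worth validating the file before restarting the daemon. A sketch against a temporary copy (the options shown are a subset of the config above):

```shell
# Write a comment-free config to a temp file and validate it as JSON
# with python's stdlib json.tool (any JSON validator works here).
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": { "max-size": "300m", "max-file": "2" },
  "live-restore": true
}
EOF

python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json OK"
```

Against the real /etc/docker/daemon.json, run the same json.tool check before `systemctl restart docker`.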

Reload the configuration

systemctl daemon-reload

controller-manager configuration

Comment out the feature gates (newer versions enable them by default) and set the certificate signing duration

(send input to all sessions, deselecting the node nodes)

vim /usr/lib/systemd/system/kube-controller-manager.service

# --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \
--cluster-signing-duration=876000h0m0s \

Reload the configuration

systemctl daemon-reload

systemctl restart kube-controller-manager

kubelet configuration

(send input to all sessions)

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
Environment="KUBELET_EXTRA_ARGS=--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

The default k8s TLS cipher suites get flagged as a vulnerability by security scanners, so change them:

--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

A deadline for kubelet image pulls, to avoid endless pull retries:

--image-pull-progress-deadline=30m

kubelet-conf.yml

vim /etc/kubernetes/kubelet-conf.yml

# add the following configuration
rotateServerCertificates: true
allowedUnsafeSysctls:   # allow tuning kernel parameters (e.g. raising connection limits); this has security implications, so enable only as needed
- "net.core*"
- "net.ipv4.*"
kubeReserved:   # reserved resources; in production set these higher to reserve enough headroom
  cpu: "10m"
  memory: 10Mi
  ephemeral-storage: 10Mi
systemReserved:
  cpu: "10m"
  memory: 20Mi
  ephemeral-storage: 1Gi

Reload the configuration

systemctl daemon-reload

systemctl restart kubelet

Check the logs

tail -f /var/log/messages

Verify

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE    VERSION
k8s-master01   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-master02   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-master03   Ready    <none>   7h3m   v1.22.0-beta.1
k8s-node01     Ready    <none>   7h3m   v1.22.0-beta.1
k8s-node02     Ready    <none>   7h3m   v1.22.0-beta.1

Installation summary

  • kubeadm
  • Binary
  • Automated installation
  • Installation details worth noting

Automated installation (Ansible)

  • Master node installation does not need to be automated.
  • Adding Node nodes: use a playbook.

Installation details worth noting

  • The detailed settings described above
  • In production, etcd must live on a disk separate from the system disk, and that disk must be an SSD.
  • The Docker data disk should also be separate from the system disk, on an SSD if possible

Bootstrapping: kubelet startup process

Bootstrapping: automatically issuing certificates for node nodes

When generating certificates during the binary HA install, we created a kubeconfig file for every k8s component, such as controller-manager.kubeconfig; it stores some apiserver information along with the certificate the component uses to connect to the apiserver

kube-controller-manager.service specifies this kubeconfig; when the component starts, it uses the config file to connect to the apiserver for authentication and communication

[root@k8s-master01 kubernetes]# vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \   # kubeconfig specified here
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/12 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

We issue the certificates for kube-controller-manager.service and kube-scheduler.service ourselves, but that approach is not recommended for kubelet. Instead, use TLS Bootstrapping, which automatically issues certificates to the kubelet on each node. So how are those certificates generated?

Using node02 as an example

[root@k8s-node02 ~]# cd /etc/kubernetes
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.kubeconfig kubelet-conf.yml kubelet.kubeconfig kube-proxy.conf kube-proxy.kubeconfig manifests pki

When we first set up a node, there is no kubelet.kubeconfig file on it; kubelet talks to the apiserver through bootstrap-kubelet.kubeconfig and automatically requests a kubelet.kubeconfig file. Whenever kubelet starts and the kubelet.kubeconfig file is missing, it requests one this way

Delete kubelet.kubeconfig
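The decision kubelet makes at startup can be sketched as a plain file check. This is a simplified illustration only, not kubelet's actual code; `kubeconfig_mode` is a hypothetical helper:

```shell
# If kubelet.kubeconfig exists in the config dir, kubelet uses it directly;
# otherwise it falls back to bootstrap-kubelet.kubeconfig and requests one.
kubeconfig_mode() {
  local dir="$1"
  if [ -f "$dir/kubelet.kubeconfig" ]; then
    echo "use existing kubelet.kubeconfig"
  else
    echo "bootstrap via bootstrap-kubelet.kubeconfig"
  fi
}

demo=$(mktemp -d)
kubeconfig_mode "$demo"                 # no file yet: bootstrap path
touch "$demo/kubelet.kubeconfig"
kubeconfig_mode "$demo"                 # file present: use it directly
```

Deleting kubelet.kubeconfig, as done below, forces kubelet back onto the bootstrap path on its next restart.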

[root@k8s-node02 kubernetes]# rm -rf kubelet.kubeconfig

View the kubelet configuration file

[root@k8s-node02 kubernetes]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 --image-pull-progress-deadline=30m"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

The configuration specifies a bootstrap kubeconfig file

--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

It also specifies a kubeconfig file that does not exist yet; when kubelet starts, it requests a new kubelet.kubeconfig in the way described above

--kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Restart kubelet; a kubelet.kubeconfig certificate is generated, after which kubelet can talk to the apiserver

[root@k8s-node02 kubernetes]# systemctl restart kubelet
[root@k8s-node02 kubernetes]# ls
bootstrap-kubelet.kubeconfig kubelet-conf.yml kubelet.kubeconfig kube-proxy.conf kube-proxy.kubeconfig manifests pki

TLS Bootstrapping official documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#initialization-process

In a k8s cluster, the worker-node components kubelet and kube-proxy need to connect to the master-node components, especially the apiserver. To keep those connections private, it is strongly recommended to configure a TLS client certificate for every client on every node

kubelet startup process

  • Look for the kubeconfig file, usually located at /etc/kubernetes/kubelet.kubeconfig
  • Retrieve the APIServer URL and certificate from the kubeconfig file
  • Interact with the APIServer using them

View the certificate

[root@k8s-node02 kubernetes]# more kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.232.236:8443
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem

Check the validity period of the kubelet.kubeconfig certificate:

[root@k8s-node02 kubernetes]# echo "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
" | base64 --decode >/tmp/1

Then use OpenSSL to view the certificate's expiry (a 100-year validity period):

[root@k8s-node02 kubernetes]# openssl x509 -in /tmp/1 -noout -dates
notBefore=Jul 9 07:35:00 2021 GMT
notAfter=Jun 15 07:35:00 2121 GMT
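The same openssl invocation works on any PEM certificate. A self-contained sketch that generates a throwaway self-signed certificate and inspects its validity window (the paths and subject are arbitrary, chosen only for the demo; assumes openssl is installed):

```shell
# Generate a disposable self-signed certificate valid ~100 years, then dump
# its notBefore/notAfter fields exactly as done above for the kubelet cert.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 36500 \
  -subj "/CN=demo" -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
openssl x509 -in "$tmp/cert.pem" -noout -dates
```

The output has the same notBefore=/notAfter= shape as the kubelet certificate check above.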

Bootstrapping: CSR requests and certificate issuance

Official documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#bootstrap-initialization

1. kubelet starts

2. kubelet looks for the kubelet.kubeconfig file; assume it does not exist

3. kubelet looks for the local bootstrap-kubelet.kubeconfig

4. kubelet reads bootstrap-kubelet.kubeconfig, retrieving the apiserver URL and a token

5. kubelet connects to the apiserver and authenticates with that token

a) The apiserver recognizes the token-id and looks up the bootstrap secret corresponding to that token-id

This secret is the bootstrap.secret.yaml created during the binary HA cluster install

If you modify the token-id and token-secret in bootstrap.secret.yaml, keep the token-id string the same length as c8ad9c, and make sure the combined value matches the token c8ad9c.2e4d610cf3e7426e in the kubeconfig below

[root@k8s-master01 kubernetes]# more bootstrap-kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ1RENDQXN5Z0F3SUJBZ0lVYlFPRnd0dDVmdTlFZndnNUFjMnB2a1JYWWFFd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2R6
RUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEV6QVJCZ05WQkFvVENrdDFZbVZ5Ym1WMFpYTXhHakFZQmdOVkJBc1RFVXQxWW1WeWJtVjBaWE10
CmJXRnVkV0ZzTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1DQVhEVEl4TURjd09UQTNNelV3TUZvWUR6SXgKTWpFd05qRTFNRGN6TlRBd1dqQjNNUXN3Q1FZRFZRUUdFd0pEVGpFUU1BNEdBMVVFQ0JNSFFtVnBh
bWx1WnpFUQpNQTRHQTFVRUJ4TUhRbVZwYW1sdVp6RVRNQkVHQTFVRUNoTUtTM1ZpWlhKdVpYUmxjekVhTUJnR0ExVUVDeE1SClMzVmlaWEp1WlhSbGN5MXRZVzUxWVd3eEV6QVJCZ05WQkFNVENtdDFZbVZ5Ym1WMFpY
TXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzhaMUgya2d2QTltZDVnQ0Z3TjNSeGFOck9KQ3F3ampJOApNRG5TZkk2ZldzcjlDR2dvb0VpQTZRdDk0c2szcDNkMkZ0c3hNaWNkdHRO
QVJZQWJQU3JSdkZBdGhkeGpvNWVCCjFTYVFXcXh5ckp3ZFN4UW5hUkMyaXZPRm55NUNmU0VOekYyVnBOQVIwVTVLUjRLWko2MzQyQk1yYzF3aEE5VjkKd05aaXVySi95emZPSTY5dzJaWUFwQTVYaldDSXczOXQzdjBr
WVM3clZkQkhkMWFnUzJMcWNMb0dlZnJvT1BzZgp1ZEZCa0tkbnE5T0tLejRSVHpsQ2hnMXFTSk1wK0xmT1hiYTUzRmQ5c01WRFhvS3lDQVk4N1E3RHltWmlsZ2F5Cjl3YTFGZnNXajRKQzJzVy9lSWtWczQzazV4QVR2
ems0ZmF6QVhZZDdvUXUrbStKZUZvL1JBZ01CQUFHalpqQmsKTUE0R0ExVWREd0VCL3dRRUF3SUJCakFTQmdOVkhSTUJBZjhFQ0RBR0FRSC9BZ0VDTUIwR0ExVWREZ1FXQkJTLwpnN2xDaXBuZlkycGhVbWY4RXdXdE5o
ZE9LREFmQmdOVkhTTUVHREFXZ0JTL2c3bENpcG5mWTJwaFVtZjhFd1d0Ck5oZE9LREFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBamdRcE1hbndsaHN3dkNRWkd4NitwTXdqVm00LzNQbnYKcWZNU1F0YmVFWEpPRndY
WGN5T0h0clphRVNrbTV2eHZiWkRwMXZFcFc0aWN0V3U4emhGS3JKQWtRN0NKNHZDaAo3THBnVUdFSFFzTkJEU09reUxPcUhEN1RNOHFZV3hvWUZPbWFhdm1KelMwbFhZRkx1VmNYTm1GTTBPWlRsL01OCjVpdE5vMEVz
bWxBaUVYTFpJRkdpL0dZQkZXRHBQNVB2SlJIdUptd2JVQTdiMkNoVmhreFBXa0FYaDgvZjNzMjUKOWZSTC9ya0VTMGJqQVlHQy9lTGRDL09uUng2VFRyMHVRTUFSVjBqZGs3Qkg1dElXQjFtakthNjViR1Z6SlBZYQox
anNIRVhvOUo0MnBaQ3lWNnQxTklzRkFrL3pyazJzSFZHbFZzUzB4ZHpPaDhZaEhsKzVwVEE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.232.236:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: tls-bootstrap-token-user
  name: tls-bootstrap-token-user@kubernetes
current-context: tls-bootstrap-token-user@kubernetes
kind: Config
preferences: {}
users:
- name: tls-bootstrap-token-user
  user:
    token: c8ad9c.2e4d610cf3e7426e

Find the secret by the token-id prefix c8ad9c

[root@k8s-master01 kubernetes]# kubectl get secret -n kube-system
NAME                     TYPE                            DATA   AGE
bootstrap-token-c8ad9c   bootstrap.kubernetes.io/token   6      3d18h

Read the secret's contents to get the token-id and token-secret

[root@k8s-master01 kubernetes]# kubectl get secret bootstrap-token-c8ad9c -n kube-system -oyaml
apiVersion: v1
data:
  auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz
  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWxldCAnLg==
  token-id: YzhhZDlj
  token-secret: MmU0ZDYxMGNmM2U3NDI2ZQ==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: bootstrap-token-c8ad9c
  namespace: kube-system
  resourceVersion: "1408"
  uid: 9b77f873-d449-4ab2-aed7-fce9c32bdb21
type: bootstrap.kubernetes.io/token

Decode token-id; it matches the prefix of token=c8ad9c.2e4d610cf3e7426e in the kubeconfig

[root@k8s-master01 kubernetes]# echo "YzhhZDlj" | base64 -d
c8ad9c

Decode token-secret; it matches the suffix of token=c8ad9c.2e4d610cf3e7426e in the kubeconfig

[root@k8s-master01 kubernetes]# echo "MmU0ZDYxMGNmM2U3NDI2ZQ==" | base64 -d
2e4d610cf3e7426e
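Both halves of the token and their base64 forms can be checked locally in one go (all values are copied from the outputs above):

```shell
# The bootstrap token has the form <token-id>.<token-secret>.
TOKEN="c8ad9c.2e4d610cf3e7426e"
TOKEN_ID="${TOKEN%%.*}"      # part before the dot
TOKEN_SECRET="${TOKEN#*.}"   # part after the dot

# printf avoids the trailing newline, matching the values stored in the Secret.
printf '%s' "$TOKEN_ID" | base64       # prints YzhhZDlj
printf '%s' "$TOKEN_SECRET" | base64   # prints MmU0ZDYxMGNmM2U3NDI2ZQ==
```

Encoding forward like this gives the same result as decoding the Secret's data fields, confirming the two files agree.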

Only after this validation passes does the process move to the next step

b) The apiserver then reads a field of this secret, auth-extra-groups. It treats the token as a username of the form system:bootstrap:<token-id>, belonging to the group system:bootstrappers. That group has permission to request CSRs; the permission comes from binding the group to a clusterrole named system:node-bootstrapper

Decode auth-extra-groups to see the group names

[root@k8s-master01 kubernetes]# echo "c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz" | base64 -d
system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress

  • A clusterrole is cluster-level RBAC; it applies to the entire k8s cluster
  • A clusterrolebinding binds a clusterrole to a user, group, or serviceaccount

View the clusterrole

[root@k8s-master01 kubernetes]# kubectl get clusterrole
NAME                       CREATED AT
system:node-bootstrapper   2021-07-09T09:34:55Z

View system:node-bootstrapper

[root@k8s-master01 kubernetes]# kubectl get clusterrole system:node-bootstrapper -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-07-09T09:34:55Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:node-bootstrapper
  resourceVersion: "96"
  uid: b44bed52-fce4-4cf3-b3b1-38f887749d70
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests   # grants CSR permissions
  verbs:
  - create
  - get
  - list
  - watch

View the clusterrolebinding

[root@k8s-master01 kubernetes]# kubectl get clusterrolebinding kubelet-bootstrap -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: kubelet-bootstrap
  resourceVersion: "1409"
  uid: 40de9025-a8f6-41c2-be85-763daca80bb2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token

You can see the clusterrolebinding binds the ClusterRole named system:node-bootstrapper to the Group named system:bootstrappers:default-node-token

To confirm the Group matches, decode auth-extra-groups and check the group names

[root@k8s-master01 kubernetes]# echo "c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmYXVsdC1ub2RlLXRva2VuLHN5c3RlbTpib290c3RyYXBwZXJzOndvcmtlcixzeXN0ZW06Ym9vdHN0cmFwcGVyczppbmdyZXNz" | base64 -d
system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
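The check being performed here (is the binding's Group among the token's auth-extra-groups?) can be illustrated in shell, using the values copied from the outputs above:

```shell
binding_group="system:bootstrappers:default-node-token"
extra_groups="system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress"

# Wrap both sides in commas so the match only hits whole list entries,
# never a substring of a longer group name.
case ",$extra_groups," in
  *",$binding_group,"*) echo "group matches" ;;
  *)                    echo "group missing" ;;
esac
# prints "group matches"
```

If the binding's Group were absent from the decoded list, the bootstrap token would not carry the CSR-creation permission granted by the binding.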

c) A CSR is essentially an application form used to request a certificate.

6. After the authentication above, kubelet has permission to create and retrieve CSRs

7. kubelet creates a CSR for itself, with the signer name kubernetes.io/kube-apiserver-client-kubelet

8. A CSR can be approved in two ways:

a) A k8s administrator issues the certificate manually with kubectl

b) If the relevant permissions are configured, kube-controller-manager approves it automatically.

i. controller-manager has a CSRApprovingController, which verifies that the username and group on the kubelet's CSR have permission to create CSRs, and that the signer is kubernetes.io/kube-apiserver-client-kubelet

ii. controller-manager approves the CSR

Bootstrapping: automatic certificate renewal

9. Once the CSR is approved, controller-manager creates the kubelet's certificate file

10. controller-manager writes the certificate into the CSR's status field

11. kubelet fetches the certificate from the apiserver

12. kubelet creates kubelet.kubeconfig from the retrieved key and certificate file

13. kubelet finishes startup and works normally

14. Optional: if auto-renewal is configured, kubelet uses its existing kubeconfig to request a new certificate as the old one approaches expiry, effectively renewing it

15. Whether the new certificate is approved or signed automatically depends on configuration

View node-autoapprove-certificate-rotation

[root@k8s-master01 kubernetes]# kubectl get clusterrolebinding node-autoapprove-certificate-rotation -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2021-07-09T09:54:36Z"
  name: node-autoapprove-certificate-rotation
  resourceVersion: "1411"
  uid: d7c618c8-0860-4d03-949f-4e2bbd33659a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes

View the ClusterRole named system:certificates.k8s.io:certificatesigningrequests:selfnodeclient

[root@k8s-master01 kubernetes]# kubectl get clusterrole system:certificates.k8s.io:certificatesigningrequests:selfnodeclient -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2021-07-09T09:34:55Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  resourceVersion: "103"
  uid: d5215b54-8dd4-47ba-9e6e-bf664eb56dce
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/selfnodeclient   # self-renewal permission
  verbs:
  - create

a) The CSRs created by kubelet belong to an organization: system:nodes

b) CN (like a domain name): system:nodes:<hostname>

Course link (contact the author for access)

http://www.kubeasy.com/

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

You are welcome to repost, use, and republish it, provided the attribution to 郑子铭 is kept (including the link: http://www.cnblogs.com/MingsonZheng/). It may not be used for commercial purposes, and works based on this article must be published under the same license.
