Kubernetes v1.20 Binary Deployment

1. Introduction

Previous articles installed Kubernetes clusters with kubeadm, but many companies build their clusters from binaries instead. This article explains how to build a complete, highly available cluster from binary packages. Compared with kubeadm, the binary approach is far more involved: you must generate the signing certificates yourself and configure and install every component step by step.

2. Environment Preparation

2.1 Machine Planning

IP Address     Hostname   Specs   OS          Role     Installed Software
172.10.1.11    master01   2C4G    CentOS7.6   master   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.12    master02   2C4G    CentOS7.6   master   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.13    master03   2C4G    CentOS7.6   master   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
172.10.1.14    node01     2C4G    CentOS7.6   worker   kubelet, kube-proxy
172.10.1.15    node02     2C4G    CentOS7.6   worker   kubelet, kube-proxy
172.10.1.16    node03     2C4G    CentOS7.6   worker   kubelet, kube-proxy
172.10.0.20    /          /       /           LB VIP   /

Note: the VIP here is a cloud provider's SLB; you can also implement it with haproxy + keepalived.
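If you take the haproxy + keepalived route, a minimal haproxy fragment that balances TCP 6443 across the three masters might look like the sketch below (the port and options are illustrative assumptions, not a tuned production config; keepalived would float the VIP between the haproxy nodes):

# /etc/haproxy/haproxy.cfg (fragment)
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    balance roundrobin
    server master01 172.10.1.11:6443 check
    server master02 172.10.1.12:6443 check
    server master03 172.10.1.13:6443 check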

2.2 Software Versions

Software    Version
kube-apiserver、kube-controller-manager、kube-scheduler、kubelet、kube-proxy v1.20.2
etcd v3.4.13
calico v3.14
coredns 1.7.0

3. Building the Cluster

3.1 Basic Machine Configuration

Perform the following steps on all six machines.

3.1.1 Set the Hostname

Set the hostname on each machine to master01, master02, master03, node01, node02, or node03 accordingly, for example:

hostnamectl set-hostname node02
bash

3.1.2 Configure the hosts File

Append the cluster entries to /etc/hosts on every machine:

cat >> /etc/hosts << EOF
172.10.1.11 master01
172.10.1.12 master02
172.10.1.13 master03
172.10.1.14 node01
172.10.1.15 node02
172.10.1.16 node03
EOF

3.1.3 Disable the Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

3.1.4 Disable Swap

swapoff -a
To disable swap permanently, comment out the swap line in /etc/fstab.
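A one-liner for the fstab edit (a sketch; verify the result with cat /etc/fstab afterwards):

sed -ri '/\sswap\s/s/^/#/' /etc/fstab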

3.1.5 Time Synchronization

yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

3.1.6 Tune Kernel Parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

3.1.7 Load the IPVS Modules

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep ip_vs
lsmod | grep nf_conntrack_ipv4
yum install -y ipvsadm
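The modprobe commands above do not survive a reboot. A minimal sketch for loading the modules at boot via systemd-modules-load (assumes the stock CentOS 7 kernel, where the conntrack module is still named nf_conntrack_ipv4; on kernels 4.19+ it is nf_conntrack):

cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load.service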

3.2 Set Up the Working Directory

Every machine needs certificate files, component configuration files, and systemd unit files. We designate master01 to generate all of these centrally and then distribute them to the other machines. The following operations run on master01.

[root@master01 ~]# mkdir -p /data/work
Note: all configuration and certificate files are generated in this directory, and every file-generation step that follows runs here.
[root@master01 ~]# ssh-keygen -t rsa -b 2048
Distribute the public key to the other five machines so that master01 can log in to them without a password.
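A sketch of the distribution step with ssh-copy-id (assumes the default key path and that password login is still enabled on the targets):

[root@master01 ~]# for i in master02 master03 node01 node02 node03; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i; done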

3.3 Set Up the etcd Cluster

3.3.1 Create the etcd Directories

[root@master01 ~]# mkdir -p /etc/etcd                     # configuration directory
[root@master01 ~]# mkdir -p /etc/etcd/ssl                 # certificate directory

3.3.2 Create the etcd Certificates

Download the tools

[root@master01 work]# cd /data/work/
[root@master01 work]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master01 work]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master01 work]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

Install the tools

[root@master01 work]# chmod +x cfssl*
[root@master01 work]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master01 work]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master01 work]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Create the CA CSR file

[root@master01 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}

Notes:

CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting User Name; browsers use it to verify whether a site is legitimate.

O: Organization. kube-apiserver extracts this field from the certificate as the Group the requesting user belongs to.

Generate the CA certificate

[root@master01 work]# cfssl gencert -initca ca-csr.json  | cfssljson -bare ca

Create the CA signing policy

[root@master01 work]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Create the etcd CSR file

[root@master01 work]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Note: the hosts list must include every etcd node.

Generate the certificate

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json|cfssljson -bare etcd
[root@master01 work]# ls etcd*.pem
etcd-key.pem etcd.pem

3.3.3 Deploy etcd

Download the etcd package

[root@master01 work]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@master01 work]# tar -xf etcd-v3.4.13-linux-amd64.tar.gz
[root@master01 work]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@master01 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master02:/usr/local/bin/
[root@master01 work]# rsync -vaz etcd-v3.4.13-linux-amd64/etcd* master03:/usr/local/bin/

Create the configuration file

[root@master01 work]# vim etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.10.1.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.10.1.11:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.10.1.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.10.1.11:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:

ETCD_NAME: node name, unique within the cluster

ETCD_DATA_DIR: data directory

ETCD_LISTEN_PEER_URLS: listen address for peer (cluster) communication

ETCD_LISTEN_CLIENT_URLS: listen address for client access

ETCD_INITIAL_ADVERTISE_PEER_URLS: peer URL advertised to the cluster

ETCD_ADVERTISE_CLIENT_URLS: client URL advertised to clients

ETCD_INITIAL_CLUSTER: addresses of all cluster nodes

ETCD_INITIAL_CLUSTER_TOKEN: cluster token

ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one

Create the systemd service file

Option 1:

Start with a configuration file

[root@master01 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Option 2:

Start without a configuration file (all options on the command line)

[root@master01 work]# vim etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--name=etcd1 \
--data-dir=/var/lib/etcd/default.etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth \
--listen-peer-urls=https://172.10.1.11:2380 \
--listen-client-urls=https://172.10.1.11:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://172.10.1.11:2379 \
--initial-advertise-peer-urls=https://172.10.1.11:2380 \
--initial-cluster=etcd1=https://172.10.1.11:2380,etcd2=https://172.10.1.12:2380,etcd3=https://172.10.1.13:2380 \
--initial-cluster-token=etcd-cluster \
--initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note: this article uses Option 1.

Sync the files to each node

[root@master01 work]# cp ca*.pem /etc/etcd/ssl/
[root@master01 work]# cp etcd*.pem /etc/etcd/ssl/
[root@master01 work]# cp etcd.conf /etc/etcd/
[root@master01 work]# cp etcd.service /usr/lib/systemd/system/
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd.conf $i:/etc/etcd/;done
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd*.pem ca*.pem $i:/etc/etcd/ssl/;done
[root@master01 work]# for i in master02 master03;do rsync -vaz etcd.service $i:/usr/lib/systemd/system/;done

Note: on master02 and master03, change the etcd name and IP addresses in etcd.conf accordingly, and create the directory /var/lib/etcd/default.etcd.
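For example, on master02 the edit could be scripted like this (a sketch; the exclusion keeps the ETCD_INITIAL_CLUSTER member list intact, so double-check the result with cat):

[root@master02 ~]# sed -i 's/^ETCD_NAME="etcd1"/ETCD_NAME="etcd2"/' /etc/etcd/etcd.conf
[root@master02 ~]# sed -i '/^ETCD_INITIAL_CLUSTER=/!s/172.10.1.11/172.10.1.12/g' /etc/etcd/etcd.conf
[root@master02 ~]# mkdir -p /var/lib/etcd/default.etcd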

Start the etcd cluster

[root@master01 work]# mkdir -p /var/lib/etcd/default.etcd
[root@master01 work]# systemctl daemon-reload
[root@master01 work]# systemctl enable etcd.service
[root@master01 work]# systemctl start etcd.service
[root@master01 work]# systemctl status etcd

Note: the first start may hang for a while because each node waits for the others to come up.

Check the cluster status

[root@master01 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint health
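endpoint status additionally shows member details, including which node is the leader (same TLS flags as above):

[root@master01 work]# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 endpoint status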

3.4 Deploy the Kubernetes Components

3.4.1 Download the Packages

[root@master01 work]# wget https://dl.k8s.io/v1.20.2/kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@master01 work]# cd kubernetes/server/bin/
[root@master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master01 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master02:/usr/local/bin/
[root@master01 bin]# rsync -vaz kube-apiserver kube-controller-manager kube-scheduler kubectl master03:/usr/local/bin/
[root@master01 bin]# for i in node01 node02 node03;do rsync -vaz kubelet kube-proxy $i:/usr/local/bin/;done
[root@master01 bin]# cd /data/work/

3.4.2 Create the Working Directories

[root@master01 work]# mkdir -p /etc/kubernetes/        # component configuration directory
[root@master01 work]# mkdir -p /etc/kubernetes/ssl     # component certificate directory
[root@master01 work]# mkdir /var/log/kubernetes        # component log directory

3.4.3 Deploy kube-apiserver

Create the CSR file

[root@master01 work]# vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13",
    "172.10.1.14",
    "172.10.1.15",
    "172.10.1.16",
    "172.10.0.20",
    "10.255.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Notes:

If the hosts field is not empty, it must list every IP or domain name authorized to use this certificate.

Since this certificate is used by the whole Kubernetes master cluster, it must include every master node IP and the load-balancer VIP, plus the first IP of the service network (the first address of the range passed to kube-apiserver via --service-cluster-ip-range; here 10.255.0.0/16, so 10.255.0.1).

Generate the certificate and token file

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
[root@master01 work]# cat > token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

Create the configuration file

[root@master01 work]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=172.10.1.11 \
--secure-port=6443 \
--advertise-address=172.10.1.11 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://172.10.1.11:2379,https://172.10.1.12:2379,https://172.10.1.13:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"

Notes:

--service-account-signing-key-file / --service-account-issuer: mandatory on 1.20+ (do not append inline comments after the trailing backslashes above; they would break the line continuation)

--logtostderr: enable logging to stderr

--v: log level

--log-dir: log directory

--etcd-servers: etcd cluster addresses

--bind-address: listen address

--secure-port: https secure port

--advertise-address: address advertised to the cluster

--allow-privileged: allow privileged containers

--service-cluster-ip-range: Service virtual IP range

--enable-admission-plugins: admission control plugins

--authorization-mode: authorization modes; enables RBAC and node self-management

--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism

--token-auth-file: bootstrap token file

--service-node-port-range: default port range for NodePort Services

--kubelet-client-xxx: client certificate for apiserver access to kubelet

--tls-xxx-file: apiserver https certificate

--etcd-xxxfile: certificates for connecting to the etcd cluster

--audit-log-xxx: audit log settings

Create the systemd service file

[root@master01 work]# vim kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Sync the files to each node

[root@master01 work]# cp ca*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp kube-apiserver*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp token.csv /etc/kubernetes/
[root@master01 work]# cp kube-apiserver.conf /etc/kubernetes/
[root@master01 work]# cp kube-apiserver.service /usr/lib/systemd/system/
[root@master01 work]# rsync -vaz token.csv master02:/etc/kubernetes/
[root@master01 work]# rsync -vaz token.csv master03:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-apiserver*.pem master02:/etc/kubernetes/ssl/ # note: rsync only creates the final path component; ssl is created automatically if missing, but the parent /etc/kubernetes must already exist
[root@master01 work]# rsync -vaz kube-apiserver*.pem master03:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz ca*.pem master02:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz ca*.pem master03:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz kube-apiserver.conf master02:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-apiserver.conf master03:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-apiserver.service master02:/usr/lib/systemd/system/
[root@master01 work]# rsync -vaz kube-apiserver.service master03:/usr/lib/systemd/system/

Note: on master02 and master03, change the IP addresses in the configuration file to the actual local IP.
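On master02, for instance, only the two address flags differ (a sketch):

[root@master02 ~]# sed -i 's/--bind-address=172.10.1.11/--bind-address=172.10.1.12/;s/--advertise-address=172.10.1.11/--advertise-address=172.10.1.12/' /etc/kubernetes/kube-apiserver.conf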

Start the service

[root@master01 work]# systemctl daemon-reload
[root@master01 work]# systemctl enable kube-apiserver
[root@master01 work]# systemctl start kube-apiserver
[root@master01 work]# systemctl status kube-apiserver
Test it:
[root@master01 work]# curl --insecure https://172.10.1.11:6443/
A response means the apiserver is up (with anonymous auth disabled, expect a 401 Unauthorized JSON body rather than an API listing).

3.4.4 Deploy kubectl

Create the CSR file

[root@master01 work]# vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

Explanation:

kube-apiserver later uses RBAC to authorize requests from clients (kubelet, kube-proxy, Pods, and so on).

kube-apiserver predefines some RoleBindings for RBAC, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API.

O sets this certificate's Group to system:masters. When a client presents the certificate to kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, the client is granted access to all APIs.

Notes:

This admin certificate is later used to generate the administrator's kubeconfig file. RBAC is now the recommended way to control role permissions in Kubernetes; Kubernetes treats the certificate's CN field as the User and the O field as the Group.

"O": "system:masters" must be exactly system:masters, otherwise the kubectl create clusterrolebinding step later will fail.

Generate the certificate

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@master01 work]# cp admin*.pem /etc/kubernetes/ssl/

Create the kubeconfig file

kubeconfig is kubectl's configuration file. It contains everything needed to reach the apiserver: the apiserver address, the CA certificate, and the client's own certificate.

Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube.config
Set the client credentials
[root@master01 work]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
Set the context parameters
[root@master01 work]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Set the default context
[root@master01 work]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master01 work]# mkdir ~/.kube
[root@master01 work]# cp kube.config ~/.kube/config
Grant the kubernetes certificate permission to access the kubelet API
[root@master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

Check the cluster component status

Once the steps above are complete, kubectl can talk to kube-apiserver.

[root@master01 work]# kubectl cluster-info
[root@master01 work]# kubectl get componentstatuses
[root@master01 work]# kubectl get all --all-namespaces

Sync the kubectl config to the other masters

[root@master01 work]# rsync -vaz /root/.kube/config master02:/root/.kube/
[root@master01 work]# rsync -vaz /root/.kube/config master03:/root/.kube/

Configure kubectl command completion

[root@master01 work]# yum install -y bash-completion
[root@master01 work]# source /usr/share/bash-completion/bash_completion
[root@master01 work]# source <(kubectl completion bash)
[root@master01 work]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master01 work]# echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
[root@master01 work]# source $HOME/.bash_profile

3.4.5 Deploy kube-controller-manager

Create the CSR file

[root@master01 work]# vim kube-controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}

Notes:

The hosts list contains the IPs of all kube-controller-manager nodes.

CN and O are both system:kube-controller-manager; the Kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@master01 work]# ls kube-controller-manager*.pem

Create the kube-controller-manager kubeconfig

Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-controller-manager.kubeconfig
Set the client credentials
[root@master01 work]# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
Set the context parameters
[root@master01 work]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Set the default context
[root@master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Create the configuration file

[root@master01 work]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.255.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.0.0.0/16 \
--experimental-cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

Create the systemd service file

[root@master01 work]# vim kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Sync the files to each node

[root@master01 work]# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp kube-controller-manager.kubeconfig /etc/kubernetes/
[root@master01 work]# cp kube-controller-manager.conf /etc/kubernetes/
[root@master01 work]# cp kube-controller-manager.service /usr/lib/systemd/system/
[root@master01 work]# rsync -vaz kube-controller-manager*.pem master02:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz kube-controller-manager*.pem master03:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master02:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-controller-manager.kubeconfig kube-controller-manager.conf master03:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-controller-manager.service master02:/usr/lib/systemd/system/
[root@master01 work]# rsync -vaz kube-controller-manager.service master03:/usr/lib/systemd/system/

Start the service

[root@master01 work]# systemctl daemon-reload
[root@master01 work]# systemctl enable kube-controller-manager
[root@master01 work]# systemctl start kube-controller-manager
[root@master01 work]# systemctl status kube-controller-manager

3.4.6 Deploy kube-scheduler

Create the CSR file

[root@master01 work]# vim kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "172.10.1.11",
    "172.10.1.12",
    "172.10.1.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}

Notes:

The hosts list contains the IPs of all kube-scheduler nodes.

CN and O are both system:kube-scheduler; the Kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@master01 work]# ls kube-scheduler*.pem

Create the kube-scheduler kubeconfig

Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-scheduler.kubeconfig
Set the client credentials
[root@master01 work]# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
Set the context parameters
[root@master01 work]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Set the default context
[root@master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Create the configuration file

[root@master01 work]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"

Create the systemd service file

[root@master01 work]# vim kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Sync the files to each node

[root@master01 work]# cp kube-scheduler*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp kube-scheduler.kubeconfig /etc/kubernetes/
[root@master01 work]# cp kube-scheduler.conf /etc/kubernetes/
[root@master01 work]# cp kube-scheduler.service /usr/lib/systemd/system/
[root@master01 work]# rsync -vaz kube-scheduler*.pem master02:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz kube-scheduler*.pem master03:/etc/kubernetes/ssl/
[root@master01 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master02:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-scheduler.kubeconfig kube-scheduler.conf master03:/etc/kubernetes/
[root@master01 work]# rsync -vaz kube-scheduler.service master02:/usr/lib/systemd/system/
[root@master01 work]# rsync -vaz kube-scheduler.service master03:/usr/lib/systemd/system/

Start the service

[root@master01 work]# systemctl daemon-reload
[root@master01 work]# systemctl enable kube-scheduler
[root@master01 work]# systemctl start kube-scheduler
[root@master01 work]# systemctl status kube-scheduler

3.4.7 Deploy Docker

Install on the three worker nodes.

Install Docker

[root@node01 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@node01 ~]# yum install -y docker-ce
[root@node01 ~]# systemctl enable docker
[root@node01 ~]# systemctl start docker
[root@node01 ~]# docker --version

Configure the registry mirrors and cgroup driver

[root@node01 ~]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://kfwkfulq.mirror.aliyuncs.com",
    "https://2lqq34jg.mirror.aliyuncs.com",
    "https://pee6w651.mirror.aliyuncs.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://f1361db2.m.daocloud.io",
    "https://registry.docker-cn.com"
  ]
}
EOF
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# docker info | grep "Cgroup Driver"

Pull the dependency images

[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@node01 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@node01 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@node01 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
[root@node01 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

3.4.8 Deploy kubelet

The following operations are performed on master01.

Create kubelet-bootstrap.kubeconfig

[root@master01 work]# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
Set the cluster parameters
[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Set the client credentials
[root@master01 work]# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
Set the context parameters
[root@master01 work]# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context
[root@master01 work]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the role binding
[root@master01 work]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Create the configuration file

[root@master01 work]# vim kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "172.10.1.14",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.255.0.2"]
}

Note: JSON does not allow inline comments, so the cgroupDriver remark cannot live in the file itself. cgroupDriver must match Docker's cgroup driver; Docker was configured with the systemd driver in section 3.4.7, hence "systemd" here (use "cgroupfs" if your Docker runs with the cgroupfs driver). This setting matters: if the drivers do not match, the node will fail to join the cluster.

Create the systemd service file

[root@master01 work]# vim kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Notes:

--hostname-override: display name, unique within the cluster

--network-plugin: enable CNI

--kubeconfig: an empty path; the file is generated automatically on first start and later used to connect to the apiserver

--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver

--config: configuration file

--cert-dir: directory where kubelet certificates are generated

--pod-infra-container-image: image of the container that manages the Pod network

Sync the files to each node

[root@master01 work]# cp kubelet-bootstrap.kubeconfig /etc/kubernetes/
[root@master01 work]# cp kubelet.json /etc/kubernetes/
[root@master01 work]# cp kubelet.service /usr/lib/systemd/system/
The three cp commands above can be skipped if the master nodes will not run kubelet.
[root@master01 work]# for i in node01 node02 node03;do rsync -vaz kubelet-bootstrap.kubeconfig kubelet.json $i:/etc/kubernetes/;done
[root@master01 work]# for i in node01 node02 node03;do rsync -vaz ca.pem $i:/etc/kubernetes/ssl/;done
[root@master01 work]# for i in node01 node02 node03;do rsync -vaz kubelet.service $i:/usr/lib/systemd/system/;done

Note: change the address field in kubelet.json to each node's own IP.
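A sketch that patches the address from master01 after the sync (node01 already matches the template):

[root@master01 work]# for pair in node02:172.10.1.15 node03:172.10.1.16; do host=${pair%%:*}; ip=${pair##*:}; ssh $host "sed -i 's/172.10.1.14/$ip/' /etc/kubernetes/kubelet.json"; done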

Start the service

Run on each worker node:

[root@node01 ~]# mkdir /var/lib/kubelet
[root@node01 ~]# mkdir /var/log/kubernetes
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl enable kubelet
[root@node01 ~]# systemctl start kubelet
[root@node01 ~]# systemctl status kubelet

Once the kubelet service is confirmed running on the workers, approve the bootstrap requests on a master. The following command shows the three CSRs sent by the three worker nodes:

[root@master01 work]# kubectl get csr

[root@master01 work]# kubectl certificate approve node-csr-HlX3cExsZohWsu8Dd6Rp_ztFejmMdpzvti_qgxo4SAQ
[root@master01 work]# kubectl certificate approve node-csr-oykYfnH_coRF2PLJH4fOHlGznOZUBPDg5BPZXDo2wgk
[root@master01 work]# kubectl certificate approve node-csr-ytRB2fikhL6dykcekGg4BdD87o-zw9WPU44SZ1nFT50
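The CSR names above are specific to each cluster; to approve everything pending in one go (a sketch):

[root@master01 work]# kubectl get csr -o name | xargs kubectl certificate approve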
[root@master01 work]# kubectl get csr
[root@master01 work]# kubectl get nodes

3.4.9 Deploy kube-proxy

Create the CSR file

[root@master01 work]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

Generate the certificate

[root@master01 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@master01 work]# ls kube-proxy*.pem

Create the kubeconfig file

[root@master01 work]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://172.10.0.20:6443 --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@master01 work]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Create the kube-proxy configuration file

[root@master01 work]# vim kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.10.1.14
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 192.168.0.0/16   # must match the pod CIDR used by the network plugin (Calico defaults to 192.168.0.0/16), otherwise deploying the network component will fail
healthzBindAddress: 172.10.1.14:10256
kind: KubeProxyConfiguration
metricsBindAddress: 172.10.1.14:10249
mode: "ipvs"

Create the systemd service file

[root@master01 work]# vim kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Sync the files to each node

[root@master01 work]# cp kube-proxy*.pem /etc/kubernetes/ssl/
[root@master01 work]# cp kube-proxy.kubeconfig kube-proxy.yaml /etc/kubernetes/
[root@master01 work]# cp kube-proxy.service /usr/lib/systemd/system/
The three cp commands above can be skipped if the master nodes will not run kube-proxy.
[root@master01 work]# for i in node01 node02 node03;do rsync -vaz kube-proxy.kubeconfig kube-proxy.yaml $i:/etc/kubernetes/;done
[root@master01 work]# for i in node01 node02 node03;do rsync -vaz kube-proxy.service $i:/usr/lib/systemd/system/;done

Note: change the addresses in kube-proxy.yaml (bindAddress, healthzBindAddress, metricsBindAddress) to each node's actual IP.
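The same per-node patch pattern as for kubelet.json applies (a sketch; /g because the IP appears three times):

[root@master01 work]# for pair in node02:172.10.1.15 node03:172.10.1.16; do host=${pair%%:*}; ip=${pair##*:}; ssh $host "sed -i 's/172.10.1.14/$ip/g' /etc/kubernetes/kube-proxy.yaml"; done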

Start the service

[root@node01 ~]# mkdir -p /var/lib/kube-proxy
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl enable kube-proxy
[root@node01 ~]# systemctl restart kube-proxy
[root@node01 ~]# systemctl status kube-proxy
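With mode "ipvs", the rules kube-proxy programs can be inspected with ipvsadm (the table stays short until Services exist):

[root@node01 ~]# ipvsadm -Ln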

3.4.10 Deploy the Network Component

[root@master01 work]# wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml
[root@master01 work]# kubectl apply -f calico.yaml

Checking the nodes again now, they should all be in the Ready state:

[root@master01 work]# kubectl get pods -A
[root@master01 work]# kubectl get nodes

3.4.11 Deploy CoreDNS

Download the CoreDNS yaml template: https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

Edit the template, substituting the placeholders with:

kubernetes cluster.local in-addr.arpa ip6.arpa

forward . /etc/resolv.conf

clusterIP: 10.255.0.2 (the clusterDNS value from the kubelet configuration file)
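A sketch of the substitutions with sed; the placeholder names (CLUSTER_DOMAIN, REVERSE_CIDRS, UPSTREAMNAMESERVER, CLUSTER_DNS_IP, STUBDOMAINS, FEDERATIONS) are assumptions about the current coredns.yaml.sed template, so verify them against the file you actually downloaded:

[root@master01 work]# sed -e 's/CLUSTER_DOMAIN/cluster.local/' -e 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/' -e 's@UPSTREAMNAMESERVER@/etc/resolv.conf@' -e 's/CLUSTER_DNS_IP/10.255.0.2/' -e '/STUBDOMAINS/d' -e '/FEDERATIONS/d' coredns.yaml.sed > coredns.yaml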

[root@master01 work]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.8.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.255.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
[root@master01 work]# kubectl apply -f coredns.yaml

3.5 Verification

3.5.1 Deploy nginx

[root@master01 ~]# vim nginx.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.6
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30001
    protocol: TCP
  type: NodePort
  selector:
    name: nginx
[root@master01 ~]# kubectl apply -f nginx.yaml
[root@master01 ~]# kubectl get svc
[root@master01 ~]# kubectl get pods

3.5.2 Verify nginx

Ping the nginx service's cluster IP to verify the Service is reachable.

Access nginx through the NodePort, as sketched below.
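A quick check from any machine that can reach the workers (a sketch; any node IP works):

[root@master01 ~]# curl -I http://172.10.1.14:30001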

3.5.3 Verify DNS

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh

If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local
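The nginx Service deployed above should resolve the same way from inside the pod (a sketch):

/ # nslookup nginx-service-nodeport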
