Installing Kubernetes from Binaries
Reference: https://zhuanlan.zhihu.com/p/408967897
Prerequisites
Three CentOS 7.9 virtual machines
VM specs: 2 CPUs / 4 GB RAM, with outbound internet access
VM layout
| IP | Role |
|---|---|
| 192.168.0.148 | k8s-master01 |
| 192.168.0.246 | k8s-node01 |
| 192.168.0.104 | k8s-node02 |
Set hostnames and add hosts entries (run each hostnamectl command on its own machine)
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
vi /etc/hosts on every machine and append the following:
192.168.0.148 k8s-master01
192.168.0.246 k8s-node01
192.168.0.104 k8s-node02
Disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
swapoff -a
vim /etc/fstab
# Edit /etc/fstab and comment out the line containing swap
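Commenting the line out can also be scripted; a minimal sketch that prefixes any line mentioning swap with #:
sed -ri 's/.*swap.*/#&/' /etc/fstab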
Sync the system time
ntpdate time.windows.com
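ntpdate is a one-shot sync; to keep the clocks aligned across reboots, one option is a cron entry (a sketch, reusing the same time server as above):
(crontab -l 2>/dev/null; echo "0 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -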
Prepare files on the Master
mkdir /root/kubernetes/resources -p
cd /root/kubernetes/resources
wget https://dl.k8s.io/v1.18.20/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
wget https://docs.projectcalico.org/v3.19/manifests/calico.yaml
Prepare files on the Nodes
mkdir /root/kubernetes/resources -p
cd /root/kubernetes/resources
wget https://dl.k8s.io/v1.18.20/kubernetes-node-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
Deploy the etcd cluster
etcd is a distributed key-value store. To give it (simulated) high availability, we deploy it on all three machines used by the Kubernetes cluster.
We enable HTTPS. Kubernetes supports mutual certificate authentication based on a CA signature as well as simpler HTTP Basic or token authentication; the CA-signed certificate approach is the most secure.
We use cfssl to set up the CA certificates for our cluster; openssl would also work.
Install cfssl
Run on the Master machine:
cd /root/kubernetes/resources
cp cfssl_linux-amd64 /usr/bin/cfssl
cp cfssljson_linux-amd64 /usr/bin/cfssljson
cp cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
chmod +x /usr/bin/cfssl /usr/bin/cfssljson /usr/bin/cfssl-certinfo
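A quick sanity check that the tools are on the PATH:
cfssl version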
Run on all machines:
mkdir /etc/etcd/ssl -p
Generate the etcd certificates
Run on the Master machine:
mkdir /root/kubernetes/resources/cert/etcd -p ; cd /root/kubernetes/resources/cert/etcd
Create ca-config.json and ca-csr.json:
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
vim ca-csr.json
{
  "CN": "etcd ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
Generate the CA certificate and key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Create server-csr.json:
vim server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.0.148",
    "192.168.0.246",
    "192.168.0.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
# hosts lists the IPs of all Master and Node machines.
Generate the etcd server certificate and key:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
The directory now contains these files:
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
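To confirm the SANs actually made it into the certificate, the bundled cfssl-certinfo tool can inspect it:
cfssl-certinfo -cert server.pem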
Copy the certificates
cp ca.pem server-key.pem server.pem /etc/etcd/ssl
scp ca.pem server-key.pem server.pem 192.168.0.246:/etc/etcd/ssl
scp ca.pem server-key.pem server.pem 192.168.0.104:/etc/etcd/ssl
Install etcd
Run on all machines:
cd /root/kubernetes/resources
tar -zxvf /root/kubernetes/resources/etcd-v3.4.9-linux-amd64.tar.gz
cp ./etcd-v3.4.9-linux-amd64/etcd ./etcd-v3.4.9-linux-amd64/etcdctl /usr/bin
Configure etcd
From here on, run the commands separately on the Master and each Node. Create etcd.conf:
# vim /etc/etcd/etcd.conf
# On k8s-master01, write:
[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.148:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.148:2379"
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.148:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.148:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.148:2380,etcd02=https://192.168.0.246:2380,etcd03=https://192.168.0.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# On k8s-node01, write:
[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.246:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.246:2379"
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.246:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.246:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.148:2380,etcd02=https://192.168.0.246:2380,etcd03=https://192.168.0.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# On k8s-node02, write:
[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.104:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.104:2379"
[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.104:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.104:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.148:2380,etcd02=https://192.168.0.246:2380,etcd03=https://192.168.0.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Create the etcd systemd service
Run on all machines from here on:
mkdir -p /var/lib/etcd
Then create the service unit with the following content:
# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \
--cert-file=/etc/etcd/ssl/server.pem \
--key-file=/etc/etcd/ssl/server-key.pem \
--peer-cert-file=/etc/etcd/ssl/server.pem \
--peer-key-file=/etc/etcd/ssl/server-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
etcd 3.4 automatically picks up the environment variables from the EnvironmentFile; do not repeat them as command-line arguments in ExecStart, or etcd fails with: "xxx" is shadowed by corresponding command-line flag.
Start etcd and enable it at boot:
systemctl daemon-reload
systemctl start etcd.service
systemctl enable etcd.service
Check the health of the etcd cluster:
etcdctl endpoint health --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/server.pem --key=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.0.148:2379,https://192.168.0.246:2379,https://192.168.0.104:2379"
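If the cluster is healthy, each endpoint should report a line roughly like the following (an illustrative sample; timings will differ):
https://192.168.0.148:2379 is healthy: successfully committed proposal: took = 9.76ms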
Deploy k8s-master01
The Kubernetes Master runs three components: kube-apiserver, kube-controller-manager, and kube-scheduler.
In addition, install the kubectl tool if you want to operate the cluster from the Master machine.
Install kubectl
kubectl ships inside the Kubernetes server tarball, so installation is simple:
cd /root/kubernetes/resources/
tar -zxvf ./kubernetes-server-linux-amd64.tar.gz
cp kubernetes/server/bin/kubectl /usr/bin
Generate the Kubernetes certificates
mkdir /root/kubernetes/resources/cert/kubernetes /etc/kubernetes/{ssl,bin} -p
cp kubernetes/server/bin/kube-apiserver kubernetes/server/bin/kube-controller-manager kubernetes/server/bin/kube-scheduler /etc/kubernetes/bin
cd /root/kubernetes/resources/cert/kubernetes
Everything below runs on the Master machine. Create ca-config.json:
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
Create ca-csr.json:
vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
Generate the CA certificate and key:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Generate the kube-apiserver, kube-proxy, and admin certificates
Create kube-apiserver-csr.json:
vim kube-apiserver-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    "192.168.0.148",
    "192.168.0.246",
    "192.168.0.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
Create kube-proxy-csr.json:
vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "kubernetes",
      "OU": "System"
    }
  ]
}
Create admin-csr.json:
vim admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
Generate the certificates and keys:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
The directory now contains:
# ll
-rw-r--r--. 1 root root 1001 May 28 00:32 admin.csr
-rw-r--r--. 1 root root 282 May 28 00:32 admin-csr.json
-rw-------. 1 root root 1679 May 28 00:32 admin-key.pem
-rw-r--r--. 1 root root 1407 May 28 00:32 admin.pem
-rw-r--r--. 1 root root 294 May 28 00:30 ca-config.json
-rw-r--r--. 1 root root 1013 May 28 00:31 ca.csr
-rw-r--r--. 1 root root 284 May 28 00:30 ca-csr.json
-rw-------. 1 root root 1675 May 28 00:31 ca-key.pem
-rw-r--r--. 1 root root 1383 May 28 00:31 ca.pem
-rw-r--r--. 1 root root 1273 May 28 00:32 kube-apiserver.csr
-rw-r--r--. 1 root root 597 May 28 00:31 kube-apiserver-csr.json
-rw-------. 1 root root 1679 May 28 00:32 kube-apiserver-key.pem
-rw-r--r--. 1 root root 1655 May 28 00:32 kube-apiserver.pem
-rw-r--r--. 1 root root 1009 May 28 00:32 kube-proxy.csr
-rw-r--r--. 1 root root 287 May 28 00:31 kube-proxy-csr.json
-rw-------. 1 root root 1679 May 28 00:32 kube-proxy-key.pem
-rw-r--r--. 1 root root 1411 May 28 00:32 kube-proxy.pem
Copy the certificates to the Nodes
Create the target directory on the Nodes first (run on each Node):
mkdir /etc/kubernetes/ -p
Then run the copy from the Master machine:
cp ca.pem ca-key.pem kube-apiserver.pem kube-apiserver-key.pem kube-proxy.pem kube-proxy-key.pem /etc/kubernetes/ssl
scp -r /etc/kubernetes/ssl 192.168.0.246:/etc/kubernetes
scp -r /etc/kubernetes/ssl 192.168.0.104:/etc/kubernetes
Create the TLS bootstrapping token
cd /etc/kubernetes
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# The command above prints a token, e.g. d5c5d767b64db39db132b433e9c45fbc; it goes into token.csv
# vim token.csv
# File content, with the generated token substituted in:
d5c5d767b64db39db132b433e9c45fbc,kubelet-bootstrap,10001,"system:node-bootstrapper"
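Generating the token and writing the file can also be done in one go; a small sketch:
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /etc/kubernetes/token.csv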
Install kube-apiserver
Prepare the kube-apiserver configuration file
# vim apiserver
# Write the following content:
KUBE_API_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--etcd-servers=https://192.168.0.148:2379,https://192.168.0.246:2379,https://192.168.0.104:2379 \
--bind-address=192.168.0.148 \
--secure-port=6443 \
--advertise-address=192.168.0.148 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/server.pem \
--etcd-keyfile=/etc/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
Create the kube-apiserver service unit
# vim /usr/lib/systemd/system/kube-apiserver.service
# Write the following content:
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/etc/kubernetes/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-apiserver:
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
Install kube-controller-manager
Prepare the kube-controller-manager configuration file
# vim controller-manager
# Write the following content:
KUBE_CONTROLLER_MANAGER_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
Create the kube-controller-manager service unit
# vim /usr/lib/systemd/system/kube-controller-manager.service
# Write the following content:
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/etc/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-controller-manager:
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
Install kube-scheduler
Prepare the kube-scheduler configuration file
# vim scheduler
# Write the following content:
KUBE_SCHEDULER_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--master=127.0.0.1:8080 \
--leader-elect \
--bind-address=127.0.0.1"
Create the kube-scheduler service unit
# vim /usr/lib/systemd/system/kube-scheduler.service
# Write the following content:
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/etc/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Start kube-scheduler:
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
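All three control-plane services should now be active; a quick loop to verify (a convenience sketch):
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl is-active $svc
done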
Authorize kubelet-bootstrap
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Check Master status
kubectl get cs
If the Master was deployed successfully, the output should be:
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
"""
Authorize the apiserver to access kubelets
Prepare the apiserver-to-kubelet-rbac.yaml file
# cd /root/kubernetes/resources
# vim apiserver-to-kubelet-rbac.yaml
# Write the following content:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  - pods/log
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
---
# This role allows full access to the kubelet API
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-api-admin
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/log
  - nodes/stats
  - nodes/metrics
  - nodes/spec
  verbs:
  - "*"
---
# This binding gives the kube-apiserver user full access to the kubelet API
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-apiserver-kubelet-api-admin
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver
Apply it:
kubectl apply -f apiserver-to-kubelet-rbac.yaml
Deploy the Nodes
A Kubernetes Node runs kubelet and kube-proxy. This part installs those two components on the Node machines, plus the CNI plugins the pod network needs.
Run the commands in this part on both Node machines.
Install Docker
See the official docs: https://docs.docker.com/engine/install/
# When upgrading or reinstalling, remove old Docker versions first
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine -y
# Install Docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
# Check the Docker version
docker version
Configure a Docker registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Start Docker
systemctl enable docker
systemctl start docker
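kubelet-config.yml below sets cgroupDriver: cgroupfs, so Docker must be using the same cgroup driver; a quick check:
docker info | grep -i cgroup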
Install kubelet
cd /root/kubernetes/resources
tar -zxvf ./kubernetes-node-linux-amd64.tar.gz
mkdir /etc/kubernetes/{ssl,bin} -p
cp kubernetes/node/bin/kubelet kubernetes/node/bin/kube-proxy /etc/kubernetes/bin
cd /etc/kubernetes
Prepare the kubelet configuration file
# vim kubelet
# On k8s-node01, write:
KUBELET_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--enable-server=true \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--hostname-override=k8s-node01 \
--network-plugin=cni \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet-config.yml \
--cert-dir=/etc/kubernetes/ssl"
On k8s-node02, write:
KUBELET_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--enable-server=true \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--hostname-override=k8s-node02 \
--network-plugin=cni \
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--config=/etc/kubernetes/kubelet-config.yml \
--cert-dir=/etc/kubernetes/ssl"
Prepare the bootstrap.kubeconfig file
# vim /etc/kubernetes/bootstrap.kubeconfig
# Write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.0.148:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: d5c5d767b64db39db132b433e9c45fbc
# Note: replace the token with the value used in the Master's token.csv.
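Alternatively, if the kubectl binary from the node tarball is installed, the same file can be generated rather than hand-written (a sketch; KUBE_APISERVER and TOKEN are placeholders for your own values):
cd /etc/kubernetes
KUBE_APISERVER="https://192.168.0.148:6443"
TOKEN="d5c5d767b64db39db132b433e9c45fbc"
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap --token=${TOKEN} --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig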
Prepare the kubelet-config.yml file
vim kubelet-config.yml
Write the following content:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
Prepare the kubelet.kubeconfig file
vim kubelet.kubeconfig
Write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.0.148:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: default
    user: default-auth
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users:
- name: default-auth
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet-client-current.pem
    client-key: /etc/kubernetes/ssl/kubelet-client-current.pem
Prepare the kubelet service unit
vim /usr/lib/systemd/system/kubelet.service
Write the following content:
[Unit]
Description=Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/etc/kubernetes/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start kubelet:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
Issue certificates to the Nodes. On the Master, run:
kubectl get csr
Output:
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
Take each certificate NAME and approve it:
kubectl certificate approve node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY
kubectl certificate approve node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ
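With many nodes, approving one by one gets tedious; all pending CSRs can be approved in one go (a convenience sketch):
kubectl get csr -o name | xargs kubectl certificate approve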
Checking again, the CONDITION column has been updated:
kubectl get csr
Output:
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-a-BmW9xMglOXlUdwBjD2QQphXLdu4iwtamEIIbhJKcY 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
node-csr-zDDrVyKH7ug8fTUcDjdvDgh-f9rVCyoHuLMGaWbykAQ 10m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
Listing the Nodes should now return them:
kubectl get node
Output (the Nodes stay NotReady until the CNI plugin is deployed below):
NAME         STATUS     ROLES    AGE   VERSION
k8s-node01   NotReady   <none>   50s   v1.18.20
k8s-node02   NotReady   <none>   56s   v1.18.20
Install kube-proxy
Prepare the kube-proxy configuration file
vim kube-proxy
Write the following content:
KUBE_PROXY_ARGS="--logtostderr=false \
--v=2 \
--log-dir=/var/log/kubernetes \
--config=/etc/kubernetes/kube-proxy-config.yml"
Prepare the kube-proxy-config.yml file
vim /etc/kubernetes/kube-proxy-config.yml
On k8s-node01, write:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-node01
clusterCIDR: 10.244.0.0/16  # must match --cluster-cidr of kube-controller-manager
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
On k8s-node02, write:
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-node02
clusterCIDR: 10.244.0.0/16  # must match --cluster-cidr of kube-controller-manager
mode: ipvs
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
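mode: ipvs requires the IPVS kernel modules; if they are missing, kube-proxy falls back to iptables. A sketch to load them (module names can vary by kernel; on newer kernels nf_conntrack replaces nf_conntrack_ipv4):
yum install -y ipset ipvsadm
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
modprobe $m
done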
Prepare the kube-proxy.kubeconfig file
vim /etc/kubernetes/kube-proxy.kubeconfig
Write the following content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://192.168.0.148:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/ssl/kube-proxy.pem
    client-key: /etc/kubernetes/ssl/kube-proxy-key.pem
Prepare the kube-proxy service unit
vim /usr/lib/systemd/system/kube-proxy.service
Write the following content:
[Unit]
Description=Kube-Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.target
[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy
ExecStart=/etc/kubernetes/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start kube-proxy:
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
systemctl status kube-proxy
Deploy the CNI network plugin
cd /root/kubernetes/resources
mkdir -p /opt/cni/bin /etc/cni/net.d
tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
Deploy the Calico cluster network
Flannel is an alternative pod network; this guide uses Calico. Run on the Master machine:
cd /root/kubernetes/resources
kubectl apply -f calico.yaml
# Watch the rollout of the CNI network plugin
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-86497987b6-2825f 1/1 Running 3 28h
calico-node-cc88n 1/1 Running 3 28h
calico-node-x7nkc 1/1 Running 1 24h
Note: a pod stuck in ImagePullBackOff usually means the image pull failed for network reasons. The workaround below pulls the image from a mirror registry and re-tags it; the same approach works for a Flannel deployment.
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
Create the role binding
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
Test the cluster
Create an nginx deployment:
kubectl create deployment nginx --image=nginx
After waiting a few seconds, list the deployment:
kubectl get deployment
# nginx has started successfully:
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           7m7s
# Note: if it fails to start, the image pull probably failed for network reasons; inspect with kubectl describe pod.
Expose the pods inside the cluster with a Service:
kubectl expose deployment nginx --port=80 --type=NodePort
# List the services
kubectl get svc
The Service forwards nginx to NodePort 32063:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 10h
nginx NodePort 10.0.0.101 <none> 80:32063/TCP 10s
We can now reach nginx from the Master machine through that port on a Node:
[root@k8s-master01 resources]# curl 192.168.0.246:32063
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>