1. Environment Preparation

Role          IP             Hostname  Components                                                                               Spec
Control node  192.168.10.10  master    apiserver, controller-manager, scheduler, etcd, kube-proxy, docker, calico, containerd   2 CPU / 4 GB
Worker node   192.168.10.11  node1     kubelet-1.26, kube-proxy, docker, calico, coredns, containerd                            2 CPU / 4 GB
Worker node   192.168.10.12  node2     kubelet-1.26, kube-proxy, docker, calico, coredns, containerd                            2 CPU / 4 GB

1.1 Basic environment configuration

# Run on both the control node and the worker nodes
# 1. Set the hostname (adjust per node: master / node1 / node2)
hostnamectl set-hostname master

# 2. Configure /etc/hosts
cat >> /etc/hosts <<EOF
192.168.10.10 master
192.168.10.11 node1
192.168.10.12 node2
EOF

# 3. Set up passwordless SSH between nodes
ssh-keygen -t rsa
ssh-copy-id node1

# 4. Turn off swap
swapoff -a   # immediate but temporary; comment out the swap line in /etc/fstab to persist across reboots

# 5. Adjust kernel parameters
modprobe br_netfilter
echo "modprobe br_netfilter" >> /etc/profile
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf

# 6. Disable the firewall
systemctl stop firewalld ; systemctl disable firewalld

# 7. Disable SELinux (reboot after editing the config file, or run `setenforce 0` for the current session)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# 8. Configure the Aliyun yum repositories
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast

# 9. Configure the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

# 10. Sync the clock now and keep it synced on a schedule
yum install ntpdate -y
ntpdate time1.aliyun.com
# add an hourly sync via crontab -e
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
systemctl restart crond
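
A quick sanity check after the steps above (an optional sketch; all commands are stock CentOS 7 tooling):

free -h | grep -i swap          # the Swap line should show 0B
getenforce                      # Disabled after reboot, or Permissive if you ran `setenforce 0`
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should print 1
systemctl is-active firewalld   # should print "inactive"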

1.2 Base package installation

yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet

1.3 Installing containerd

# 1. Install the containerd service
yum -y install containerd

# 2. Generate the default containerd config
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# 3. Edit the config
vim /etc/containerd/config.toml
SystemdCgroup = true   # change false to true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"   # if unsure of the tag, check later with `kubeadm config images list --config=kubeadm.yaml` and adjust

# 4. Enable at boot and start
systemctl enable containerd --now

# 5. Write /etc/crictl.yaml
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
systemctl restart containerd

# 6. Configure registry mirrors
# In /etc/containerd/config.toml, set:
config_path = "/etc/containerd/certs.d"
mkdir -p /etc/containerd/certs.d/docker.io/
vim /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"

[host."https://pft7f97f.mirror.aliyuncs.com"]
  capabilities = ["pull"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]

[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull"]

systemctl restart containerd
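
The two config.toml edits in step 3 can also be scripted instead of done in vim; a minimal sketch, assuming the stock file produced by `containerd config default`:

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd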

1.4 Installing Docker (to help build images)

# 1. Install docker and enable it at boot
yum install docker-ce -y
systemctl enable docker --now

# 2. Configure registry mirrors
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://pft7f97f.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","http://hub-mirror.c.163.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
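
Optionally confirm Docker picked up the mirrors (a quick check, not required by the install):

docker info | grep -A 4 "Registry Mirrors"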

2. Installing Kubernetes

2.1 Installing the Kubernetes packages

# 1. Install the k8s packages on both master and nodes
yum install -y kubelet-1.26.7 kubeadm-1.26.7 kubectl-1.26.7
systemctl enable kubelet

# What each package does:
# kubeadm: the tool that bootstraps (initializes) the k8s cluster
# kubelet: runs on every node and is what actually starts Pods; with a kubeadm install, both control-plane and worker components run as Pods, so any node that runs a Pod needs kubelet
# kubectl: the CLI for deploying and managing applications, inspecting resources, and creating, deleting, and updating components
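
To keep a routine `yum update` from silently upgrading these pinned 1.26.7 packages, one option is to exclude them in the repo file; a sketch (the `exclude=` line is an addition, not part of the original steps):

# append to the [kubernetes] section of /etc/yum.repos.d/kubernetes.repo
exclude=kubelet kubeadm kubectl
# then install or upgrade deliberately with:
yum install -y kubelet-1.26.7 kubeadm-1.26.7 kubectl-1.26.7 --disableexcludes=kubernetes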

2.2 Generating the kubeadm init configuration

# Single control-plane install
# 1. Point crictl at the containerd runtime (master and nodes)
crictl config runtime-endpoint unix:///run/containerd/containerd.sock

# 2. Generate a default init config (master)
kubeadm config print init-defaults > kubeadm.yaml

# 3. Edit the config as needed
[root@master ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # control-plane IP and port
  advertiseAddress: 192.168.10.10
  bindPort: 6443
nodeRegistration:
  # containerd runtime socket
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
# Aliyun image registry and the k8s version
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.7
networking:
  dnsDomain: cluster.local
  # pod and service CIDRs
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# added: run kube-proxy in ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
# added: use the systemd cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
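
Since kube-proxy is switched to ipvs mode above, the ipvs kernel modules should be loaded on every node, otherwise kube-proxy may fall back to iptables; a minimal sketch (the module list is the usual set, adjust for your kernel):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# on kernels older than 4.19 (stock CentOS 7) the conntrack module is nf_conntrack_ipv4
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack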

2.3 Initializing the cluster

List the images that will be downloaded:

[root@master ~]# kubeadm config images list --config=kubeadm.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.7
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Pull the images:

[root@master ~]# kubeadm config images pull --config=kubeadm.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.7
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Initialize the control plane:

[root@master ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.26.7
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.10.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.512119 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.10:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:c88d0ee03c6f2bf28a387899713d0f965f4742f5e4e96cb836faba4611ace553

Alternatively, initialize directly with flags instead of a config file:

kubeadm init --kubernetes-version=1.26.7 --apiserver-advertise-address=192.168.10.10 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --cri-socket /run/containerd/containerd.sock --ignore-preflight-errors=SystemVerification

Then on the control node, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.4 Adding worker nodes

# Join a worker node to the cluster:
kubeadm join 192.168.10.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c88d0ee03c6f2bf28a387899713d0f965f4742f5e4e96cb836faba4611ace553

[root@node1 ~]# kubeadm join 192.168.10.10:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:c88d0ee03c6f2bf28a387899713d0f965f4742f5e4e96cb836faba4611ace553
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

The output above is the worker node's join log.

# Regenerate a join command (useful when the token has expired)
kubeadm token create --print-join-command

# Images present on node1 after joining
[root@node1 ~]# crictl images ls
IMAGE TAG IMAGE ID SIZE
registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.26.7 1e7eac3bc5c0b 21.8MB

Check the nodes:

# On the control node:
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   14m     v1.26.7
node1    NotReady   <none>          3m13s   v1.26.7

# Label node1 from the master
[root@master ~]# kubectl label nodes node1 node-role.kubernetes.io/work=work
node/node1 labeled

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   18m     v1.26.7
node1    NotReady   work            7m33s   v1.26.7
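
For a true single-machine setup (as mentioned in 2.2), the NoSchedule taint that kubeadm puts on the control plane must be removed before ordinary Pods can run on master; a one-line sketch, only needed if you run without workers:

kubectl taint nodes master node-role.kubernetes.io/control-plane:NoSchedule-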

3. Installing the Calico Network Plugin

Supported Kubernetes versions: https://docs.tigera.io/calico/latest/getting-started/kubernetes/requirements

Manifest download instructions: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises

Download: curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico-typha.yaml -o calico.yaml

# Edit calico.yaml. IP_AUTODETECTION_METHOD controls how the node IP is detected; the default is the
# first network interface, so on multi-NIC nodes select the right one explicitly, e.g. the regex
# "interface=eth.*" matches interfaces whose names start with eth.

# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33"

[root@master home]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
poddisruptionbudget.policy/calico-typha created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
service/calico-typha created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
deployment.apps/calico-typha created

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 65m v1.26.7
node1 Ready work 54m v1.26.7
[root@master home]# crictl images ls
IMAGE TAG IMAGE ID SIZE
docker.io/calico/cni v3.26.1 9dee260ef7f59 93.4MB
docker.io/calico/node v3.26.1 8065b798a4d67 86.6MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 322kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns v1.9.3 5185b96f0becf 14.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.5.6-0 fce326961ae2d 103MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.26.7 6ac727c486d08 36.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.26.7 17314033c0a0b 32.9MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.26.7 1e7eac3bc5c0b 21.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.26.7             c1902187a39f8       18MB

[root@master home]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-949d58b75-269j4 1/1 Running 0 9m54s 10.244.166.129 node1 <none> <none>
calico-node-6xcvq 1/1 Running 0 9m54s 192.168.10.10 master <none> <none>
calico-node-s2tqg 1/1 Running 0 9m54s 192.168.10.11 node1 <none> <none>
calico-typha-7575dd9f6f-gxm6l 1/1 Running 0 9m54s 192.168.10.11 node1 <none> <none>
coredns-567c556887-4wh8v 1/1 Running 0 3h 10.244.166.131 node1 <none> <none>
coredns-567c556887-7j2sq 1/1 Running 0 3h 10.244.166.130 node1 <none> <none>
etcd-master 1/1 Running 0 3h 192.168.10.10 master <none> <none>
kube-apiserver-master 1/1 Running 0 3h 192.168.10.10 master <none> <none>
kube-controller-manager-master 1/1 Running 0 3h 192.168.10.10 master <none> <none>
kube-proxy-hfdjj 1/1 Running 0 3h 192.168.10.10 master <none> <none>
kube-proxy-kspmr 1/1 Running 0 169m 192.168.10.11 node1 <none> <none>
kube-scheduler-master 1/1 Running 0 3h 192.168.10.10 master <none> <none>

4. Testing Cluster Networking

# Import the busybox image on the node
ctr -n=k8s.io images import busybox.tar.gz

# Run on the master node
[root@master home]# kubectl run busybox --image docker.io/library/busybox:latest --image-pull-policy=IfNotPresent --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (110.242.68.66): 56 data bytes
64 bytes from 110.242.68.66: seq=0 ttl=127 time=34.621 ms
64 bytes from 110.242.68.66: seq=1 ttl=127 time=32.764 ms
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

5. Installing the Kubernetes Dashboard UI

5.1 Installing kubernetes-dashboard

# 1. Download the dashboard manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# 2. Check the image fields; the images can be pre-pulled locally and the pull policy relaxed
image: kubernetesui/dashboard:v2.7.0
imagePullPolicy: IfNotPresent
---
image: kubernetesui/metrics-scraper:v1.0.8
imagePullPolicy: IfNotPresent

# Import the pre-downloaded images
ctr -n=k8s.io images import dashboard-2.7.tar.gz
ctr -n=k8s.io images import metrics-scraper-v1.0.8.tar.gz

# 3. Apply the manifest
[root@master xc]# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

[root@master xc]# kubectl get secret -n kubernetes-dashboard
NAME                              TYPE     DATA   AGE
kubernetes-dashboard-certs        Opaque   0      25s
kubernetes-dashboard-csrf         Opaque   1      25s
kubernetes-dashboard-key-holder   Opaque   2      25s

[root@master xc]# kubectl get pods -n kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7bc864c59-q2wtn   1/1     Running   0          12m
kubernetes-dashboard-7b8b7d8965-rng97       1/1     Running   0          12m

# 4. Change the Service type to NodePort
[root@master xc]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
service/kubernetes-dashboard edited

[root@master xc]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.8.208   <none>        8000/TCP        6m34s
kubernetes-dashboard        NodePort    10.104.45.35   <none>        443:30606/TCP   6m34s
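
The `kubectl edit` step can also be done non-interactively; a one-line sketch:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'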

Open the dashboard at https://192.168.10.11:30606/ (the NodePort is reachable on any node IP).

5.2 Accessing the dashboard with a token

# Create an admin binding with cluster-admin rights over every namespace and resource
[root@master xc]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created

# Confirm the service account exists
[root@master xc]# kubectl -n kubernetes-dashboard get serviceaccounts | grep kubernetes-dashboard
kubernetes-dashboard   0   17m

# Create a token
# Since v1.24.0, creating a ServiceAccount no longer auto-generates a Secret, so the token is created manually
# --duration sets an expiry and can be omitted
[root@master xc]# kubectl -n kubernetes-dashboard create token kubernetes-dashboard --duration 604800s
eyJhbGciOiJSUzI1NiIsImtpZCI6IjBGRkFWSFpOOG9mbzRHcDFxS1lwVnR0cDhlQjlOZ04ybDl1U3dPbUFua0UifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk0MDc2ODU5LCJpYXQiOjE2OTM0NzIwNTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInVpZCI6ImIwNGNhNTU5LTc5MjMtNDU3OC05ZGUzLTE2MmM2ZTMwMjIwMSJ9fSwibmJmIjoxNjkzNDcyMDU5LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQifQ.PUO3l1_vNxWkCtRtkMOba7C27rcp8DhgmU_eLX2vbL0ikLhSi2w4w403rweG7P0LjYuzxPBBL08I6ZAnCQ9scMa-PzL80gsvVZ9DZPDKBn08OPD7EI7Infk7oDbdA1PFVaUT_Zw0YRKjb0_oDj725w8aQOfsb4fh96_V03ahdzOBDU1hO7ijt1x1-egcKetJ_HHBk_hVyds7uEsaDiQGJkjufMH5o5MREpnc6F-q23Myltqj0x_Soj7pz4GSbCZtWwu-JUqLQUbZinHy8TO70OuOVl4e_fsBwl-pFF_9APbhoObvPCS7Vm-k3POK3ErXWm2tvywHBRXz85DJhaiFjg

# Paste the token on the dashboard login page to sign in

Alternative method: access the dashboard with a kubeconfig file.
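
A minimal sketch of building such a kubeconfig, reusing the token command from 5.2 (the file path and user name are illustrative):

DASH_TOKEN=$(kubectl -n kubernetes-dashboard create token kubernetes-dashboard)
kubectl config set-cluster kubernetes --server=https://192.168.10.10:6443 \
  --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true \
  --kubeconfig=/root/dashboard-admin.conf
kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN \
  --kubeconfig=/root/dashboard-admin.conf
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes \
  --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
# On the login page choose "Kubeconfig" and select /root/dashboard-admin.conf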
