Kubernetes Quick Deployment

Introduction to Kubernetes

Kubernetes is a leading distributed-architecture solution built on container technology. It is the open-source descendant of Borg, the system Google kept as a closely guarded secret for more than a decade. Its first version was released in September 2014, and the first official release followed in July 2015.

At its core, Kubernetes manages a cluster of servers: it runs specific programs on every node of the cluster to manage the containers on that node. Its goal is to automate resource management, and it mainly provides the following features:

  • Self-healing: if a container crashes, a replacement container is started within about one second
  • Elastic scaling: the number of running containers in the cluster can be adjusted automatically as needed (see the kubectl sketch after this list)
  • Service discovery: a service can automatically locate the services it depends on
  • Load balancing: if a service runs multiple containers, requests are automatically load-balanced across them
  • Version rollback: if a newly released version turns out to be faulty, you can roll back to the previous version immediately
  • Storage orchestration: storage volumes can be created automatically based on the containers' own requirements
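
As a concrete illustration of elastic scaling and version rollback, a minimal sketch using kubectl (the Deployment name "web" is purely illustrative and is not created anywhere in this article):

# Scale the hypothetical Deployment "web" to 5 replicas -- elastic scaling
kubectl scale deployment web --replicas=5
# Roll the same Deployment back to its previous revision -- version rollback
kubectl rollout undo deployment web
# Watch the rollout progress while the change is applied
kubectl rollout status deployment web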

Kubernetes Components

A Kubernetes cluster consists mainly of control nodes (master) and worker nodes (node); different components are installed on each kind of node.

master: the control plane of the cluster, responsible for decision making (management)

ApiServer: the single entry point for all resource operations; it receives user commands and provides authentication, authorization, API registration and discovery

Scheduler: responsible for scheduling cluster resources; it places Pods onto the appropriate node according to the configured scheduling policy

ControllerManager: responsible for keeping the cluster in its desired state, e.g. deployment orchestration, failure detection, auto scaling and rolling updates

Etcd: stores the information of all resource objects in the cluster

node: the data plane of the cluster, responsible for providing the runtime environment for containers (doing the actual work)

Kubelet: responsible for the container life cycle, i.e. it drives Docker to create, update and destroy containers

KubeProxy: provides service discovery and load balancing inside the cluster

Docker: responsible for all container operations on the node
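
On a cluster built with kubeadm, as in the rest of this article, these control-plane components run as static Pods in the kube-system namespace; a minimal sketch for checking that they are all up (assumes kubectl has already been configured):

# etcd, kube-apiserver, kube-controller-manager and kube-scheduler should all be Running
kubectl -n kube-system get pods -o wide
# kubelet itself runs as a systemd service on every node
systemctl status kubelet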

The following uses the deployment of an nginx service to illustrate how the Kubernetes components interact:

  1. First, note that once the Kubernetes environment starts, both the master and the nodes store their own information in the etcd database

  2. A request to install an nginx service is first sent to the apiServer component on the master node

  3. The apiServer calls the scheduler component to decide which node the service should be installed on

    At this step, the scheduler reads the information about every node from etcd, picks one according to its scheduling algorithm, and reports the result back to the apiServer

  4. The apiServer calls the controller-manager to schedule a node to install the nginx service

  5. Once the kubelet on that node receives the instruction, it tells Docker to start an nginx Pod

    A Pod is the smallest unit Kubernetes operates on; containers must run inside a Pod

  6. At this point the nginx service is running. To access nginx, kube-proxy provides a proxy for traffic to the Pod

In this way, users outside the cluster can access the nginx service running in it

Kubernetes Concepts

Master: the control node of the cluster; every cluster needs at least one master to manage and control it

Node: a workload node; the master assigns containers to these worker nodes, and Docker on the node runs the containers

Pod: the smallest unit of control in Kubernetes; containers run inside Pods, and a Pod can contain one or more containers

Controller: controllers implement Pod management, e.g. starting Pods, stopping Pods, scaling the number of Pods, and so on

Service: the unified entry point through which a set of Pods expose a service; behind it, it maintains multiple Pods of the same kind

Label: labels are used to classify Pods; Pods of the same kind carry the same label

NameSpace: namespaces are used to isolate the runtime environments of Pods
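
To tie several of these concepts together, a minimal sketch that creates a Pod carrying a Label inside its own NameSpace, written as a heredoc in the same style as the configuration files later in this article (the names nginx-demo and dev are illustrative only):

cat > nginx-pod.yaml << EOF
apiVersion: v1
kind: Namespace
metadata:
  name: dev                 # illustrative namespace used to isolate this Pod
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo          # illustrative Pod name
  namespace: dev
  labels:
    app: nginx              # Label: Services select Pods by labels like this one
spec:
  containers:
  - name: nginx
    image: nginx
EOF
kubectl apply -f nginx-pod.yaml
kubectl -n dev get pods --show-labels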

Kubernetes Quick Deployment

Installation Requirements

Before you start, the machines used for the Kubernetes cluster must meet the following requirements:

  • At least 3 machines, running CentOS 7 or newer
  • Hardware: 2 GB of RAM or more, 2 CPUs or more, 20 GB of disk or more
  • All machines in the cluster can reach one another over the network
  • Outbound internet access, needed for pulling images
  • Swap disabled
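
A minimal pre-flight check for the list above, assuming the IP addresses from the environment table below; run it on every machine:

# CPU count and memory (expect >= 2 CPUs and >= 2 GB RAM)
nproc
free -m
# Disk size of the root filesystem (expect >= 20 GB)
df -h /
# Prints nothing once swap is disabled
swapon --show
# Connectivity to another cluster member and to the internet
ping -c 2 192.168.78.145
curl -sI https://mirrors.aliyun.com > /dev/null && echo "internet OK"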

Environment

Name         IP               OS
k8s-master   192.168.78.144   CentOS 8
k8s-node1    192.168.78.145   CentOS 8
k8s-node2    192.168.78.146   CentOS 8

//Disable the firewall on all hosts
[root@k8s-master ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# vim /etc/selinux/config     //set SELINUX=disabled so SELinux stays off after a reboot
[root@k8s-node1 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node1 ~]# setenforce 0
[root@k8s-node1 ~]# vim /etc/selinux/config
[root@k8s-node2 ~]# systemctl disable --now firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@k8s-node2 ~]# setenforce 0
[root@k8s-node2 ~]# vim /etc/selinux/config
//Disable the swap partition on all hosts:
# vim /etc/fstab
//comment out the swap line
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3752         556        2656          10         539        2956
Swap:          4051           0        4051
[root@k8s-master ~]# vim /etc/fstab
[root@k8s-node1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1800         550         728          10         521        1084
Swap:          2047           0        2047
[root@k8s-node1 ~]# vim /etc/fstab
[root@k8s-node2 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1800         559         711          10         529        1072
Swap:          2047           0        2047
[root@k8s-node2 ~]# vim /etc/fstab
//Add the host entries to /etc/hosts on every host
[root@k8s-master ~]# cat >> /etc/hosts << EOF
192.168.78.144 k8s-master
192.168.78.145 k8s-node1
192.168.78.146 k8s-node2
EOF
[root@k8s-node1 ~]# cat >> /etc/hosts << EOF
> 192.168.78.144 k8s-master
> 192.168.78.145 k8s-node1
> 192.168.78.146 k8s-node2
> EOF
[root@k8s-node2 ~]# cat >> /etc/hosts << EOF
> 192.168.78.144 k8s-master
> 192.168.78.145 k8s-node1
> 192.168.78.146 k8s-node2
> EOF
//Test name resolution with ping
[root@k8s-master ~]# ping k8s-master
PING k8s-master (192.168.78.144) 56(84) bytes of data.
64 bytes from k8s-master (192.168.78.144): icmp_seq=1 ttl=64 time=0.072 ms
64 bytes from k8s-master (192.168.78.144): icmp_seq=2 ttl=64 time=0.080 ms
^C
--- k8s-master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 41ms
rtt min/avg/max/mdev = 0.072/0.076/0.080/0.004 ms
[root@k8s-master ~]# ping k8s-node1
PING k8s-node1 (192.168.78.145) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.78.145): icmp_seq=1 ttl=64 time=0.512 ms
64 bytes from k8s-node1 (192.168.78.145): icmp_seq=2 ttl=64 time=0.285 ms
^C
[root@k8s-master ~]# ping k8s-node2
PING k8s-node2 (192.168.78.146) 56(84) bytes of data.
64 bytes from k8s-node2 (192.168.78.146): icmp_seq=1 ttl=64 time=1.60 ms
64 bytes from k8s-node2 (192.168.78.146): icmp_seq=2 ttl=64 time=0.782 ms
64 bytes from k8s-node2 (192.168.78.146): icmp_seq=3 ttl=64 time=1.32 ms
//Pass bridged traffic to iptables, as required by Kubernetes networking
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system
#output trimmed
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
//Time synchronization
[root@k8s-master ~]# vim /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst
[root@k8s-master ~]# systemctl enable chronyd
[root@k8s-master ~]# systemctl restart chronyd
[root@k8s-master ~]# systemctl status chronyd
chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
   Active: active (running) since Tue 2022-09-06 15:54:27 CST; 9s ago
[root@k8s-node1 ~]# vim /etc/chrony.conf
[root@k8s-node1 ~]# systemctl enable chronyd
[root@k8s-node1 ~]# systemctl restart chronyd
[root@k8s-node1 ~]# systemctl status chronyd
chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
   Active: active (running) since Tue 2022-09-06 15:57:52 CST; 8s ago
[root@k8s-node2 ~]# vim /etc/chrony.conf
[root@k8s-node2 ~]# systemctl enable chronyd
[root@k8s-node2 ~]# systemctl restart chronyd
[root@k8s-node2 ~]# systemctl status chronyd
chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enab>
   Active: active (running) since Tue 2022-09-06
//Set up passwordless SSH login from the master to all hosts
[root@k8s-master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LZeVhmrafNhs4eAGG8dNQltVYcGX/sXbKj/dPzR/wNo root@k8s-master
The key's randomart image is:
+---[RSA 3072]----+
| . ...o=o.|
| . o . o...|
| o o + .o |
| . * + .o|
| o S * . =|
| @ O . o+o|
| o * * o.++|
| . o o E.=|
| o..=|
+----[SHA256]-----+
[root@k8s-master ~]# ssh-copy-id k8s-master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master (192.168.106.16)' can't be established.
ECDSA key fingerprint is SHA256:1x2Tw0BYQrGTk7wpwsIy+TtFN72hWbHYYiU6WtI/Ojk.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.
[root@k8s-master ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (192.168.106.20)' can't be established.
ECDSA key fingerprint is SHA256:75svPGZTNSPdFX6K4lCDkoQfG10Y478mu0NzQD7HpnA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.
[root@k8s-master ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (192.168.106.21)' can't be established.
ECDSA key fingerprint is SHA256:75svPGZTNSPdFX6K4lCDkoQfG10Y478mu0NzQD7HpnA.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.
[root@k8s-master ~]# ssh k8s-master
Activate the web console with: systemctl enable --now cockpit.socket
This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register
Last login: Tue Sep 6 15:10:17 2022 from 192.168.106.1
[root@k8s-master ~]# ssh k8s-node1
Activate the web console with: systemctl enable --now cockpit.socket
This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register
Last login: Tue Sep 6 15:10:18 2022 from 192.168.106.1
[root@k8s-node1 ~]# exit
logout
Connection to k8s-node1 closed.
[root@k8s-master ~]# ssh k8s-node2
Activate the web console with: systemctl enable --now cockpit.socket
This system is not registered to Red Hat Insights. See https://cloud.redhat.com/
To register this system, run: insights-client --register
Last login: Tue Sep 6 15:10:18 2022 from 192.168.106.1
[root@k8s-node2 ~]# exit
logout
Connection to k8s-node2 closed.
[root@k8s-master ~]# reboot      //reboot so that all of the above takes effect permanently

Install Docker, kubeadm and kubelet on All Nodes

Install Docker

##Note: the Docker version must be identical on all hosts
//Run the following steps on every host
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
--2022-09-06 16:31:03-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 45.253.17.211, 45.253.17.216, 45.253.17.213, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|45.253.17.211|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2081 (2.0K) [application/octet-stream]
Saving to: '/etc/yum.repos.d/docker-ce.repo'
/etc/yum.repos.d/docke 100%[=========================>] 2.03K --.-KB/s    in 0.002s
2022-09-06 16:31:04 (1.05 MB/s) - '/etc/yum.repos.d/docker-ce.repo' saved [2081/2081]
[root@k8s-master ~]# dnf list all|grep docker
containerd.io.x86_64       1.6.8-3.1.el8       @docker-ce-stable
docker-ce.x86_64           3:20.10.17-3.el8    @docker-ce-stable      //this is the package to install
docker-ce-cli.x86_64       1:20.10.17-3.el8    @docker-ce-stable
[root@k8s-master ~]# dnf -y install docker-ce
[root@k8s-master ~]# systemctl enable --now docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.17      //the version must be the same on every host
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:03:11 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:01:29 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
[root@k8s-master ~]#
//Configure the registry mirror (accelerator) and the cgroup driver
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
[root@k8s-master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],   //registry mirror (accelerator)
  "exec-opts": ["native.cgroupdriver=systemd"],                   //cgroup driver; must match the one kubelet uses
  "log-driver": "json-file",                                      //log format
  "log-opts": {
    "max-size": "100m"                                            //rotate container logs at 100 MB
  },
  "storage-driver": "overlay2"                                    //storage driver
}
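
After writing daemon.json, Docker has to be restarted for the systemd cgroup driver and the registry mirror to take effect; a minimal check, assuming the configuration above (run on every host):

systemctl restart docker
# Should print "systemd"; kubelet expects the same cgroup driver
docker info --format '{{.CgroupDriver}}'
# Should show the mirror configured above
docker info | grep -A 1 'Registry Mirrors'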

Add the Kubernetes yum Repository


[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
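
Because Docker, kubeadm and kubelet are needed on every node, the two repository files can be pushed to the workers over the passwordless SSH configured earlier; a minimal sketch, run from the master:

for host in k8s-node1 k8s-node2; do
  scp /etc/yum.repos.d/docker-ce.repo  $host:/etc/yum.repos.d/
  scp /etc/yum.repos.d/kubernetes.repo $host:/etc/yum.repos.d/
done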

Install kubeadm, kubelet and kubectl

//the versions of kubelet, kubeadm and kubectl must match
[root@k8s-master ~]# dnf list all|grep kubelet
kubelet.x86_64   1.25.0-0   kubernetes
[root@k8s-master ~]# dnf list all|grep kubeadm
kubeadm.x86_64   1.25.0-0   kubernetes
[root@k8s-master ~]# dnf list all|grep kubectl
kubectl.x86_64   1.25.0-0   kubernetes
[root@k8s-master ~]# dnf -y install kubelet kubeadm kubectl
[root@k8s-master ~]# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
[root@k8s-master ~]# systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor>
   Drop-In: /usr/lib/systemd/system/kubelet.service.d
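
To keep the versions identical everywhere, the packages can also be installed on the two workers with the version pinned explicitly; a sketch assuming the 1.25.0-0 packages listed above:

for host in k8s-node1 k8s-node2; do
  ssh $host 'dnf -y install kubelet-1.25.0 kubeadm-1.25.0 kubectl-1.25.0 && systemctl enable kubelet'
done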

Deploy the Kubernetes Control Plane on the master Node

[root@k8s-master ~]# kubeadm init -h      //read the help text first
[root@k8s-master ~]# cd /etc/containerd/
[root@k8s-master containerd]# containerd config default > config.toml      //generate the default containerd configuration
[root@k8s-master containerd]# vim config.toml
sandbox_image = "k8s.gcr.io/pause:3.6"      //change to sandbox_image = "registry.cn-beijing.aliyuncs.com/abcdocker/pause:3.6"
[root@k8s-master manifests]# systemctl stop kubelet
[root@k8s-master manifests]# systemctl restart containerd
[root@k8s-master manifests]# kubeadm init --apiserver-advertise-address 192.168.70.134 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.25.0 --service-cidr 10.96.0.0/12 --pod-network-cidr 10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:      //install a network add-on next
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae \
    --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1
//initialization succeeded
//set the environment variable
[root@k8s-master ~]# vim /etc/profile.d/k8s.sh
[root@k8s-master ~]# cat /etc/profile.d/k8s.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master ~]# source /etc/profile.d/k8s.sh
[root@k8s-master ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
//the Aliyun image repository was specified with --image-repository above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
[root@k8s-master ~]# kubectl get nodes      //NotReady is expected until a Pod network add-on is installed
NAME         STATUS     ROLES           AGE   VERSION
k8s-master   NotReady   control-plane   14m   v1.25.0

//the following output means the flannel network add-on was applied successfully
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
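
Once the flannel DaemonSet is running, the master switches from NotReady to Ready; a quick way to confirm this:

# Wait until the flannel Pod reports Running
kubectl -n kube-flannel get pods
# The control-plane node should now be Ready
kubectl get nodes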

Join the Cluster

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   15h   v1.25.0
[root@k8s-node2 containerd]# ls
config.toml
[root@k8s-node2 containerd]# mv config.toml{,.bak}
[root@k8s-node2 containerd]# ls
config.toml.bak
[root@k8s-master ~]# cd /etc/containerd/
[root@k8s-master containerd]# ls
config.toml
//copy the containerd configuration to the worker nodes
[root@k8s-master containerd]# scp /etc/containerd/config.toml k8s-node1:/etc/containerd/
config.toml 100% 6952 3.4MB/s 00:00
[root@k8s-master containerd]# scp /etc/containerd/config.toml k8s-node2:/etc/containerd/
config.toml 100% 6952 3.8MB/s 00:00
[root@k8s-node1 containerd]# ls
config.toml
[root@k8s-node1 containerd]# systemctl restart containerd
[root@k8s-node1 containerd]# kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:      //the following lines mean the join succeeded
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node2 containerd]# ls
config.toml config.toml.bak
[root@k8s-node2 containerd]# systemctl restart containerd
[root@k8s-node2 containerd]# kubeadm join 192.168.70.134:6443 --token h9utko.9esdw3ge9j0urwae --discovery-token-ca-cert-hash sha256:8c36d378e51b8d01f1fe904e51e1b5d7215fc76dcbaf105c798c4cda70e84ca1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master containerd]# kubectl get nodes      //both worker nodes have been added
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   15h     v1.25.0
k8s-node1    Ready    <none>          4m35s   v1.25.0
k8s-node2    Ready    <none>          3m17s   v1.25.0
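
The token embedded in the kubeadm join command expires after 24 hours by default; if another node needs to join later, a fresh, ready-to-run join command can be generated on the master:

kubeadm token create --print-join-command
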
[root@k8s-node1 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
//copy the kubeconfig and the profile script to the worker nodes
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf k8s-node1:/etc/kubernetes/admin.conf
admin.conf 100% 5638 2.7MB/s 00:00
[root@k8s-master ~]# scp /etc/kubernetes/admin.conf k8s-node2:/etc/kubernetes/admin.conf
admin.conf 100% 5638 2.9MB/s 00:00
[root@k8s-master ~]# scp /etc/profile.d/k8s.sh k8s-node2:/etc/profile.d/k8s.sh
k8s.sh 100% 45 23.6KB/s 00:00
[root@k8s-master ~]# scp /etc/profile.d/k8s.sh k8s-node1:/etc/profile.d/k8s.sh
k8s.sh 100% 45 3.8KB/s 00:00
//check on the worker nodes
[root@k8s-node1 ~]# bash      //start a new shell so the variable takes effect
[root@k8s-node1 ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@k8s-node1 ~]# kubectl get nodes      //list the nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   16h   v1.25.0
k8s-node1    Ready    <none>          21m   v1.25.0
k8s-node2    Ready    <none>          20m   v1.25.0
[root@k8s-node2 ~]# bash
[root@k8s-node2 ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@k8s-node2 ~]# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   16h   v1.25.0
k8s-node1    Ready    <none>          21m   v1.25.0
k8s-node2    Ready    <none>          20m   v1.25.0

Create a Pod in the Kubernetes Cluster to Verify It Runs Correctly

//deployment: the resource type; nginx: the Deployment name; --image=nginx: the image to run
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort      //--port: the port to expose; --type=NodePort: the Service type
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS              RESTARTS   AGE
pod/nginx-76d6c9b8c-6vnnf   0/1     ContainerCreating   0          17s
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16h
service/nginx        NodePort    10.98.79.60   <none>        80:30274/TCP   7s
[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-76d6c9b8c-6vnnf   1/1     Running   0          5m28s
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        16h
service/nginx        NodePort    10.98.79.60   <none>        80:30274/TCP   5m18s

Access from a Web Browser
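
Point a browser at any node's IP on the NodePort shown above to see the nginx welcome page. The same check can be done from the command line; a sketch assuming the addresses from the environment table and the NodePort 30274 allocated above:

# Any node IP works, because the NodePort is opened on every node
curl http://192.168.78.144:30274
curl http://192.168.78.145:30274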
