Distribute the configuration files to each node

[root@master ~]# scp /opt/kubernetes/cfg/*kubeconfig root@192.168.238.128:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
kube-proxy.kubeconfig 100% 5315 5.2KB/s 00:00
[root@master ~]# scp /opt/kubernetes/cfg/*kubeconfig root@192.168.238.129:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
kube-proxy.kubeconfig 100% 5315 5.2KB/s 00:00
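If more nodes are added later, the same copy can be scripted. A minimal sketch, assuming passwordless SSH from the master to the two node IPs used above:

for node in 192.168.238.128 192.168.238.129; do
    # copy the bootstrap and kube-proxy kubeconfig files to each node
    scp /opt/kubernetes/cfg/*kubeconfig root@${node}:/opt/kubernetes/cfg/
done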

Deploy the node package

[root@node01 ~]# wget https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz
[root@node01 ~]# tar -xf kubernetes-node-linux-amd64.tar.gz
[root@node01 bin]# pwd
/root/kubernetes/node/bin
[root@node01 bin]# ls
kubeadm kubectl kubelet kube-proxy
[root@node01 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@node01 bin]# chmod +x /opt/kubernetes/bin/*
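As a quick sanity check (not part of the original session), the copied binaries can be asked for their version, which also confirms they are executable:

/opt/kubernetes/bin/kubelet --version
/opt/kubernetes/bin/kube-proxy --version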
[root@node01 bin]# cat kubelet.sh
#!/bin/bash
NODE_ADDRESS=${1:-"192.168.238.129"}
DNS_SERVER_IP=${2:-"10.10.10.2"} cat <<EOF >/opt/kubernetes/cfg/kubelet KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF cat <<EOF >/usr/lib/systemd/system/kubelet.service [Unit]
Description=Kubernetes kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node01 bin]# sh kubelet.sh 192.168.238.129 10.10.10.2
The kubelet failed to start:
[root@node01 bin]# systemctl status kubelet
● kubelet.service - Kubernetes kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2019-07-09 08:42:39 CST; 5s ago
Process: 16005 ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS (code=exited, status=1/FAILURE)
Main PID: 16005 (code=exited, status=1/FAILURE)
Jul 09 08:42:39 node01 systemd[1]: kubelet.service: main process exited, code=exited, sta...URE
Jul 09 08:42:39 node01 systemd[1]: Unit kubelet.service entered failed state.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service failed.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jul 09 08:42:39 node01 systemd[1]: Stopped Kubernetes kubelet.
Jul 09 08:42:39 node01 systemd[1]: start request repeated too quickly for kubelet.service
Jul 09 08:42:39 node01 systemd[1]: Failed to start Kubernetes kubelet.
Jul 09 08:42:39 node01 systemd[1]: Unit kubelet.service entered failed state.
Jul 09 08:42:39 node01 systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
The error log shows the cause: the kubelet-bootstrap user has no permission to create certificate signing requests, so this user must be bound to the appropriate cluster role. The fix is to run the following on the master: kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
[root@node01 bin]# tail -n 50 /var/log/messages
Jul 9 08:42:39 localhost kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
Apply the fix on the master
[root@master ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@master ssl]# scp bootstrap.kubeconfig root@192.168.238.129:/opt/kubernetes/cfg/
bootstrap.kubeconfig 100% 1881 1.8KB/s 00:00
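As an optional check (not in the original walkthrough), the new binding can be inspected on the master to confirm it maps the kubelet-bootstrap user to the system:node-bootstrapper cluster role:

kubectl describe clusterrolebinding kubelet-bootstrap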
Restart kubelet on the node
[root@node01 bin]# systemctl start kubelet
[root@node01 bin]# systemctl status kubelet
● kubelet.service - Kubernetes kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 08:49:02 CST; 7s ago
Main PID: 16515 (kubelet)
Memory: 15.1M
CGroup: /system.slice/kubelet.service
└─16515 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --address=192.168.23...
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.587730 16515 controller.go:114] k...ler
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.587734 16515 controller.go:118] k...ags
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.720859 16515 mount_linux.go:210] ...emd
Jul 09 08:49:02 node01 kubelet[16515]: W0709 08:49:02.720943 16515 cni.go:171] Unable t...t.d
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.723626 16515 iptables.go:589] cou...ait
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.724943 16515 server.go:182] Versi....11
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.724968 16515 feature_gate.go:226]...[]}
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725028 16515 plugins.go:101] No c...ed.
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725035 16515 server.go:303] No cl... ""
Jul 09 08:49:02 node01 kubelet[16515]: I0709 08:49:02.725047 16515 bootstrap.go:58] Usi...ile
Hint: Some lines were ellipsized, use -l to show in full.
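To follow the kubelet log while the bootstrap certificate request is being submitted, journalctl can be used; this is an optional check, not part of the original steps:

journalctl -u kubelet -f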
[root@node01 bin]# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=192.168.238.129 \
--hostname-override=192.168.238.129 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node01 bin]# cat proxy.sh
#!/bin/bash
NODE_ADDRESS=${1:-"192.168.238.129"} cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=${NODE_ADDRESS} \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target [Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure [Install]
WantedBy=multi-user.target
EOF systemctl daemon-reload
systemctl start kube-proxy.service
systemctl status kube-proxy.service
systemctl enable kube-proxy.service [root@node01 bin]# sh proxy.sh 192.168.238.129
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-07-15 05:50:58 CST; 19ms ago
Main PID: 10759 (kube-proxy)
Memory: 1.8M
CGroup: /system.slice/kube-proxy.service
Jul 15 05:50:58 node01 systemd[1]: kube-proxy.service holdoff time over, scheduling restart.
Jul 15 05:50:58 node01 systemd[1]: Stopped Kubernetes Proxy.
Jul 15 05:50:58 node01 systemd[1]: Started Kubernetes Proxy.
[root@node01 bin]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-07-15 05:50:58 CST; 16s ago
Main PID: 10759 (kube-proxy)
Memory: 25.2M
CGroup: /system.slice/kube-proxy.service
‣ 10759 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.238.129 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig
Jul 15 05:51:05 node01 kube-proxy[10759]: I0715 05:51:05.178631 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:06 node01 kube-proxy[10759]: I0715 05:51:06.006411 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:07 node01 kube-proxy[10759]: I0715 05:51:07.186278 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:08 node01 kube-proxy[10759]: I0715 05:51:08.013543 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:09 node01 kube-proxy[10759]: I0715 05:51:09.192931 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:10 node01 kube-proxy[10759]: I0715 05:51:10.021327 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:11 node01 kube-proxy[10759]: I0715 05:51:11.199918 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:12 node01 kube-proxy[10759]: I0715 05:51:12.028701 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:13 node01 kube-proxy[10759]: I0715 05:51:13.207165 10759 config.go:141] Calling handler.OnEndpointsUpdate
Jul 15 05:51:14 node01 kube-proxy[10759]: I0715 05:51:14.035887 10759 config.go:141] Calling handler.OnEndpointsUpdate
[root@node01 bin]# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true --v=4 --hostname-override=192.168.238.129 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Check the node CSR requests on the master
[root@master bin]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k 8s kubelet-bootstrap Pending
node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 47s kubelet-bootstrap Pending
Approve the node requests
[root@master bin]# kubectl certificate approve node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k
certificatesigningrequest.certificates.k8s.io/node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k approved
[root@master bin]# kubectl certificate approve node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8
certificatesigningrequest.certificates.k8s.io/node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 approved
[root@master bin]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-MiuGllbQ1uICHoHLb_EspVlTUy_1vsdHaN62XUVAX0k 2m7s kubelet-bootstrap Approved,Issued
node-csr-TFcNOIGV2VHnkiqGIzxyWjhR9bEb576oP33SnyxLAy8 2m46s kubelet-bootstrap Approved,Issued
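When many nodes join at once, approving each CSR by name gets tedious. A hedged alternative (not used in the original run) is to approve every request in one pass:

kubectl get csr -o name | xargs kubectl certificate approve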
View the node information
[root@master bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.238.128 Ready <none> 100s v1.9.11
192.168.238.129 Ready <none> 110s v1.9.11
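The ROLES column shows <none> because no role label has been applied to the worker nodes. If a label is wanted for readability, one can be added; the label value here is only a naming convention and is not required for the node to work:

kubectl label node 192.168.238.129 node-role.kubernetes.io/node=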
View the cluster status
[root@master bin]# kubectl get cs
NAME STATUS MESSAGE ERROR
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
controller-manager Healthy ok
View the certificates automatically issued to the node
[root@node01 ~]# ls /opt/kubernetes/ssl/kubelet-client.*
/opt/kubernetes/ssl/kubelet-client.crt /opt/kubernetes/ssl/kubelet-client.key
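To check what was issued, the client certificate's subject and validity window can be printed with openssl (an extra verification step, not in the original transcript):

openssl x509 -in /opt/kubernetes/ssl/kubelet-client.crt -noout -subject -dates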

Perform the same steps on node 2.

Delete a single node's CSR

kubectl delete csr <csr-name>

Delete all CSRs

kubectl delete csr --all

Delete a node that has joined the cluster

kubectl delete nodes <node-name>

Delete all nodes

kubectl delete nodes --all
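If the node to be removed is already running pods, it is usually safer to cordon and drain it before deleting it. A sketch, using 192.168.238.129 as an example node name; --delete-local-data matches the older kubectl used here (newer versions rename it to --delete-emptydir-data):

kubectl cordon 192.168.238.129
kubectl drain 192.168.238.129 --ignore-daemonsets --delete-local-data
kubectl delete node 192.168.238.129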
