Deploying the Master Node of a Kubernetes Cluster
Deploying the apiserver
The apiserver deployment script
[root@mast-1 k8s]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1      # master node IP
ETCD_SERVERS=$2        # etcd endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
Download the binary package
[root@mast-1 k8s]# wget https://dl.k8s.io/v1.10.13/kubernetes-server-linux-amd64.tar.gz
Extract and install
[root@mast-1 k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@mast-1 k8s]# cd kubernetes/server/bin/
[root@mast-1 bin]# ls
apiextensions-apiserver cloud-controller-manager.tar kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag
cloud-controller-manager hyperkube kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar
cloud-controller-manager.docker_tag kubeadm kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter
[root@mast-1 ~]# mkdir /opt/kubernetes/{cfg,ssl,bin} -pv
mkdir: created directory '/opt/kubernetes'
mkdir: created directory '/opt/kubernetes/cfg'
mkdir: created directory '/opt/kubernetes/ssl'
mkdir: created directory '/opt/kubernetes/bin'
[root@mast-1 bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@mast-1 k8s]# ./apiserver.sh 192.168.10.11 https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379
[root@mast-1 k8s]# cd /opt/kubernetes/cfg/
[root@mast-1 cfg]# vi kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \            # log directory; make sure to create it
--v=4 \
--etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 \
--bind-address=192.168.10.11 \              # IP address to bind to
--secure-port=6443 \                        # HTTPS (secure) port
--advertise-address=192.168.10.11 \         # cluster advertise address; other nodes reach the apiserver via this IP
--allow-privileged=true \                   # allow privileged containers
--service-cluster-ip-range=10.0.0.0/24 \    # virtual IP range for Services (load balancing)
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \   # admission plugins; control which advanced features are enforced
--authorization-mode=RBAC,Node \            # authorization modes
--kubelet-https=true \                      # the apiserver connects to kubelets over HTTPS
--enable-bootstrap-token-auth \             # authenticate clients with bootstrap tokens so certificates can be issued automatically
--token-auth-file=/opt/kubernetes/cfg/token.csv \   # token file
--service-node-port-range=30000-50000 \     # NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem \    # apiserver certificate
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \   # CA key used for service account tokens
--etcd-cafile=/opt/etcd/ssl/ca.pem \        # etcd certificates
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
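The --log-dir setting above is easy to miss: the directory is not created automatically, so create it before starting the service (a one-line step, path taken from the config above):

mkdir -pv /opt/kubernetes/logs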
Generate certificates and the token file
[root@mast-1 k8s]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF

cat > ca-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"10.206.176.19", master IP
"10.206.240.188", LB;node节点不用写,写上也不错
"10.206.240.189", LB:
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@mast-1 k8s]# bash k8s-cert.sh
2019/04/22 18:05:08 [INFO] generating a new CA key and certificate from CSR
2019/04/22 18:05:08 [INFO] generate received request
2019/04/22 18:05:08 [INFO] received CSR
2019/04/22 18:05:08 [INFO] generating key: rsa-2048
2019/04/22 18:05:09 [INFO] encoded CSR
2019/04/22 18:05:09 [INFO] signed certificate with serial number 631400127737303589248201910249856863284562827982
2019/04/22 18:05:09 [INFO] generate received request
2019/04/22 18:05:09 [INFO] received CSR
2019/04/22 18:05:09 [INFO] generating key: rsa-2048
2019/04/22 18:05:10 [INFO] encoded CSR
2019/04/22 18:05:10 [INFO] signed certificate with serial number 99345466047844052770348056449571016254842578399
2019/04/22 18:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:10 [INFO] generate received request
2019/04/22 18:05:10 [INFO] received CSR
2019/04/22 18:05:10 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 309283889504556884051139822527420141544215396891
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:11 [INFO] generate received request
2019/04/22 18:05:11 [INFO] received CSR
2019/04/22 18:05:11 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 286610519064253595846587034459149175950956557113
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@mast-1 k8s]# ls
admin.csr apiserver.sh ca-key.pem etcd-cert.sh kube-proxy.csr kubernetes scheduler.sh server.pem
admin-csr.json ca-config.json ca.pem etcd.sh kube-proxy-csr.json kubernetes-server-linux-amd64.tar.gz server.csr
admin-key.pem ca.csr controller-manager.sh k8s-cert kube-proxy-key.pem kubernetes.tar.gz server-csr.json
admin.pem ca-csr.json etcd-cert k8s-cert.sh kube-proxy.pem master.zip
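Before copying the certificates into place, it is worth checking that server.pem actually contains the hosts you expect. One way to do this (assuming openssl is installed; cfssl-certinfo would also work):

# Print the SAN list of the apiserver certificate
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"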
Generate the token file
[root@mast-1 k8s]# cp ca-key.pem ca.pem server-key.pem server.pem /opt/kubernetes/ssl/
[root@mast-1 k8s]# BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
[root@mast-1 k8s]# cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
[root@mast-1 k8s]# cat token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@mast-1 k8s]# mv token.csv /opt/kubernetes/cfg/
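The token above is a fixed example value; if you prefer to generate a fresh random one, a sketch like the following works (any 16-byte hex string is acceptable):

# Generate a random 32-character hex bootstrap token
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo ${BOOTSTRAP_TOKEN}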
Start the apiserver
[root@mast-1 k8s]# systemctl start kube-apiserver
[root@mast-1 k8s]# ps -ef | grep apiserver
root       3264      1 99 20:35 ?        00:00:01 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 --bind-address=192.168.10.11 --secure-port=6443 --advertise-address=192.168.10.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root       3274   1397  0 20:35 pts/0    00:00:00 grep --color=auto apiserver
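Seeing the process is not the same as the apiserver being healthy. A quick check against the local insecure port (8080 here, per the defaults used in this setup) and the unit status is a useful extra step:

# Should print "ok" if the apiserver is serving requests
curl http://127.0.0.1:8080/healthz
systemctl status kube-apiserver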
Generate the configuration file and start controller-manager
[root@mast-1 k8s]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1      # apiserver address

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\    # logging settings
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\    # apiserver insecure address:port
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
[root@mast-1 k8s]# bash controller-manager.sh 127.0.0.1    # pass the apiserver address; 127.0.0.1 works because controller-manager runs on the master and uses the local insecure port 8080
[root@mast-1 k8s]# ss -lntp
State    Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN   0      128     192.168.10.11:6443    *:*      users:(("kube-apiserver",pid=7604,fd=6))
LISTEN   0      128     192.168.10.11:2379    *:*      users:(("etcd",pid=1428,fd=7))
LISTEN   0      128     127.0.0.1:2379        *:*      users:(("etcd",pid=1428,fd=6))
LISTEN   0      128     127.0.0.1:10252       *:*      users:(("kube-controller",pid=7593,fd=3))
LISTEN   0      128     192.168.10.11:2380    *:*      users:(("etcd",pid=1428,fd=5))
LISTEN   0      128     127.0.0.1:8080        *:*      users:(("kube-apiserver",pid=7604,fd=5))
LISTEN   0      128     *:22                  *:*      users:(("sshd",pid=902,fd=3))
LISTEN   0      100     127.0.0.1:25          *:*      users:(("master",pid=1102,fd=13))
LISTEN   0      128     :::10257              :::*     users:(("kube-controller",pid=7593,fd=5))
LISTEN   0      128     :::22                 :::*     users:(("sshd",pid=902,fd=4))
LISTEN   0      100     ::1:25                :::*     users:(("master",pid=1102,fd=14))
Generate the configuration file and start the scheduler
[root@mast-1 k8s]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1      # apiserver address

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect" EOF cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
[root@mast-1 k8s]# bash scheduler.sh 127.0.0.1
[root@mast-1 k8s]# ss -lntp
State    Recv-Q Send-Q  Local Address:Port    Peer Address:Port
LISTEN   0      128     192.168.10.11:2379    *:*      users:(("etcd",pid=1428,fd=7))
LISTEN   0      128     127.0.0.1:2379        *:*      users:(("etcd",pid=1428,fd=6))
LISTEN   0      128     127.0.0.1:10252       *:*      users:(("kube-controller",pid=7809,fd=3))
LISTEN   0      128     192.168.10.11:2380    *:*      users:(("etcd",pid=1428,fd=5))
LISTEN   0      128     *:22                  *:*      users:(("sshd",pid=902,fd=3))
LISTEN   0      100     127.0.0.1:25          *:*      users:(("master",pid=1102,fd=13))
LISTEN   0      128     :::10251              :::*     users:(("kube-scheduler",pid=8073,fd=3))
LISTEN   0      128     :::10257              :::*     users:(("kube-controller",pid=7809,fd=5))
LISTEN   0      128     :::22                 :::*     users:(("sshd",pid=902,fd=4))
LISTEN   0      100     ::1:25                :::*     users:(("master",pid=1102,fd=14))
The kube-controller-manager configuration file
[root@mast-1 k8s]# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \                   # apiserver connection address
--leader-elect=true \                       # leader election for high availability
--address=127.0.0.1 \                       # listen address; not exposed externally
--service-cluster-ip-range=10.0.0.0/24 \    # must match the apiserver's service-cluster-ip-range
--cluster-name=kubernetes \                 # cluster name
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \      # signing certificate
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \   # signing key
--root-ca-file=/opt/kubernetes/ssl/ca.pem \                   # root CA certificate
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"           # validity period of signed certificates
The kube-scheduler configuration file
[root@mast-1 k8s]# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Copy the client tool to /usr/bin
[root@mast-1 k8s]# cp kubernetes/server/bin/kubectl /usr/bin/
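With no kubeconfig in place, kubectl in this version falls back to the local insecure endpoint (127.0.0.1:8080), so a quick connectivity check can be run directly on the master before querying component status (optional, not part of the original procedure):

kubectl version
kubectl cluster-info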
Check the cluster status
[root@mast-1 k8s]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
controller-manager Healthy ok
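As an extra cross-check, each component also exposes a /healthz endpoint on the local ports seen in the ss output above (ports assume the default configuration used in this walkthrough):

curl http://127.0.0.1:8080/healthz     # kube-apiserver (local insecure port)
curl http://127.0.0.1:10252/healthz    # kube-controller-manager
curl http://127.0.0.1:10251/healthz    # kube-scheduler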