Deploying a Complete Kubernetes Cluster from Binaries

Server layout:

| Role                   | IP                                 | Components                                                    |
|------------------------|------------------------------------|---------------------------------------------------------------|
| k8s-master1            | 192.168.31.63                      | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-master2            | 192.168.31.64                      | kube-apiserver, kube-controller-manager, kube-scheduler       |
| k8s-node1              | 192.168.31.65                      | kubelet, kube-proxy, docker, etcd                             |
| k8s-node2              | 192.168.31.66                      | kubelet, kube-proxy, docker, etcd                             |
| Load Balancer (Master) | 192.168.31.61, 192.168.31.60 (VIP) | Nginx L4                                                      |
| Load Balancer (Backup) | 192.168.31.62                      | Nginx L4                                                      |
1. System Initialization
Disable the firewall:

# systemctl stop firewalld
# systemctl disable firewalld

Disable SELinux:

# setenforce 0                                        # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent

Disable swap:

# swapoff -a        # temporary
# vim /etc/fstab    # permanent: comment out the swap entry

Synchronize the system time:

# ntpdate time.windows.com

Add hosts entries:

# vim /etc/hosts
192.168.31.63 k8s-master1
192.168.31.64 k8s-master2
192.168.31.65 k8s-node1
192.168.31.66 k8s-node2

Set the hostname:

# hostnamectl set-hostname k8s-master1
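The steps above must be repeated on every machine in the table. A minimal sketch that applies them in one pass — a convenience script, not part of the original package; the node's hostname is passed as an argument:

# cat init.sh
#!/bin/bash
# Usage: ./init.sh k8s-master1   -- run once per node with that node's name
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out the swap entry
ntpdate time.windows.com
cat >> /etc/hosts <<EOF
192.168.31.63 k8s-master1
192.168.31.64 k8s-master2
192.168.31.65 k8s-node1
192.168.31.66 k8s-node2
EOF
hostnamectl set-hostname "$1"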
2. Etcd Cluster
The following operations can be performed on any one node.
2.1 Generate the etcd certificates
# cd TLS/etcd
Install the cfssl tools:
# ./cfssl.sh
Edit the hosts field in the request file to include all etcd node IPs:
# vi server-csr.json
{
    "CN": "etcd",
    "hosts": [
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
# ./generate_etcd_cert.sh
# ls *pem
ca-key.pem ca.pem server-key.pem server.pem
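Before copying the certificates out, it can be worth confirming the SANs really contain every etcd IP; a quick check with openssl (standard flags, output abridged):

# openssl x509 -in server.pem -noout -text | grep -A 1 'Subject Alternative Name'
X509v3 Subject Alternative Name:
    IP Address:192.168.31.63, IP Address:192.168.31.64, IP Address:192.168.31.65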
2.2 Deploy the three etcd nodes
# tar zxvf etcd.tar.gz
# cd etcd
# cp TLS/etcd/ssl/{ca,server,server-key}.pem ssl
Copy to each of the three etcd nodes in turn:
# scp -r etcd root@192.168.31.63:/opt
# scp etcd.service root@192.168.31.63:/usr/lib/systemd/system
Log in to each of the three nodes and edit the node name and IPs in the config file:
# vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.63:2380,etcd-2=https://192.168.31.64:2380,etcd-3=https://192.168.31.65:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# systemctl start etcd
# systemctl enable etcd
2.3 Check the cluster status
# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \
cluster-health

member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379
member b10f0bac3883a232 is healthy: got healthy result from https://192.168.31.64:2379
member b46624837acedac9 is healthy: got healthy result from https://192.168.31.65:2379
cluster is healthy
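If the bundled etcdctl also supports the v3 API, the equivalent health check looks like this — note the TLS flag names differ from v2 (--cacert/--cert/--key instead of --ca-file/--cert-file/--key-file):

# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \
endpoint health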
3. Deploying the Master Node
3.1 Generate the apiserver certificate
# cd TLS/k8s
Edit the hosts field in the request file to include every address that will reach the apiserver: the cluster service IP, the masters, the nodes, the load balancers, and the VIP:
# vi server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.31.60",
        "192.168.31.61",
        "192.168.31.62",
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65",
        "192.168.31.66"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# ./generate_k8s_cert.sh
# ls *pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
3.2 Deploy the apiserver, controller-manager, and scheduler
Perform the following on the master node.
Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
The binaries are located in kubernetes/server/bin.
# tar zxvf k8s-master.tar.gz
# cd kubernetes
# cp TLS/k8s/ssl/*.pem ssl
# cp -rf kubernetes /opt
# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \
--bind-address=192.168.31.63 \
--secure-port=6443 \
--advertise-address=192.168.31.63 \
……

# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
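Once all three services are running, a quick sanity check from the master (assuming the kubectl binary from the package has been copied somewhere on PATH, e.g. /usr/bin) — expected output along these lines:

# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}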
3.3 Enable TLS Bootstrapping
Authorize kubelet TLS bootstrapping:
# cat /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,,"system:node-bootstrapper"
Format: token, user, uid, user group
Grant permissions to kubelet-bootstrap:
# kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
You can also generate your own token and substitute it:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
However, the token configured in the apiserver must match the one in each node's bootstrap.kubeconfig.
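A sketch of rotating the token while keeping both sides in sync (paths are the ones used in this document; run the second sed against bootstrap.kubeconfig on each node):

# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# sed -i "s/^[^,]*/${TOKEN}/" /opt/kubernetes/cfg/token.csv    # master: replace the first (token) field
# systemctl restart kube-apiserver
# sed -i "s/token: .*/token: ${TOKEN}/" bootstrap.kubeconfig   # node: match the new token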
4. Deploying the Worker Nodes
4.1 Install Docker
Binary package download: https://download.docker.com/linux/static/stable/x86_64/
# tar zxvf k8s-node.tar.gz
# tar zxvf docker-18.09.6.tgz
# mv docker/* /usr/bin
# mkdir /etc/docker
# mv daemon.json /etc/docker
# mv docker.service /usr/lib/systemd/system
# systemctl start docker
# systemctl enable docker
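The daemon.json shipped in the package is not reproduced here; a typical minimal one would look like the following — the mirror and log settings are illustrative assumptions, not requirements:

# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}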
4.2 Deploy kubelet and kube-proxy
Copy the certificates to the node:
# cd TLS/k8s
# scp ca.pem kube-proxy*.pem root@192.168.31.65:/opt/kubernetes/ssl/
# tar zxvf k8s-node.tar.gz
# mv kubernetes /opt
# cp kubelet.service kube-proxy.service /usr/lib/systemd/system
Update the server IP address in the following three files:
# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.31.63:6443
kubelet.kubeconfig:    server: https://192.168.31.63:6443
kube-proxy.kubeconfig:    server: https://192.168.31.63:6443
Update the hostname in the following two files:
# grep hostname *
kubelet.conf:--hostname-override=k8s-node1 \
kube-proxy-config.yml:hostnameOverride: k8s-node1

# systemctl start kubelet
# systemctl start kube-proxy
# systemctl enable kubelet
# systemctl enable kube-proxy
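When repeating this on k8s-node2, both fields can be patched in one pass before starting the services (a sketch, run from /opt/kubernetes/cfg):

# sed -i 's/k8s-node1/k8s-node2/' kubelet.conf kube-proxy-config.yml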
4.3 Approve certificate issuance for the node
# kubectl get csr
# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
# kubectl get node
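With several nodes registering at once, the pending CSRs can also be approved in bulk with standard kubectl:

# kubectl get csr --no-headers | awk '/Pending/ {print $1}' | xargs kubectl certificate approve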
4.4 Deploy the CNI network
Binary package download: https://github.com/containernetworking/plugins/releases
# mkdir -p /opt/cni/bin /etc/cni/net.d
# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
Make sure the kubelet has CNI enabled:
# cat /opt/kubernetes/cfg/kubelet.conf
……
--network-plugin=cni
……
Reference for pod network add-ons: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Run on the master:
# kubectl apply -f kube-flannel.yaml
# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-5xmhh   1/1     Running   0          171m
kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m
4.5 Authorize apiserver access to the kubelet
For security, the kubelet rejects anonymous access; the apiserver must be explicitly authorized before it can reach the kubelet API.
# cat /opt/kubernetes/cfg/kubelet-config.yml
……
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
……
# kubectl apply -f apiserver-to-kubelet-rbac.yaml
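A quick way to confirm the binding took effect is any command that makes the apiserver call back into a kubelet, e.g. fetching logs from one of the flannel pods seen earlier (use whatever pod name kubectl get pods shows):

# kubectl logs -n kube-system kube-flannel-ds-amd64-5xmhh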
5. Deploying the Web UI and DNS
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
# vi recommended.yaml
…
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
…
# kubectl apply -f recommended.yaml
Create a service account and bind it to the default cluster-admin role:
# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# kubectl apply -f dashboard-adminuser.yaml
Retrieve the token:
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Access URL: https://NodeIP:30001
Log in to the Dashboard with the token from the output.
# kubectl apply -f coredns.yaml
# kubectl get pods -n kube-system
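Once the CoreDNS pod is running, a classic smoke test is resolving the kubernetes service from a throwaway pod. busybox:1.28.4 is commonly used because nslookup is broken in newer busybox images; the 10.0.0.2 DNS address shown is an assumption matching the 10.0.0.x service range used in the apiserver certificate:

# kubectl run -it --rm dns-test --image=busybox:1.28.4 --restart=Never -- nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local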
6. Master High Availability
6.1 Deploy the Master2 components (identical to Master1)
Copy /opt/kubernetes and the systemd service files from master1:
# scp -r /opt/kubernetes root@192.168.31.64:/opt
# scp -r /opt/etcd/ssl root@192.168.31.64:/opt/etcd
# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.31.64:/usr/lib/systemd/system
Change the apiserver config file to use the local IP:
# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \
--bind-address=192.168.31.64 \
--secure-port=6443 \
--advertise-address=192.168.31.64 \
……

# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
6.2 Deploy the Nginx load balancer
Nginx RPM package: http://nginx.org/packages/rhel/7/x86_64/RPMS/
# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
# vim /etc/nginx/nginx.conf
……
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.31.63:6443;
        server 192.168.31.64:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
……
# systemctl start nginx
# systemctl enable nginx
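Before pointing any node at the load balancer, confirm nginx is actually listening on the apiserver port; it should show an nginx process bound to *:6443:

# ss -lntp | grep 6443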
6.3 Nginx + Keepalived high availability
Master node:
# yum install keepalived
# vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP route ID for this instance; must be unique per instance
    priority 100            # priority; set lower (e.g. 90) on the backup
    advert_int 1            # VRRP heartbeat advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}
# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
# systemctl start keepalived
# systemctl enable keepalived
Backup node:
# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP route ID; must match the master
    priority 90             # lower priority than the master
    advert_int 1            # VRRP heartbeat advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}
# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
# systemctl start keepalived
# systemctl enable keepalived
Test:
# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9d:ee:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.63/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.60/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:ee30/64 scope link
       valid_lft forever preferred_lft forever
Stop nginx on the master and verify that the VIP fails over to the backup node.
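A concrete failover drill (run on the node that currently holds the VIP; ens33 follows this document's interface name):

# systemctl stop nginx                              # check_nginx.sh now exits 1 and keepalived drops out
# ip a show ens33 | grep 192.168.31.60              # the VIP should be gone here
# ssh 192.168.31.62 "ip a | grep 192.168.31.60"     # ...and should appear on the backup
# systemctl start nginx                             # the master reclaims the VIP (preemption is on by default)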
6.4 Point the Nodes at the VIP
Test that the VIP works:
# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.31.60:6443/version
{
"major": "",
"minor": "",
"gitVersion": "v1.16.0",
"gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
"gitTreeState": "clean",
"buildDate": "2019-09-18T14:27:17Z",
"goVersion": "go1.12.9",
"compiler": "gc",
"platform": "linux/amd64"
}
Point the nodes at the VIP:
# cd /opt/kubernetes/cfg
# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.31.63:6443
kubelet.kubeconfig:    server: https://192.168.31.63:6443
kube-proxy.kubeconfig:    server: https://192.168.31.63:6443
Batch modify:
sed -i 's#192.168.31.63#192.168.31.60#g' *
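The kubeconfig files are only read at startup, so restart both services on every node after the change, then confirm from a master that the nodes are still Ready:

# systemctl restart kubelet kube-proxy
# kubectl get node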