Deploying a Complete Kubernetes Cluster from Binaries
Server Planning

| Role | IP | Components |
|------|----|------------|
| k8s-master1 | 192.168.31.63 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-master2 | 192.168.31.64 | kube-apiserver, kube-controller-manager, kube-scheduler |
| k8s-node1 | 192.168.31.65 | kubelet, kube-proxy, docker, etcd |
| k8s-node2 | 192.168.31.66 | kubelet, kube-proxy, docker, etcd |
| Load Balancer (Master) | 192.168.31.61, 192.168.31.60 (VIP) | Nginx L4 |
| Load Balancer (Backup) | 192.168.31.62 | Nginx L4 |
1 - System Initialization
Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld
Disable SELinux:
# setenforce 0                                          # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config    # permanent (takes effect after reboot)
Disable swap:
# swapoff -a        # temporary
# vim /etc/fstab    # permanent: comment out the swap entry
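Both settings are easy to verify right away (the kubelet refuses to start with swap still active):
# getenforce    # expect Permissive now, Disabled after a reboot
# free -m       # the Swap line should read 0 total / 0 used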
Sync the system time:
# ntpdate time.windows.com
Add hosts entries:
# vim /etc/hosts
192.168.31.63 k8s-master1
192.168.31.64 k8s-master2
192.168.31.65 k8s-node1
192.168.31.66 k8s-node2
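Instead of editing every machine by hand, the file can be pushed from one node (a sketch, assuming root SSH access to the other hosts planned above):
# for ip in 192.168.31.64 192.168.31.65 192.168.31.66; do scp /etc/hosts root@$ip:/etc/hosts; done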
Set the hostname (repeat on each node with its own name):
hostnamectl set-hostname k8s-master1
2 - Etcd Cluster
The following steps can be performed on any node.
2.1 Generate etcd certificates
# cd TLS/etcd
Install the cfssl tools:
# ./cfssl.sh
Edit the hosts field in the CSR file to include all etcd node IPs:
# vi server-csr.json
{
    "CN": "etcd",
    "hosts": [
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
# ./generate_etcd_cert.sh
# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem
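Before distributing the certificates, it is worth confirming that the SANs really cover every etcd member; a quick check, assuming openssl is installed:
# openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'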
2.2 Deploy the three etcd nodes
# tar zxvf etcd.tar.gz
# cd etcd
# cp TLS/etcd/ssl/{ca,server,server-key}.pem ssl
Copy them to each of the three etcd nodes (repeat for 192.168.31.64 and 192.168.31.65):
# scp -r etcd root@192.168.31.63:/opt
# scp etcd.service root@192.168.31.63:/usr/lib/systemd/system
Log in to each of the three nodes and adjust the node name and IPs in the config file:
# vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.63:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.63:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.63:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.63:2380,etcd-2=https://192.168.31.64:2380,etcd-3=https://192.168.31.65:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# systemctl start etcd
# systemctl enable etcd
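The first member blocks at startup until a quorum of peers is reachable, so start etcd on all three nodes before treating a hang as a failure. If a member still will not come up, the journal usually says why:
# journalctl -u etcd -f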
2.3 Check cluster status
# /opt/etcd/bin/etcdctl \
> --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379" \
> cluster-health
member 37f20611ff3d9209 is healthy: got healthy result from https://192.168.31.63:2379
member b10f0bac3883a232 is healthy: got healthy result from https://192.168.31.64:2379
member b46624837acedac9 is healthy: got healthy result from https://192.168.31.65:2379
cluster is healthy
3 - Deploy the Master Node
3.1 Generate the apiserver certificate
# cd TLS/k8s
Edit the hosts field in the CSR file to include all Master, Load Balancer, and VIP addresses (it also lists the in-cluster service IP and the standard kubernetes DNS names):
# vi server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "192.168.31.60",
        "192.168.31.61",
        "192.168.31.62",
        "192.168.31.63",
        "192.168.31.64",
        "192.168.31.65",
        "192.168.31.66"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# ./generate_k8s_cert.sh
# ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
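The same SAN check as for etcd applies here; cfssl-certinfo is installed alongside the other cfssl tools, and its JSON output lists the names under a sans field:
# cfssl-certinfo -cert server.pem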
3.2 Deploy apiserver, controller-manager, and scheduler
Perform the following on the Master node.
Binary package download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161
Binary location: kubernetes/server/bin
# tar zxvf k8s-master.tar.gz
# cd kubernetes
# cp TLS/k8s/ssl/*.pem ssl
# cp -rf kubernetes /opt
# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system
# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \
--bind-address=192.168.31.63 \
--secure-port=6443 \
--advertise-address=192.168.31.63 \
……
# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
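Once all three services are running, a quick control-plane sanity check (assuming kubectl from the same package is on the PATH); scheduler, controller-manager, and every etcd member should report Healthy:
# kubectl get cs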
3.3 Enable TLS Bootstrapping
Authorize kubelet TLS Bootstrapping:
# cat /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,,"system:node-bootstrapper"
Format: token,user,uid,group
Grant permissions to kubelet-bootstrap:
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
You can also generate a token yourself and substitute it:
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
However, the token configured on the apiserver must match the one in bootstrap.kubeconfig on each Node.
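A minimal sketch of keeping the two sides consistent after generating a new token (the TOKEN variable and the sed patterns are illustrative; the paths are the ones used in this document):
# TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# sed -i "s/^[^,]*/$TOKEN/" /opt/kubernetes/cfg/token.csv                        # on the master, then restart kube-apiserver
# sed -i "s/token: .*/token: $TOKEN/" /opt/kubernetes/cfg/bootstrap.kubeconfig   # on each node, then restart kubelet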
4 - Deploy the Worker Node
4.1 Install Docker
Binary package download: https://download.docker.com/linux/static/stable/x86_64/
# tar zxvf k8s-node.tar.gz
# tar zxvf docker-18.09..tgz
# mv docker/* /usr/bin
# mkdir /etc/docker
# mv daemon.json /etc/docker
# mv docker.service /usr/lib/systemd/system
# systemctl start docker
# systemctl enable docker
4.2 Deploy kubelet and kube-proxy
Copy the certificates to the Node:
# cd TLS/k8s
# scp ca.pem kube-proxy*.pem root@192.168.31.65:/opt/kubernetes/ssl/
# tar zxvf k8s-node.tar.gz
# mv kubernetes /opt
# cp kubelet.service kube-proxy.service /usr/lib/systemd/system
Update the apiserver IP address in the following three files:
# grep server *
bootstrap.kubeconfig: server: https://192.168.31.63:6443
kubelet.kubeconfig: server: https://192.168.31.63:6443
kube-proxy.kubeconfig: server: https://192.168.31.63:6443
Update the hostname in the following two files:
# grep hostname *
kubelet.conf:--hostname-override=k8s-node1 \
kube-proxy-config.yml:hostnameOverride: k8s-node1
# systemctl start kubelet
# systemctl start kube-proxy
# systemctl enable kubelet
# systemctl enable kube-proxy
4.3 Approve certificate issuance for the Node
# kubectl get csr
# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
# kubectl get node
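When several nodes bootstrap at once, approving each CSR by name gets tedious; a hedged one-liner that approves every pending request (only do this in a trusted environment, since it skips per-node review):
# kubectl get csr -o name | xargs -n1 kubectl certificate approve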
4.4 Deploy the CNI network
Binary package download: https://github.com/containernetworking/plugins/releases
# mkdir -p /opt/cni/bin /etc/cni/net.d
# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
Make sure kubelet has CNI enabled:
# cat /opt/kubernetes/cfg/kubelet.conf
--network-plugin=cni
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
Run on the Master:
# kubectl apply -f kube-flannel.yaml
# kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-5xmhh   1/1     Running   0          171m
kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m
4.5 Authorize apiserver access to kubelet
For security, the kubelet rejects anonymous access; the apiserver must be explicitly authorized before commands such as kubectl logs and kubectl exec can reach it.
# cat /opt/kubernetes/cfg/kubelet-config.yml
……
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
……
# kubectl apply -f apiserver-to-kubelet-rbac.yaml
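The contents of apiserver-to-kubelet-rbac.yaml are not shown above; a minimal sketch of what such a manifest typically grants, assuming the apiserver's kubelet client certificate uses the common name kubernetes (the ClusterRole name and the CN are assumptions here):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy      # exec, logs, and port-forward traffic goes through this subresource
      - nodes/stats
      - nodes/log
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes     # must match the CN in the apiserver's client certificate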
5 - Deploy the Web UI and DNS
Reference: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
# vi recommended.yaml
…
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
…
# kubectl apply -f recommended.yaml
Create a service account and bind it to the default cluster-admin cluster role:
# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
Get the token:
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Access: https://NodeIP:30001 (the Dashboard serves HTTPS, so accept the self-signed certificate warning).
Log in to the Dashboard with the token printed above.
# kubectl apply -f coredns.yaml
# kubectl get pods -n kube-system
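A quick way to confirm that CoreDNS resolves in-cluster names; busybox:1.28 is suggested here because nslookup in later busybox images is known to misbehave:
# kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes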
6 - Master High Availability
6.1 Deploy the Master components (identical to Master1)
Copy /opt/kubernetes and the service files from master1:
# scp -r /opt/kubernetes root@192.168.31.64:/opt
# scp -r /opt/etcd/ssl root@192.168.31.64:/opt/etcd
# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.31.64:/usr/lib/systemd/system
Change the listen and advertise addresses in the apiserver config file to the local IP:
# cat /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.31.63:2379,https://192.168.31.64:2379,https://192.168.31.65:2379 \
--bind-address=192.168.31.64 \
--secure-port=6443 \
--advertise-address=192.168.31.64 \
……
# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
6.2 Deploy the Nginx load balancer
Nginx rpm package: http://nginx.org/packages/rhel/7/x86_64/RPMS/
# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm
# vim /etc/nginx/nginx.conf
……
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.31.63:6443;
        server 192.168.31.64:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
……
# systemctl start nginx
# systemctl enable nginx
6.3 Nginx + Keepalived high availability
Master node:
# yum install keepalived
# vi /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP router ID, unique per instance (51 is an example)
    priority 100            # priority; give the backup a lower value (e.g. 90)
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111      # example shared secret; must match on both nodes
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}
# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
# systemctl start keepalived
# systemctl enable keepalived
Backup node:
# cat /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # must match the master's VRRP router ID
    priority 90             # lower than the master's priority (example values)
    advert_int 1            # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111      # must match the master's shared secret
    }
    virtual_ipaddress {
        192.168.31.60/24
    }
    track_script {
        check_nginx
    }
}
# cat /etc/keepalived/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
# systemctl start keepalived
# systemctl enable keepalived
Test:
# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9d:ee:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.31.63/24 brd 192.168.31.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.31.60/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe9d:ee30/64 scope link
       valid_lft forever preferred_lft forever
Stop nginx and verify that the VIP fails over to the backup node.
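A hedged failover drill; run the stop on the currently active load balancer and watch from the backup:
# systemctl stop nginx                    # on the active load balancer
# ip a show ens33 | grep 192.168.31.60    # on the backup: the VIP should appear within a few seconds
# systemctl start nginx                   # restore; with default preemption the VIP moves back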
6.4 Point the Nodes at the VIP
Test that the VIP works:
# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.31.60:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Point the Node at the VIP:
# cd /opt/kubernetes/cfg
# grep server *
bootstrap.kubeconfig: server: https://192.168.31.63:6443
kubelet.kubeconfig: server: https://192.168.31.63:6443
kube-proxy.kubeconfig: server: https://192.168.31.63:6443
Batch modify:
sed -i 's#192.168.31.63#192.168.31.60#g' *
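The change only takes effect after restarting the node agents; afterwards the nodes should still show Ready through the VIP (a quick check, run on a master):
# systemctl restart kubelet kube-proxy    # on each node
# kubectl get node                        # all nodes should remain Ready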