Building a Kubernetes Cluster
| Host IP | Hostname | Cluster role |
| --- | --- | --- |
| 192.168.1.200 | master | deploy, etcd, lb1, master1 |
| 192.168.1.201 | master2 | lb2, master2 |
| 192.168.1.202 | node | etcd2, node1 |
| 192.168.1.203 | node2 | etcd3, node2 |
| 192.168.1.250 | vip | external apiserver VIP (ex-lb) |
On every node, install EPEL and Python (needed by Ansible), flush iptables, and put SELinux into permissive mode:
yum install -y epel-release
yum install -y python
iptables -F
setenforce 0
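These prep steps have to run on every machine in the table. A minimal dry-run sketch (the host list and root user are taken from the table above; everything else is an assumption) that prints the command it would issue per node -- drop the leading `echo` to actually execute over SSH:

```shell
#!/bin/sh
# Dry run: print the prep command for each node from the host table.
# Remove the "echo" to really execute over SSH (requires root access).
HOSTS="192.168.1.200 192.168.1.201 192.168.1.202 192.168.1.203"
PREP='yum install -y epel-release python && iptables -F && setenforce 0'
for h in $HOSTS; do
  echo ssh root@"$h" "$PREP"
done
```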
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cfoSPSgeEkAkgY08UIVWK2t2eNJIrKph5wkRkZX7AKs root@master
The key's randomart image is:
+---[RSA ]----+
|BOB=+ |
|oB=o . |
| oB + . . |
| +.O . * |
|o.B B o S o |
|Eo.+ + o o . |
|oo . . . . |
|o.+ . . |
|. o |
+----[SHA256]-----+
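The interactive prompts above can also be skipped entirely. A sketch that generates a key pair non-interactively, writing to a throwaway filename (`demo_key`) in the current directory so it cannot clobber the real /root/.ssh/id_rsa:

```shell
# Generate an RSA key pair non-interactively:
#   -N ''  empty passphrase
#   -f     output path (throwaway name for this demo)
#   -q     suppress the fingerprint/randomart banner
ssh-keygen -t rsa -b 2048 -N '' -f ./demo_key -q
ls -l demo_key demo_key.pub
```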
[root@master ~]# for ip in 200 201 202 203; do ssh-copy-id 192.168.1.$ip; done
5: Verify passwordless SSH login
[root@master ~]# ssh 192.168.1.200
Last login: Wed Dec :: from 192.168.1.2
[root@master ~]# exit
logout
Connection to 192.168.1.200 closed.
[root@master ~]# ssh 192.168.1.201
Last login: Wed Dec :: from 192.168.1.2
[root@master2 ~]# exit
logout
Connection to 192.168.1.201 closed.
[root@master ~]# ssh 192.168.1.202
Last login: Wed Dec :: from 192.168.1.200
[root@node1 ~]# exit
logout
Connection to 192.168.1.202 closed.
[root@master ~]# ssh 192.168.1.203
Last login: Wed Dec :: from 192.168.1.2
[root@node2 ~]# exit
logout
Connection to 192.168.1.203 closed.
6: Run the easzup script to download kubeasz and all offline files
[root@master ~]# chmod +x easzup
[root@master ~]# ./easzup -D
[root@master ~]# ls /etc/ansible/
01.prepare.yml  03.docker.yml  06.network.yml  22.upgrade.yml  90.setup.yml  bin  down  pics  tools
02.etcd.yml  04.kube-master.yml  07.cluster-addon.yml  23.backup.yml  99.clean.yml  dockerfiles  example  README.md
03.containerd.yml  05.kube-node.yml  11.harbor.yml  24.restore.yml  ansible.cfg  docs  manifests  roles
7: Configure cluster parameters in the hosts inventory
[root@master ~]# cd /etc/ansible
[root@master ansible]# cp example/hosts.multi-node hosts
[root@master ansible]# vim hosts
[etcd] ## etcd node IPs
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master] ## master node IPs
192.168.1.200
192.168.1.201

[kube-node] ## worker node IPs
192.168.1.202
192.168.1.203

[ex-lb] ## LB node IPs and VIP
192.168.1.200 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=
192.168.1.201 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=
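The inventory above is plain INI, so its groups can be sanity-checked before running Ansible. A small awk sketch, written against a throwaway copy named `hosts.demo` (an assumed filename for this example only, so it never touches /etc/ansible/hosts), that lists the members of one group:

```shell
#!/bin/sh
# Write a minimal copy of the inventory and list the [etcd] group members.
cat > hosts.demo <<'EOF'
[etcd]
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master]
192.168.1.200
192.168.1.201
EOF
# f=1 inside the [etcd] section, f=0 at the next section header;
# print the first field (the IP) of every non-empty line while f is set.
awk '/^\[etcd\]$/{f=1;next} /^\[/{f=0} f&&NF{print $1}' hosts.demo
```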
[root@master ansible]# ansible all -m ping
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user
configurable on deprecation. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.1.201 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.202 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.203 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Part 2: Deploy the Cluster
[Run on the deploy node] Manual (step-by-step) installation
1: Install CA certificates
[root@master ansible]# ansible-playbook 01.prepare.yml
[root@master ansible]# for ip in ; do ETCDCTL_API= etcdctl --endpoints=https://192.168.1.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint healt; done
https://192.168.1.200:2379 is healthy: successfully committed proposal: took = 5.658163ms
https://192.168.1.202:2379 is healthy: successfully committed proposal: took = 6.384588ms
https://192.168.1.203:2379 is healthy: successfully committed proposal: took = 7.386942ms
3: Install Docker
[root@master ansible]# ansible-playbook 03.docker.yml
4: Install the master nodes, then check component status
[root@master ansible]# ansible-playbook 04.kube-master.yml
[root@master ansible]# kubectl get componentstatus
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
5: Install the worker nodes
[root@master ansible]# ansible-playbook 05.kube-node.yml
[root@master ansible]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.202 Ready node 12s v1.15.0
192.168.1.203 Ready node 12s v1.15.0
6: Install the network plugin (flannel), then check the kube-system pods
[root@master ansible]# ansible-playbook 06.network.yml
[root@master ansible]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-7bk5w 1/1 Running 0 61s
kube-flannel-ds-amd64-blcxx 1/1 Running 0 61s
kube-flannel-ds-amd64-c4sfx 1/1 Running 0 61s
kube-flannel-ds-amd64-f8pnz 1/1 Running 0 61s
[root@master ansible]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
heapster ClusterIP 10.68.191.0 <none> /TCP 13m
kube-dns ClusterIP 10.68.0.2 <none> /UDP,/TCP,/TCP 15m
kubernetes-dashboard NodePort 10.68.115.45 <none> :/TCP 13m
metrics-server ClusterIP 10.68.116.163 <none> /TCP 15m
traefik-ingress-service NodePort 10.68.106.241 <none> :/TCP,:/TCP 12m
[Automated installation]
Run all of the manual steps above in one go:
[root@master ansible]# ansible-playbook 90.setup.yml
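The all-in-one setup playbook is just the per-step playbooks chained together. A sketch printing the assumed order (the `NN.name.yml` names follow kubeasz's numbering convention and are assumptions here; verify with `ls /etc/ansible/*.yml` on your deploy node):

```shell
# Print the per-step playbooks that the all-in-one setup corresponds to.
for p in 01.prepare 02.etcd 03.docker 04.kube-master 05.kube-node 06.network 07.cluster-addon; do
  echo "ansible-playbook $p.yml"
done
```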
[root@master ansible]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.1.200 58m % 960Mi %
192.168.1.201 34m % 1018Mi %
192.168.1.202 76m % 549Mi %
192.168.1.203 89m % 568Mi %
[root@master ansible]# kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-797455887b-9nscp 5m 22Mi
kube-system coredns-797455887b-k92wv 5m 19Mi
kube-system heapster-5f848f54bc-vvwzx 1m 11Mi
kube-system kube-flannel-ds-amd64-7bk5w 3m 20Mi
kube-system kube-flannel-ds-amd64-blcxx 2m 19Mi
kube-system kube-flannel-ds-amd64-c4sfx 2m 18Mi
kube-system kube-flannel-ds-amd64-f8pnz 2m 10Mi
kube-system kubernetes-dashboard-5c7687cf8-hnbdp 1m 22Mi
kube-system metrics-server-85c7b8c8c4-6q4vj 1m 16Mi
kube-system traefik-ingress-controller-766dbfdddd-98trv 4m 17Mi
Check cluster information
[root@master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.200:6443
CoreDNS is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Test DNS resolution
① Create an nginx deployment and service
[root@master ansible]# kubectl run nginx --image=nginx --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/nginx created
deployment.apps/nginx created
② Start a busybox pod and look up the nginx service name
[root@master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx.default.svc.cluster.local
Server: 10.68.0.2
Address: 10.68.0.2:53

Name:    nginx.default.svc.cluster.local
Address: 10.68.243.55
Part 3: Add a Worker Node (192.168.1.204)
[Run on the deploy node]
1: Copy the SSH public key to the new node
[root@master ansible]# ssh-copy-id 192.168.1.204
2: Add the new node's IP to the hosts inventory
[root@master ansible]# vim hosts
[kube-node]
192.168.1.202
192.168.1.203
192.168.1.204
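Editing the inventory by hand is fine; the same change can also be scripted. A sketch (again against a throwaway `hosts.demo` copy, an assumed filename, rather than the real /etc/ansible/hosts) that injects the new IP right after the last [kube-node] member:

```shell
#!/bin/sh
# Minimal copy of the [kube-node] section from the inventory.
cat > hosts.demo <<'EOF'
[kube-node]
192.168.1.202
192.168.1.203

[ex-lb]
EOF
# Print every line, and append the new node right after 192.168.1.203.
awk '{print; if ($0=="192.168.1.203") print "192.168.1.204"}' hosts.demo > hosts.new
cat hosts.new
```

After the inventory is updated, re-run the node installation step (05.kube-node.yml in the numbering assumed earlier) so the new host is actually provisioned.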
[root@master ansible]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.202 Ready node 9h v1.15.0
192.168.1.203 Ready node 9h v1.15.0
192.168.1.204 Ready node 2m11s v1.15.0
[root@master ansible]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-797455887b-9nscp 1/1 Running 0 31h 172.20.3.2 192.168.1.203 <none> <none>
coredns-797455887b-k92wv 1/1 Running 0 31h 172.20.2.2 192.168.1.202 <none> <none>
heapster-5f848f54bc-vvwzx 1/1 Running 0 31h 172.20.2.4 192.168.1.202 <none> <none>
kube-flannel-ds-amd64-7bk5w 1/1 Running 0 31h 192.168.1.202 192.168.1.202 <none> <none>
kube-flannel-ds-amd64-blcxx 1/1 Running 0 31h 192.168.1.200 192.168.1.200 <none> <none>
kube-flannel-ds-amd64-c4sfx 1/1 Running 0 31h 192.168.1.203 192.168.1.203 <none> <none>
kube-flannel-ds-amd64-f8pnz 1/1 Running 0 31h 192.168.1.201 192.168.1.201 <none> <none>
kube-flannel-ds-amd64-vdd7n 1/1 Running 0 21h 192.168.1.204 192.168.1.204 <none> <none>
kubernetes-dashboard-5c7687cf8-hnbdp 1/1 Running 0 31h 172.20.3.3 192.168.1.203 <none> <none>
metrics-server-85c7b8c8c4-6q4vj 1/1 Running 0 31h 172.20.2.3 192.168.1.202 <none> <none>
traefik-ingress-controller-766dbfdddd-98trv 1/1 Running 0 31h 172.20.3.4 192.168.1.203 <none> <none>