Building a Kubernetes Cluster
| Host | Hostname | Cluster role |
| --- | --- | --- |
| 192.168.1.200 | master | deploy, etcd1, lb1, master1 |
| 192.168.1.201 | master2 | lb2, master2 |
| 192.168.1.202 | node | etcd2, node1 |
| 192.168.1.203 | node2 | etcd3, node2 |
| 192.168.1.250 | vip | |
1: On every node, install EPEL and Python (ansible needs Python on the managed hosts):
yum install -y epel-release
yum install -y python
2: On every node, flush iptables rules and put SELinux into permissive mode:
iptables -F
setenforce 0
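`iptables -F` and `setenforce 0` only affect the running system; to keep the firewall and SELinux out of the way after a reboot, something like the following sketch can be used (assuming stock CentOS 7 with firewalld):

```bash
# Disable firewalld so no rules come back after reboot
systemctl stop firewalld
systemctl disable firewalld
# Make SELinux stay off across reboots (takes effect at next boot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```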
3: Generate an SSH key pair on the deploy node:
[root@master ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cfoSPSgeEkAkgY08UIVWK2t2eNJIrKph5wkRkZX7AKs root@master
The key's randomart image is:
+---[RSA 2048]----+
|BOB=+ |
|oB=o . |
| oB + . . |
| +.O . * |
|o.B B o S o |
|Eo.+ + o o . |
|oo . . . . |
|o.+ . . |
|. o |
+----[SHA256]-----+
4: Copy the public key to every node (the deploy node included):
[root@master ~]# for ip in 200 201 202 203; do ssh-copy-id 192.168.1.$ip; done
5: Verify that passwordless login works
[root@master ~]# ssh 192.168.1.200
Last login: Wed Dec :: from 192.168.1.2
[root@master ~]# exit
logout
Connection to 192.168.1.200 closed.
[root@master ~]# ssh 192.168.1.201
Last login: Wed Dec :: from 192.168.1.2
[root@master2 ~]# exit
logout
Connection to 192.168.1.201 closed.
[root@master ~]# ssh 192.168.1.202
Last login: Wed Dec :: from 192.168.1.200
[root@node1 ~]# exit
logout
Connection to 192.168.1.202 closed.
[root@master ~]# ssh 192.168.1.203
Last login: Wed Dec :: from 192.168.1.2
[root@node2 ~]# exit
logout
Connection to 192.168.1.203 closed.
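Logging in to each host by hand works, but a loop makes the same check non-interactive; a small sketch using the node list from the table above (BatchMode makes ssh fail loudly instead of prompting for a password if key auth is broken):

```bash
# Print each host's hostname over SSH; any failure means the key copy didn't take
for ip in 200 201 202 203; do
  ssh -o BatchMode=yes root@192.168.1.$ip hostname
done
```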
6: Fetch kubeasz's easzup tool, then run it with -D to download all required binaries and images into /etc/ansible:
[root@master ~]# chmod +x easzup
[root@master ~]# ./easzup -D
[root@master ~]# ls /etc/ansible/
01.prepare.yml     03.docker.yml       06.network.yml        22.upgrade.yml   90.setup.yml  bin          down       pics       tools
02.etcd.yml        04.kube-master.yml  07.cluster-addon.yml  23.backup.yml    99.clean.yml  dockerfiles  example    README.md
03.containerd.yml  05.kube-node.yml    11.harbor.yml         24.restore.yml   ansible.cfg   docs         manifests  roles
7: Configure the cluster parameters in the ansible hosts file
[root@master ~ ]# cd /etc/ansible
[root@master ansible]# cp example/hosts.multi-node hosts
[root@master ansible]# vim hosts
[etcd]  ## etcd node IPs
192.168.1.200 NODE_NAME=etcd1
192.168.1.202 NODE_NAME=etcd2
192.168.1.203 NODE_NAME=etcd3

[kube-master]  ## master node IPs
192.168.1.200
192.168.1.201

[kube-node]  ## worker node IPs
192.168.1.202
192.168.1.203

[ex-lb]  ## LB node IPs and the VIP (8443 is the kubeasz default apiserver VIP port)
192.168.1.200 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
192.168.1.201 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
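Before the connectivity test below, it can help to confirm how ansible parsed the inventory; a small sketch (the file sits at ansible's default inventory path, /etc/ansible/hosts, so no -i flag is needed):

```bash
# List every host ansible resolved from the inventory
ansible all --list-hosts
# List only the hosts in the kube-master group
ansible kube-master --list-hosts
```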
[root@master ansible]# ansible all -m ping
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user
configurable on deprecation. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
192.168.1.201 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.202 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.203 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
192.168.1.200 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
Part 2: Deploy the cluster
[On the deploy node] Manual installation, one step at a time
1: Install the CA certificates and prepare the environment:
[root@master ansible]# ansible-playbook 01.prepare.yml
2: Install the etcd cluster, then check the health of every member:
[root@master ansible]# ansible-playbook 02.etcd.yml
[root@master ansible]# for ip in 200 202 203; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.1.200:2379 is healthy: successfully committed proposal: took = 5.658163ms
https://192.168.1.202:2379 is healthy: successfully committed proposal: took = 6.384588ms
https://192.168.1.203:2379 is healthy: successfully committed proposal: took = 7.386942ms
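Endpoint health only shows that each member answers; listing the members confirms they actually formed one cluster. A sketch reusing the same TLS material as the loop above:

```bash
# List etcd cluster members and their peer/client URLs
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.200:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  member list
```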
3: Install Docker on all nodes:
[root@master ansible]# ansible-playbook 03.docker.yml
4: Install the master nodes, then check component health:
[root@master ansible]# ansible-playbook 04.kube-master.yml
[root@master ansible]# kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
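`kubectl get componentstatus` still works on v1.15 but is deprecated in later releases; asking the apiserver for its own health directly is a forward-compatible alternative (a sketch):

```bash
# Returns "ok" when the apiserver considers itself healthy
kubectl get --raw='/healthz'
```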
5: Install the worker nodes:
[root@master ansible]# ansible-playbook 05.kube-node.yml
[root@master ansible]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 4m45s v1.15.0
192.168.1.202 Ready node 12s v1.15.0
192.168.1.203 Ready node 12s v1.15.0
6: Install the network plugin, then confirm the flannel pods are running:
[root@master ansible]# ansible-playbook 06.network.yml
[root@master ansible]# kubectl get pod -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-7bk5w   1/1     Running   0          61s
kube-flannel-ds-amd64-blcxx   1/1     Running   0          61s
kube-flannel-ds-amd64-c4sfx   1/1     Running   0          61s
kube-flannel-ds-amd64-f8pnz   1/1     Running   0          61s
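To see which pod subnet flannel leased to a node, you can check the subnet file flannel writes on each host, or read the podCIDR Kubernetes recorded; a sketch (the file path is flannel's default):

```bash
# On any node: the subnet and MTU flannel assigned to this host
cat /run/flannel/subnet.env
# On the deploy node: each node's podCIDR as recorded in the API
kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
```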
7: Install the cluster add-ons (DNS, dashboard, metrics-server, ingress), then list their services:
[root@master ansible]# ansible-playbook 07.cluster-addon.yml
[root@master ansible]# kubectl get svc -n kube-system
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
heapster                  ClusterIP   10.68.191.0     <none>        80/TCP                   13m
kube-dns                  ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   15m
kubernetes-dashboard      NodePort    10.68.115.45    <none>        443:/TCP                 13m
metrics-server            ClusterIP   10.68.116.163   <none>        443/TCP                  15m
traefik-ingress-service   NodePort    10.68.106.241   <none>        80:/TCP,8080:/TCP        12m
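The NodePort numbers were lost in this capture, and they are assigned at install time anyway, so it is safer to look them up than to copy them; a sketch for the dashboard service:

```bash
# Print the node port assigned to the dashboard service
kubectl -n kube-system get svc kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
```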
[Automated installation]
Run every manual step above in one shot:
[root@master ansible]# ansible-playbook 90.setup.yml
[root@master ansible]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.1.200 58m % 960Mi %
192.168.1.201 34m % 1018Mi %
192.168.1.202 76m % 549Mi %
192.168.1.203 89m % 568Mi %
[root@master ansible]# kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-797455887b-9nscp 5m 22Mi
kube-system coredns-797455887b-k92wv 5m 19Mi
kube-system heapster-5f848f54bc-vvwzx 1m 11Mi
kube-system kube-flannel-ds-amd64-7bk5w 3m 20Mi
kube-system kube-flannel-ds-amd64-blcxx 2m 19Mi
kube-system kube-flannel-ds-amd64-c4sfx 2m 18Mi
kube-system kube-flannel-ds-amd64-f8pnz 2m 10Mi
kube-system kubernetes-dashboard-5c7687cf8-hnbdp 1m 22Mi
kube-system metrics-server-85c7b8c8c4-6q4vj 1m 16Mi
kube-system traefik-ingress-controller-766dbfdddd-98trv 4m 17Mi
View cluster information:
[root@master ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.200:6443
CoreDNS is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://192.168.1.200:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Test DNS
① Create an nginx deployment plus a matching service:
[root@master ansible]# kubectl run nginx --image=nginx --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/nginx created
deployment.apps/nginx created
② Start a disposable busybox pod and resolve the service name from inside it:
[root@master ansible]# kubectl run busybox --rm -it --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx.default.svc.cluster.local
Server:    10.68.0.2
Address:   10.68.0.2:53

Name:      nginx.default.svc.cluster.local
Address:   10.68.243.55
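Name resolution is only half the test; fetching the nginx welcome page proves the service path works end to end. A sketch, with the first command run inside the busybox pod and the cleanup back on the deploy node:

```bash
# Inside the busybox pod: fetch the page through the service DNS name
wget -qO- http://nginx.default.svc.cluster.local | head -n 4

# On the deploy node, once done: remove the test objects
kubectl delete service nginx
kubectl delete deployment nginx
```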
Part 3: Add a worker node (IP: 192.168.1.204)
[On the deploy node]
1: Copy the SSH public key to the new node:
[root@master ansible]# ssh-copy-id 192.168.1.204
2: Edit the hosts file and add the new node's IP under [kube-node]:
[root@master ansible]# vim hosts
[kube-node]
192.168.1.202
192.168.1.203
192.168.1.204
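The original capture jumps straight from editing hosts to the node showing up, so the join step itself is missing. In kubeasz 1.x this was typically done with its easzctl helper; the command below is an assumption about the version used here, so check your kubeasz docs:

```bash
# Assumed kubeasz 1.x helper: join 192.168.1.204 to the cluster as a worker node
easzctl add-node 192.168.1.204
```

3: Verify the new node has registered: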
[root@master ansible]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.1.200 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.201 Ready,SchedulingDisabled master 9h v1.15.0
192.168.1.202 Ready node 9h v1.15.0
192.168.1.203 Ready node 9h v1.15.0
192.168.1.204 Ready node 2m11s v1.15.0
[root@master ansible]# kubectl get pod -n kube-system -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
coredns-797455887b-9nscp                      1/1     Running   0          31h   172.20.3.2      192.168.1.203   <none>           <none>
coredns-797455887b-k92wv                      1/1     Running   0          31h   172.20.2.2      192.168.1.202   <none>           <none>
heapster-5f848f54bc-vvwzx                     1/1     Running   0          31h   172.20.2.4      192.168.1.202   <none>           <none>
kube-flannel-ds-amd64-7bk5w                   1/1     Running   0          31h   192.168.1.202   192.168.1.202   <none>           <none>
kube-flannel-ds-amd64-blcxx                   1/1     Running   0          31h   192.168.1.200   192.168.1.200   <none>           <none>
kube-flannel-ds-amd64-c4sfx                   1/1     Running   0          31h   192.168.1.203   192.168.1.203   <none>           <none>
kube-flannel-ds-amd64-f8pnz                   1/1     Running   0          31h   192.168.1.201   192.168.1.201   <none>           <none>
kube-flannel-ds-amd64-vdd7n                   1/1     Running   0          21h   192.168.1.204   192.168.1.204   <none>           <none>
kubernetes-dashboard-5c7687cf8-hnbdp          1/1     Running   0          31h   172.20.3.3      192.168.1.203   <none>           <none>
metrics-server-85c7b8c8c4-6q4vj               1/1     Running   0          31h   172.20.2.3      192.168.1.202   <none>           <none>
traefik-ingress-controller-766dbfdddd-98trv   1/1     Running   0          31h   172.20.3.4      192.168.1.203   <none>           <none>