Preface

To make Ingress in Kubernetes highly available while exposing only a single access entry point outside the cluster, we use keepalived to eliminate the single point of failure and deploy the ingress-controller on the edge nodes as a DaemonSet.

Edge Nodes

First, what is an edge node (Edge Node)? An edge node is a node inside the cluster that exposes in-cluster services to the outside world; external clients call services in the cluster through it, so it acts as the endpoint for traffic between the cluster and the outside.

An edge node has to address two issues:

  • High availability: the edge node must not be a single point of failure, otherwise external access to the whole Kubernetes cluster is lost
  • A single, consistent external endpoint: only one external IP address and port should be exposed

Architecture

To meet these requirements for the edge nodes, we use keepalived.

After creating an Ingress in Kubernetes, add an A record in DNS whose name is the host from your Ingress and whose IP is the keepalived VIP. External clients can then reach your services by domain name, and the single point of failure is removed as well.
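For example, a minimal Ingress could look like the sketch below (the host example.mydomain.com and the backend Service my-svc are hypothetical placeholders, not part of the original setup); the DNS A record for example.mydomain.com would then point at the keepalived VIP 10.40.0.109 configured later in this post:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-svc-ingress
spec:
  rules:
  - host: example.mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc
          servicePort: 80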

Pick three Kubernetes nodes as edge nodes and install keepalived on each of them.

Install the keepalived service

yum install -y keepalived

  

node1 keepalived configuration

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}

  

node2 keepalived configuration

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}

  

node3 keepalived configuration

cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id edgenode
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.109 dev eth0 label eth0:1
    }
}
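With these configurations the VIP only moves when keepalived or the node itself fails. Optionally, as a sketch that is not part of the original setup, a vrrp_script can track the local ingress controller's health endpoint (port 10254, the same port used by the probes later in this post), so the VIP also fails over when nginx-ingress stops responding:

vrrp_script check_ingress {
    script "/usr/bin/curl -sf http://127.0.0.1:10254/healthz"
    interval 3
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    ! ... existing settings from the configuration above ...
    track_script {
        check_ingress
    }
}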

 

Enable keepalived to start automatically at boot

systemctl enable keepalived.service

  

Start the keepalived service

systemctl start keepalived.service
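To confirm which node currently holds the VIP, check the interface on each edge node; 10.40.0.109 should appear (labeled eth0:1) only on the current MASTER:

ip addr show eth0
# on the MASTER the output contains a line similar to:
#     inet 10.40.0.109/32 scope global eth0:1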

  

Install ingress-nginx

Label the edge nodes

kubectl label nodes 10.40.0.105 edgenode=true
kubectl label nodes 10.40.0.106 edgenode=true
kubectl label nodes 10.40.0.107 edgenode=true

  

Check the node labels

[root@k8s-master01 ingress]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
10.40.0.105 Ready <none> 25d v1.12.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.105
10.40.0.106 Ready <none> 25d v1.12.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,env_role=dev,kubernetes.io/hostname=10.40.0.106
10.40.0.107 Ready <none> 17d v1.12.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,edgenode=true,kubernetes.io/hostname=10.40.0.

Change the original ingress manifest from a Deployment to a DaemonSet. With hostNetwork: true and the nodeSelector edgenode: 'true', one controller pod runs on each edge node and listens on the node's ports 80 and 443 directly.

cat ingress-daemonset.yaml

apiVersion: extensions/v1beta1
#kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  #replicas: 3
  #selector:
  #  matchLabels:
  #    app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: 'true'
      containers:
      - name: nginx-ingress-controller
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1

  

Namespace manifest

cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

ConfigMap manifest

cat configmap.yaml 

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

  

RBAC manifest

cat rbac.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

  

tcp-services ConfigMap manifest

cat tcp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
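The ConfigMap is created empty here; it is only needed if you want ingress-nginx to expose plain TCP services. As an illustration (the Service default/example-tcp-svc and the port numbers are hypothetical), adding a data entry of the form "external port": "namespace/service:port" makes the controller listen on that port and proxy it to the Service:

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "9000": "default/example-tcp-svc:5432"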

  

udp-services ConfigMap manifest

cat udp-services-configmap.yaml 

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

  

Default backend manifest

cat default-backend.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

  

Deploy

kubectl create -f namespace.yaml
kubectl create -f configmap.yaml
kubectl create -f rbac.yaml
kubectl create -f default-backend.yaml
kubectl create -f tcp-services-configmap.yaml
kubectl create -f udp-services-configmap.yaml
kubectl create -f ingress-daemonset.yaml
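After the resources are created, and because the controller runs with hostNetwork: true, you can sanity-check an edge node and see nginx listening on ports 80 and 443 directly on the host (assuming the ss utility is available; netstat works the same way):

ss -tlnp | grep -E ':80 |:443 '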

  

Check the ingress-controller

[root@k8s-master01 ingress]# kubectl get ds -n ingress-nginx
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
nginx-ingress-controller edgenode=true 57m
[root@k8s-master01 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
default-http-backend-86569b9d95-x4bsn / Running 24d 172.17.65.6 10.40.0.105 <none>
nginx-ingress-controller-5b7xg / Running 58m 10.40.0.105 10.40.0.105 <none>
nginx-ingress-controller-b5mxc / Running 58m 10.40.0.106 10.40.0.106 <none>
nginx-ingress-controller-t5n5k / Running 58m 10.40.0.107 10.40.0.107 <none>
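At this point external traffic flows domain name → keepalived VIP → the nginx-ingress-controller on whichever edge node currently holds the VIP. A quick end-to-end test is to send a request to the VIP with the Host header set to a host defined in one of your Ingress resources (myapp.example.com below is a hypothetical placeholder):

curl -H 'Host: myapp.example.com' http://10.40.0.109/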
