1. Traefik

  Traefik: HTTP-layer (Layer 7) routing. Website: http://traefik.cn/, documentation: https://docs.traefik.io/user-guide/kubernetes/

  Its functionality is similar to the NGINX ingress controller.

  Compared with the NGINX ingress controller, Traefik interacts with the Kubernetes API in real time, detects changes to backend Services and Pods, and automatically updates its configuration with a hot reload. Traefik is faster and more convenient, supports more features, and makes reverse proxying and load balancing more direct and efficient.

  To deploy Traefik on the Kubernetes cluster, build on the previous article in this series.

  Create a certificate for k8s-master-lb:

[root@k8s-master01 ~]# openssl req -x509 -nodes -days  -newkey rsa: -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb"
Generating a bit RSA private key
................................................................................................................+++
.........................................................................................................................................................+++
writing new private key to 'tls.key'
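
  A complete version of the self-signed certificate command, with assumed parameter values (the numeric arguments below, 365-day validity and a 2048-bit RSA key, are common defaults, not values from this deployment), can be sketched as:

```shell
# Self-signed certificate for the VIP hostname. The -days and rsa: values
# are assumptions; adjust them to your own policy.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb"

# Inspect the subject to confirm the CN matches the VIP hostname.
openssl x509 -in tls.crt -noout -subject
```

  Note that -nodes leaves the private key unencrypted on disk, which is what Kubernetes expects when the key is loaded into a secret.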

  Store the certificate in a Kubernetes secret:

[root@k8s-master01 ~]# kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt
secret/traefik-cert created

  Install Traefik:

[root@k8s-master01 kubeadm-ha]# kubectl apply -f traefik/
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
configmap/traefik-conf created
daemonset.extensions/traefik-ingress-controller created
service/traefik-web-ui created
ingress.extensions/traefik-jenkins created

  Check the pods. Because Traefik is deployed as a DaemonSet, a Traefik pod is created on every node:

[root@k8s-master01 kubeadm-ha]# kubectl  get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-kwz9t / Running 20h
kube-system calico-node-nfhrd / Running 56m
kube-system calico-node-nxtlf / Running 57m
kube-system calico-node-rj8p8 / Running 20h
kube-system calico-node-xfsg5 / Running 20h
kube-system coredns-777d78ff6f-4rcsb / Running 22h
kube-system coredns-777d78ff6f-7xqzx / Running 22h
kube-system etcd-k8s-master01 / Running 16h
kube-system etcd-k8s-master02 / Running 21h
kube-system etcd-k8s-master03 / Running 20h
kube-system heapster-5874d498f5-ngk26 / Running 16h
kube-system kube-apiserver-k8s-master01 / Running 16h
kube-system kube-apiserver-k8s-master02 / Running 20h
kube-system kube-apiserver-k8s-master03 / Running 20h
kube-system kube-controller-manager-k8s-master01 / Running 16h
kube-system kube-controller-manager-k8s-master02 / Running 20h
kube-system kube-controller-manager-k8s-master03 / Running 20h
kube-system kube-proxy-4cjhm / Running 22h
kube-system kube-proxy-kpxhz / Running 56m
kube-system kube-proxy-lkvjk / Running 21h
kube-system kube-proxy-m7htq / Running 22h
kube-system kube-proxy-r4sjs / Running 57m
kube-system kube-scheduler-k8s-master01 / Running 16h
kube-system kube-scheduler-k8s-master02 / Running 21h
kube-system kube-scheduler-k8s-master03 / Running 20h
kube-system kubernetes-dashboard-7954d796d8-2k4hx / Running 17h
kube-system metrics-server-55fcc5b88-bpmkm / Running 16h
kube-system monitoring-grafana-9b6b75b49-4zm6d / Running 18h
kube-system monitoring-influxdb-655cd78874-56gf8 / Running 16h
kube-system traefik-ingress-controller-cv2jg / Running 28s
kube-system traefik-ingress-controller-d7lzw / Running 28s
kube-system traefik-ingress-controller-r2z29 / Running 28s
kube-system traefik-ingress-controller-tm6vv / Running 28s
kube-system traefik-ingress-controller-w4mj7 / Running 28s

  Open the Traefik web UI: http://k8s-master-lb:30011/

  Create a test web application:

[root@k8s-master01 ~]# cat traefix-test.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    name: nginx-svc
spec:
  selector:
    run: ngx-pod
  ports:
  - protocol: TCP
    port:
    targetPort:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ngx-pod
spec:
  replicas:
  template:
    metadata:
      labels:
        run: ngx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ngx-ing
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefix-test.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort:
[root@k8s-master01 ~]# kubectl create -f traefix-test.yaml
service/nginx-svc created
deployment.apps/ngx-pod created
ingress.extensions/ngx-ing created

  View in the Traefik UI:

  Check in Kubernetes:

  Access test: resolve the domain traefix-test.com to any node, and the service is reachable at http://traefix-test.com/.

  HTTPS certificate configuration

  Using the nginx service created above, now create an HTTPS ingress:

[root@k8s-master01 nginx-cert]# cat ../traefix-https.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-https-test
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefix-test.com
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort:
  tls:
  - secretName: nginx-test-tls

  Create a certificate (in production, use the certificate purchased by your company):

[root@k8s-master01 nginx-cert]# openssl req -x509 -nodes -days  -newkey rsa: -keyout tls.key -out tls.crt -subj "/CN=traefix-test.com"
Generating a bit RSA private key
.................................+++
.........................................................+++
writing new private key to 'tls.key'
-----

  Import the certificate:

kubectl -n default create secret tls nginx-test-tls --key=tls.key --cert=tls.crt
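
  Before creating a TLS secret it is worth checking that the key and certificate actually belong together, since a mismatched pair is only discovered when clients reject the handshake. A self-contained sketch (it generates a throwaway pair here; with a purchased certificate, point the two openssl commands at your real files):

```shell
# Generate a throwaway key/cert pair so the check below is self-contained.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=traefix-test.com" 2>/dev/null

# A certificate and private key match when they carry the same public key.
cert_pub=$(openssl x509 -in tls.crt -noout -pubkey)
key_pub=$(openssl pkey -in tls.key -pubout)

if [ "$cert_pub" = "$key_pub" ]; then
    echo "key and certificate match"
else
    echo "MISMATCH: clients would reject this TLS secret" >&2
    exit 1
fi
```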

  Create the ingress:

[root@k8s-master01 ~]# kubectl create -f traefix-https.yaml
ingress.extensions/nginx-https-test created

  Access test:

  For other approaches, see the official documentation: https://docs.traefik.io/user-guide/kubernetes/

2. Installing Prometheus

  Install Prometheus:

[root@k8s-master01 kubeadm-ha]# kubectl apply -f prometheus/
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-server-conf created
deployment.extensions/prometheus created
service/prometheus created

  Check the pod:

[root@k8s-master01 kubeadm-ha]# kubectl get pods --all-namespaces | grep prome
kube-system prometheus-56dff8579d-x2w62 / Running 52s

  Access URL: http://k8s-master-lb:30013/

  Install and use Grafana (here Grafana is installed directly on the host):

yum install https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.4.3-1.x86_64.rpm -y

  Start Grafana:

[root@k8s-master01 grafana-dashboard]# systemctl start grafana-server
[root@k8s-master01 grafana-dashboard]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

  Visit http://192.168.20.20:3000 (username and password: admin) and configure a Prometheus data source.

  Import the dashboard templates from /root/kubeadm-ha/heapster/grafana-dashboard.

  After importing:

  View the data:

  Grafana documentation: http://docs.grafana.org/

3. Cluster verification

  Verify cluster high availability.

  Create a deployment with 3 replicas:

[root@k8s-master01 ~]# kubectl run nginx --image=nginx --replicas= --port=
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get deployment --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default nginx 58s

  Check the pods:

[root@k8s-master01 ~]# kubectl get pods -l=run=nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-6f858d4d45-7lv6f / Running 1m 172.168.5.16 k8s-node01
nginx-6f858d4d45-g2njj / Running 1m 172.168.0.18 k8s-master01
nginx-6f858d4d45-rcz89 / Running 1m 172.168.6.12 k8s-node02

  Create a service:

[root@k8s-master01 ~]# kubectl expose deployment nginx --type=NodePort --port=
service/nginx exposed
[root@k8s-master01 ~]# kubectl get service -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> /TCP 1d
nginx NodePort 10.97.112.176 <none> :/TCP 29s

  Access test:

  Test HPA automatic scaling:

# Create a test service
kubectl run nginx-server --requests=cpu=10m --image=nginx --port=
kubectl expose deployment nginx-server --port=
# Create the HPA
kubectl autoscale deployment nginx-server --cpu-percent= --min= --max=
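
  The HPA chooses a replica count from the ratio of observed to target CPU utilization: desired = ceil(currentReplicas * currentUtilization / targetUtilization). The arithmetic can be sketched in shell; the numbers below are made-up example values, not measurements from this cluster:

```shell
# HPA scaling rule: desired = ceil(current_replicas * current_util / target_util)
current_replicas=1
current_util=250   # observed average CPU utilization across pods, in percent (example)
target_util=50     # the --cpu-percent target given to kubectl autoscale (example)

# Integer ceiling division: ceil(a / b) == (a + b - 1) / b
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "desired replicas: $desired"   # -> desired replicas: 5
```

  So a single pod running at five times its CPU target is scaled out to five replicas; the reverse ratio drives the scale-in after the load stops.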

  Check the current ClusterIP of nginx-server:

[root@k8s-master01 ~]# kubectl get service -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> /TCP 1d
nginx-server ClusterIP 10.108.160.23 <none> /TCP 5m

  Put load on the test service:

[root@k8s-master01 ~]# while true; do wget -q -O- http://10.108.160.23 > /dev/null; done
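
  The while true loop above runs until killed by hand. A bounded variant stops on its own after a fixed duration, which is convenient for repeatable tests; the URL below is a placeholder, so substitute the ClusterIP printed by kubectl get service:

```shell
# Bounded load generator: hammer a URL for DURATION seconds, then report how
# many requests were attempted. URL is a placeholder for the service ClusterIP.
URL="${URL:-http://127.0.0.1:8080}"
DURATION="${DURATION:-2}"

end=$(( $(date +%s) + DURATION ))
count=0
while [ "$(date +%s)" -lt "$end" ]; do
    # -T 1 -t 1: give up quickly on an unreachable endpoint so the loop
    # still terminates on time; failures are counted but ignored.
    wget -q -T 1 -t 1 -O- "$URL" > /dev/null 2>&1 || true
    count=$((count + 1))
done
echo "attempted $count requests in ${DURATION}s"
```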

  Check the current scale-out status:

  Stop the load. After the load ends, the pods scale back in automatically (scale-in takes roughly 10-15 minutes).

  Delete the test resources:

[root@k8s-master01 ~]# kubectl delete deploy,svc,hpa nginx-server
deployment.extensions "nginx-server" deleted
service "nginx-server" deleted
horizontalpodautoscaler.autoscaling "nginx-server" deleted

4. Cluster stability test

  Power off master01.

  Check from master02:

[root@k8s-master02 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 1d v1.11.1
k8s-master02 Ready master 1d v1.11.1
k8s-master03 Ready master 1d v1.11.1
k8s-node01 Ready <none> 22h v1.11.1
k8s-node02 Ready <none> 22h v1.11.1

  Access test:

  The VIP has failed over to master02.

  Shut down all cluster nodes (hard power-off, not a graceful shutdown).

  Power the nodes back on.

  Check node status:

[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 2d v1.11.1
k8s-master02 Ready master 2d v1.11.1
k8s-master03 Ready master 2d v1.11.1
k8s-node01 Ready <none> 1d v1.11.1
k8s-node02 Ready <none> 1d v1.11.1

  Check all pods:

[root@k8s-master01 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-kwz9t / Running 2d
kube-system calico-node-nfhrd / Running 1d
kube-system calico-node-nxtlf / Running 1d
kube-system calico-node-rj8p8 / Running 2d
kube-system calico-node-xfsg5 / Running 2d
kube-system coredns-777d78ff6f-dctjh / Running 8h
kube-system coredns-777d78ff6f-ljpqs / Running 8h
kube-system etcd-k8s-master01 / Running 6m
kube-system etcd-k8s-master02 / Running 21m
kube-system etcd-k8s-master03 / Running 2d
kube-system heapster-5874d498f5-rv25x / Running 8h
kube-system kube-apiserver-k8s-master01 / Running 6m
kube-system kube-apiserver-k8s-master02 / Running 21m
kube-system kube-apiserver-k8s-master03 / Running 2d
kube-system kube-controller-manager-k8s-master01 / Running 6m
kube-system kube-controller-manager-k8s-master02 / Running 21m
kube-system kube-controller-manager-k8s-master03 / Running 2d
kube-system kube-proxy-4cjhm / Running 2d
kube-system kube-proxy-kpxhz / Running 1d
kube-system kube-proxy-lkvjk / Running 2d
kube-system kube-proxy-m7htq / Running 2d
kube-system kube-proxy-r4sjs / Running 1d
kube-system kube-scheduler-k8s-master01 / Running 6m
kube-system kube-scheduler-k8s-master02 / Running 21m
kube-system kube-scheduler-k8s-master03 / Running 2d
kube-system kubernetes-dashboard-7954d796d8-2k4hx / Running 1d
kube-system metrics-server-55fcc5b88-bpmkm / Running 1d
kube-system monitoring-influxdb-655cd78874-ccrgl / Running 8h
kube-system prometheus-56dff8579d-28qm5 / Running 28m
kube-system traefik-ingress-controller-cv2jg / Running 1d
kube-system traefik-ingress-controller-d7lzw / Running 1d
kube-system traefik-ingress-controller-r2z29 / Running 1d
kube-system traefik-ingress-controller-tm6vv / Running 1d
kube-system traefik-ingress-controller-w4mj7 / Running 1d

  Access test:

  
