In this section:

  • EFK overview
  • Installing and configuring EFK
    • Configure efk-rbac.yaml
    • Configure es-controller.yaml
    • Configure es-service.yaml
    • Configure fluentd-es-ds.yaml
    • Configure kibana-controller.yaml
    • Configure kibana-service.yaml
    • Label the Nodes
    • Apply the manifests
    • Check the results
  • Accessing kibana

I. EFK Overview

  • Logstash (or Fluentd) collects the logs
  • Elasticsearch stores the logs and provides search
  • Kibana queries and visualizes the logs

Official source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Logs are collected by running fluentd as a DaemonSet on every node. Fluentd mounts the docker log directory /var/lib/docker/containers and the /var/log directory into its Pod. On each node, per-pod directories are created under /var/log/pods, which separate the log output of the different containers; the log files in those directories are symlinks to the container log output under /var/lib/docker/containers.
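
You can see this chain directly on a node. A minimal check (the file names below are illustrative; yours will contain your own pod names and container IDs):

[root@node1 ~]# ls -l /var/log/containers/
# each entry ultimately resolves to /var/lib/docker/containers/<container-id>/<container-id>-json.log
[root@node1 ~]# ls -l /var/log/pods/
# one directory per pod, holding symlinks to the container logs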

II. Installing and Configuring EFK

1. Configure efk-rbac.yaml

The EFK services also need an RBAC manifest, efk-rbac.yaml, which creates a ServiceAccount named efk and binds it to the cluster-admin role.

[root@node1 opt]# mkdir efk
[root@node1 opt]# cd efk
[root@node1 efk]# cat efk-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
- kind: ServiceAccount
  name: efk
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
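
Once the manifests are applied (step 8 below), the two RBAC objects can be verified with:

[root@node1 efk]# kubectl get serviceaccount efk -n kube-system
[root@node1 efk]# kubectl get clusterrolebinding efk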


2. Configure es-controller.yaml

[root@node1 efk]# vim es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: index.tenxcloud.com/jimmy/elasticsearch:v2.4.1-
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}
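
Note that es-persistent-storage is an emptyDir, so the indexed log data lives only as long as the pod stays on its node; for durable storage you would swap in a real volume. Once created, the controller can be checked with:

[root@node1 efk]# kubectl get rc elasticsearch-logging-v1 -n kube-system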


3. Configure es-service.yaml

[root@node1 efk]# vim es-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

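The Service gives fluentd and kibana a stable name to send logs to; this is the http://elasticsearch-logging:9200 URL that appears in kibana-controller.yaml below. To confirm it resolves to the ES pods once everything is running:

[root@node1 efk]# kubectl get endpoints elasticsearch-logging -n kube-system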

4. Configure fluentd-es-ds.yaml

[root@node1 efk]# cat fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: index.tenxcloud.com/jimmy/fluentd-elasticsearch:1.22
        command:
        - '/bin/sh'
        - '-c'
        - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

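The DaemonSet only schedules onto nodes carrying the beta.kubernetes.io/fluentd-ds-ready=true label (set in step 7), and the toleration also allows it onto the master. Per-node coverage can be checked after creation with:

[root@node1 efk]# kubectl get daemonset fluentd-es-v1.22 -n kube-system
[root@node1 efk]# kubectl get pods -n kube-system -o wide | grep fluentd-es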

5. Configure kibana-controller.yaml

[root@node1 efk]# cat kibana-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: index.tenxcloud.com/jimmy/kibana:v4.6.1-
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
        - name: "ELASTICSEARCH_URL"
          value: "http://elasticsearch-logging:9200"
        - name: "KIBANA_BASE_URL"
          value: "/api/v1/proxy/namespaces/kube-system/services/kibana-logging"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP


6. Configure kibana-service.yaml

[root@node1 efk]# cat kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging


[root@node1 efk]# ls
efk-rbac.yaml es-controller.yaml es-service.yaml fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml

7. Label the Nodes

The DaemonSet fluentd-es-v1.22 sets the nodeSelector beta.kubernetes.io/fluentd-ds-ready=true, so this label must be set on every Node where fluentd should run:

[root@node1 efk]# kubectl label nodes 172.16.7.151 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.151" labeled
[root@node1 efk]# kubectl label nodes 172.16.7.152 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.152" labeled
[root@node1 efk]# kubectl label nodes 172.16.7.153 beta.kubernetes.io/fluentd-ds-ready=true
node "172.16.7.153" labeled

8. Apply the manifests

[root@node1 efk]# kubectl create -f .
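
This creates all six objects in one pass. The pods can be watched as they come up with:

[root@node1 efk]# kubectl get pods -n kube-system -w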

9. Check the results

[root@node1 efk]# kubectl get deployment -n kube-system|grep kibana
kibana-logging          1         1         1            1           1h
[root@node1 efk]# kubectl get pods -n kube-system|grep -E 'elasticsearch|fluentd|kibana'
elasticsearch-logging-v1-nw3p3   1/1       Running   43m
elasticsearch-logging-v1-pp89h   1/1       Running   43m
fluentd-es-v1.22-cqd1s           1/1       Running   15m
fluentd-es-v1.22-f5ljr           0/1       Error     15m
fluentd-es-v1.22-x24jx           1/1       Running   15m
kibana-logging--kg8kx            1/1       Running   1h
[root@node1 efk]# kubectl get service -n kube-system|grep -E 'elasticsearch|kibana'
elasticsearch-logging   10.254.50.63     <none>   9200/TCP   1h
kibana-logging          10.254.169.159   <none>   5601/TCP   1h

The first time the kibana Pod starts, it spends a long time (10-20 minutes) optimizing and caching the status page bundles; you can follow the Pod's log (kubectl logs -f) to watch the progress.

[root@node1 efk]# kubectl logs kibana-logging--86h5d -n kube-system -f
ELASTICSEARCH_URL=http://elasticsearch-logging:9200
server.basePath: /api/v1/proxy/namespaces/kube-system/services/kibana-logging
{"type":"log","@timestamp":"2017-10-13T00:51:31Z","tags":["info","optimize"],"pid":,"message":"Optimizing and caching bundles for kibana and statusPage. This may take a few minutes"}
{"type":"log","@timestamp":"2017-10-13T01:13:36Z","tags":["info","optimize"],"pid":,"message":"Optimization of bundles for kibana and statusPage complete in 1324.64 seconds"}
{"type":"log","@timestamp":"2017-10-13T01:13:37Z","tags":["status","plugin:kibana@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:38Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:kbn_vislib_vis_types@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:markdown_vis@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:metric_vis@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:39Z","tags":["status","plugin:spyModes@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["status","plugin:statusPage@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["status","plugin:table_vis@1.0.0","info"],"pid":,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
{"type":"log","@timestamp":"2017-10-13T01:13:40Z","tags":["listening","info"],"pid":,"message":"Server running at http://0.0.0.0:5601"}
{"type":"log","@timestamp":"2017-10-13T01:13:45Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":,"state":"yellow","message":"Status changed from yellow to yellow - No existing Kibana index found","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
{"type":"log","@timestamp":"2017-10-13T01:13:49Z","tags":["status","plugin:elasticsearch@1.0.0","info"],"pid":,"state":"green","message":"Status changed from yellow to green - Kibana index ready","prevState":"yellow","prevMsg":"No existing Kibana index found"}

III. Accessing kibana

1. Through kube-apiserver: get the kibana service URL

[root@node1 efk]# kubectl cluster-info
Kubernetes master is running at https://172.16.7.151:6443
Elasticsearch is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Open the URL in a browser: https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana
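
Note that the apiserver's secure port requires authentication, so the browser (or curl) must present whatever credentials your cluster accepts. A hypothetical example with basic auth (admin:ADMIN_PASS is a placeholder):

curl -k --user admin:ADMIN_PASS 'https://172.16.7.151:6443/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana'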

2. Through kubectl proxy: create a proxy

[root@node1 efk]# kubectl proxy --address='172.16.7.151' --port=8086 --accept-hosts='^*$' &

Browse to: http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging

If the Create button here is grayed out and the Time-field name dropdown has no options: fluentd reads the logs under /var/log/containers/, which are symlinked from /var/lib/docker/containers/${CONTAINER_ID}/${CONTAINER_ID}-json.log. Check your docker configuration: --log-driver must be set to json-file, but the default may be journald.
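
A quick per-container way to confirm which log driver is actually in effect (the container ID is a placeholder):

[root@node1 ~]# docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container-id>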

Check the current --log-driver:

[root@node1 ~]# docker version
Client:
Version: 1.12.
API version: 1.24
Package version: docker-1.12.-.git88a4867.el7.centos.x86_64
Go version: go1.7.4
Git commit: 88a4867/1.12.
Built: Mon Jul ::
OS/Arch: linux/amd64

Server:
Version: 1.12.
API version: 1.24
Package version: docker-1.12.-.git88a4867.el7.centos.x86_64
Go version: go1.7.4
Git commit: 88a4867/1.12.
Built: Mon Jul ::
OS/Arch: linux/amd64
[root@node1 efk]# docker info |grep 'Logging Driver'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
WARNING: bridge-nf-call-ip6tables is disabled
Logging Driver: journald

Change the --log-driver for this docker version:

[root@node1 ~]# vim /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=json-file --signature-verification=false'
[root@node1 efk]# systemctl restart docker

Note: normally this parameter would be changed by adding the following to /etc/docker/daemon.json:

{
  "log-driver": "json-file"
}

In this docker version, however, --log-driver is defined in /etc/sysconfig/docker. In docker-ce the default --log-driver is already json-file.
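
After restarting docker, confirm that the change took effect:

[root@node1 ~]# docker info | grep 'Logging Driver'
Logging Driver: json-file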

A problem encountered:

Because --log-driver had first been configured in /etc/docker/daemon.json, docker failed to start after the restart. After moving the setting into /etc/sysconfig/docker and starting docker successfully, the node turned NotReady and all of its Pods went Unknown. Checking kubelet showed that the kubelet process had died.
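
When that happens, the kubelet's status and journal are the first place to look (standard systemd commands):

[root@node1 ~]# systemctl status kubelet
[root@node1 ~]# journalctl -u kubelet -n 50 --no-pager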

[root@node1 ~]# kubectl get nodes
NAME STATUS AGE VERSION
172.16.7.151 NotReady 28d v1.6.0
172.16.7.152 Ready 28d v1.6.0
172.16.7.153 Ready 28d v1.6.0

Start kubelet:

[root@node1 ~]# systemctl start kubelet
[root@node1 ~]# kubectl get nodes
NAME STATUS AGE VERSION
172.16.7.151 Ready 28d v1.6.0
172.16.7.152 Ready 28d v1.6.0
172.16.7.153 Ready 28d v1.6.0

Browse to the kibana URL again: http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/kibana-logging. This time the Create button is active.

On the Settings -> Indices page, create an index (roughly the equivalent of a database in mysql): uncheck the pre-selected "Index contains time-based events", keep the default logstash-* pattern, and click Create.
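
Before creating the index pattern, you can confirm that fluentd has actually written logstash-* indices into Elasticsearch. With the kubectl proxy from step 2 still running, something like this should work:

curl 'http://172.16.7.151:8086/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'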

Once the index is created, the logs aggregated in Elasticsearch show up under Discover.
