Installing Prometheus + Grafana on Kubernetes (k8s)
Component overview
MetricServer: aggregates resource-usage data across the Kubernetes cluster and serves it to in-cluster consumers such as kubectl top, the HPA, and the scheduler.
PrometheusOperator: a Kubernetes operator that deploys and manages Prometheus, Alertmanager, and their configuration through custom resources.
NodeExporter: exposes key hardware and OS metrics for each node.
KubeStateMetrics: generates metrics about the state of Kubernetes resource objects (Deployments, Pods, and so on), on which alerting rules can be built.
Prometheus: scrapes metrics from the apiserver, scheduler, controller-manager, kubelet, and other components in pull mode over HTTP.
Grafana: a platform for visualizing metrics and building monitoring dashboards.
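All of these components expose the same Prometheus text format over HTTP, which you can spot-check directly. A minimal sketch, assuming only working kubectl credentials (the /metrics path on the apiserver is a standard endpoint):

# Fetch the API server's own Prometheus-format metrics
kubectl get --raw /metrics | head -n 5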
Clone the repository
root@hello:~# git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git
Cloning into 'kube-prometheus'...
remote: Enumerating objects: 16026, done.
remote: Counting objects: 100% (2639/2639), done.
remote: Compressing objects: 100% (165/165), done.
remote: Total 16026 (delta 2524), reused 2485 (delta 2470), pack-reused 13387
Receiving objects: 100% (16026/16026), 7.81 MiB | 2.67 MiB/s, done.
Resolving deltas: 100% (10333/10333), done.
root@hello:~#
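kube-prometheus branches track specific Kubernetes releases (release-0.10 targets clusters of roughly the 1.22/1.23 era); before picking a branch, compare your cluster version against the compatibility matrix in the project README. A quick check:

# Print client and server versions (--short exists on kubectl of this vintage)
kubectl version --short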
Enter the directory and rewrite the image addresses
If your network can reach Google-hosted registries without restriction, no changes are needed; skip this step.
root@hello:~# cd kube-prometheus/manifests
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests# sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml
root@hello:~/kube-prometheus/manifests#
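A quick way to confirm the rewrite caught everything: if all five seds matched, the grep below should print nothing, and any hit shows which registry pattern was missed.

# Surviving references to the original registries indicate a missed pattern
grep -rn "quay.io\|k8s.gcr.io" *.yaml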
Change the Services to NodePort type
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\ type: NodePort" grafana-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: http/i\ nodePort: 31100" grafana-service.yaml
root@hello:~/kube-prometheus/manifests# cat grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 8.3.3
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    nodePort: 31100
    targetPort: http
  selector:
    app.kubernetes.io/component: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/part-of: kube-prometheus
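Before deploying, the edited manifest can be validated client-side; a sketch assuming kubectl 1.18+ for the --dry-run=client flag:

# Parse and validate the manifest without touching the cluster
kubectl create --dry-run=client -f grafana-service.yaml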
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\ type: NodePort" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: web/i\ nodePort: 31200" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: reloader-web/i\ nodePort: 31300" prometheus-service.yaml
root@hello:~/kube-prometheus/manifests# cat prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 2.32.1
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    nodePort: 31200
    targetPort: web
  - name: reloader-web
    port: 8080
    nodePort: 31300
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: prometheus
    app.kubernetes.io/instance: k8s
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
root@hello:~/kube-prometheus/manifests# sed -i "/ports:/i\ type: NodePort" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: web/i\ nodePort: 31400" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# sed -i "/targetPort: reloader-web/i\ nodePort: 31500" alertmanager-service.yaml
root@hello:~/kube-prometheus/manifests# cat alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
    app.kubernetes.io/version: 0.23.0
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    nodePort: 31400
    targetPort: web
  - name: reloader-web
    port: 8080
    nodePort: 31500
    targetPort: reloader-web
  selector:
    app.kubernetes.io/component: alert-router
    app.kubernetes.io/instance: main
    app.kubernetes.io/name: alertmanager
    app.kubernetes.io/part-of: kube-prometheus
  sessionAffinity: ClientIP
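The nodePorts chosen above (31100-31500) sit inside the default service node-port range of 30000-32767. If your apiserver runs with a custom --service-node-port-range, pick ports inside that range instead; on a kubeadm cluster the flag, when set, appears in the static pod manifest (kubeadm's default path, an assumption for other setups):

# No output means the default range 30000-32767 applies
grep service-node-port-range /etc/kubernetes/manifests/kube-apiserver.yaml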
Run the deployment
root@hello:~#
root@hello:~# kubectl create -f /root/kube-prometheus/manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created
namespace/monitoring created
root@hello:~#
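The CRDs registered by the setup step must be established before the main manifests, which contain Alertmanager, Prometheus, and ServiceMonitor objects, can be created; the kube-prometheus README suggests waiting along these lines:

# Block until the ServiceMonitor CRD is served by the apiserver
until kubectl get servicemonitors --all-namespaces >/dev/null 2>&1; do sleep 1; done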
root@hello:~# kubectl create -f /root/kube-prometheus/manifests/
alertmanager.monitoring.coreos.com/main created
networkpolicy.networking.k8s.io/alertmanager-main created
poddisruptionbudget.policy/alertmanager-main created
prometheusrule.monitoring.coreos.com/alertmanager-main-rules created
secret/alertmanager-main created
service/alertmanager-main created
serviceaccount/alertmanager-main created
servicemonitor.monitoring.coreos.com/alertmanager-main created
clusterrole.rbac.authorization.k8s.io/blackbox-exporter created
clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created
configmap/blackbox-exporter-configuration created
deployment.apps/blackbox-exporter created
networkpolicy.networking.k8s.io/blackbox-exporter created
service/blackbox-exporter created
serviceaccount/blackbox-exporter created
servicemonitor.monitoring.coreos.com/blackbox-exporter created
secret/grafana-config created
secret/grafana-datasources created
configmap/grafana-dashboard-alertmanager-overview created
configmap/grafana-dashboard-apiserver created
configmap/grafana-dashboard-cluster-total created
configmap/grafana-dashboard-controller-manager created
configmap/grafana-dashboard-grafana-overview created
configmap/grafana-dashboard-k8s-resources-cluster created
configmap/grafana-dashboard-k8s-resources-namespace created
configmap/grafana-dashboard-k8s-resources-node created
configmap/grafana-dashboard-k8s-resources-pod created
configmap/grafana-dashboard-k8s-resources-workload created
configmap/grafana-dashboard-k8s-resources-workloads-namespace created
configmap/grafana-dashboard-kubelet created
configmap/grafana-dashboard-namespace-by-pod created
configmap/grafana-dashboard-namespace-by-workload created
configmap/grafana-dashboard-node-cluster-rsrc-use created
configmap/grafana-dashboard-node-rsrc-use created
configmap/grafana-dashboard-nodes created
configmap/grafana-dashboard-persistentvolumesusage created
configmap/grafana-dashboard-pod-total created
configmap/grafana-dashboard-prometheus-remote-write created
configmap/grafana-dashboard-prometheus created
configmap/grafana-dashboard-proxy created
configmap/grafana-dashboard-scheduler created
configmap/grafana-dashboard-workload-total created
configmap/grafana-dashboards created
deployment.apps/grafana created
networkpolicy.networking.k8s.io/grafana created
prometheusrule.monitoring.coreos.com/grafana-rules created
service/grafana created
serviceaccount/grafana created
servicemonitor.monitoring.coreos.com/grafana created
prometheusrule.monitoring.coreos.com/kube-prometheus-rules created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
networkpolicy.networking.k8s.io/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created
service/kube-state-metrics created
serviceaccount/kube-state-metrics created
servicemonitor.monitoring.coreos.com/kube-state-metrics created
prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created
servicemonitor.monitoring.coreos.com/kube-apiserver created
servicemonitor.monitoring.coreos.com/coredns created
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
clusterrole.rbac.authorization.k8s.io/node-exporter created
clusterrolebinding.rbac.authorization.k8s.io/node-exporter created
daemonset.apps/node-exporter created
networkpolicy.networking.k8s.io/node-exporter created
prometheusrule.monitoring.coreos.com/node-exporter-rules created
service/node-exporter created
serviceaccount/node-exporter created
servicemonitor.monitoring.coreos.com/node-exporter created
clusterrole.rbac.authorization.k8s.io/prometheus-k8s created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created
networkpolicy.networking.k8s.io/prometheus-k8s created
poddisruptionbudget.policy/prometheus-k8s created
prometheus.monitoring.coreos.com/k8s created
prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
rolebinding.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s-config created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
role.rbac.authorization.k8s.io/prometheus-k8s created
service/prometheus-k8s created
serviceaccount/prometheus-k8s created
servicemonitor.monitoring.coreos.com/prometheus-k8s created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created
clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created
clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created
configmap/adapter-config created
deployment.apps/prometheus-adapter created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
root@hello:~#
root@hello:~#
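Image pulls can take a minute or two, so the pods may cycle through ContainerCreating at first; an optional wait before checking, as a sketch:

# Wait for every pod in the namespace to become Ready (up to 5 minutes)
kubectl -n monitoring wait --for=condition=Ready pod --all --timeout=300s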
Verify the deployment
root@hello:~# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          69s
alertmanager-main-1                    2/2     Running   0          69s
alertmanager-main-2                    2/2     Running   0          69s
blackbox-exporter-6c559c5c66-kw6vd     3/3     Running   0          83s
grafana-7fd69887fb-jmpmp               1/1     Running   0          81s
kube-state-metrics-867b64476b-h84g4    3/3     Running   0          81s
node-exporter-576bm                    2/2     Running   0          80s
node-exporter-94gn9                    2/2     Running   0          80s
node-exporter-cbjqk                    2/2     Running   0          80s
node-exporter-mhlh7                    2/2     Running   0          80s
node-exporter-pdc6k                    2/2     Running   0          80s
node-exporter-pqqds                    2/2     Running   0          80s
node-exporter-s9cz4                    2/2     Running   0          80s
node-exporter-tdlnt                    2/2     Running   0          81s
prometheus-adapter-8f88b5b45-rrsh4     1/1     Running   0          78s
prometheus-adapter-8f88b5b45-wh6pf     1/1     Running   0          78s
prometheus-k8s-0                       2/2     Running   0          68s
prometheus-k8s-1                       2/2     Running   0          68s
prometheus-operator-7f9d9c77f8-h5gkt   2/2     Running   0          78s
root@hello:~#
root@hello:~# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
alertmanager-main       NodePort    10.103.47.160    <none>        9093:31400/TCP,8080:31500/TCP   92s
alertmanager-operated   ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP      78s
blackbox-exporter       ClusterIP   10.102.108.160   <none>        9115/TCP,19115/TCP              92s
grafana                 NodePort    10.106.2.21      <none>        3000:31100/TCP                  90s
kube-state-metrics      ClusterIP   None             <none>        8443/TCP,9443/TCP               90s
node-exporter           ClusterIP   None             <none>        9100/TCP                        90s
prometheus-adapter      ClusterIP   10.108.65.108    <none>        443/TCP                         87s
prometheus-k8s          NodePort    10.100.227.174   <none>        9090:31200/TCP,8080:31300/TCP   88s
prometheus-operated     ClusterIP   None             <none>        9090/TCP                        77s
prometheus-operator     ClusterIP   None             <none>        8443/TCP                        87s
root@hello:~#
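The NodePort services answer on any node's IP; 192.168.1.81 below is simply one node of this particular cluster. To look up one of yours:

# Print the first node's InternalIP (any node works for a NodePort service)
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'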
Alertmanager: http://192.168.1.81:31400/
Prometheus: http://192.168.1.81:31200/
Grafana: http://192.168.1.81:31100/
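Grafana ships with the default credentials admin/admin and forces a password change at first login. If NodePort traffic is blocked from your workstation, port-forwarding is an alternative:

# Tunnel local port 3000 to the grafana Service
kubectl -n monitoring port-forward svc/grafana 3000:3000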
Run everything with one command
cd /root ; git clone -b release-0.10 https://github.com/prometheus-operator/kube-prometheus.git ; cd kube-prometheus/manifests ; sed -i "s#quay.io/prometheus/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ; sed -i "s#quay.io/brancz/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ; sed -i "s#k8s.gcr.io/prometheus-adapter/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ; sed -i "s#quay.io/prometheus-operator/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ; sed -i "s#k8s.gcr.io/kube-state-metrics/#registry.cn-hangzhou.aliyuncs.com/chenby/#g" *.yaml ; sed -i "/ports:/i\  type: NodePort" grafana-service.yaml ; sed -i "/targetPort: http/i\    nodePort: 31100" grafana-service.yaml ; sed -i "/ports:/i\  type: NodePort" prometheus-service.yaml ; sed -i "/targetPort: web/i\    nodePort: 31200" prometheus-service.yaml ; sed -i "/targetPort: reloader-web/i\    nodePort: 31300" prometheus-service.yaml ; sed -i "/ports:/i\  type: NodePort" alertmanager-service.yaml ; sed -i "/targetPort: web/i\    nodePort: 31400" alertmanager-service.yaml ; sed -i "/targetPort: reloader-web/i\    nodePort: 31500" alertmanager-service.yaml ; kubectl create -f /root/kube-prometheus/manifests/setup ; kubectl create -f /root/kube-prometheus/manifests/ ; sleep 30 ; kubectl get pod -n monitoring ; kubectl get svc -n monitoring
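To tear the stack down again, the kube-prometheus README deletes the same manifests plus the setup directory; roughly:

# Remove the monitoring stack, then its CRDs and namespace
kubectl delete --ignore-not-found=true -f /root/kube-prometheus/manifests/ -f /root/kube-prometheus/manifests/setup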
https://www.oiox.cn/
https://www.chenby.cn/
https://cby-chen.github.io/
https://blog.csdn.net/qq_33921750
https://my.oschina.net/u/3981543
https://www.zhihu.com/people/chen-bu-yun-2
https://segmentfault.com/u/hppyvyv6/articles
https://juejin.cn/user/3315782802482007
https://cloud.tencent.com/developer/column/93230
https://www.jianshu.com/u/0f894314ae2c
https://www.toutiao.com/c/user/token/MS4wLjABAAAAeqOrhjsoRZSj7iBJbjLJyMwYT5D0mLOgCoo4pEmpr4A/
Find me on CSDN, GitHub, Zhihu, OSChina, SegmentFault, Juejin, Jianshu, Tencent Cloud, Toutiao, or my personal blog — search for 《小陈运维》 anywhere on the web.
Articles are published primarily on the WeChat official account 《Linux运维交流社区》.