Installing ZooKeeper on Kubernetes (cluster and standalone)
Installing clustered ZooKeeper
Step 1: add the Helm chart repository (note: the incubator repository has been deprecated; its archived charts are hosted at https://charts.helm.sh/incubator if the URL below no longer resolves)
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
Step 2: fetch the ZooKeeper chart
helm fetch incubator/zookeeper
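`helm fetch` downloads a packed chart archive into the current directory. A possible way to unpack it for editing (the exact version in the filename will vary):

```shell
# unpack the fetched chart archive; the chart directory contains values.yaml
tar -zxvf zookeeper-*.tgz
cd zookeeper    # edit values.yaml here
```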
Step 3: edit the chart's values.yaml to point persistence at your storage class
...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...
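With values.yaml edited, the chart still has to be installed. A possible install command (the release name `zookeeper` and namespace `xxxxxx` are illustrative placeholders, matching the placeholders used elsewhere in this guide):

```shell
# Helm 3 syntax; under Helm 2 use: helm install --name zookeeper ./zookeeper
helm install zookeeper ./zookeeper -n xxxxxx
# watch the pods come up
kubectl get pods -n xxxxxx -w
```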
Notes:
1. If storage is already available, you can skip the steps below; just substitute your existing StorageClass name for `storageClass` above.
List the StorageClasses and use the corresponding NAME (StorageClasses are cluster-scoped, so the -n flag is not actually required):
kubectl get sc
[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner   Delete          Immediate           true                   16d
2. If no storage exists, run the steps below, paying attention to the storage type and address.
Configure storage (the storageClass value is the shared StorageClass name shown by kubectl get sc; if none exists, install one as follows).
1. If the Kubernetes cluster version is 1.19+:
# replace x.x.x.x with the storage address, e.g. the NFS server IP 192.168.8.158
helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner
(Under Helm 3, a release name is required, e.g. helm install nfs-client-provisioner --set ... stable/nfs-client-provisioner.)
If the following error appears:
Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
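That error usually means the legacy `stable` repository is missing or stale. A possible fix (the archived stable charts are hosted at charts.helm.sh):

```shell
# add the archived "stable" chart repository, then refresh the local index
helm repo add stable https://charts.helm.sh/stable
helm repo update
```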
2. For Kubernetes versions below 1.19, apply the YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml
Note: check the NFS storage address in nfs-client.yaml!
nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
nfs-client.yaml
Replace the value of the NFS_SERVER env var (spec.containers.env) with your actual address; 192.168.8.158 below is only an example.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
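Once the provisioner is running, a throwaway PVC is a quick way to confirm dynamic provisioning works; the PVC name `test-claim` below is purely illustrative (delete it after the check):

```yaml
# test-claim.yaml — disposable PVC bound to the course-nfs-storage class
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: course-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

Apply it with `kubectl apply -f test-claim.yaml`; `kubectl get pvc test-claim` should shortly show STATUS Bound if the provisioner and NFS address are correct.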
Installing standalone ZooKeeper
Note: adjust the storage address in zookeeper.yaml to your environment (three PVs need to be edited).
kubectl apply -f zookeeper.yaml -n xxxxx
zookeeper.yaml
## Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      targetPort: 2181
      name: zookeeper-2181
      nodePort: 30000
    - port: 2888
      protocol: TCP
      targetPort: 2888
      name: zookeeper-2888
    - port: 3888
      protocol: TCP
      targetPort: 3888
      name: zookeeper-3888
  selector:
    name: zookeeper
---
## PersistentVolume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
  labels:
    pv: zookeeper-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  # NOTE: adjust the NFS server address below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PersistentVolumeClaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-data-pv
---
## PersistentVolume
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv
  labels:
    pv: zookeeper-datalog-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  # NOTE: adjust the NFS server address below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PersistentVolumeClaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-datalog-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-datalog-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-logs-pv
  labels:
    pv: zookeeper-logs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  # NOTE: adjust the NFS server address below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PersistentVolumeClaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.4.13
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs
              name: zookeeper-logs
            - mountPath: /data
              name: zookeeper-data
            - mountPath: /datalog
              name: zookeeper-datalog
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
      volumes:
        - name: zookeeper-logs
          persistentVolumeClaim:
            claimName: zookeeper-logs-pvc
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-datalog
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc
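After applying zookeeper.yaml, a quick sanity check might look like this (`xxxxx` follows the namespace placeholder above; `ruok` is ZooKeeper's standard four-letter-word health probe, answered on the 2181 NodePort):

```shell
kubectl get pods -n xxxxx -l name=zookeeper   # pod should reach Running
kubectl get pvc -n xxxxx                      # all three PVCs should be Bound
# probe the NodePort from any node; a healthy server replies "imok"
echo ruok | nc <node-ip> 30000
```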
Installing Nimbus
Step 1: create the Nimbus ConfigMap
Note: `zookeeper` in nimbus-cm.yaml is the name of the ZooKeeper Service.
kubectl apply -f nimbus-cm.yaml -n xxxxxx
nimbus-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"
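A quick way to confirm the ConfigMap applied and the embedded storm.yaml survived intact (namespace placeholder as above):

```shell
# the data."storm.yaml" block should list the zookeeper Service and nimbus seed
kubectl get configmap nimbus-cm -n xxxxxx -o yaml
```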
Step 2: deploy Nimbus
kubectl apply -f nimbus.yaml -n xxxxxx
nimbus.yaml
Note: when creating the PVs, adjust the storage address to your environment.
## Service
apiVersion: v1
kind: Service
metadata:
  name: nimbus
  labels:
    name: nimbus
spec:
  ports:
    - port: 6627
      protocol: TCP
      targetPort: 6627
      name: nimbus-6627
  selector:
    name: storm-nimbus
---
## PersistentVolumes — adjust the NFS storage address to your environment
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-data-pv
  labels:
    pv: storm-nimbus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PersistentVolumeClaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-logs-pv
  labels:
    pv: storm-nimbus-logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PersistentVolumeClaim
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-nimbus
  labels:
    name: storm-nimbus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: storm-nimbus
  template:
    metadata:
      labels:
        name: storm-nimbus
    spec:
      hostname: nimbus
      imagePullSecrets:
        - name: e6-aliyun-image
      containers:
        - name: storm-nimbus
          image: storm:1.2.2
          imagePullPolicy: Always
          command:
            - storm
            - nimbus
          #args:
          #  - nimbus
          volumeMounts:
            - mountPath: /conf/
              name: configmap-volume
            - mountPath: /logs
              name: storm-nimbus-logs
            - mountPath: /data
              name: storm-nimbus-data
          ports:
            - containerPort: 6627
      volumes:
        - name: storm-nimbus-logs
          persistentVolumeClaim:
            claimName: storm-nimbus-logs-pvc
        - name: storm-nimbus-data
          persistentVolumeClaim:
            claimName: storm-nimbus-data-pvc
        - name: configmap-volume
          configMap:
            name: nimbus-cm
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
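A possible check that Nimbus came up and connected to ZooKeeper (namespace placeholder as above):

```shell
kubectl get pods -n xxxxxx -l name=storm-nimbus      # should be Running
kubectl logs -n xxxxxx deploy/storm-nimbus | tail    # look for Nimbus startup messages
```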
Installing nimbus-ui
Step 1: create the Deployment
kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx
Step 2: expose the Service
kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
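kubectl expose with --type=NodePort assigns a random node port; one way to look it up (namespace placeholder as above):

```shell
kubectl get svc stormui -n xxxxxxx -o jsonpath='{.spec.ports[0].nodePort}'
```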
Step 3: create the ConfigMap
Installing zk-ui
Installation
kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx
Configuration file
zookeeper-program-ui.yaml
## Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
      name: zookeeper-ui-9090
      nodePort: 30012
  selector:
    name: zookeeper-ui
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper-ui
  template:
    metadata:
      labels:
        name: zookeeper-ui
    spec:
      containers:
        - name: zookeeper-ui
          image: maauso/zkui
          imagePullPolicy: Always
          env:
            - name: ZKLIST
              value: 192.168.8.158:30000
          ports:
            - containerPort: 9090
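Once the pod is up, the UI should answer on any node at NodePort 30012 (192.168.8.158 below is the example node address used throughout this guide):

```shell
# expect an HTTP status code such as 200 once zkui is serving
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.8.158:30012/
```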