Installing ZooKeeper (cluster version)

Step 1: Add the Helm chart repository

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
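The Google Storage chart buckets were deprecated in 2020, so this URL may no longer serve an index. If the command above fails, the archived incubator repository should work instead (a fallback suggestion, not part of the original steps):

helm repo add incubator https://charts.helm.sh/incubator
helm repo update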

Step 2: Download the ZooKeeper chart

helm fetch incubator/zookeeper
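helm fetch saves the chart as a packaged archive (zookeeper-<version>.tgz) in the current directory; to edit values.yaml in the next step, unpack it first. A minimal sketch, assuming the default fetch location:

tar -zxvf zookeeper-*.tgz
cd zookeeper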

Step 3: Modify the persistence section of the chart's values.yaml

...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...

Notes:

1. If you already have storage provisioned, you can skip the steps below; just substitute your existing StorageClass.

List the StorageClasses and substitute the corresponding NAME (StorageClasses are cluster-scoped, so the -n flag is accepted but has no effect):

kubectl get sc -n <namespace>

[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner   Delete          Immediate           true                   16d
2. If you do not have storage yet, pay attention to the storage type and address when running the steps below.

Configure the storage (the StorageClass name is the shared storage volume listed by kubectl get sc -n <namespace>; if none exists, install one by following the steps below).

1. If the cluster version is 1.19+:

# Replace x.x.x.x with the storage address; for an NFS share, use its IP, e.g. 192.168.8.158
helm install nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner

If the following error occurs:

Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
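The stable chart repository was deprecated along with incubator, so `helm repo update` alone may not be enough; a plausible fix (an addition, not in the original steps) is to re-add stable from its archive location and update again:

helm repo add stable https://charts.helm.sh/stable
helm repo update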
2. If the cluster version is below 1.19, apply the YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml

Note the storage address in nfs-client.yaml!

nfs-client-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

nfs-client-class.yaml

Note: provisioner here must match the PROVISIONER_NAME env var in nfs-client.yaml below, and the StorageClass name defined here (course-nfs-storage) is what the chart's storageClass field should reference if you provision storage this way.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs

nfs-client.yaml

Change the value of the NFS_SERVER entry under spec.containers.env to your actual address, and likewise the nfs volume at the bottom of the file; 192.168.8.158 below is an example address.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
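The steps above stop short of actually installing the chart. Assuming the provisioner files were applied and the chart was unpacked to ./zookeeper as sketched in step 2, a plausible finish looks like this (release name and namespace are placeholders):

# Confirm the provisioner pod and StorageClass exist
kubectl get pods -l app=nfs-client-provisioner
kubectl get sc
# Install the chart with the edited values.yaml (Helm 3 syntax)
helm install zookeeper ./zookeeper -n xxxxxx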

Installing ZooKeeper (standalone version)

Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to modify).

kubectl apply -f zookeeper.yaml -n xxxxx

zookeeper.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      targetPort: 2181
      name: zookeeper-2181
      nodePort: 30000
    - port: 2888
      protocol: TCP
      targetPort: 2888
      name: zookeeper-2888
    - port: 3888
      protocol: TCP
      targetPort: 3888
      name: zookeeper-3888
  selector:
    name: zookeeper
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
  labels:
    pv: zookeeper-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ############ NOTE: change the PV's NFS storage address to your environment ############
  nfs:  # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-data-pv
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv
  labels:
    pv: zookeeper-datalog-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ############ NOTE: change the PV's NFS storage address to your environment ############
  nfs:  # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-datalog-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-datalog-pv
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-logs-pv
  labels:
    pv: zookeeper-logs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ############ NOTE: change the PV's NFS storage address to your environment ############
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-logs-pv
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.4.13
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs
              name: zookeeper-logs
            - mountPath: /data
              name: zookeeper-data
            - mountPath: /datalog
              name: zookeeper-datalog
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
      volumes:
        - name: zookeeper-logs
          persistentVolumeClaim:
            claimName: zookeeper-logs-pvc
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-datalog
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc
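A quick sanity check after applying the manifest (namespace placeholder as above; <node-ip> stands for any node's address, since a NodePort is exposed on every node):

kubectl get pods,svc -n xxxxx -l name=zookeeper
# ZooKeeper's four-letter "ruok" command should answer "imok" through the NodePort
echo ruok | nc <node-ip> 30000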

Installing nimbus

Step 1: Install the nimbus configuration ConfigMap

Note: zookeeper in nimbus-cm.yaml is the name of the ZooKeeper Service.

kubectl apply -f nimbus-cm.yaml -n xxxxxx

nimbus-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"
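Because storm.zookeeper.servers refers to the Service by its short name, nimbus must run in the same namespace as the zookeeper Service (or the entry must be changed to a fully qualified name such as zookeeper.<namespace>.svc). A throwaway busybox pod can confirm the name resolves; the image tag and namespace are placeholders:

kubectl run dnstest -it --rm --restart=Never --image=busybox:1.28 -n xxxxxx -- nslookup zookeeper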
Step 2: Install the Deployment

kubectl apply -f nimbus.yaml -n xxxxxx

nimbus.yaml

Note: when creating the PVs, adjust the storage addresses to your environment.

## Create the Service
apiVersion: v1
kind: Service
metadata:
  name: nimbus
  labels:
    name: nimbus
spec:
  ports:
    - port: 6627
      protocol: TCP
      targetPort: 6627
      name: nimbus-6627
  selector:
    name: storm-nimbus
---
## Create the PVs; NOTE: change the NFS storage addresses to your environment
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-data-pv
  labels:
    pv: storm-nimbus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-logs-pv
  labels:
    pv: storm-nimbus-logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-logs-pv
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-nimbus
  labels:
    name: storm-nimbus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: storm-nimbus
  template:
    metadata:
      labels:
        name: storm-nimbus
    spec:
      hostname: nimbus
      imagePullSecrets:
        - name: e6-aliyun-image
      containers:
        - name: storm-nimbus
          image: storm:1.2.2
          imagePullPolicy: Always
          command:
            - storm
            - nimbus
          #args:
          #- nimbus
          volumeMounts:
            - mountPath: /conf/
              name: configmap-volume
            - mountPath: /logs
              name: storm-nimbus-logs
            - mountPath: /data
              name: storm-nimbus-data
          ports:
            - containerPort: 6627
      volumes:
        - name: storm-nimbus-logs
          persistentVolumeClaim:
            claimName: storm-nimbus-logs-pvc
        - name: storm-nimbus-data
          persistentVolumeClaim:
            claimName: storm-nimbus-data-pvc
        - name: configmap-volume
          configMap:
            name: nimbus-cm
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
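To verify that nimbus started and connected to ZooKeeper, the Deployment's logs are the quickest check (namespace placeholder as above):

kubectl get pods -n xxxxxx -l name=storm-nimbus
kubectl logs deployment/storm-nimbus -n xxxxxx | tail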

Installing nimbus-ui

Step 1: Install the Deployment

kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx

Step 2: Install the Service

kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
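With --type=NodePort and no explicit nodePort, Kubernetes picks a port from the 30000-32767 range; to find the assigned port so the UI can be opened at http://<node-ip>:<nodePort>:

kubectl get svc stormui -n xxxxxxx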
Step 3: Create the ConfigMap

Installing zk-ui

Installation:

kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx

Configuration file:

zookeeper-program-ui.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
      name: zookeeper-ui-9090
      nodePort: 30012
  selector:
    name: zookeeper-ui
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper-ui
  template:
    metadata:
      labels:
        name: zookeeper-ui
    spec:
      containers:
        - name: zookeeper-ui
          image: maauso/zkui
          imagePullPolicy: Always
          env:
            - name: ZKLIST
              value: 192.168.8.158:30000
          ports:
            - containerPort: 9090
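ZKLIST points zkui at the ZooKeeper NodePort defined earlier (node IP 192.168.8.158, NodePort 30000); from inside the cluster the Service DNS name, e.g. zookeeper:2181, should work as well. Once the pod is running, the UI is reachable at the Service's fixed NodePort:

kubectl get svc zookeeper-ui -n xxxxxxx
# then open http://<node-ip>:30012 (zkui's stock config ships with admin/manager as the default login)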
