Installing ZooKeeper (cluster mode)

Step 1: add the Helm chart repository

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator

Step 2: download the ZooKeeper chart

helm fetch incubator/zookeeper
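helm fetch downloads the chart as a .tgz tarball into the current directory. To edit its values in the next step, extract it first (a minimal sketch; the exact file name includes the chart version):

tar -zxvf zookeeper-*.tgz
cd zookeeper
# the persistence settings shown in step 3 live in values.yaml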

Step 3: edit values.yaml in the extracted chart

...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...
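Once values.yaml is modified and the storage class exists (see the notes below), the chart can be installed from the extracted directory. A sketch assuming Helm 3 and a release name of zookeeper; adjust the namespace to yours:

helm install zookeeper ./zookeeper -n xxxxxx
# Helm 2 syntax instead: helm install --name zookeeper ./zookeeper --namespace xxxxxx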

Notes:

1. If you already have storage provisioned, you can skip the steps below and simply substitute your existing storageClass.

List the StorageClasses and use the corresponding NAME:

kubectl get sc -n <namespace>

[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner    Delete          Immediate           true                   16d
2. If you have no storage yet, pay attention to the storage type and address when running the steps below.

Set up storage (the storageClass name is the shared storage volume listed by kubectl get sc -n <namespace>; if none exists, install one by following the steps below).

1. If the cluster version is 1.19+:
# Replace x.x.x.x with the storage address, e.g. for NFS shared storage the server IP: 192.168.8.158
helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner

If you see this error:

Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
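The stable chart repository was archived in late 2020, so the download can fail even after helm repo update. Pointing helm at the archived repo URL usually works as a stopgap (the chart itself is deprecated in favor of nfs-subdir-external-provisioner); also note that Helm 3 additionally requires a release name or --generate-name on the install command above:

helm repo add stable https://charts.helm.sh/stable
helm repo update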
2. If the version is below 1.19, apply the following YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml

Note the storage address in nfs-client.yaml!!!

nfs-client-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs

nfs-client.yaml

The value of the NFS_SERVER entry under spec.containers.env must be replaced to match your environment; 192.168.8.158 below is an example address.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
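After applying the three files, it is worth confirming that the provisioner pod is running and the StorageClass was created before installing the chart. A quick check (namespace assumed to be the one you deployed into); note that the StorageClass created here is named course-nfs-storage, so reference that name in values.yaml, or nfs-client if you installed via the Helm chart:

kubectl get pods -n xxxxxx | grep nfs-client-provisioner
kubectl get sc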

Installing ZooKeeper (standalone, non-cluster mode)

Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to modify).

kubectl apply -f zookeeper.yaml -n xxxxx

zookeeper.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      targetPort: 2181
      name: zookeeper-2181
      nodePort: 30000
    - port: 2888
      protocol: TCP
      targetPort: 2888
      name: zookeeper-2888
    - port: 3888
      protocol: TCP
      targetPort: 3888
      name: zookeeper-3888
  selector:
    name: zookeeper
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
  labels:
    pv: zookeeper-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## NOTE: adjust the NFS server and path below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-data-pv
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv
  labels:
    pv: zookeeper-datalog-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## NOTE: adjust the NFS server and path below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-datalog-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-datalog-pv
## Create the PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-logs-pv
  labels:
    pv: zookeeper-logs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ## NOTE: adjust the NFS server and path below to your environment
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-logs-pv
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.4.13
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs
              name: zookeeper-logs
            - mountPath: /data
              name: zookeeper-data
            - mountPath: /datalog
              name: zookeeper-datalog
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
      volumes:
        - name: zookeeper-logs
          persistentVolumeClaim:
            claimName: zookeeper-logs-pvc
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-datalog
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc
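With the manifest applied, a quick sanity check is to confirm the pod is Running and then hit the NodePort with ZooKeeper's srvr four-letter command (a sketch; replace <node-ip> with any cluster node address, and nc must be installed on the client machine):

kubectl get pods -n xxxxx | grep zookeeper
echo srvr | nc <node-ip> 30000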

Installing nimbus

Step 1: install the nimbus configuration ConfigMap

Note: `zookeeper` in nimbus-cm.yaml is the name of the ZooKeeper Service.

kubectl apply -f nimbus-cm.yaml -n xxxxxx

nimbus-cm.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"
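Since storm.zookeeper.servers references the Service by name, nimbus depends on cluster DNS resolving zookeeper within the same namespace. If in doubt, a throwaway pod can verify resolution (a sketch using the busybox image commonly used for DNS debugging):

kubectl run dns-test --rm -it --image=busybox:1.28 -n xxxxxx -- nslookup zookeeper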
Step 2: install the Deployment
kubectl apply -f nimbus.yaml -n xxxxxx

nimbus.yaml

Note: when creating the PVs, adjust the storage addresses to your environment.

## Create the Service
apiVersion: v1
kind: Service
metadata:
  name: nimbus
  labels:
    name: nimbus
spec:
  ports:
    - port: 6627
      protocol: TCP
      targetPort: 6627
      name: nimbus-6627
  selector:
    name: storm-nimbus
---
## Create the PVs; adjust the NFS storage address to your environment
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-data-pv
  labels:
    pv: storm-nimbus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-logs-pv
  labels:
    pv: storm-nimbus-logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## Create the PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-logs-pv
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-nimbus
  labels:
    name: storm-nimbus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: storm-nimbus
  template:
    metadata:
      labels:
        name: storm-nimbus
    spec:
      hostname: nimbus
      imagePullSecrets:
        - name: e6-aliyun-image
      containers:
        - name: storm-nimbus
          image: storm:1.2.2
          imagePullPolicy: Always
          command:
            - storm
            - nimbus
          #args:
          #- nimbus
          volumeMounts:
            - mountPath: /conf/
              name: configmap-volume
            - mountPath: /logs
              name: storm-nimbus-logs
            - mountPath: /data
              name: storm-nimbus-data
          ports:
            - containerPort: 6627
      volumes:
        - name: storm-nimbus-logs
          persistentVolumeClaim:
            claimName: storm-nimbus-logs-pvc
        - name: storm-nimbus-data
          persistentVolumeClaim:
            claimName: storm-nimbus-data-pvc
        - name: configmap-volume
          configMap:
            name: nimbus-cm
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
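After the apply, confirm the nimbus pod is up and that the nimbus Service found it (a quick check, namespace assumed):

kubectl get pods -l name=storm-nimbus -n xxxxxx
kubectl get endpoints nimbus -n xxxxxx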

Installing nimbus-ui

Step 1: install the Deployment

kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx

Step 2: expose the Service

kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
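With --type=NodePort, Kubernetes assigns a port from the 30000-32767 range; look it up to reach the UI (a sketch, namespace assumed):

kubectl get svc stormui -n xxxxxx
# the PORT(S) column reads 8080:<nodePort>/TCP; open http://<node-ip>:<nodePort> in a browser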
Step 3: create the ConfigMap

Installing zk-ui

To install:
kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx

Configuration file:

zookeeper-program-ui.yaml

## Create the Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
      name: zookeeper-ui-9090
      nodePort: 30012
  selector:
    name: zookeeper-ui
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper-ui
  template:
    metadata:
      labels:
        name: zookeeper-ui
    spec:
      containers:
        - name: zookeeper-ui
          image: maauso/zkui
          imagePullPolicy: Always
          env:
            - name: ZKLIST
              value: 192.168.8.158:30000
          ports:
            - containerPort: 9090
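ZKLIST points zkui at the ZooKeeper NodePort (30000) exposed earlier, so replace 192.168.8.158 with the address of a reachable cluster node. Once the pod is Running, the UI is served on NodePort 30012 (the zkui project documents admin/manager as its default login; treat that as an assumption if the image changes it):

kubectl get pods -n xxxxxxx | grep zookeeper-ui
# open http://<node-ip>:30012 in a browser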
