k8s ZooKeeper installation (cluster and standalone)
Cluster ZooKeeper installation
Step 1: Add the Helm chart repository
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
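Note: the GCS-hosted incubator repository above has since been deprecated. If the URL no longer resolves, a likely working alternative (assumption: the archived incubator charts are still served from charts.helm.sh) is:
helm repo add incubator https://charts.helm.sh/incubator
helm repo update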
Step 2: Download the ZooKeeper chart
helm fetch incubator/zookeeper
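helm fetch only downloads the packaged chart (a .tgz) into the current directory. A small sketch of unpacking it so the values can be edited in step 3 (assumption: the downloaded file is named zookeeper-x.y.z.tgz):
tar -zxvf zookeeper-*.tgz        # unpack the chart
vi zookeeper/values.yaml         # edit the persistence section shown below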
Step 3: Edit the persistence settings in values.yaml
...
persistence:
  enabled: true
  ## zookeeper data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "nfs-client"
  accessMode: ReadWriteOnce
  size: 5Gi
...
Notes:
1. If a StorageClass already exists, you can skip the steps below; just set storageClass to the existing one.
List the StorageClasses (they are cluster-scoped, so no namespace flag is required) and use the corresponding NAME:
kubectl get sc
[root@k8s-master zookeeper]# kubectl get sc -n xxxxxx
NAME         PROVISIONER                                           RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/moldy-seagull-nfs-client-provisioner    Delete          Immediate           true                   16d
2. If no storage exists yet, pay attention to the storage type and address when performing the steps below.
Provision the shared storage (the storageClass value in values.yaml must match a NAME listed by kubectl get sc; note that nfs-client-class.yaml below creates a StorageClass named course-nfs-storage, so adjust the value accordingly if you install it that way).
1. If the Kubernetes cluster version is 1.19+:
# x.x.x.x is the storage address, e.g. for NFS shared storage use the server IP, such as 192.168.8.158
helm install --set nfs.server=x.x.x.x --set nfs.path=/exported/path stable/nfs-client-provisioner
If the following error appears:
Error: failed to download "stable/nfs-client-provisioner" (hint: running `helm repo update` may help)
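The stable repository has also been deprecated, so helm repo update alone may not resolve this. Two possible workarounds, as a hedged sketch (assumptions: the archived stable repo at charts.helm.sh is reachable, and the successor chart nfs-subdir-external-provisioner is acceptable; note that Helm 3 also requires a release name, here nfs-client):
# Option 1: use the archived stable repo
helm repo add stable https://charts.helm.sh/stable
helm repo update
helm install nfs-client stable/nfs-client-provisioner --set nfs.server=x.x.x.x --set nfs.path=/exported/path

# Option 2: use the maintained successor chart
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-client nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=x.x.x.x --set nfs.path=/exported/path \
  --set storageClass.name=nfs-client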
2. If the Kubernetes version is below 1.19, apply the following YAML files:
$ kubectl create -f nfs-client-sa.yaml
$ kubectl create -f nfs-client-class.yaml
$ kubectl create -f nfs-client.yaml
Note the storage address in nfs-client.yaml!
nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs
nfs-client.yaml
Change the value of the NFS_SERVER environment variable (spec.containers.env) to your actual storage address; 192.168.8.158 below is only an example.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.8.158
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.8.158
            path: /data/k8s
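With a StorageClass in place that matches the storageClass value in values.yaml, the chart itself can be installed. A minimal sketch, assuming Helm 3, the unpacked chart directory ./zookeeper from step 2, and the target namespace xxxxxx:
helm install zookeeper ./zookeeper -n xxxxxx
# Helm 2 syntax: helm install --name zookeeper ./zookeeper --namespace xxxxxx
kubectl get pods -n xxxxxx | grep zookeeper      # verify the ZooKeeper pods start and bind their PVCs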
Standalone (non-cluster) ZooKeeper installation
Note: adjust the storage addresses in zookeeper.yaml to your environment (there are three PVs to modify).
kubectl apply -f zookeeper.yaml -n xxxxx
zookeeper.yaml
## Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      protocol: TCP
      targetPort: 2181
      name: zookeeper-2181
      nodePort: 30000
    - port: 2888
      protocol: TCP
      targetPort: 2888
      name: zookeeper-2888
    - port: 3888
      protocol: TCP
      targetPort: 3888
      name: zookeeper-3888
  selector:
    name: zookeeper
---
## PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv
  labels:
    pv: zookeeper-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ######## NOTE: change this PV's NFS server address to match your environment ########
  nfs:             # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-data-pv
---
## PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datalog-pv
  labels:
    pv: zookeeper-datalog-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ######## NOTE: change this PV's NFS server address to match your environment ########
  nfs:             # NFS settings
    server: 192.168.8.158
    path: /data/k8s
## PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-datalog-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-datalog-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-logs-pv
  labels:
    pv: zookeeper-logs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  ######## NOTE: change this PV's NFS server address to match your environment ########
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: zookeeper-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  labels:
    name: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper
  template:
    metadata:
      labels:
        name: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.4.13
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs
              name: zookeeper-logs
            - mountPath: /data
              name: zookeeper-data
            - mountPath: /datalog
              name: zookeeper-datalog
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
      volumes:
        - name: zookeeper-logs
          persistentVolumeClaim:
            claimName: zookeeper-logs-pvc
        - name: zookeeper-data
          persistentVolumeClaim:
            claimName: zookeeper-data-pvc
        - name: zookeeper-datalog
          persistentVolumeClaim:
            claimName: zookeeper-datalog-pvc
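A quick sanity check after applying zookeeper.yaml, as a sketch (assumptions: namespace xxxxx, and the official zookeeper image keeps its bin directory on PATH):
kubectl get pods,svc,pvc -n xxxxx | grep zookeeper
kubectl exec -n xxxxx deploy/zookeeper -- zkServer.sh status     # should report Mode: standalone
# or test through the NodePort from outside the cluster:
# echo ruok | nc <node-ip> 30000        # a healthy server normally answers "imok"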
Install nimbus
Step 1: Create the nimbus configuration ConfigMap
Note: zookeeper in nimbus-cm.yaml must be the name of the ZooKeeper Service.
kubectl apply -f nimbus-cm.yaml -n xxxxxx
nimbus-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nimbus-cm
data:
  storm.yaml: |
    # DataSource
    storm.zookeeper.servers: [zookeeper]
    nimbus.seeds: [nimbus]
    storm.log.dir: "/logs"
    storm.local.dir: "/data"
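Two quick checks before moving on, as a sketch (assumption: namespace xxxxxx): the ConfigMap must exist, and storm.zookeeper.servers must match the name of the ZooKeeper Service created earlier.
kubectl get configmap nimbus-cm -n xxxxxx -o yaml     # confirm storm.yaml was stored as expected
kubectl get svc zookeeper -n xxxxxx                   # "zookeeper" must match the Service name used above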
Step 2: Install the Deployment
kubectl apply -f nimbus.yaml -n xxxxxx
nimbus.yaml
Note: when creating the PVs, adjust the storage addresses to your environment.
## Service
apiVersion: v1
kind: Service
metadata:
  name: nimbus
  labels:
    name: nimbus
spec:
  ports:
    - port: 6627
      protocol: TCP
      targetPort: 6627
      name: nimbus-6627
  selector:
    name: storm-nimbus
---
## PV (NOTE: change the NFS server address to match your environment)
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-data-pv
  labels:
    pv: storm-nimbus-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-data-pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storm-nimbus-logs-pv
  labels:
    pv: storm-nimbus-logs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.8.158
    path: /data/k8s
## PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storm-nimbus-logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: storm-nimbus-logs-pv
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-nimbus
  labels:
    name: storm-nimbus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: storm-nimbus
  template:
    metadata:
      labels:
        name: storm-nimbus
    spec:
      hostname: nimbus
      imagePullSecrets:
        - name: e6-aliyun-image
      containers:
        - name: storm-nimbus
          image: storm:1.2.2
          imagePullPolicy: Always
          command:
            - storm
            - nimbus
          #args:
          #- nimbus
          volumeMounts:
            - mountPath: /conf/
              name: configmap-volume
            - mountPath: /logs
              name: storm-nimbus-logs
            - mountPath: /data
              name: storm-nimbus-data
          ports:
            - containerPort: 6627
      volumes:
        - name: storm-nimbus-logs
          persistentVolumeClaim:
            claimName: storm-nimbus-logs-pvc
        - name: storm-nimbus-data
          persistentVolumeClaim:
            claimName: storm-nimbus-data-pvc
        - name: configmap-volume
          configMap:
            name: nimbus-cm
      # hostNetwork: true
      # dnsPolicy: ClusterFirstWithHostNet
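After applying nimbus.yaml, a hedged way to confirm that nimbus started and reached ZooKeeper (assumption: namespace xxxxxx):
kubectl get pods -n xxxxxx | grep storm-nimbus
kubectl logs -n xxxxxx deploy/storm-nimbus --tail=50   # look for a successful connection to zookeeper:2181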
Install nimbus-ui
Step 1: Create the Deployment
kubectl create deployment stormui --image=adejonge/storm-ui -n xxxxxx
Step 2: Create the Service
kubectl expose deployment stormui --port=8080 --type=NodePort -n xxxxxxx
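kubectl expose assigns a random NodePort in this form; a quick way to look it up (assumption: same namespace as above):
kubectl get svc stormui -n xxxxxxx     # note the NodePort mapped to 8080, then browse to http://<node-ip>:<nodePort>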
Step 3: Create the ConfigMap
Install zk-ui
Installation:
kubectl apply -f zookeeper-program-ui.yaml -n xxxxxxx
Configuration file:
zookeeper-program-ui.yaml
## Service
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  type: NodePort
  ports:
    - port: 9090
      protocol: TCP
      targetPort: 9090
      name: zookeeper-ui-9090
      nodePort: 30012
  selector:
    name: zookeeper-ui
---
## Deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-ui
  labels:
    name: zookeeper-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      name: zookeeper-ui
  template:
    metadata:
      labels:
        name: zookeeper-ui
    spec:
      containers:
        - name: zookeeper-ui
          image: maauso/zkui
          imagePullPolicy: Always
          env:
            - name: ZKLIST
              value: 192.168.8.158:30000
          ports:
            - containerPort: 9090
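Once the Deployment is up, the UI is reachable on any node at NodePort 30012; the ZKLIST value above points zkui at the ZooKeeper NodePort (30000) through a node IP. A short check (assumptions: namespace xxxxxxx; zkui's default login is admin/manager unless overridden in the image):
kubectl get pods,svc -n xxxxxxx | grep zookeeper-ui
# then open http://<node-ip>:30012 in a browser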