Kubernetes in Practice (9): Dynamic Storage Management with GlusterFS on a k8s Cluster, and Expanding the GlusterFS Cluster with Heketi
1. Preparation
Install the GlusterFS client on all nodes:
yum install glusterfs glusterfs-fuse -y
If the GlusterFS management service should run on only some nodes, label the nodes that will host it:
[root@k8s-master01 ~]# kubectl label node k8s-node01 storagenode=glusterfs
node/k8s-node01 labeled
[root@k8s-master01 ~]# kubectl label node k8s-node02 storagenode=glusterfs
node/k8s-node02 labeled
[root@k8s-master01 ~]# kubectl label node k8s-master01 storagenode=glusterfs
node/k8s-master01 labeled
Load the required kernel modules on all nodes:
[root@k8s-master01 ~]# modprobe dm_snapshot
[root@k8s-master01 ~]# modprobe dm_mirror
[root@k8s-master01 ~]# modprobe dm_thin_pool
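To make these modules load automatically after a reboot, they can be persisted — a minimal sketch, assuming a systemd-based distribution such as CentOS 7:

# modules listed in /etc/modules-load.d/ are loaded at boot by systemd-modules-load
cat > /etc/modules-load.d/glusterfs.conf <<EOF
dm_snapshot
dm_mirror
dm_thin_pool
EOF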
2. Create the containerized GlusterFS management service
This article deploys GlusterFS in containers; if your company already has a GlusterFS cluster, it can be used directly.
GlusterFS is deployed as a DaemonSet, which guarantees that every node that should run the GlusterFS management service runs exactly one pod.
Download the related files:
wget https://github.com/heketi/heketi/releases/download/v7.0.0/heketi-client-v7.0.0.linux.amd64.tar.gz
Create the cluster:
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl create -f glusterfs-daemonset.json
daemonset.extensions/glusterfs created
Note 1: The default mount layout is used here; another disk can be dedicated as the GlusterFS working directory.
Note 2: The resources are created in the default namespace; change it as needed.
Note 3: The gluster/gluster-centos:gluster3u12_centos7 image can be used.
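The DaemonSet targets the labeled nodes through a nodeSelector; the relevant fragment of glusterfs-daemonset.json looks roughly like this (abridged):

"spec": {
    "nodeSelector": {
        "storagenode": "glusterfs"
    },
    "hostNetwork": true,
    ...
}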
Check the pods:
[root@k8s-master01 kubernetes]# kubectl get pods -l glusterfs-node=daemonset
NAME              READY     STATUS    RESTARTS   AGE
glusterfs-5npwn   1/1       Running   0          1m
glusterfs-bd5dx   1/1       Running   0          1m
...
3. Create the Heketi service
Heketi is a framework that exposes a RESTful API for managing GlusterFS volumes. It provides dynamic storage provisioning on cloud platforms such as Kubernetes, OpenShift, and OpenStack, supports managing multiple GlusterFS clusters, and makes day-to-day GlusterFS administration easier.
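Once Heketi is up (deployed below), its REST API can be exercised directly; for example (the ClusterIP here is a placeholder — substitute your own):

curl http://<heketi-cluster-ip>:8080/hello      # liveness check, returns "Hello from Heketi"
curl http://<heketi-cluster-ip>:8080/clusters   # lists cluster IDs as JSON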
Create the ServiceAccount object for Heketi:
[root@k8s-master01 kubernetes]# cat heketi-service-account.json
{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": {
    "name": "heketi-service-account"
  }
}
[root@k8s-master01 kubernetes]# kubectl create -f heketi-service-account.json
serviceaccount/heketi-service-account created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
[root@k8s-master01 kubernetes]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         13d
heketi-service-account   1         <invalid>
Create the permissions and secret for Heketi:
[root@k8s-master01 kubernetes]# kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account
clusterrolebinding.rbac.authorization.k8s.io/heketi-gluster-admin created
[root@k8s-master01 kubernetes]# kubectl create secret generic heketi-config-secret --from-file=./heketi.json
secret/heketi-config-secret created
Bootstrap the Heketi deployment:
[root@k8s-master01 kubernetes]# kubectl create -f heketi-bootstrap.json
secret/heketi-db-backup created
service/heketi created
deployment.extensions/heketi created
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
4. Set up the GlusterFS cluster
[root@k8s-master01 heketi-client]# cp bin/heketi-cli /usr/local/bin/
[root@k8s-master01 heketi-client]# pwd
/root/heketi-client
[root@k8s-master01 heketi-client]# heketi-cli -v
heketi-cli v7.0.0
Edit topology-sample.json: manage is the hostname of each node running the GlusterFS management service, storage is that node's IP, and devices lists raw block devices on the node.
[root@k8s-master01 kubernetes]# cat topology-sample.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-master01"
              ],
              "storage": [
                "192.168.20.20"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdc",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node01"
              ],
              "storage": [
                "192.168.20.30"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "k8s-node02"
              ],
              "storage": [
                "192.168.20.31"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": false
            }
          ]
        }
      ]
    }
  ]
}
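Heketi expects the devices listed above to be bare block devices with no partitions, filesystem, or LVM signatures. A quick pre-check on each node (wipefs prints nothing when the device is clean):

lsblk /dev/sdb
wipefs /dev/sdb
# only if old signatures exist and the data is disposable:
# wipefs -a /dev/sdb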
Check the ClusterIP of the bootstrap Heketi service and point heketi-cli at it:
[root@k8s-master01 kubernetes]# kubectl get svc | grep heketi
deploy-heketi   ClusterIP   10.110.217.153   <none>        8080/TCP   26m
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.110.217.153:8080
Create the GlusterFS cluster:
[root@k8s-master01 kubernetes]# heketi-cli topology load --json=topology-sample.json
Creating cluster ... ID: a058723afae149618337299c84a1eaed
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-master01 ... ID: 929909065ceedb59c1b9c235fc3298ec
Adding device /dev/sdc ... OK
Creating node k8s-node01 ... ID: 37409d82b9ef27f73ccc847853eec429
Adding device /dev/sdb ... OK
Creating node k8s-node02 ... ID: e3ab676be27945749bba90efb34f2eb9
Adding device /dev/sdb ... OK
Create the persistent volume for Heketi's database:
yum install device-mapper* -y
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json
[root@k8s-master01 kubernetes]# ls
glusterfs-daemonset.json heketi.json heketi-storage.json
heketi-bootstrap.json heketi-service-account.json README.md
heketi-deployment.json heketi-start.sh topology-sample.json
[root@k8s-master01 kubernetes]# kubectl create -f heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
If you see the following error:
[root@k8s-master01 kubernetes]# heketi-cli setup-openshift-heketi-storage
Error: /usr/sbin/modprobe failed:
thin: Required device-mapper target(s) not detected in your kernel.
Run `lvcreate --help' for more information.
Fix: run modprobe dm_thin_pool on all nodes.
Remove the bootstrap artifacts:
[root@k8s-master01 kubernetes]# kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi"
pod "deploy-heketi-59f8dbc97f-5rf6s" deleted
service "deploy-heketi" deleted
service "heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-59f8dbc97f" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
Deploy the persistent Heketi (other persistence approaches can also be used):
[root@k8s-master01 kubernetes]# kubectl create -f heketi-deployment.json
service/heketi created
deployment.extensions/heketi created
Once the pod is up, the deployment is complete:
[root@k8s-master01 kubernetes]# kubectl get po
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-5npwn           1/1       Running   0          3h
glusterfs-8zfzq           1/1       Running   0          3h
glusterfs-bd5dx           1/1       Running   0          3h
heketi-5cb5f55d9f-5mtqt   1/1       Running   0          2m
Check the Service of the newly deployed persistent Heketi and update HEKETI_CLI_SERVER accordingly:
[root@k8s-master01 kubernetes]# kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
heketi                     ClusterIP   10.111.95.240   <none>        8080/TCP   12h
heketi-storage-endpoints   ClusterIP   10.99.28.153    <none>        1/TCP      12h
kubernetes                 ClusterIP   10.96.0.1       <none>        443/TCP    14d
[root@k8s-master01 kubernetes]# export HEKETI_CLI_SERVER=http://10.111.95.240:8080
[root@k8s-master01 kubernetes]# curl http://10.111.95.240:8080/hello
Hello from Heketi
View the GlusterFS topology:
[root@k8s-master01 kubernetes]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

        Name: heketidbstorage
        Size: 2
        Id: 828dc2dfaa00b7213e831b91c6213ae4
        Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
        Mount: 192.168.20.31:heketidbstorage
        Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
        Durability Type: replicate
        Replica: 3
        Snapshot: Disabled

                Bricks:
                        Id: 16b7270d7db1b3cfe9656b64c2a3916c
                        Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
                        Size (GiB): 2
                        Node: fb181b0cef571e9af7d84d2ecf534585
                        Device: 04290ec786dc7752a469b66f5e94458f

                        Id: 828da093d9d78a2b1c382b13cc4da4a1
                        Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick
                        Size (GiB): 2
                        Node: d38819746cab7d567ba5f5f4fea45d91
                        Device: 80b61df999fcac26ebca6e28c4da8e61

                        Id: e8ef0e68ccc3a0416f73bc111cffee61
                        Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick
                        Size (GiB): 2
                        Node: 0f00835397868d3591f45432e432ba38
                        Device: 82af8e5f2fb2e1396f7c9e9f7698a178

    Nodes:

        Node Id: 0f00835397868d3591f45432e432ba38
        State: online
        Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
        Zone: 1
        Management Hostnames: k8s-node02
        Storage Hostnames: 192.168.20.31
        Devices:
                Id:82af8e5f2fb2e1396f7c9e9f7698a178   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                        Bricks:
                                Id:e8ef0e68ccc3a0416f73bc111cffee61   Size (GiB): 2   Path: /var/lib/heketi/mounts/vg_82af8e5f2fb2e1396f7c9e9f7698a178/brick_e8ef0e68ccc3a0416f73bc111cffee61/brick

        Node Id: d38819746cab7d567ba5f5f4fea45d91
        State: online
        Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
        Zone: 1
        Management Hostnames: k8s-node01
        Storage Hostnames: 192.168.20.30
        Devices:
                Id:80b61df999fcac26ebca6e28c4da8e61   Name:/dev/sdb   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                        Bricks:
                                Id:828da093d9d78a2b1c382b13cc4da4a1   Size (GiB): 2   Path: /var/lib/heketi/mounts/vg_80b61df999fcac26ebca6e28c4da8e61/brick_828da093d9d78a2b1c382b13cc4da4a1/brick

        Node Id: fb181b0cef571e9af7d84d2ecf534585
        State: online
        Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
        Zone: 1
        Management Hostnames: k8s-master01
        Storage Hostnames: 192.168.20.20
        Devices:
                Id:04290ec786dc7752a469b66f5e94458f   Name:/dev/sdc   State:online   Size (GiB):   Used (GiB):   Free (GiB):
                        Bricks:
                                Id:16b7270d7db1b3cfe9656b64c2a3916c   Size (GiB): 2   Path: /var/lib/heketi/mounts/vg_04290ec786dc7752a469b66f5e94458f/brick_16b7270d7db1b3cfe9656b64c2a3916c/brick
5. Define a StorageClass
[root@k8s-master01 gfs]# cat storageclass-gfs-heketi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.111.95.240:8080"
  restauthenabled: "false"
[root@k8s-master01 gfs]# kubectl create -f storageclass-gfs-heketi.yaml
storageclass.storage.k8s.io/gluster-heketi created
The provisioner parameter must be set to "kubernetes.io/glusterfs".
resturl must be a Heketi address reachable from the host where the API server runs.
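Because the ClusterIP changes if the Service is ever recreated, resturl can instead point at the Service DNS name — provided the component that calls it (kube-controller-manager, which runs the glusterfs provisioner) can resolve cluster DNS. A sketch, assuming the heketi Service lives in the default namespace:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.default.svc.cluster.local:8080"
  restauthenabled: "false"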
6. Define a PVC and a test Pod
[root@k8s-master01 gfs]# kubectl create -f pod-use-pvc.yaml
pod/pod-use-pvc created
persistentvolumeclaim/pvc-gluster-heketi created
[root@k8s-master01 gfs]# cat pod-use-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-pvc
spec:
  containers:
  - name: pod-use-pvc
    image: busybox
    command:
    - sleep
    - "3600"
    volumeMounts:
    - name: gluster-volume
      mountPath: "/pv-data"
      readOnly: false
  volumes:
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 1Gi
As soon as the PVC is defined, Heketi is triggered to perform the corresponding operations: it creates bricks on the GlusterFS cluster, then creates and starts a volume.
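The provisioning can be watched as it happens; for example:

kubectl describe pvc pvc-gluster-heketi   # events show the volume being provisioned
kubectl get pv                            # a pvc-<uid> PersistentVolume appears once bound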
The resulting PV and PVC:
[root@k8s-master01 gfs]# kubectl get pv,pvc | grep gluster
persistentvolume/pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27   1Gi   RWO   Delete   Bound   default/pvc-gluster-heketi   gluster-heketi   5m
persistentvolumeclaim/pvc-gluster-heketi Bound pvc-4a8033e8-e7f7-11e8-9a09-000c293bfe27 1Gi RWO gluster-heketi 5m
7. Test the data
Exec into the pod and create files:
[root@k8s-master01 /]# kubectl exec -ti pod-use-pvc -- /bin/sh
/ # cd /pv-data/
/pv-data # mkdir {1..10}
/pv-data # ls
{1..10}
Mount test from the host
# View the volumes
[root@k8s-master01 /]# heketi-cli topology info

Cluster Id: 5dec5676c731498c2bdf996e110a3e5e

    File:  true
    Block: true

    Volumes:

        Name: vol_56d636b452d31a9d4cb523d752ad0891
        Size: 1
        Id: 56d636b452d31a9d4cb523d752ad0891
        Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
        Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891
        Mount Options: backup-volfile-servers=192.168.20.30,192.168.20.20
        Durability Type: replicate
        Replica: 3
        Snapshot: Enabled
...
...
# Or use volume list
[root@k8s-master01 mnt]# heketi-cli volume list
Id:56d636b452d31a9d4cb523d752ad0891 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:vol_56d636b452d31a9d4cb523d752ad0891
Id:828dc2dfaa00b7213e831b91c6213ae4 Cluster:5dec5676c731498c2bdf996e110a3e5e Name:heketidbstorage
[root@k8s-master01 mnt]#
vol_56d636b452d31a9d4cb523d752ad0891 is the volume name, and Mount: 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 shows how to mount it.
Mount the volume and check the data:
[root@k8s-master01 /]# mount -t glusterfs 192.168.20.31:vol_56d636b452d31a9d4cb523d752ad0891 /mnt/
[root@k8s-master01 /]# cd /mnt/
[root@k8s-master01 mnt]# ls
{1..10}
8. Test a Deployment
[root@k8s-master01 gfs]# cat nginx-gluster.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-gfs
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-gfs-html
          mountPath: "/usr/share/nginx/html"
        - name: nginx-gfs-conf
          mountPath: "/etc/nginx/conf.d"
      volumes:
      - name: nginx-gfs-html
        persistentVolumeClaim:
          claimName: glusterfs-nginx-html
      - name: nginx-gfs-conf
        persistentVolumeClaim:
          claimName: glusterfs-nginx-conf
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-html
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 500Mi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx-conf
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: "gluster-heketi"
  resources:
    requests:
      storage: 10Mi
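Create the Deployment and both PVCs (file name as in the cat above):

kubectl create -f nginx-gluster.yaml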
[root@k8s-master01 gfs]# kubectl get po,pvc,pv | grep nginx
pod/nginx-gfs-77c758ccc-2hwl6   1/1       Running             0         4m
pod/nginx-gfs-77c758ccc-kxzfz   0/1       ContainerCreating   0         3m
persistentvolumeclaim/glusterfs-nginx-conf   Bound   pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolumeclaim/glusterfs-nginx-html   Bound   pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   gluster-heketi   2m
persistentvolume/pvc-f40914f8-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-html   gluster-heketi   4m
persistentvolume/pvc-f40c5d4b-e800-11e8-8a89-000c293ad492   1Gi   RWX   Delete   Bound   default/glusterfs-nginx-conf   gluster-heketi   4m
Check the mounts inside the pod:
[root@k8s-master01 gfs]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- df -Th
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 86G .6G 80G % /
tmpfs tmpfs .8G .8G % /dev
tmpfs tmpfs .8G .8G % /sys/fs/cgroup
/dev/mapper/centos-root xfs 86G .6G 80G % /etc/hosts
shm tmpfs 64M 64M % /dev/shm
192.168.20.20:vol_b9c68075c6f20438b46db892d15ed45a fuse.glusterfs 1014M 43M 972M 5% /etc/nginx/conf.d
192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 fuse.glusterfs 1014M 43M 972M 5% /usr/share/nginx/html
tmpfs tmpfs .8G 12K .8G % /run/secrets/kubernetes.io/serviceaccount
Mount the html volume on the host and create index.html:
[root@k8s-master01 gfs]# mount -t glusterfs 192.168.20.20:vol_32146a51be9f980c14bc86c34f67ebd5 /mnt/
[root@k8s-master01 gfs]# cd /mnt/
[root@k8s-master01 mnt]# ls
[root@k8s-master01 mnt]# echo "test" > index.html
[root@k8s-master01 mnt]# kubectl exec -ti nginx-gfs-77c758ccc-2hwl6 -- cat /usr/share/nginx/html/index.html
test
Scale up nginx:
[root@k8s-master01 ~]# kubectl get deploy
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
heketi      1         1         1            1           14h
nginx-gfs   2         2         2            2           23m
[root@k8s-master01 ~]# kubectl scale deploy nginx-gfs --replicas=3
deployment.extensions/nginx-gfs scaled
[root@k8s-master01 ~]# kubectl get po
NAME                        READY     STATUS    RESTARTS   AGE
glusterfs-5npwn             1/1       Running   0          18h
glusterfs-8zfzq             1/1       Running   0          17h
glusterfs-bd5dx             1/1       Running   0          18h
heketi-5cb5f55d9f-5mtqt     1/1       Running   0          14h
nginx-gfs-77c758ccc-2hwl6   1/1       Running   0          11m
nginx-gfs-77c758ccc-6fphl   1/1       Running   0          8m
nginx-gfs-77c758ccc-kxzfz   1/1       Running   0          10m
Check the file content from the new pod:
[root@k8s-master01 ~]# kubectl exec -ti nginx-gfs-77c758ccc-6fphl -- cat /usr/share/nginx/html/index.html
test
9. Expanding GlusterFS
9.1 Add a disk to an existing node
Building on the setup above, suppose a new disk is added to k8s-node02.
Check the pod name and IP of the GlusterFS pod running on k8s-node02:
[root@k8s-master01 ~]# kubectl get po -o wide -l glusterfs-node
NAME              READY     STATUS    RESTARTS   AGE       IP              NODE
glusterfs-5npwn   1/1       Running   0          20h       192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running   0          20h       192.168.20.20   k8s-master01
glusterfs-bd5dx   1/1       Running   0          20h       192.168.20.30   k8s-node01
Confirm the newly added disk on k8s-node02:
Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Use heketi-cli to find the cluster ID and all node IDs:
[root@k8s-master01 ~]# heketi-cli cluster info
Error: Cluster id missing
[root@k8s-master01 ~]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 ~]# heketi-cli cluster info 5dec5676c731498c2bdf996e110a3e5e
Cluster id: 5dec5676c731498c2bdf996e110a3e5e
Nodes:
0f00835397868d3591f45432e432ba38
d38819746cab7d567ba5f5f4fea45d91
fb181b0cef571e9af7d84d2ecf534585
Volumes:
32146a51be9f980c14bc86c34f67ebd5
56d636b452d31a9d4cb523d752ad0891
828dc2dfaa00b7213e831b91c6213ae4
b9c68075c6f20438b46db892d15ed45a
Block: true File: true
Find the node ID of k8s-node02:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:82af8e5f2fb2e1396f7c9e9f7698a178 Name:/dev/sdb State:online Size (GiB): Used (GiB): Free (GiB): Bricks:
Add the disk to node02 in the GlusterFS cluster:
[root@k8s-master01 ~]# heketi-cli device add --name=/dev/sdc --node=0f00835397868d3591f45432e432ba38
Device added successfully
Check the result:
[root@k8s-master01 ~]# heketi-cli node info 0f00835397868d3591f45432e432ba38
Node Id: 0f00835397868d3591f45432e432ba38
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-node02
Storage Hostname: 192.168.20.31
Devices:
Id:5539e74bc2955e7c70b3a20e72c04615 Name:/dev/sdc State:online Size (GiB): Used (GiB): Free (GiB): Bricks:
Id:82af8e5f2fb2e1396f7c9e9f7698a178 Name:/dev/sdb State:online Size (GiB): Used (GiB): Free (GiB): Bricks:
9.2 Add a new node
Suppose k8s-master03 (IP 192.168.20.22) is to join the GlusterFS cluster, contributing its /dev/sdc device.
Label the node; a GlusterFS pod is then created on it automatically:
[root@k8s-master01 kubernetes]# kubectl label node k8s-master03 storagenode=glusterfs
node/k8s-master03 labeled
[root@k8s-master01 kubernetes]# kubectl get pod -owide -l glusterfs-node
NAME              READY     STATUS              RESTARTS   AGE       IP              NODE
glusterfs-5npwn   1/1       Running             0          21h       192.168.20.31   k8s-node02
glusterfs-8zfzq   1/1       Running             0          21h       192.168.20.20   k8s-master01
glusterfs-96w74   0/1       ContainerCreating   0          2m        192.168.20.22   k8s-master03
glusterfs-bd5dx   1/1       Running             0          21h       192.168.20.30   k8s-node01
Run a peer probe from any GlusterFS pod:
[root@k8s-master01 kubernetes]# kubectl exec -ti glusterfs-5npwn -- gluster peer probe 192.168.20.22
peer probe: success.
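To double-check the probe from any GlusterFS pod, gluster peer status should list the new peer as connected:

kubectl exec -ti glusterfs-5npwn -- gluster peer status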
Register the new node with the GlusterFS cluster in Heketi:
[root@k8s-master01 kubernetes]# heketi-cli cluster list
Clusters:
Id:5dec5676c731498c2bdf996e110a3e5e [file][block]
[root@k8s-master01 kubernetes]# heketi-cli node add --zone=1 --cluster=5dec5676c731498c2bdf996e110a3e5e --management-host-name=k8s-master03 --storage-host-name=192.168.20.22
Node information:
Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname k8s-master03
Storage Hostname 192.168.20.22
Add the new node's disk to the cluster:
[root@k8s-master01 kubernetes]# heketi-cli device add --name=/dev/sdc --node=150bc8c458a70310c6137e840619758c
Device added successfully
Verify:
[root@k8s-master01 kubernetes]# heketi-cli node list
Id:0f00835397868d3591f45432e432ba38 Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:150bc8c458a70310c6137e840619758c Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:d38819746cab7d567ba5f5f4fea45d91 Cluster:5dec5676c731498c2bdf996e110a3e5e
Id:fb181b0cef571e9af7d84d2ecf534585 Cluster:5dec5676c731498c2bdf996e110a3e5e
[root@k8s-master01 kubernetes]# heketi-cli node info 150bc8c458a70310c6137e840619758c
Node Id: 150bc8c458a70310c6137e840619758c
State: online
Cluster Id: 5dec5676c731498c2bdf996e110a3e5e
Zone: 1
Management Hostname: k8s-master03
Storage Hostname: 192.168.20.22
Devices:
Id:2d5210c19858fb7ea3f805e6f582ecce Name:/dev/sdc State:online Size (GiB): Used (GiB): Free (GiB): Bricks:
PS: To expand an existing volume, use heketi-cli volume expand --volume=volumeID --expand-size=10, as sketched below.
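For example, to grow the test volume from section 7 by 10GiB (volume ID taken from heketi-cli volume list; adjust to your own):

heketi-cli volume expand --volume=56d636b452d31a9d4cb523d752ad0891 --expand-size=10
heketi-cli volume info 56d636b452d31a9d4cb523d752ad0891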
10. Fixing a Heketi restart error
The error looks like this:
[heketi] ERROR ... heketi/apps/glusterfs/app.go: glusterfs.NewApp: Heketi was terminated while performing one or more operations. Server may refuse to start as long as pending operations are present in the db.
Fix:
Edit heketi.json and add "brick_min_size_gb": 1,
[root@k8s-master01 kubernetes]# cat heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "8080",

  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": "Execute plugin. Possible choices: mock, kubernetes, ssh",
    "executor": "kubernetes",

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",
    "brick_min_size_gb": 1,

    "kubeexec": {
      "rebalance_on_expansion": true
    },

    "sshexec": {
      "rebalance_on_expansion": true,
      "keyfile": "/etc/heketi/private_key",
      "fstab": "/etc/fstab",
      "port": "22",
      "user": "root",
      "sudo": false
    }
  },

  "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.",
  "backup_db_to_kube_secret": false
}
[root@k8s-master01 kubernetes]# pwd
/root/heketi-client/share/heketi/kubernetes
Delete the secret and recreate it:
[root@k8s-master01 kubernetes]# kubectl delete secret heketi-config-secret
[root@k8s-master01 ~]# kubectl create secret generic heketi-config-secret --from-file heketi.json
Edit the heketi Deployment:
# add the following variable under the container's env
        - name: HEKETI_IGNORE_STALE_OPERATIONS
          value: "true"
11. GlusterFS container fails to start
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Fix (only for a newly built cluster with no data):
rm -rf /var/lib/heketi/
rm -rf /var/lib/glusterd
rm -rf /etc/glusterfs/
yum remove glusterfs -y
yum install glusterfs glusterfs-fuse -y