Introduction

In most cases, a single-master cluster is sufficient for development and testing. For production, however, you need to consider high availability. If critical components such as kube-apiserver, kube-scheduler, and kube-controller-manager all run on the same master node, Kubernetes and KubeSphere become unavailable as soon as that node goes down. You therefore need to set up a load balancer in front of multiple master nodes to create a highly available cluster. You can use any cloud load balancer or any hardware load balancer (for example, F5). Alternatively, Keepalived combined with HAProxy, or Nginx, can also be used to build a highly available cluster.

Architecture

Before you begin, make sure you have 14 Linux machines ready: 3 to serve as master nodes and the other 11 as worker nodes. Their details, including private IP addresses and roles, match the hosts list in the configuration file shown later in this article.

Configure the Load Balancer

You must create a load balancer in your environment to listen on the key ports (on some cloud platforms this is called a listener). It is recommended to listen on the ports in the table below.

Service      Protocol   Port

apiserver    TCP        6443
ks-console   TCP        30880
http         TCP        80
https        TCP        443
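Once the load balancer is up, a quick connectivity check saves debugging later. This is only a sketch: it assumes the load balancer address 192.168.1.20 used further down in this article, and that nc (netcat) is available on the admin host.

# Before installation, verify the LB accepts TCP connections on 6443:
nc -vz 192.168.1.20 6443
# After installation, the apiserver should answer through the LB
# (/version is viewable anonymously by default):
curl -k https://192.168.1.20:6443/version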

Configure Passwordless SSH

root@hello:~# ssh-keygen

root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.12
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.13
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.14
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.15
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.16
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.51
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.52
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.53
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.54
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.55
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.56
root@hello:~# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.57
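The fourteen ssh-copy-id calls above can equally be written as a loop; a minimal sketch over the same IP ranges:

for ip in 192.168.1.{10..16} 192.168.1.{51..57}; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "root@${ip}"
done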

Download KubeKey

KubeKey is a next-generation installer that lets you install Kubernetes and KubeSphere simply, quickly, and flexibly.

root@cby:~# export KKZONE=cn
root@cby:~# curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.0 sh -

Downloading kubekey v1.2.0 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v1.2.0/kubekey-v1.2.0-linux-amd64.tar.gz ...

Kubekey v1.2.0 Download Complete!

Add execute permission to kk and generate the configuration file:

root@cby:~# chmod +x kk
root@cby:~# ./kk create config --with-kubesphere v3.2.0 --with-kubernetes v1.22.1

Deploy KubeSphere and Kubernetes

root@cby:~# vim config-sample.yaml
root@cby:~#
root@cby:~#
root@cby:~# cat config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.1.10, internalAddress: 192.168.1.10, user: root, password: Cby123..}
  - {name: master2, address: 192.168.1.11, internalAddress: 192.168.1.11, user: root, password: Cby123..}
  - {name: master3, address: 192.168.1.12, internalAddress: 192.168.1.12, user: root, password: Cby123..}
  - {name: node1, address: 192.168.1.13, internalAddress: 192.168.1.13, user: root, password: Cby123..}
  - {name: node2, address: 192.168.1.14, internalAddress: 192.168.1.14, user: root, password: Cby123..}
  - {name: node3, address: 192.168.1.15, internalAddress: 192.168.1.15, user: root, password: Cby123..}
  - {name: node4, address: 192.168.1.16, internalAddress: 192.168.1.16, user: root, password: Cby123..}
  - {name: node5, address: 192.168.1.51, internalAddress: 192.168.1.51, user: root, password: Cby123..}
  - {name: node6, address: 192.168.1.52, internalAddress: 192.168.1.52, user: root, password: Cby123..}
  - {name: node7, address: 192.168.1.53, internalAddress: 192.168.1.53, user: root, password: Cby123..}
  - {name: node8, address: 192.168.1.54, internalAddress: 192.168.1.54, user: root, password: Cby123..}
  - {name: node9, address: 192.168.1.55, internalAddress: 192.168.1.55, user: root, password: Cby123..}
  - {name: node10, address: 192.168.1.56, internalAddress: 192.168.1.56, user: root, password: Cby123..}
  - {name: node11, address: 192.168.1.57, internalAddress: 192.168.1.57, user: root, password: Cby123..}
  roleGroups:
    etcd:
    - master1
    - master2
    - master3
    master:
    - master1
    - master2
    - master3
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
    - node6
    - node7
    - node8
    - node9
    - node10
    - node11
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: "192.168.1.20"
    port: 6443
  kubernetes:
    version: v1.22.1
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: false
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: false
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
root@cby:~#

If you use HAProxy as the load balancer, configure it as follows:

frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 192.168.1.10:6443 check
    server kube-apiserver-2 192.168.1.11:6443 check
    server kube-apiserver-3 192.168.1.12:6443 check
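If two load balancer machines share the VIP 192.168.1.20 (the controlPlaneEndpoint.address above) through Keepalived, a minimal keepalived.conf sketch could look like this; the interface name and password are assumptions to adapt to your environment:

vrrp_instance haproxy_vip {
    state MASTER                 # set to BACKUP on the standby node
    interface eth0               # assumption: replace with your NIC name
    virtual_router_id 51
    priority 100                 # use a lower value on the standby node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8sVip01       # assumption: choose your own secret (max 8 chars)
    }
    virtual_ipaddress {
        192.168.1.20             # the controlPlaneEndpoint address
    }
}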

Install the Required Dependencies

root@hello:~# bash -x 1.sh
root@hello:~# cat 1.sh
ssh root@192.168.1.10 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.11 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.12 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.13 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.14 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.15 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.16 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.51 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.52 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.53 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.54 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.55 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.56 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
ssh root@192.168.1.57 "apt update && apt install sudo curl openssl ebtables socat ipset conntrack nfs-common -y"
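As with the SSH keys, the per-host lines in 1.sh can be collapsed into one loop (a sketch covering the same hosts and package list):

for ip in 192.168.1.{10..16} 192.168.1.{51..57}; do
  ssh "root@${ip}" "apt update && apt install -y sudo curl openssl ebtables socat ipset conntrack nfs-common"
done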

Start the Installation

After completing the configuration, you can run the following command to start the installation:

root@hello:~# ./kk create cluster -f config-sample.yaml
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node3 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node1 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node4 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node8 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node11 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| master1 | y | y | y | y | y | y | y | | y | | | UTC 13:26:00 |
| node5 | y | y | y | y | y | y | y | | y | | | UTC 13:26:00 |
| master2 | y | y | y | y | y | y | y | | y | | | UTC 13:26:00 |
| node2 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node7 | y | y | y | y | y | y | y | | y | | | UTC 13:26:00 |
| master3 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node6 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node9 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
| node10 | y | y | y | y | y | y | y | | y | | | UTC 13:26:01 |
+---------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[13:26:06 UTC] Downloading Installation Files
INFO[13:26:06 UTC] Downloading kubeadm ...
... (output omitted)

Verify the Installation

Run the following command to view the installation logs.

root@cby:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

**************************************************
Collecting installation results ...
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.1.10:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2021-11-10 10:24:00
#####################################################

root@hello:~# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 30m v1.22.1
master2 Ready control-plane,master 29m v1.22.1
master3 Ready control-plane,master 29m v1.22.1
node1 Ready worker 29m v1.22.1
node10 Ready worker 29m v1.22.1
node11 Ready worker 29m v1.22.1
node2 Ready worker 29m v1.22.1
node3 Ready worker 29m v1.22.1
node4 Ready worker 29m v1.22.1
node5 Ready worker 29m v1.22.1
node6 Ready worker 29m v1.22.1
node7 Ready worker 30m v1.22.1
node8 Ready worker 30m v1.22.1
node9 Ready worker 29m v1.22.1
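As an extra check, confirm that all pods across every namespace are running before moving on:

root@hello:~# kubectl get pod --all-namespaces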

Enable Pluggable Components After Installation

Log in to the console as admin. Click Platform in the upper-left corner and select Cluster Management.

Click CRDs and enter clusterconfiguration in the search bar. Click the search result to view its detail page.

Under Custom Resources, click the three dots on the right of ks-installer and select Edit YAML.

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"installer.kubesphere.io/v1alpha1","kind":"ClusterConfiguration","metadata":{"annotations":{},"labels":{"version":"v3.2.0"},"name":"ks-installer","namespace":"kubesphere-system"},"spec":{"alerting":{"enabled":false},"auditing":{"enabled":false},"authentication":{"jwtSecret":""},"common":{"core":{"console":{"enableMultiLogin":true,"port":30880,"type":"NodePort"}},"es":{"basicAuth":{"enabled":false,"password":"","username":""},"elkPrefix":"logstash","externalElasticsearchPort":"","externalElasticsearchUrl":"","logMaxAge":7},"gpu":{"kinds":[{"default":true,"resourceName":"nvidia.com/gpu","resourceType":"GPU"}]},"minio":{"volumeSize":"20Gi"},"monitoring":{"GPUMonitoring":{"enabled":false},"endpoint":"http://prometheus-operated.kubesphere-monitoring-system.svc:9090"},"openldap":{"enabled":false,"volumeSize":"2Gi"},"redis":{"enabled":false,"volumeSize":"2Gi"}},"devops":{"enabled":false,"jenkinsJavaOpts_MaxRAM":"2g","jenkinsJavaOpts_Xms":"512m","jenkinsJavaOpts_Xmx":"512m","jenkinsMemoryLim":"2Gi","jenkinsMemoryReq":"1500Mi","jenkinsVolumeSize":"8Gi"},"etcd":{"endpointIps":"192.168.1.10,192.168.1.11,192.168.1.12","monitoring":false,"port":2379,"tlsEnable":true},"events":{"enabled":false},"kubeedge":{"cloudCore":{"cloudHub":{"advertiseAddress":[""],"nodeLimit":"100"},"cloudhubHttpsPort":"10002","cloudhubPort":"10000","cloudhubQuicPort":"10001","cloudstreamPort":"10003","nodeSelector":{"node-role.kubernetes.io/worker":""},"service":{"cloudhubHttpsNodePort":"30002","cloudhubNodePort":"30000","cloudhubQuicNodePort":"30001","cloudstreamNodePort":"30003","tunnelNodePort":"30004"},"tolerations":[],"tunnelPort":"10004"},"edgeWatcher":{"edgeWatcherAgent":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"nodeSelector":{"node-role.kubernetes.io/worker":""},"tolerations":[]},"enabled":false},"logging":{"containerruntime":"docker","enabled":false,"logsidecar":{"enabled":false,"replicas":2}},"metrics_server":{"enabled":false},"monitoring":{"gpu":{"nvidia_dcgm_exporter":{"enabled":false}},"storageClass":""},"multicluster":{"clusterRole":"none"},"network":{"ippool":{"type":"none"},"networkpolicy":{"enabled":false},"topology":{"type":"none"}},"openpitrix":{"store":{"enabled":false}},"persistence":{"storageClass":""},"servicemesh":{"enabled":false}}}
  labels:
    version: v3.2.0
  name: ks-installer
  namespace: kubesphere-system
spec:
  alerting:
    enabled: true
  auditing:
    enabled: true
  authentication:
    jwtSecret: ''
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    es:
      basicAuth:
        enabled: true
        password: ''
        username: ''
      elkPrefix: logstash
      externalElasticsearchPort: ''
      externalElasticsearchUrl: ''
      logMaxAge: 7
    gpu:
      kinds:
        - default: true
          resourceName: nvidia.com/gpu
          resourceType: GPU
    minio:
      volumeSize: 20Gi
    monitoring:
      GPUMonitoring:
        enabled: true
      endpoint: 'http://prometheus-operated.kubesphere-monitoring-system.svc:9090'
    openldap:
      enabled: true
      volumeSize: 2Gi
    redis:
      enabled: true
      volumeSize: 2Gi
  devops:
    enabled: true
    jenkinsJavaOpts_MaxRAM: 2g
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
  etcd:
    endpointIps: '192.168.1.10,192.168.1.11,192.168.1.12'
    monitoring: false
    port: 2379
    tlsEnable: true
  events:
    enabled: true
  kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
          - ''
        nodeLimit: '100'
      cloudhubHttpsPort: '10002'
      cloudhubPort: '10000'
      cloudhubQuicPort: '10001'
      cloudstreamPort: '10003'
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      service:
        cloudhubHttpsNodePort: '30002'
        cloudhubNodePort: '30000'
        cloudhubQuicNodePort: '30001'
        cloudstreamNodePort: '30003'
        tunnelNodePort: '30004'
      tolerations: []
      tunnelPort: '10004'
    edgeWatcher:
      edgeWatcherAgent:
        nodeSelector:
          node-role.kubernetes.io/worker: ''
        tolerations: []
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      tolerations: []
    enabled: true
  logging:
    containerruntime: docker
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: true
  monitoring:
    gpu:
      nvidia_dcgm_exporter:
        enabled: true
    storageClass: ''
  multicluster:
    clusterRole: none
  network:
    ippool:
      type: calico
    networkpolicy:
      enabled: true
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: true
  persistence:
    storageClass: ''
  servicemesh:
    enabled: true
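After you save the edited YAML, ks-installer picks up the new configuration and begins deploying the enabled components; the same log command used earlier tracks its progress:

root@hello:~# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f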

Batch-Configure the Aliyun Registry Mirror on All Servers

root@hello:~# vim 8
root@hello:~# cat 8
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://ted9wxpi.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
root@hello:~# vim 7
root@hello:~# cat 7
scp 8 root@192.168.1.11:
scp 8 root@192.168.1.12:
scp 8 root@192.168.1.13:
scp 8 root@192.168.1.14:
scp 8 root@192.168.1.15:
scp 8 root@192.168.1.16:
scp 8 root@192.168.1.51:
scp 8 root@192.168.1.52:
scp 8 root@192.168.1.53:
scp 8 root@192.168.1.54:
scp 8 root@192.168.1.55:
scp 8 root@192.168.1.56:
scp 8 root@192.168.1.57:
root@hello:~# bash -x 7
root@hello:~# vim 6
root@hello:~# cat 6
ssh root@192.168.1.10 "bash -x 8"
ssh root@192.168.1.11 "bash -x 8"
ssh root@192.168.1.12 "bash -x 8"
ssh root@192.168.1.13 "bash -x 8"
ssh root@192.168.1.14 "bash -x 8"
ssh root@192.168.1.15 "bash -x 8"
ssh root@192.168.1.16 "bash -x 8"
ssh root@192.168.1.51 "bash -x 8"
ssh root@192.168.1.52 "bash -x 8"
ssh root@192.168.1.53 "bash -x 8"
ssh root@192.168.1.54 "bash -x 8"
ssh root@192.168.1.55 "bash -x 8"
ssh root@192.168.1.56 "bash -x 8"
ssh root@192.168.1.57 "bash -x 8"
root@hello:~# bash -x 6
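To spot-check that the mirror took effect, docker info on any node should now list it (a quick check against one of the hosts above):

root@hello:~# ssh root@192.168.1.11 "docker info | grep -A 1 'Registry Mirrors'"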

Check the Nodes

root@hello:~# kubectl  get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 11h v1.22.1
master2 Ready control-plane,master 11h v1.22.1
master3 Ready control-plane,master 11h v1.22.1
node1 Ready worker 11h v1.22.1
node10 Ready worker 11h v1.22.1
node11 Ready worker 11h v1.22.1
node2 Ready worker 11h v1.22.1
node3 Ready worker 11h v1.22.1
node4 Ready worker 11h v1.22.1
node5 Ready worker 11h v1.22.1
node6 Ready worker 11h v1.22.1
node7 Ready worker 11h v1.22.1
node8 Ready worker 11h v1.22.1
node9 Ready worker 11h v1.22.1

