Setting up an ES cluster on Linux
Preparation
- Three servers; one of them acts as the master node.
- Upload the ES installation package to the home path of each node yourself, then extract and rename it (see the sketch after the path table below).
- Cluster name: cluster-big-data. Every node in the same cluster must use the same cluster name but a different node name.
- User account: es_user
| Node | IP address | HTTP port | Transport port | Memory (GB) | CPU (cores) | Disk (GB) |
|---|---|---|---|---|---|---|
| node1 | 192.168.0.114 | 9200 | 9300 | 16 | 8 | 100 |
| node2 | 192.168.0.123 | 9200 | 9300 | 8 | 4 | 50 |
| node3 | 192.168.0.125 | 9200 | 9300 | 8 | 4 | 50 |
Key paths
| Name | Path |
|---|---|
| Root path | /home/elasticsearch/ |
| Configuration path | /home/elasticsearch/config/ |
| SSL certificate path | /home/elasticsearch/config/certs/ |
| Snapshot share path | /home/elasticsearch/snapshot/ |
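The preparation list assumes the ES archive has already been uploaded to each node's home path, extracted, and renamed to the root path above. A minimal sketch of that step (the archive name and version are only placeholders; use the release you actually downloaded, and run the chown as root):

```bash
# run on every node; 7.16.2 is a placeholder version
cd /home
tar -zxf elasticsearch-7.16.2-linux-x86_64.tar.gz
mv elasticsearch-7.16.2 /home/elasticsearch
chown -R es_user:es_user /home/elasticsearch
```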
Getting started
User creation, password generation, and similar steps are skipped here; see the ES installation guide for details.
1. Create the directories (on all three nodes)
Create the snapshot and certificate directories:
[root@localhost ~] su es_user
[es_user@localhost ~] mkdir -p /home/elasticsearch/snapshot/
[es_user@localhost ~] mkdir -p /home/elasticsearch/config/certs/
2. Share the snapshot path over NFS (master node)
Switch to the root user and install the NFS packages:
[es_user@localhost elasticsearch] su root
[root@localhost elasticsearch] yum -y install nfs-utils rpcbind
Edit the /etc/exports file and add the following entry:
[root@localhost elasticsearch] vi /etc/exports
/home/elasticsearch/snapshot *(rw,sync,no_root_squash)
Make the export configuration take effect with exportfs -rv:
[root@localhost elasticsearch] exportfs -rv
exporting *:/home/elasticsearch/snapshot
Start the rpcbind and nfs services:
[root@localhost share] systemctl start rpcbind
[root@localhost share] systemctl start nfs
# or, on CentOS 8
[root@localhost share] systemctl start nfs-server
Test that the share can be reached; if the shared path is printed, the export works:
[root@localhost share] showmount -e localhost
Export list for localhost:
/home/elasticsearch/snapshot *
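systemctl start only lasts until the next reboot. If the export should survive a restart of the master node, you will probably also want to enable the services (a sketch; on CentOS 7 the server unit is likewise named nfs-server):

```bash
# as root on the master node: start the NFS services automatically at boot
systemctl enable rpcbind
systemctl enable nfs-server
```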
3. Generate certificates (master node)
Run the following as es_user to generate the CA certificate; just press Enter at every password prompt.
[es_user@localhost elasticsearch] ./bin/elasticsearch-certutil ca
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'ca' mode generates a new 'certificate authority'
This will create a new X.509 certificate and private key that can be used
to sign certificate when running in 'cert' mode.
Use the 'ca-dn' option if you wish to configure the 'distinguished name'
of the certificate authority
By default the 'ca' mode produces a single PKCS#12 output file which holds:
* The CA certificate
* The CA's private key
If you elect to generate PEM format certificates (the -pem option), then the output will
be a zip file containing individual files for the CA certificate and private key
Please enter the desired output file [elastic-stack-ca.p12]:
Enter password for elastic-stack-ca.p12 :
Generate the node certificate signed by that CA; again press Enter at the password prompts.
[es_user@localhost elasticsearch] ./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Enter password for CA (elastic-stack-ca.p12) :
Please enter the desired output file [elastic-certificates.p12]:
Enter password for elastic-certificates.p12 :
Certificates written to /home/elasticsearch/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
For client applications, you may only need to copy the CA certificate and
configure the client to trust this certificate
After running the two commands above, two files appear in the ES root directory: elastic-certificates.p12 and elastic-stack-ca.p12. Copy both into the certificate directory:
[es_user@localhost elasticsearch] cp elastic-stack-ca.p12 ./config/certs/
[es_user@localhost elasticsearch] cp elastic-certificates.p12 ./config/certs/
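Optionally, you can sanity-check the generated keystore with openssl; the empty -passin value matches the empty password chosen above (this assumes your openssl build can read the PKCS#12 file):

```bash
# list the certificates contained in the keystore (no private keys printed)
openssl pkcs12 -info -nokeys -passin pass: \
  -in /home/elasticsearch/config/certs/elastic-certificates.p12
```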
4. Certificates for the other nodes
Package the certificates on the master node and upload the archive to the other two node servers.
Go to the ES configuration directory:
[es_user@localhost elasticsearch] cd config/
Create the archive:
[es_user@localhost config]$ tar -zcf certs.tar.gz certs/
Upload the certificate archive to node2 and node3. Here scp copies it to the target servers (any other FTP/SFTP tool works too); on the first connection, accept the host key by typing yes, then enter the password when prompted.
Syntax: scp <file-to-upload> <account>@<target-server-ip>:<target-path>
[es_user@localhost config]$ scp certs.tar.gz root@192.168.0.123:/home/elasticsearch/config
The authenticity of host '192.168.0.123 (192.168.0.123)' can't be established.
ECDSA key fingerprint is SHA256:iQ6EJttEclqNvpNZIfPEmHemPwT+nbRRMLBXOkB5Kys.
ECDSA key fingerprint is MD5:6b:0d:32:1a:39:98:28:d0:1b:b0:6a:b7:d6:5a:57:c6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.123' (ECDSA) to the list of known hosts.
root@192.168.0.123's password:
certs.tar.gz
[es_user@localhost config]$ scp certs.tar.gz root@192.168.0.125:/home/elasticsearch/config
The authenticity of host '192.168.0.125 (192.168.0.125)' can't be established.
ECDSA key fingerprint is SHA256:iQ6EJttEclqNvpNZIfPEmHemPwT+nbRRMLBXOkB5Kys.
ECDSA key fingerprint is MD5:6b:0d:32:1a:39:98:28:d0:1b:b0:6a:b7:d6:5a:57:c6.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.125' (ECDSA) to the list of known hosts.
root@192.168.0.125's password:
certs.tar.gz
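On node2 and node3, extract the archive into the configuration directory and hand the files back to es_user (the scp above copied them as root, so the ownership fix matters):

```bash
# run as root on node2 and node3
cd /home/elasticsearch/config
tar -zxf certs.tar.gz
chown -R es_user:es_user certs/
```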
Mount the shared folder
Install the NFS client on node2 and node3:
[root@log1 ~] yum -y install nfs-utils
Start the NFS client service:
[root@log1 ~] systemctl start nfs-utils
Mount the share, then run df to check; if the last entry below appears, the mount succeeded:
[root@log1 ~] mount -t nfs 192.168.0.114:/home/elasticsearch/snapshot /home/elasticsearch/snapshot
[root@log1 ~] df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 3983136 0 3983136 0% /dev
tmpfs 3995008 0 3995008 0% /dev/shm
tmpfs 3995008 12068 3982940 1% /run
tmpfs 3995008 0 3995008 0% /sys/fs/cgroup
/dev/mapper/centos-root 52403200 3525292 48877908 7% /
/dev/mapper/centos-home 64054724 22962908 41091816 36% /home
/dev/sda1 1038336 198548 839788 20% /boot
tmpfs 799004 0 799004 0% /run/user/0
192.168.0.114:/home/elasticsearch/snapshot 43094016 1163264 41930752 3% /home/elasticsearch/snapshot
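The mount command above does not persist across reboots. If you want the snapshot share mounted automatically, one option is an /etc/fstab entry like the following (a sketch; run as root on node2 and node3):

```bash
# append an fstab entry for the NFS share, then mount everything listed in fstab
echo '192.168.0.114:/home/elasticsearch/snapshot /home/elasticsearch/snapshot nfs defaults,_netdev 0 0' >> /etc/fstab
mount -a
```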
Write the ES configuration files
Node 1
cluster.name: cluster-big-data
node.name: node-1
# must not be 0.0.0.0 or 127.0.0.1
network.host: 192.168.0.114
http.port: 9200
# eligible for master election
node.master: true
# allow this node to hold data
node.data: true
# seed hosts for cluster discovery
discovery.seed_hosts: ["192.168.0.114","192.168.0.123","192.168.0.125"]
# nodes used to bootstrap the cluster (master-eligible nodes only)
cluster.initial_master_nodes: ["node-1","node-2"]
# snapshot repository path
path.repo: /home/elasticsearch/snapshot/
# enable monitoring collection
xpack.monitoring.collection.enabled: true
# monitoring data retention, default 7 days
xpack.monitoring.history.duration: 7d
xpack.ml.enabled: false
# enable security (HTTP TLS stays off for now; it is switched on in a later step)
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: "optional"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
Node 2
cluster.name: cluster-big-data
node.name: node-2
# must not be 0.0.0.0 or 127.0.0.1
network.host: 192.168.0.123
http.port: 9200
# eligible for master election
node.master: true
# allow this node to hold data
node.data: true
# seed hosts for cluster discovery
discovery.seed_hosts: ["192.168.0.114","192.168.0.123","192.168.0.125"]
# nodes used to bootstrap the cluster (master-eligible nodes only)
cluster.initial_master_nodes: ["node-1","node-2"]
# snapshot repository path
path.repo: /home/elasticsearch/snapshot/
# enable monitoring collection
xpack.monitoring.collection.enabled: true
# monitoring data retention, default 7 days
xpack.monitoring.history.duration: 7d
xpack.ml.enabled: false
# enable security (HTTP TLS stays off for now; it is switched on in a later step)
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: "optional"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
Node 3 (not master-eligible)
cluster.name: cluster-big-data
node.name: node-3
# must not be 0.0.0.0 or 127.0.0.1
network.host: 192.168.0.125
http.port: 9200
# this node does not take part in master election
node.master: false
# allow this node to hold data
node.data: true
# seed hosts for cluster discovery
discovery.seed_hosts: ["192.168.0.114","192.168.0.123","192.168.0.125"]
# nodes used to bootstrap the cluster (master-eligible nodes only)
cluster.initial_master_nodes: ["node-1","node-2"]
# snapshot repository path
path.repo: /home/elasticsearch/snapshot/
# enable monitoring collection
xpack.monitoring.collection.enabled: true
# monitoring data retention, default 7 days
xpack.monitoring.history.duration: 7d
xpack.ml.enabled: false
# enable security (HTTP TLS stays off for now; it is switched on in a later step)
xpack.security.enabled: true
xpack.security.http.ssl.enabled: false
xpack.security.http.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.http.ssl.client_authentication: "optional"
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /home/elasticsearch/config/certs/elastic-certificates.p12
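The files above do not set the JVM heap. With 16 GB of RAM on node1 and 8 GB on node2/node3, a common rule of thumb is to give Elasticsearch roughly half of the machine's memory. A sketch using a jvm.options.d drop-in file, which recent 7.x releases support (the file name heap.options is arbitrary):

```bash
# node1 (16 GB RAM) - run as es_user
cat > /home/elasticsearch/config/jvm.options.d/heap.options <<'EOF'
-Xms8g
-Xmx8g
EOF

# node2 and node3 (8 GB RAM) - run as es_user
cat > /home/elasticsearch/config/jvm.options.d/heap.options <<'EOF'
-Xms4g
-Xmx4g
EOF
```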
Start the services
Notes:
- Start node1 first (switch to the es_user user).
- Extract the certs archive on every node.
- Copy the configuration above to every node; the file name is elasticsearch.yml under the configuration directory.
- During testing and debugging, start in the foreground with ./elasticsearch (do not add -d) so the logs are easy to follow.
- Only once the cluster starts cleanly and the nodes connect to each other should you move on to enabling HTTP SSL and setting the user passwords.
Go to the bin directory under the ES root on node1 and start the service; do the same on the other nodes.
[es_user@localhost bin]$ ./elasticsearch
When the console prints output like the following, the node has started successfully:
[2022-09-24T13:52:58,346][INFO ][o.e.h.AbstractHttpServerTransport] [node-1] publish_address {192.168.0.114:9200}, bound_addresses {192.168.0.114:9200}
[2022-09-24T13:52:58,347][INFO ][o.e.n.Node ] [node-1] started
[2022-09-24T13:53:00,587][INFO ][o.e.x.s.a.TokenService ] [node-1] refresh keys
[2022-09-24T13:53:01,546][INFO ][o.e.x.s.a.TokenService ] [node-1] refreshed keys
[2022-09-24T13:53:03,308][INFO ][o.e.c.s.ClusterApplierService] [node-1] added node-3{nFwDnacnSymxg8gF1jUW2Q}{kONSgIoTRWC3Mz5am24Gdw}{192.168.0.125}{192.168.0.125:9300}{cdfhirstw}}, term: 1, version: 21, reason: ApplyCommitRequest{term=1, version=21, sourceNode={node-2}{xJUzhmMXTS-mDBsIdMFNeA}{PspPplLTQH-cAwhk8ZveqQ}{192.168.0.123}{192.168.0.123:9300}{cdfhimrstw}{xpack.installed=true, transform.node=true}}
[2022-09-24T13:53:06,031][INFO ][o.e.l.LicenseService ] [node-1] license [212e41c8-54e6-476d-b453-1e7f03a7a4ca] mode [basic] - valid
[2022-09-24T13:53:06,034][INFO ][o.e.x.s.a.Realms ] [node-1] license mode is [basic], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
[2022-09-24T13:53:06,038][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [node-1] Active license is now [BASIC]; Security is enabled
[2022-09-24T13:53:10,908][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] downloading geoip database [GeoLite2-ASN.mmdb] to [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-ASN.mmdb.tmp.gz]
[2022-09-24T13:53:11,042][INFO ][o.e.x.s.a.AuthorizationService] [node-1] Took [67ms] to resolve [1] indices for action [indices:data/read/search] and user [_xpack]
[2022-09-24T13:53:12,810][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] successfully reloaded changed geoip database file [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-ASN.mmdb]
[2022-09-24T13:53:22,738][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] downloading geoip database [GeoLite2-City.mmdb] to [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-City.mmdb.tmp.gz]
[2022-09-24T13:53:25,217][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] downloading geoip database [GeoLite2-Country.mmdb] to [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-Country.mmdb.tmp.gz]
[2022-09-24T13:53:25,889][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] successfully reloaded changed geoip database file [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-Country.mmdb]
[2022-09-24T13:53:29,254][INFO ][o.e.i.g.DatabaseRegistry ] [node-1] successfully reloaded changed geoip database file [/tmp/elasticsearch-2213883239317334326/geoip-databases/IIU3ckuVSJOrT4zp61GBJg/GeoLite2-City.mmdb]
Accounts and passwords
Open a new terminal on node1 and, from the ES root directory, generate the account passwords. Keep the generated credentials somewhere safe; the account information is synchronized to every node in the cluster automatically.
[root@localhost elasticsearch] ./bin/elasticsearch-setup-passwords auto
warning: usage of JAVA_HOME is deprecated, use ES_JAVA_HOME
Future versions of Elasticsearch will require Java 11; your Java version from [/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.342.b07-1.el7_9.x86_64/jre] does not meet this requirement. Consider switching to a distribution of Elasticsearch with a bundled JDK. If you are already using a distribution with a bundled JDK, ensure the JAVA_HOME environment variable is not set.
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
The passwords will be randomly generated and printed to the console.
Please confirm that you would like to continue [y/N]y
Changed password for user apm_system
PASSWORD apm_system = eXy9NUaSVDhESUE6DS6R
Changed password for user kibana_system
PASSWORD kibana_system = OSjK4VW5EV70AWtGapYy
Changed password for user kibana
PASSWORD kibana = OSjK4VW5EV70AWtGapYy
Changed password for user logstash_system
PASSWORD logstash_system = bUQQv1tCRJdIYn9qby7D
Changed password for user beats_system
PASSWORD beats_system = uMLotKFFRfqGkO9TFRWq
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 7WHftn76huCDoMUi6XtN
Changed password for user elastic
PASSWORD elastic = z6DV6iFBcat0euvD7o5g
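If you later want to replace one of the generated passwords, the security API can do it while the cluster is running; a sketch (the new password value is only a placeholder):

```bash
# change the kibana_system password, authenticating as the elastic superuser
curl -u elastic:z6DV6iFBcat0euvD7o5g -X POST \
  "http://192.168.0.114:9200/_security/user/kibana_system/_password" \
  -H 'Content-Type: application/json' \
  -d '{"password": "NewStrongPassword123"}'
```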
Verify the ES cluster and list the nodes
http://elastic:z6DV6iFBcat0euvD7o5g@192.168.0.114:9200/_cat/nodes
The URL format is http://<user>:<password>@<host-ip>:<port>/_cat/nodes. In the output below node-2 is currently the master node; the * marks the master.
[root@localhost elasticsearch] curl http://elastic:z6DV6iFBcat0euvD7o5g@192.168.0.114:9200/_cat/nodes
192.168.0.114 12 59 7 1.18 1.48 1.15 cdfhimrstw - node-1
192.168.0.125 9 98 6 0.24 0.40 0.43 cdfhirstw - node-3
192.168.0.123 5 98 5 0.40 0.44 0.48 cdfhimrstw * node-2
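Because path.repo points at the shared NFS directory on every node, it can now be registered as a filesystem snapshot repository; a sketch (the repository name my_backup is only an example):

```bash
# register the NFS-backed snapshot repository
curl -u elastic:z6DV6iFBcat0euvD7o5g -X PUT \
  "http://192.168.0.114:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/home/elasticsearch/snapshot"}}'
```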
Enable SSL encryption
Press Ctrl+C to stop every node, then edit the /home/elasticsearch/config/elasticsearch.yml configuration file on all nodes:
# before
xpack.security.http.ssl.enabled: false
# after
xpack.security.http.ssl.enabled: true
Start every node again and verify once more after they come up.
Note:
- The request protocol is now https, and the master has changed to node-1. Configure the master node information in the Java configuration file (see the packaging and deployment chapter).
- The master node is not fixed: if the current master goes down, another master-eligible node is elected in its place.
[root@localhost elasticsearch] curl https://elastic:z6DV6iFBcat0euvD7o5g@192.168.0.114:9200/_cat/nodes --insecure
192.168.0.114 13 60 9 1.52 1.60 1.33 cdfhimrstw * node-1
192.168.0.123 7 98 4 0.15 0.45 0.47 cdfhimrstw - node-2
192.168.0.125 7 98 4 0.45 0.65 0.53 cdfhirstw - node-3
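A quick follow-up check is the cluster health API over HTTPS; --insecure is still used here because the auto-generated certificate does not carry the node IP addresses, so hostname verification would fail:

```bash
# expect "status" : "green" once all shards are allocated
curl --insecure "https://elastic:z6DV6iFBcat0euvD7o5g@192.168.0.114:9200/_cluster/health?pretty"
```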
Connecting from the application platform
At this point our platform connects to the ES cluster successfully, and the cluster setup is complete.
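Running ./elasticsearch in the foreground is convenient while debugging; for day-to-day operation each node can be started as a daemon with a PID file (the PID file location is only an example):

```bash
# start in the background and record the PID
./bin/elasticsearch -d -p /home/elasticsearch/es.pid
# stop the node later
kill $(cat /home/elasticsearch/es.pid)
```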

