Reposted from: https://mp.weixin.qq.com/s?__biz=MzUyNzk0NTI4MQ==&mid=2247483816&idx=1&sn=bfaf70613bcb775ccf5d40c2871a05a8&chksm=fa769a86cd011390f22ff178071a580a8f17791e57166dfc8463984a5613c11875ef2ebb2ad7&mpshare=1&scene=1&srcid=11253n8AXjLegAeaoHiCssEs&sharer_sharetime=1574686178097&sharer_shareid=6ec87ec9a11a0c18d61cde7663a9ef87#rd

Use a multi-instance Elasticsearch architecture to allocate hardware resources sensibly and separate hot and cold data.

By running multiple ES instances per machine and keeping data of different temperatures on different disks, you achieve hot/cold data separation and better use of each server's resources.

Within one cluster, deploy several ES instances on each data server. For example, if a data server has both SSD and SAS disks, hot data can be stored on the SSD instance and cold data on the SAS instance, separating hot and cold data.
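
As a side note (a sketch that is not part of the original steps): once every data server runs an SSD and a SAS instance, newly created (hot) indices can be pinned to the SSD instances by default with a legacy index template. The template name hot-on-ssd is an assumption, and the node-name pattern relies on the <ip>-SSD naming convention configured later in this article.

    # Assumed example: route newly created indices to the *-SSD instances by default
    curl -X PUT "192.168.1.31:9200/_template/hot-on-ssd?pretty" -H 'Content-Type: application/json' -d'
    {
      "index_patterns": ["*"],
      "order": 0,
      "settings": {
        "index.routing.allocation.require._name": "*-SSD"
      }
    }'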

192.168.1.51: deploy two elasticsearch-data instances

Index migration

(This step must not be skipped.) Move the indices on 192.168.1.51 to the other two data nodes:

    curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._ip": "192.168.1.52,192.168.1.53"
    }'
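
While the shards are being moved off 192.168.1.51, the relocation can be watched with the standard APIs below (an optional check, not in the original text):

    # Optional: show in-flight shard recoveries/relocations and overall cluster health
    curl "http://192.168.1.31:9200/_cat/recovery?v&active_only=true"
    curl "http://192.168.1.31:9200/_cluster/health?pretty"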

Confirm where the indices are currently stored

Confirm that no index remains on the 192.168.1.51 node:

    curl "http://192.168.1.31:9200/_cat/shards?h=n"

Stop the Elasticsearch process on 192.168.1.51, then adjust the directory layout and configuration. Mount the data directories on the SSD and SAS disks yourself beforehand.

    # For downloading the package and basic deployment, see the first article in this series, "EFK-1: Quick Start Guide"

    cd /opt/software/

    tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz

    mv /opt/elasticsearch /opt/elasticsearch-SAS

    mv elasticsearch-7.3.2 /opt/

    mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/elasticsearch-* -R

    rm -rf /data/SAS/*

    chown elasticsearch.elasticsearch /data/* -R

    mkdir -p /opt/logs/elasticsearch-SAS

    mkdir -p /opt/logs/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/logs/* -R
    # SAS instance config: /opt/elasticsearch-SAS/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.51-SAS
    path.data: /data/SAS
    path.logs: /opt/logs/elasticsearch-SAS
    network.host: 192.168.1.51
    http.port: 9200
    transport.port: 9300
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

    # SSD instance config: /opt/elasticsearch-SSD/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.51-SSD
    path.data: /data/SSD
    path.logs: /opt/logs/elasticsearch-SSD
    network.host: 192.168.1.51
    http.port: 9201
    transport.port: 9301
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

How to start the SAS and SSD instances

    sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch

    sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch
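
The commands above run each instance in the foreground. The tarball distribution can also be daemonized with -d and a pid file; the pid-file locations below are only a suggested choice, not from the original:

    # Optional: start each instance as a daemon and record its pid
    sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch -d -p /opt/logs/elasticsearch-SAS/es.pid
    sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch -d -p /opt/logs/elasticsearch-SSD/es.pid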

Confirm that the two instances (SAS and SSD) are up

    curl "http://192.168.1.31:9200/_cat/nodes?v"

192.168.1.52: deploy two elasticsearch-data instances

Index migration

(This step must not be skipped.) Move the indices on 192.168.1.52 to the other two data nodes:

    curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.53"
    }'

Confirm where the indices are currently stored

Confirm that no index remains on the 192.168.1.52 node:

    curl "http://192.168.1.31:9200/_cat/shards?h=n"

Stop the Elasticsearch process on 192.168.1.52, then adjust the directory layout and configuration. Mount the data directories on the SSD and SAS disks yourself beforehand.

    # For downloading the package and basic deployment, see the first article in this series, "EFK-1: Quick Start Guide"

    cd /opt/software/

    tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz

    mv /opt/elasticsearch /opt/elasticsearch-SAS

    mv elasticsearch-7.3.2 /opt/

    mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/elasticsearch-* -R

    rm -rf /data/SAS/*

    chown elasticsearch.elasticsearch /data/* -R

    mkdir -p /opt/logs/elasticsearch-SAS

    mkdir -p /opt/logs/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/logs/* -R

    # SAS instance config: /opt/elasticsearch-SAS/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.52-SAS
    path.data: /data/SAS
    path.logs: /opt/logs/elasticsearch-SAS
    network.host: 192.168.1.52
    http.port: 9200
    transport.port: 9300
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

    # SSD instance config: /opt/elasticsearch-SSD/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.52-SSD
    path.data: /data/SSD
    path.logs: /opt/logs/elasticsearch-SSD
    network.host: 192.168.1.52
    http.port: 9201
    transport.port: 9301
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

How to start the SAS and SSD instances

    sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch

    sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch

Confirm that the two instances (SAS and SSD) are up

    curl "http://192.168.1.31:9200/_cat/nodes?v"

192.168.1.53: deploy two elasticsearch-data instances

Index migration

(This step must not be skipped.) Move the indices on 192.168.1.53 to the other two data nodes:

    curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52"
    }'

Confirm where the indices are currently stored

Confirm that no index remains on the 192.168.1.53 node:

    curl "http://192.168.1.31:9200/_cat/shards?h=n"

Stop the Elasticsearch process on 192.168.1.53, then adjust the directory layout and configuration. Mount the data directories on the SSD and SAS disks yourself beforehand.

    # For downloading the package and basic deployment, see the first article in this series, "EFK-1: Quick Start Guide"

    cd /opt/software/

    tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz

    mv /opt/elasticsearch /opt/elasticsearch-SAS

    mv elasticsearch-7.3.2 /opt/

    mv /opt/elasticsearch-7.3.2 /opt/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/elasticsearch-* -R

    rm -rf /data/SAS/*

    chown elasticsearch.elasticsearch /data/* -R

    mkdir -p /opt/logs/elasticsearch-SAS

    mkdir -p /opt/logs/elasticsearch-SSD

    chown elasticsearch.elasticsearch /opt/logs/* -R
    # SAS instance config: /opt/elasticsearch-SAS/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.53-SAS
    path.data: /data/SAS
    path.logs: /opt/logs/elasticsearch-SAS
    network.host: 192.168.1.53
    http.port: 9200
    transport.port: 9300
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

    # SSD instance config: /opt/elasticsearch-SSD/config/elasticsearch.yml
    cluster.name: my-application
    node.name: 192.168.1.53-SSD
    path.data: /data/SSD
    path.logs: /opt/logs/elasticsearch-SSD
    network.host: 192.168.1.53
    http.port: 9201
    transport.port: 9301
    # discovery.seed_hosts and cluster.initial_master_nodes must include the port number,
    # otherwise the http.port and transport.port values are used
    discovery.seed_hosts: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    cluster.initial_master_nodes: ["192.168.1.31:9300","192.168.1.32:9300","192.168.1.33:9300"]
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    node.master: false
    node.ingest: false
    node.data: true
    # Allow at most 2 instances on this machine
    node.max_local_storage_nodes: 2

How to start the SAS and SSD instances

    sudo -u elasticsearch /opt/elasticsearch-SAS/bin/elasticsearch

    sudo -u elasticsearch /opt/elasticsearch-SSD/bin/elasticsearch

Confirm that the two instances (SAS and SSD) are up

    curl "http://192.168.1.31:9200/_cat/nodes?v"

Testing

Move all indices to the SSD disks:

    # The parameters below will be explained in a later article; for now just copy them as-is
    curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._host_ip": "",
      "index.routing.allocation.include._host": "",
      "index.routing.allocation.include._name": "",
      "index.routing.allocation.include._ip": "",
      "index.routing.allocation.require._name": "*-SSD"
    }'

Confirm that all indices are now on the SSD disks:

    curl "http://192.168.1.31:9200/_cat/shards?h=n"

Migrate the September nginx log indices to the SAS disks:

    curl -X PUT "192.168.1.31:9200/nginx_*_2019.09/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.require._name": "*-SAS"
    }'

Confirm that the September nginx log indices are now on the SAS disks:

    curl "http://192.168.1.31:9200/_cat/shards"
