Full cluster restart upgrade

Elasticsearch requires a full cluster restart when upgrading across major versions; rolling upgrades are not supported across major versions. Consult the upgrade table in the Elasticsearch documentation to verify whether your upgrade path requires a full cluster restart.

The process to perform an upgrade with a full cluster restart is as follows:

  1. Disable shard allocation (to prevent shards from replicating large amounts of data)

    When you shut down a node, the allocation process will immediately try to replicate the shards that were on that node to other nodes in the cluster, causing a lot of wasted I/O. This can be avoided by disabling allocation before shutting down a node:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "none"
      }
    }
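
    To double-check that the setting took effect before shutting anything down, you can read the cluster settings back (a small verification step that is not part of the original procedure):

    GET _cluster/settings

    The persistent section of the response should show "cluster.routing.allocation.enable": "none".
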
  2. Perform a synced flush

    Shard recovery will be much faster if you stop indexing and issue a synced-flush request:

    POST _flush/synced

    A synced flush request is a “best effort” operation. It will fail if there are any pending indexing operations, but it is safe to reissue the request multiple times if necessary.
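
    If only a few indices report sync failures, you can reissue the synced flush for just those indices instead of the whole cluster (the index name below is purely illustrative):

    POST my-index/_flush/synced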

  3. Shut down and upgrade all nodes

    Stop all Elasticsearch services on all nodes in the cluster. Each node can be upgraded following the same procedure described in [upgrade-node].
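
    Exactly how you stop and upgrade each node depends on how Elasticsearch was installed. As a rough sketch, assuming a package-based install managed by systemd (the package file name and paths are placeholders, not specific recommendations):

    sudo systemctl stop elasticsearch.service
    # install the new version over the old one, e.g. on an RPM-based system:
    sudo rpm --upgrade elasticsearch-<new-version>.rpm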

  4. Upgrade any plugins

    Elasticsearch plugins must be upgraded when upgrading a node. Use the elasticsearch-plugin script to install the correct version of any plugins that you need.
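
    For example, a plugin can be removed and reinstalled at the matching version on each node after it has been upgraded (analysis-icu is only an example; substitute the plugins you actually use):

    bin/elasticsearch-plugin remove analysis-icu
    bin/elasticsearch-plugin install analysis-icu
    bin/elasticsearch-plugin list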

  5. Start the cluster (start the dedicated master nodes first, then the data nodes)

    If you have dedicated master nodes (nodes with node.master set to true, the default, and node.data set to false), then it is a good idea to start them first. Wait for them to form a cluster and to elect a master before proceeding with the data nodes. You can check progress by looking at the logs.
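
    For reference, a dedicated master node of this kind is configured in elasticsearch.yml roughly as follows (only the two settings mentioned above are shown):

    node.master: true
    node.data: false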

    As soon as the minimum number of master-eligible nodes have discovered each other, they will form a cluster and elect a master. From that point on, the _cat/health and _cat/nodes APIs can be used to monitor nodes joining the cluster:

    GET _cat/health
    
    GET _cat/nodes

    Use these APIs to check that all nodes have successfully joined the cluster.
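
    Both APIs also accept a v parameter to include column headers, and _cat/nodes lets you choose which columns to show (the column selection below is just one example):

    GET _cat/nodes?v&h=name,node.role,master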

  6. Wait for yellow

    As soon as each node has joined the cluster, it will start to recover any primary shards that are stored locally. Initially, the _cat/health request will report a status of red, meaning that not all primary shards have been allocated.

    Once each node has recovered its local shards, the status will become yellow, meaning all primary shards have been recovered, but not all replica shards are allocated. This is to be expected because allocation is still disabled.
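
    Rather than polling _cat/health by hand, you can ask the cluster health API to block until the cluster reaches at least yellow (the timeout value here is arbitrary):

    GET _cluster/health?wait_for_status=yellow&timeout=120s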

  7. Reenable allocation

    Delaying the allocation of replicas until all nodes have joined the cluster allows the master to allocate replicas to nodes which already have local shard copies. At this point, with all the nodes in the cluster, it is safe to reenable shard allocation:

    PUT _cluster/settings
    {
      "persistent": {
        "cluster.routing.allocation.enable": "all"
      }
    }

    The cluster will now start allocating replica shards to all data nodes (so is the shard allocation seen when upgrading a cluster simply the allocation of these replica shards?). At this point it is safe to resume indexing and searching, but your cluster will recover more quickly if you can delay indexing and searching until all shards have recovered.

    You can monitor progress with the _cat/health and _cat/recovery APIs:

    GET _cat/health
    
    GET _cat/recovery

    Once the status column in the _cat/health output has reached green, all primary and replica shards have been successfully allocated.
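
    The same wait_for_status parameter can be used to wait for the cluster to go fully green (again, the timeout is just an example):

    GET _cluster/health?wait_for_status=green&timeout=10m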
