Install pip if necessary

curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
python get-pip.py

Install Curator for Elasticsearch

Elasticsearch Curator helps you curate, or manage, your Elasticsearch indices and snapshots by:

  • Obtaining the full list of indices (or snapshots) from the cluster, as the actionable list
  • Iterating through a list of user-defined filters to progressively remove indices (or snapshots) from this actionable list as needed
  • Performing various actions on the items that remain in the actionable list

pip install elasticsearch-curator
pip install click==6.7

Configure curator

mkdir -p /var/log/elastic
touch /var/log/elastic/curator.log
mkdir ~/.curator
vi ~/.curator/curator.yml
curator.yml
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts: [Elasticsearch Server IP]
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile: /var/log/elastic/curator.log
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Test it: you should now be able to list the indices
curator_cli show_indices

Create repository

Configure elasticsearch.yml, located by default at /etc/elasticsearch/elasticsearch.yml

elasticsearch.yml
path.repo:  /u01/elasticsearch/backup
http.max_header_size: 16kb

Restart the Elasticsearch service (service elasticsearch restart) so the configuration takes effect.

Create the repository es_backup. Ensure location points to a valid path that is configured in path.repo and is accessible from all nodes.

curl -XPUT http://localhost:9200/_snapshot/es_backup -H "Content-Type: application/json" -d @repository.json
repository.json
{
   "type": "fs",
   "settings": {
      "compress": true,
      "location": "/u01/elasticsearch/backup"
   }
}
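Before POSTing the file, you can sanity-check it locally; a missing colon (an easy mistake in hand-written JSON) fails here with a clear error instead of an opaque Elasticsearch parse exception. This is a sketch assuming python3 is on the PATH; it recreates repository.json in the current directory:

```shell
# Recreate repository.json, then validate it with the stdlib JSON parser.
cat > repository.json <<'EOF'
{
  "type": "fs",
  "settings": {
    "compress": true,
    "location": "/u01/elasticsearch/backup"
  }
}
EOF
python3 -m json.tool < repository.json > /dev/null && echo "repository.json OK"
```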

Test it

curl -XGET 'localhost:9200/_snapshot/_all?pretty=true'

Create curator yaml action files

daily_backup.yml

Customize the snapshot name via the name option.
action 1: back up all indices dated before today to repository es_backup with the specified snapshot name
action 2: delete indices older than 185 days

daily_backup.yml
---
actions:
  1:
    action: snapshot
    description: >-
      Snapshot selected indices to repository 'es_backup' with the given snapshot name
    options:
      repository: es_backup
      name: '<c4cert-{now/d-1d}>'
      wait_for_completion: True
      max_wait: 4800
      wait_interval: 30
    filters:
    - filtertype: age
      source: name
      direction: older
      unit: days
      unit_count: 1
      timestring: "%Y.%m.%d"
 
 
  2:
    action: delete_indices
    description: >-
      Delete indices which are older than 185 days
    filters:
    - filtertype: age
      source: name
      direction: older
      unit: days
      unit_count: 185
      timestring: "%Y.%m.%d"
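The name option above uses Elasticsearch date math: <c4cert-{now/d-1d}> resolves on the server to yesterday's date. A rough local equivalent (GNU date assumed), handy when naming a manual snapshot:

```shell
# Build yesterday's snapshot name the same way the date-math expression does.
snap_name="c4cert-$(date -d yesterday +%Y.%m.%d)"
echo "$snap_name"
```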

del_snapshot.yml
action 1: Delete snapshots older than 185 days from repository es_backup

del_snapshot.yml
---
 
actions:
  1:
    action: delete_snapshots
    description: >-
      Delete snapshots from the repository which are older than 185 days
    options:
      repository: es_backup
      retry_interval: 120
      retry_count: 3
    filters:
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 185
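With unit_count: 185 the filter keeps roughly six months of snapshots. To see which cutoff date that implies as of today (GNU date assumed):

```shell
# Snapshots created before this date become candidates for deletion.
date -d "185 days ago" +%Y.%m.%d
```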

restore.yml
action 1: Restore all indices in the most recent snapshot with state SUCCESS.

restore.yml
---
 
actions:
  1:
    action: restore
    description: >-
      Restore all indices in the most recent snapshot with state SUCCESS.  Wait
      for the restore to complete before continuing.  Do not skip the repository
      filesystem access check.  Use the other options to define the index/shard
      settings for the restore.
    options:
      repository: es_backup
      # If name is blank, the most recent snapshot by age will be selected
      name:
      # If indices is blank, all indices in the snapshot will be restored
      indices:
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
    - filtertype: state
      state: SUCCESS

Note: use the --dry-run option to verify an action file without making any changes. The dry-run results appear in the configured log file.
curator --dry-run daily_backup.yml

Shell script and crontab

run.sh
#!/bin/sh
curator /u01/curator/del_snapshot.yml
curator /u01/curator/daily_backup.yml

crontab -e

This schedules the job to run daily at 3 AM

crontab
0 3 * * * /bin/sh /u01/curator/run.sh
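The run.sh above silently continues even if the first curator call fails. A hardened variant (a sketch, reusing the paths from this article) aborts on the first error and timestamps each run, which makes unattended cron runs easier to audit. Here it is written to /tmp for inspection:

```shell
# Write a hardened run.sh: set -e aborts on the first failing curator call,
# and the echo lines bracket each run with timestamps in the cron output.
cat > /tmp/run.sh <<'EOF'
#!/bin/sh
set -e
echo "curator run started: $(date)"
curator /u01/curator/del_snapshot.yml
curator /u01/curator/daily_backup.yml
echo "curator run finished: $(date)"
EOF
chmod +x /tmp/run.sh
sh -n /tmp/run.sh && echo "run.sh syntax OK"
```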

Restore

curator restore.yml

Tested OK in the CERT environment.

Some useful APIs

# get all repositories
curl -XGET 'localhost:9200/_snapshot/_all?pretty=true'
 
# delete repository
curl -XDELETE 'localhost:9200/_snapshot/es-snapshot?pretty=true'
 
# show snapshots
curator_cli show_snapshots --repository es_backup
 
# show indices
curator_cli show_indices
