http://blog.sina.com.cn/s/blog_8ea8e9d50102wwik.html

Deploying an Elasticsearch Cluster with Docker
References:
Environment:
CentOS 7.2
docker-engine-1.11.2
elasticsearch-2.3.3
 
 
Foreword:
For deployment on virtual machine nodes, see "Elasticsearch load-balanced cluster"; this post gives a quick walkthrough of deploying with Docker.
 

This exercise uses a cluster of mixed node types (client x1, master x3, data x2):

ela-client.example.com:192.168.8.10(client node)
ela-master1.example.com:192.168.8.101(master node)
ela-master2.example.com:192.168.8.102(master node)
ela-master3.example.com:192.168.8.103(master node)
ela-data1.example.com:192.168.8.201(data node)
ela-data2.example.com:192.168.8.202(data node)
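If you rely on the hostnames above, each host needs name resolution; a sketch rendering the inventory as /etc/hosts entries (the original post does not show DNS setup, so the target path is an assumption — writing to /tmp here, append to /etc/hosts on each node as needed):

```shell
# Sketch: render the cluster inventory as /etc/hosts entries.
# Writing to /tmp for illustration; append to /etc/hosts on each node if needed.
cat <<'EOF' > /tmp/ela-hosts
192.168.8.10 ela-client.example.com
192.168.8.101 ela-master1.example.com
192.168.8.102 ela-master2.example.com
192.168.8.103 ela-master3.example.com
192.168.8.201 ela-data1.example.com
192.168.8.202 ela-data2.example.com
EOF
```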
 
 
1. Install docker-engine (all nodes)
For Docker image pull acceleration, see "Docker Hub acceleration and private registry setup".
 
 
2. Pull the elasticsearch Docker image
docker pull elasticsearch:2.3.3
 
Test-run elasticsearch:
docker run -d -p 9200:9200 -p 9300:9300 --name=elasticsearch-test elasticsearch:2.3.3
Inspect the container configuration:
docker inspect elasticsearch-test
Remove the test container (note: this command removes every container on the host, not just the test one):
docker rm -f $(docker ps -aq)
Tip: directories or files that keep growing, such as data and logs, can be bind-mounted from the Docker host for easier management.
 
 
3. Configure and start the nodes
Create a cluster named elasticsearch_cluster (the default cluster name is elasticsearch).
A. client node
ela-client.example.com:192.168.8.10 (client node)

docker run -d --restart=always -p 9200:9200 -p 9300:9300 --name=elasticsearch-client --oom-kill-disable=true --memory-swappiness=1 -v /opt/elasticsearch/data:/usr/share/elasticsearch/data -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs elasticsearch:2.3.3

cat >elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker cp elasticsearch.yml elasticsearch-client:/usr/share/elasticsearch/config/elasticsearch.yml

docker restart elasticsearch-client

That is, copy the edited configuration file straight into the matching path inside the container, then restart the container.

man docker-run

       --net="bridge"
          Set the Network mode for the container
              'bridge': create a network stack on the default Docker bridge
              'none': no networking
              'container:': reuse another container's network stack
              'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.

Note: Docker's default network mode is bridge, which automatically assigns each container a private address. For cluster communication across multiple hosts you need service-discovery and auto-registration components such as Consul, Etcd, or Doozer to coordinate; see "Docker cluster with Swarm + Consul + Shipyard".

The network.publish_host: 192.168.8.10 setting, which fixes the address the elasticsearch node advertises to the outside, is essential. Without it, cluster nodes cannot communicate with each other, and you get errors like:

[2016-06-21 05:50:19,123][INFO ][discovery.zen ] [consul-s2.example.com] failed to send join request to master[{consul-s1.example.com}{DeKixlVMS2yoynzX8Y-gdA}{172.17.0.1}{172.17.0.1:9300}{data=false, master=true}], reason [RemoteTransportException[[consul-s2.example.com][172.17.0.1:9300]

Simplest of all, you can set the network to host mode with --net=host and borrow the host's network interfaces directly; in other words, skip the container network layer entirely.

You can also disable the OOM killer for the container, and cap container memory with the -m flag (default 0, unlimited) according to the host's memory.
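As a sketch of sizing that -m cap (assumes a Linux host with /proc/meminfo; the half-of-host-RAM rule here is an assumption for illustration, not from the original post):

```shell
# Sketch: derive a container memory cap from host RAM.
# Assumption: Linux host exposing /proc/meminfo; half of RAM is an arbitrary example policy.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_limit_mb=$((mem_kb / 1024 / 2))   # half of host RAM, in MB
echo "suggested: docker run -m ${mem_limit_mb}m ..."
```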

[root@ela-client ~]# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
762e4d21aaf8        elasticsearch:2.3.3   "/docker-entrypoint.s"   2 minutes ago       Up 2 minutes                            elasticsearch-client
[root@ela-client ~]# netstat -tunlp|grep java
tcp        0      0 0.0.0.0:9200            0.0.0.0:*               LISTEN      18952/java
tcp        0      0 0.0.0.0:9300            0.0.0.0:*               LISTEN      18952/java
[root@ela-client ~]# ls /opt/elasticsearch/
data  logs
[root@ela-client ~]# docker logs $(docker ps -q)
[2016-06-13 16:09:51,308][INFO ][node                     ] [Sunfire] version[2.3.3], pid[1], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-13 16:09:51,311][INFO ][node                     ] [Sunfire] initializing ...
... ...
[2016-06-13 16:09:56,408][INFO ][node                     ] [Sunfire] started
[2016-06-13 16:09:56,417][INFO ][gateway                  ] [Sunfire] recovered [0] indices into cluster_state

 
Alternatively:
Volume mapping; personally I find volume mapping easier for management and backup.
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-client \
    elasticsearch:2.3.3
 
B.master node
ela-master1.example.com:192.168.8.101(master node)
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.101
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-master1 \
    elasticsearch:2.3.3
ela-master2.example.com:192.168.8.102(master node)
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.102
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-master2 \
    elasticsearch:2.3.3
ela-master3.example.com:192.168.8.103(master node)
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.publish_host: 192.168.8.103
transport.tcp.port: 9300
network.host: 0.0.0.0
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-master3 \
    elasticsearch:2.3.3
 
C.data node
ela-data1.example.com:192.168.8.201(data node)
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.publish_host: 192.168.8.201
transport.tcp.port: 9300
network.host: 0.0.0.0
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-data1 \
    elasticsearch:2.3.3
ela-data2.example.com:192.168.8.202(data node)
mkdir -p /opt/elasticsearch/config

cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.202
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker run -tid --restart=always \
    -p 9200:9200 \
    -p 9300:9300 \
    --oom-kill-disable=true \
    --memory-swappiness=1 \
    -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
    -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
    -v /opt/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
    --name=elasticsearch-data2 \
    elasticsearch:2.3.3
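The six per-node configs above differ only in the two role flags and the publish address. As a minimal sketch (the helper name gen_config is hypothetical, not from the original post), the pattern can be captured in one generator function:

```shell
# Sketch: generate a per-node elasticsearch.yml (hypothetical helper, not from the post).
# Usage: gen_config <publish_ip> <is_master> <is_data> <outfile>
gen_config() {
    publish_ip=$1; is_master=$2; is_data=$3; outfile=$4
    cat > "$outfile" <<EOF
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: ${is_master}
node.data: ${is_data}
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: ${publish_ip}
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
EOF
}

# Example: ela-master1's config, as written out by hand in section B above.
gen_config 192.168.8.101 true false /tmp/elasticsearch-master1.yml
```

Writing to /tmp here for illustration; on a real node you would target /opt/elasticsearch/config/elasticsearch.yml.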
 
 
4. Configure the cluster (all nodes)

Nodes join the cluster through the discovery module.

Append the following lines to /opt/elasticsearch/config/elasticsearch.yml on each of the nodes above, then restart:

cat >>/opt/elasticsearch/config/elasticsearch.yml <<HERE
discovery.zen.ping.timeout: 100s
discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.8.101:9300", "192.168.8.102:9300", "192.168.8.103:9300", "192.168.8.201:9300", "192.168.8.202:9300", "192.168.8.10:9300"]
HERE

docker restart $(docker ps -a|grep elasticsearch|awk '{print $1}')
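Rather than typing the unicast host list by hand, it can be assembled from the node IPs (a minimal sketch, assuming the six addresses listed at the top of this post):

```shell
# Sketch: build the discovery.zen.ping.unicast.hosts value from the node IPs.
ips="192.168.8.101 192.168.8.102 192.168.8.103 192.168.8.201 192.168.8.202 192.168.8.10"
hosts=""
for ip in $ips; do
    # append '"<ip>:9300"', comma-separated after the first entry
    hosts="${hosts:+$hosts, }\"$ip:9300\""
done
echo "discovery.zen.ping.unicast.hosts: [$hosts]"
```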

 

 

5. Verify the cluster

After about 30 seconds the nodes join automatically. Output similar to the following on any node of the cluster means it is running normally.

For REST API calls, see "Elasticsearch REST API notes".

https://www.elastic.co/guide/en/elasticsearch/reference/current/_cluster_health.html

[root@ela-client ~]# curl 'http://localhost:9200/_cat/health?v'

epoch      timestamp cluster               status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1465843145 18:39:05  elasticsearch_cluster green           6         2      0   0    0    0        0             0                  -                100.0%
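The status column of that _cat/health line is handy as a scripted readiness gate. A sketch (the sample line is the output captured above; in practice you would feed it from curl):

```shell
# Sketch: pull the cluster status out of a _cat/health line.
# Sample line is the captured output above; live equivalent would be:
#   line=$(curl -s 'http://localhost:9200/_cat/health')
line='1465843145 18:39:05 elasticsearch_cluster green 6 2 0 0 0 0 0 0 - 100.0%'
status=$(printf '%s\n' "$line" | awk '{print $4}')   # 4th column is the status
echo "cluster status: $status"
```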

[root@ela-client ~]# curl 'localhost:9200/_cat/nodes?v'
host            ip              heap.percent ram.percent load node.role master name
192.168.8.102 192.168.8.102           14          99 0.00 -         *      ela-master2.example.com
192.168.8.103 192.168.8.103            4          99 0.14 -         m      ela-master3.example.com
192.168.8.202 192.168.8.202           11          99 0.00 d         -      ela-data2.example.com
192.168.8.10  192.168.8.10            10          98 0.17 -         -      ela-client.example.com
192.168.8.201 192.168.8.201           11          99 0.00 d         -      ela-data1.example.com
192.168.8.101 192.168.8.101           12          99 0.01 -         m      ela-master1.example.com

[root@ela-master2 ~]# curl 'http://localhost:9200/_nodes/process?pretty'
{
  "cluster_name" : "elasticsearch_cluster",
  "nodes" : {
    "naMz_y4uRRO-FzyxRfTNjw" : {
      "name" : "ela-data2.example.com",
      "transport_address" : "192.168.8.202:9300",
      "host" : "192.168.8.202",
      "ip" : "192.168.8.202",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.202:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "7FwFY20ESZaRtIWhYMfDAg" : {
      "name" : "ela-data1.example.com",
      "transport_address" : "192.168.8.201:9300",
      "host" : "192.168.8.201",
      "ip" : "192.168.8.201",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.201:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "X0psLpQyR42A4ThiP8ilhA" : {
      "name" : "ela-master3.example.com",
      "transport_address" : "192.168.8.103:9300",
      "host" : "192.168.8.103",
      "ip" : "192.168.8.103",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.103:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "MG_GlSAZRkqLq8gMqaZITw" : {
      "name" : "ela-master1.example.com",
      "transport_address" : "192.168.8.101:9300",
      "host" : "192.168.8.101",
      "ip" : "192.168.8.101",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.101:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "YxNHUPqVRNK3Liilw_hU9A" : {
      "name" : "ela-master2.example.com",
      "transport_address" : "192.168.8.102:9300",
      "host" : "192.168.8.102",
      "ip" : "192.168.8.102",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.102:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "zTKJJ4ipQg6xAcwy1aE-9g" : {
      "name" : "ela-client.example.com",
      "transport_address" : "192.168.8.10:9300",
      "host" : "192.168.8.10",
      "ip" : "192.168.8.10",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.10:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    }
  }
}

Issues:

Adjusting the JVM heap; the defaults are -Xms256m -Xmx1g.

https://hub.docker.com/r/itzg/elasticsearch/
https://hub.docker.com/_/elasticsearch/

In testing, changes to JAVA_OPTS and ES_JAVA_OPTS are only appended and never override the defaults; the only way to override the heap is
-e ES_HEAP_SIZE="32g"



Addendum:

If you are interested in monitoring and visualization plugins such as head or Marvel, give them a try.
