Deploying an Elasticsearch Cluster with Docker
http://blog.sina.com.cn/s/blog_8ea8e9d50102wwik.html
This lab builds a cluster out of different node roles (client x1, master x3, data x2).
docker run -d --restart=always -p 9200:9200 -p 9300:9300 \
  --name=elasticsearch-client --oom-kill-disable=true --memory-swappiness=1 \
  -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
  elasticsearch:2.3.3
cat >elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
docker restart elasticsearch-client
Simply copy the modified configuration file to the corresponding path inside the container, then restart the container.
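A minimal sketch of that copy-and-restart step, assuming the file was written to the current directory as above and the container is the elasticsearch-client started earlier:

# copy the edited config into the container's config directory, then restart
docker cp elasticsearch.yml elasticsearch-client:/usr/share/elasticsearch/config/elasticsearch.yml
docker restart elasticsearch-client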
man docker-run
--net="bridge"
Set the Network mode for the container
'bridge': create a network stack on the default Docker bridge
'none': no networking
'container:<name|id>': reuse another container's network stack
'host': use the Docker host network stack. Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
Note: Docker's default network mode is bridge, which automatically assigns the container a private address. For cluster communication between containers on multiple hosts you need a service-discovery/auto-registration component such as Consul, Etcd, or Doozer to coordinate them; see the earlier post "Docker集群之Swarm+Consul+Shipyard".
The network.publish_host: 192.168.8.10 setting must be used to declare the address the Elasticsearch node advertises externally. This is critical: without it, the cluster nodes cannot communicate with each other and errors like the following appear:
[2016-06-21 05:50:19,123][INFO ][discovery.zen ] [consul-s2.example.com] failed to send join request to master[{consul-s1.example.com}{DeKixlVMS2yoynzX8Y-gdA}{172.17.0.1}{172.17.0.1:9300}{data=false, master=true}], reason [RemoteTransportException[[consul-s2.example.com][172.17.0.1:9300]
The simplest alternative is to run the container in host network mode with --net=host, borrowing the host's network stack directly; in other words, no separate container network layer is created.
At the same time you can disable the OOM killer for the container and, based on the host's available memory, use the -m option (default 0, unlimited) to cap the container's memory, as in the sketch below.
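A hedged example combining the two points above, host networking plus a memory cap; the 4g limit is an illustrative value, not one taken from the original run:

# host networking, OOM killer disabled, memory capped at 4 GB (example values)
docker run -d --restart=always --net=host \
  --oom-kill-disable=true -m 4g --memory-swappiness=1 \
  -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
  --name=elasticsearch-client elasticsearch:2.3.3

With --net=host the -p port mappings are unnecessary, because the process binds directly to the host's interfaces.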
[root@ela-client ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
762e4d21aaf8 elasticsearch:2.3.3 "/docker-entrypoint.s" 2 minutes ago Up 2 minutes elasticsearch-client
[root@ela-client ~]# netstat -tunlp|grep java
tcp 0 0 0.0.0.0:9200 0.0.0.0:* LISTEN 18952/java
tcp 0 0 0.0.0.0:9300 0.0.0.0:* LISTEN 18952/java
[root@ela-client ~]# ls /opt/elasticsearch/
data logs
[root@ela-client ~]# docker logs $(docker ps -q)
[2016-06-13 16:09:51,308][INFO ][node ] [Sunfire] version[2.3.3], pid[1], build[218bdf1/2016-05-17T15:40:04Z]
[2016-06-13 16:09:51,311][INFO ][node ] [Sunfire] initializing ...
... ...
[2016-06-13 16:09:56,408][INFO ][node ] [Sunfire] started
[2016-06-13 16:09:56,417][INFO ][gateway ] [Sunfire] recovered [0] indices into cluster_state
Client node (192.168.8.10):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.10
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Master node 1 (192.168.8.101):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.101
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Master node 2 (192.168.8.102):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.102
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Master node 3 (192.168.8.103):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: true
node.data: false
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.publish_host: 192.168.8.103
transport.tcp.port: 9300
network.host: 0.0.0.0
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Data node 1 (192.168.8.201):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.publish_host: 192.168.8.201
transport.tcp.port: 9300
network.host: 0.0.0.0
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
Data node 2 (192.168.8.202):
cat >/opt/elasticsearch/config/elasticsearch.yml <<HERE
cluster.name: elasticsearch_cluster
node.name: ${HOSTNAME}
node.master: false
node.data: true
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
bootstrap.mlockall: true
network.host: 0.0.0.0
network.publish_host: 192.168.8.202
transport.tcp.port: 9300
http.port: 9200
index.refresh_interval: 5s
script.inline: true
script.indexed: true
HERE
The nodes are joined into a cluster via the discovery module.
Append the following lines to /opt/elasticsearch/config/elasticsearch.yml on each of the nodes above, then restart:
cat >>/opt/elasticsearch/config/elasticsearch.yml <<HERE
discovery.zen.ping.timeout: 100s
discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.8.101:9300", "192.168.8.102:9300", "192.168.8.103:9300", "192.168.8.201:9300", "192.168.8.202:9300","192.168.8.10:9300"]
HERE
docker restart $(docker ps -a|grep elasticsearch|awk '{print $1}')
5. Verify the cluster
Wait about 30 seconds and the nodes will join the cluster automatically; on any node in the cluster you should see output similar to the following, which indicates the cluster is running normally.
For REST API usage, see the earlier notes on the Elasticsearch REST API and the cluster health reference:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_cluster_health.html
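Instead of polling by hand, the health API's wait_for_nodes parameter can block until all six nodes have joined (a sketch; the 60s timeout is an arbitrary choice):

# returns as soon as 6 nodes are in the cluster, or gives up after 60 seconds
curl 'http://localhost:9200/_cluster/health?wait_for_nodes=6&timeout=60s&pretty'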
[root@ela-client ~]#curl 'http://localhost:9200/_cat/health?v'
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1465843145 18:39:05 elasticsearch_cluster green 6 2 0 0 0 0 0 0 - 100.0%
[root@ela-client ~]#curl 'localhost:9200/_cat/nodes?v'
host ip heap.percent ram.percent load node.role master name
192.168.8.102 192.168.8.102 14 99 0.00 - * ela-master2.example.com
192.168.8.103 192.168.8.103 4 99 0.14 - m ela-master3.example.com
192.168.8.202 192.168.8.202 11 99 0.00 d - ela-data2.example.com
192.168.8.10 192.168.8.10 10 98 0.17 - - ela-client.example.com
192.168.8.201 192.168.8.201 11 99 0.00 d - ela-data1.example.com
192.168.8.101 192.168.8.101 12 99 0.01 - m ela-master1.example.com
[root@ela-master2 ~]#curl 'http://localhost:9200/_nodes/process?pretty'
{
  "cluster_name" : "elasticsearch_cluster",
  "nodes" : {
    "naMz_y4uRRO-FzyxRfTNjw" : {
      "name" : "ela-data2.example.com",
      "transport_address" : "192.168.8.202:9300",
      "host" : "192.168.8.202",
      "ip" : "192.168.8.202",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.202:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "7FwFY20ESZaRtIWhYMfDAg" : {
      "name" : "ela-data1.example.com",
      "transport_address" : "192.168.8.201:9300",
      "host" : "192.168.8.201",
      "ip" : "192.168.8.201",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.201:9200",
      "attributes" : {
        "master" : "false"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "X0psLpQyR42A4ThiP8ilhA" : {
      "name" : "ela-master3.example.com",
      "transport_address" : "192.168.8.103:9300",
      "host" : "192.168.8.103",
      "ip" : "192.168.8.103",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.103:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "MG_GlSAZRkqLq8gMqaZITw" : {
      "name" : "ela-master1.example.com",
      "transport_address" : "192.168.8.101:9300",
      "host" : "192.168.8.101",
      "ip" : "192.168.8.101",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.101:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "YxNHUPqVRNK3Liilw_hU9A" : {
      "name" : "ela-master2.example.com",
      "transport_address" : "192.168.8.102:9300",
      "host" : "192.168.8.102",
      "ip" : "192.168.8.102",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.102:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    },
    "zTKJJ4ipQg6xAcwy1aE-9g" : {
      "name" : "ela-client.example.com",
      "transport_address" : "192.168.8.10:9300",
      "host" : "192.168.8.10",
      "ip" : "192.168.8.10",
      "version" : "2.3.3",
      "build" : "218bdf1",
      "http_address" : "192.168.8.10:9200",
      "attributes" : {
        "data" : "false",
        "master" : "true"
      },
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1,
        "mlockall" : false
      }
    }
  }
}
Issues:
Adjusting the JVM heap (the image defaults are -Xms256m -Xmx1g):
https://hub.docker.com/r/itzg/elasticsearch/
https://hub.docker.com/_/elasticsearch/
Tested: setting JAVA_OPTS or ES_JAVA_OPTS only appends to the existing options and does not override them; the heap size can only be overridden with -e ES_HEAP_SIZE="32g", as in the sketch below.
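A minimal sketch of starting a node with the heap overridden through the environment; the 8g value and container name are illustrative, and it assumes the official 2.x image's startup script reads ES_HEAP_SIZE (which is what the test above relies on):

# override the heap via ES_HEAP_SIZE instead of JAVA_OPTS/ES_JAVA_OPTS
docker run -d --restart=always --net=host \
  -e ES_HEAP_SIZE="8g" \
  -v /opt/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /opt/elasticsearch/logs:/usr/share/elasticsearch/logs \
  --name=elasticsearch-data1 elasticsearch:2.3.3

The effective heap can then be checked via the nodes info API, e.g. curl 'localhost:9200/_nodes/jvm?pretty'.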
Additional notes:
Monitoring and visualization plugins such as head and Marvel are also worth trying; a sketch of installing head into the running container follows.
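The plugin name mobz/elasticsearch-head is the usual one for the 2.x plugin tool, but treat the exact command as an assumption to verify against your image:

# install the head site plugin, then browse to http://<host>:9200/_plugin/head/
docker exec elasticsearch-client /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
docker restart elasticsearch-client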