Reference architecture diagram: (see the reference architecture guide linked in the reference material below)

consul server cluster setup

  • consul basic configuration format
{
  "server": true,
  "node_name": "$NODE_NAME",
  "datacenter": "dc1",
  "data_dir": "$CONSUL_DATA_PATH",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "$ADVERTISE_ADDR",
  "bootstrap_expect": 3,
  "retry_join": ["$JOIN1", "$JOIN2", "$JOIN3"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}

Parameter description

  • $NODE_NAME: a unique label for the node; in this guide it will be consul_s1, consul_s2, and consul_s3 respectively.
  • $CONSUL_DATA_PATH: absolute path to the Consul data directory; make sure this directory is writable by the user the Consul process runs as.
  • $ADVERTISE_ADDR: the address each Consul server advertises to the other servers in the cluster; it should not be set to 0.0.0.0. For this guide it is each server's own IP address in its configuration file, i.e. 10.1.42.101, 10.1.42.102, and 10.1.42.103 respectively.
  • $JOIN1, $JOIN2, $JOIN3: this example uses the retry_join method to join the server agents into a cluster, so the values for this guide are 10.1.42.101, 10.1.42.102, and 10.1.42.103.
  • Reference configurations
consul server 1
{
  "server": true,
  "node_name": "consul_s1",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.101",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
consul server 2
{
  "server": true,
  "node_name": "consul_s2",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.102",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
consul server 3
{
  "server": true,
  "node_name": "consul_s3",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.103",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
  • systemd configuration (start-up and verification commands are sketched after the unit file)
### BEGIN INIT INFO
# Provides: consul
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Consul agent
# Description: Consul service discovery framework
### END INIT INFO

[Unit]
Description=Consul server agent
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir -p /var/run/consul
ExecStartPre=/bin/chown -R consul:consul /var/run/consul
ExecStart=/usr/local/bin/consul agent \
-config-file=/usr/local/etc/consul/server_agent.json \
-pid-file=/var/run/consul/consul.pid
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
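
With the three server configuration files and the unit file in place, the cluster can be brought up. The following is a minimal start-up sketch, assuming the unit is installed as /etc/systemd/system/consul.service and each node's JSON is saved at /usr/local/etc/consul/server_agent.json (the path referenced by ExecStart); the exact paths are assumptions taken from this guide, so adjust them to your layout.

# run on each of the three server nodes
consul validate /usr/local/etc/consul/server_agent.json   # sanity-check the config file first
sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul

# once all three servers are up, confirm the cluster formed and a leader was elected
consul members                     # should list consul_s1, consul_s2, consul_s3 as type "server"
consul operator raft list-peers    # shows the Raft peers and which node is the leader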

consul client agent configuration (on the Vault hosts)

  • Configuration format
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "$NODE_NAME",
  "data_dir": "$CONSUL_DATA_PATH",
  "bind_addr": "$BIND_ADDR",
  "client_addr": "127.0.0.1",
  "retry_join": ["$JOIN1", "$JOIN2", "$JOIN3"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}

Parameter description

  • $NODE_NAME: a unique label for the node; in this guide it will be consul_c1 and consul_c2 respectively.
  • $CONSUL_DATA_PATH: absolute path to the Consul data directory; make sure this directory is writable by the user the Consul process runs as.
  • $BIND_ADDR: the address the client agent binds to and advertises to the rest of the cluster; it should not be set to 0.0.0.0. For this guide it is the Vault server's IP address in each configuration file, i.e. 10.1.42.201 and 10.1.42.202 respectively.
  • $JOIN1, $JOIN2, $JOIN3: this example uses the retry_join method to join the agents to the cluster, so the values for this guide are 10.1.42.101, 10.1.42.102, and 10.1.42.103.
  • Reference configurations
agent1
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "consul_c1",
  "data_dir": "/var/consul/data",
  "bind_addr": "10.1.42.201",
  "client_addr": "127.0.0.1",
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
agent2
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "consul_c2",
  "data_dir": "/var/consul/data",
  "bind_addr": "10.1.42.202",
  "client_addr": "127.0.0.1",
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
  • systemd configuration (start/verify commands are sketched after the unit file)
### BEGIN INIT INFO
# Provides: consul
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Consul agent
# Description: Consul service discovery framework
### END INIT INFO

[Unit]
Description=Consul client agent
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir -p /var/run/consul
ExecStartPre=/bin/chown -R consul:consul /var/run/consul
ExecStart=/usr/local/bin/consul agent \
-config-file=/usr/local/etc/consul/client_agent.json \
-pid-file=/var/run/consul/consul.pid
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
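
The client agents on the two Vault hosts are started the same way. A minimal sketch, again assuming the unit is installed as /etc/systemd/system/consul.service and the JSON above is saved at /usr/local/etc/consul/client_agent.json (the path referenced by ExecStart).

# run on each Vault host (10.1.42.201 and 10.1.42.202)
sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul

# the agent joins the servers via retry_join; the member list should now
# show three "server" nodes and two "client" nodes
consul members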

vault configuration

Key configuration parameters: api_addr and cluster_addr.

  • api_addr: the full URL this node advertises to the other Vault servers for redirecting/forwarding client requests to the active node (the HTTP API listener, port 8200 here).
  • cluster_addr: the full URL advertised for server-to-server cluster traffic (port 8201 here). The cluster port always speaks TLS with an internally generated certificate, which is why cluster_addr uses https even though tls_disable is set on the client listener.

  • vault active
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "10.1.42.201:8201"
tls_disable = "true"
} storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
} api_addr = "http://10.1.42.201:8200"
cluster_addr = "https://10.1.42.201:8201"
  • vault standby
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "10.1.42.202:8201"
tls_disable = "true"
} storage "consul" {
address = "127.0.0.1:8500"
path = "vault/"
} api_addr = "http://10.1.42.202:8200"
cluster_addr = "https://10.1.42.202:8201"
  • systemd configuration (an initialize/unseal sketch follows the unit file)
### BEGIN INIT INFO
# Provides: vault
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Vault server
# Description: Vault secret management tool
### END INIT INFO

[Unit]
Description=Vault secret management tool
Requires=network-online.target
After=network-online.target

[Service]
User=vault
Group=vault
PIDFile=/var/run/vault/vault.pid
ExecStart=/usr/local/bin/vault server -config=/etc/vault/vault_server.hcl -log-level=debug
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
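
Once both Vault servers are running against their local Consul client agents, the cluster still has to be initialized once and every node unsealed; the first node to be fully unsealed becomes active and the other stays in standby. The following is a minimal sketch using the CLI (operator-style commands as in Vault 0.9.2 and later; older releases use vault init / vault unseal), assuming the unit above is installed as /etc/systemd/system/vault.service and VAULT_ADDR points at the local plain-HTTP listener because tls_disable is set.

# on vault 1 (10.1.42.201)
export VAULT_ADDR=http://127.0.0.1:8200
sudo systemctl daemon-reload
sudo systemctl enable vault
sudo systemctl start vault
vault operator init      # run exactly once per cluster; record the unseal keys and root token
vault operator unseal    # repeat until the unseal threshold is met; this node becomes active

# on vault 2 (10.1.42.202)
export VAULT_ADDR=http://127.0.0.1:8200
sudo systemctl enable vault
sudo systemctl start vault
vault operator unseal    # use the same unseal keys; this node ends up in standby

# either node reports its HA role (active / standby)
vault status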

load balancer notes

  • Reference diagram
  • haproxy configuration (a health-check illustration follows)
listen vault
    bind 0.0.0.0:80
    balance roundrobin
    option httpchk GET /v1/sys/health
    server vault1 192.168.33.10:8200 check
    server vault2 192.168.33.11:8200 check
    server vault3 192.168.33.12:8200 check
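
The httpchk against /v1/sys/health is what pins traffic to the active node: by default that endpoint returns HTTP 200 on an active, unsealed node and 429 on a standby, so HAProxy marks the standbys as down and only forwards to the active instance. Note that the addresses in the sample above come from a separate three-node demo environment; for this guide they would be the two Vault servers, 10.1.42.201 and 10.1.42.202. A quick manual check of the health endpoint (assuming 10.1.42.201 happens to be the active node at the moment):

# active node: expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://10.1.42.201:8200/v1/sys/health

# standby node: expect 429 (add ?standbyok=true if standbys should also report 200)
curl -s -o /dev/null -w "%{http_code}\n" http://10.1.42.202:8200/v1/sys/health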

Reference material

https://www.vaultproject.io/docs/concepts/ha.html
https://www.vaultproject.io/guides/operations/vault-ha-consul.html
https://www.vaultproject.io/guides/operations/reference-architecture.html
https://github.com/rongfengliang/vault-consul-ha

 
 
 
 
