Reference architecture diagram (the original figure is not reproduced here).

Setting up the Consul server cluster

  • Basic Consul server configuration format
{
  "server": true,
  "node_name": "$NODE_NAME",
  "datacenter": "dc1",
  "data_dir": "$CONSUL_DATA_PATH",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "$ADVERTISE_ADDR",
  "bootstrap_expect": 3,
  "retry_join": ["$JOIN1", "$JOIN2", "$JOIN3"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
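Each rendered configuration file can be syntax-checked before the agent is started. A minimal sketch, assuming the file is saved at /usr/local/etc/consul/server_agent.json (the path used by the systemd unit further below):

# Verify the configuration parses cleanly before starting the agent
consul validate /usr/local/etc/consul/server_agent.json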

Parameter descriptions

  • $NODE_NAME: a unique label for the node; in this guide, consul_s1, consul_s2, and consul_s3 respectively.
  • $CONSUL_DATA_PATH: absolute path to the Consul data directory; make sure the directory is writable by the user the Consul process runs as.
  • $ADVERTISE_ADDR: the address this Consul server advertises to the other servers in the cluster; it must not be 0.0.0.0. In this guide it is each server's own IP address: 10.1.42.101, 10.1.42.102, and 10.1.42.103 respectively.
  • $JOIN1, $JOIN2, $JOIN3: this example uses retry_join to join the server agents into a cluster, so the values are 10.1.42.101, 10.1.42.102, and 10.1.42.103.
  • Reference configurations
Consul server 1
{
  "server": true,
  "node_name": "consul_s1",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.101",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
Consul server 2
{
  "server": true,
  "node_name": "consul_s2",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.102",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
Consul server 3
{
  "server": true,
  "node_name": "consul_s3",
  "datacenter": "dc1",
  "data_dir": "/var/consul/data",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "advertise_addr": "10.1.42.103",
  "bootstrap_expect": 3,
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "ui": true,
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
  • systemd configuration
### BEGIN INIT INFO
# Provides: consul
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Consul agent
# Description: Consul service discovery framework
### END INIT INFO

[Unit]
Description=Consul server agent
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir -p /var/run/consul
ExecStartPre=/bin/chown -R consul:consul /var/run/consul
ExecStart=/usr/local/bin/consul agent \
    -config-file=/usr/local/etc/consul/server_agent.json \
    -pid-file=/var/run/consul/consul.pid
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
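With the unit file installed, the three servers can be started and cluster formation verified. A minimal sketch, assuming the unit was saved as /etc/systemd/system/consul.service on each server:

# Run on each of the three Consul servers
sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul

# Once all three are up, every node should appear as an alive server
consul members

Because bootstrap_expect is 3, the cluster only elects a leader once all three servers have joined.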

Consul client agent configuration

  • Configuration format
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "$NODE_NAME",
  "data_dir": "$CONSUL_DATA_PATH",
  "bind_addr": "$BIND_ADDR",
  "client_addr": "127.0.0.1",
  "retry_join": ["$JOIN1", "$JOIN2", "$JOIN3"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}

Parameter descriptions

  • $NODE_NAME: a unique label for the node; in this guide, consul_c1 and consul_c2 respectively.
  • $CONSUL_DATA_PATH: absolute path to the Consul data directory; make sure the directory is writable by the user the Consul process runs as.
  • $BIND_ADDR: the address this client agent binds to and advertises to the cluster; it must not be 0.0.0.0. The client agents run on the Vault servers, so in this guide it is each Vault server's IP address: 10.1.42.201 and 10.1.42.202 respectively.
  • $JOIN1, $JOIN2, $JOIN3: as for the servers, retry_join is used to join the cluster, so the values are the Consul server addresses 10.1.42.101, 10.1.42.102, and 10.1.42.103.
  • Reference configurations
Client agent 1
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "consul_c1",
  "data_dir": "/var/consul/data",
  "bind_addr": "10.1.42.201",
  "client_addr": "127.0.0.1",
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
Client agent 2
{
  "server": false,
  "datacenter": "dc1",
  "node_name": "consul_c2",
  "data_dir": "/var/consul/data",
  "bind_addr": "10.1.42.202",
  "client_addr": "127.0.0.1",
  "retry_join": ["10.1.42.101", "10.1.42.102", "10.1.42.103"],
  "log_level": "DEBUG",
  "enable_syslog": true,
  "acl_enforce_version_8": false
}
  • systemd configuration
### BEGIN INIT INFO
# Provides: consul
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Consul agent
# Description: Consul service discovery framework
### END INIT INFO

[Unit]
Description=Consul client agent
Requires=network-online.target
After=network-online.target

[Service]
User=consul
Group=consul
PIDFile=/var/run/consul/consul.pid
PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir -p /var/run/consul
ExecStartPre=/bin/chown -R consul:consul /var/run/consul
ExecStart=/usr/local/bin/consul agent \
    -config-file=/usr/local/etc/consul/client_agent.json \
    -pid-file=/var/run/consul/consul.pid
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
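Starting the client agents follows the same pattern as the servers. A sketch, again assuming the unit is installed as /etc/systemd/system/consul.service on each Vault host:

# Run on each Vault host
sudo systemctl daemon-reload
sudo systemctl enable consul
sudo systemctl start consul

# consul_c1 and consul_c2 should now also appear in the member list, with type "client"
consul members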

Vault configuration

Key configuration parameters: api_addr and cluster_addr

api_addr is the full URL this node advertises for client requests; a standby node uses the active node's api_addr to redirect clients to it. cluster_addr is the address used for server-to-server traffic within the HA cluster, such as request forwarding from standby to active; traffic on the cluster port (8201) is always TLS-encrypted with automatically generated certificates, which is why cluster_addr uses the https scheme even though tls_disable is set on the client listener. Note that active and standby are not fixed by configuration: the two nodes run identical configs apart from their own addresses, and whichever node acquires the leader lock in Consul first becomes active.

  • Vault active node
listener "tcp" {
  address = "0.0.0.0:8200"
  cluster_address = "10.1.42.201:8201"
  tls_disable = "true"
}

storage "consul" {
  address = "127.0.0.1:8500"
  path = "vault/"
}

api_addr = "http://10.1.42.201:8200"
cluster_addr = "https://10.1.42.201:8201"
  • Vault standby node
listener "tcp" {
  address = "0.0.0.0:8200"
  cluster_address = "10.1.42.202:8201"
  tls_disable = "true"
}

storage "consul" {
  address = "127.0.0.1:8500"
  path = "vault/"
}

api_addr = "http://10.1.42.202:8200"
cluster_addr = "https://10.1.42.202:8201"
  • systemd configuration
### BEGIN INIT INFO
# Provides: vault
# Required-Start: $local_fs $remote_fs
# Required-Stop: $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Vault server
# Description: Vault secret management tool
### END INIT INFO

[Unit]
Description=Vault secret management tool
Requires=network-online.target
After=network-online.target

[Service]
User=vault
Group=vault
PIDFile=/var/run/vault/vault.pid
ExecStart=/usr/local/bin/vault server -config=/etc/vault/vault_server.hcl -log-level=debug
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
RestartSec=42s
LimitMEMLOCK=infinity

[Install]
WantedBy=multi-user.target
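After both Vault nodes are running, the cluster must be initialized once and each node unsealed before a node becomes active. A minimal sketch of that flow (on recent releases the command is vault operator init; very old releases spelled it vault init):

# Initialize once, against either node; store the unseal keys and root token safely
export VAULT_ADDR=http://10.1.42.201:8200
vault operator init -key-shares=5 -key-threshold=3

# Unseal BOTH nodes, entering 3 different unseal keys on each
vault operator unseal

# Check HA status; one node reports active, the other standby
vault status

Which node is active can also be read from the API: GET /v1/sys/leader returns "is_self": true on the active node.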

Load balancing

  • Reference diagram (figure not reproduced here)
  • HAProxy configuration
listen vault
  bind 0.0.0.0:80
  balance roundrobin
  option httpchk GET /v1/sys/health
  server vault1 10.1.42.201:8200 check
  server vault2 10.1.42.202:8200 check
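This health check is what makes the load balancer HA-aware: /v1/sys/health returns HTTP 200 only on the active node, while an unsealed standby returns 429 (and a sealed node 503), so HAProxy marks only the active node as healthy and sends all traffic to it. A quick way to confirm the behavior (sketch):

# Print only the HTTP status code from each node's health endpoint
curl -s -o /dev/null -w '%{http_code}\n' http://10.1.42.201:8200/v1/sys/health   # 200 on the active node
curl -s -o /dev/null -w '%{http_code}\n' http://10.1.42.202:8200/v1/sys/health   # 429 on an unsealed standby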

References

https://www.vaultproject.io/docs/concepts/ha.html
https://www.vaultproject.io/guides/operations/vault-ha-consul.html
https://www.vaultproject.io/guides/operations/reference-architecture.html
https://github.com/rongfengliang/vault-consul-ha

 
 
 
 
