etcd cluster deployment
Create the directories for the etcd binary, config, and certificate files:
mkdir -p /opt/etcd/{bin,cfg,ssl}
Create a directory for the downloaded package:
mkdir -p /soft
Extract the etcd package and move the binaries to /opt/etcd/bin:
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
The etcd configuration file:
$ cat etcd
#[Member]
ETCD_NAME="etcd01" # Node name; with multiple nodes this must be changed on each one: etcd02, etcd03
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # Data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380" # Peer (cluster) communication port, 2380
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379" # Client communication port, 2379
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380" # Advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379" # Advertised client address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380" # All nodes in the cluster; every node needs this complete list
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # Cluster token
ETCD_INITIAL_CLUSTER_STATE="new" # "new" creates a new cluster; "existing" joins an already running one
Manage etcd with systemd:
# Every parameter in the unit file references a variable from the main config file, so if startup fails, first check /opt/etcd/cfg/etcd for configuration mistakes.
root@k8s-master: /opt/etcd/cfg
$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
    --name=${ETCD_NAME} \
    --data-dir=${ETCD_DATA_DIR} \
    --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --peer-cert-file=/opt/etcd/ssl/server.pem \
    --peer-key-file=/opt/etcd/ssl/server-key.pem \
    --trusted-ca-file=/opt/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
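Because the unit file only expands variables from /opt/etcd/cfg/etcd, a quick pre-start sanity check is to source that file and confirm each variable the ExecStart line references is set. A minimal sketch (the `check_etcd_cfg` helper is hypothetical, not part of the original setup):

```shell
# check_etcd_cfg FILE — source FILE and verify that every variable the unit's
# ExecStart line references is non-empty; a missing one makes etcd fail at
# startup with an unhelpful error.
check_etcd_cfg() {
    . "$1" || return 1
    missing=""
    for v in ETCD_NAME ETCD_DATA_DIR ETCD_LISTEN_PEER_URLS \
             ETCD_LISTEN_CLIENT_URLS ETCD_INITIAL_ADVERTISE_PEER_URLS \
             ETCD_ADVERTISE_CLIENT_URLS ETCD_INITIAL_CLUSTER \
             ETCD_INITIAL_CLUSTER_TOKEN ETCD_INITIAL_CLUSTER_STATE; do
        eval "val=\"\${$v}\""            # indirect lookup of the variable named in $v
        [ -n "$val" ] || missing="$missing $v"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "config looks complete"
}
# Usage: check_etcd_cfg /opt/etcd/cfg/etcd
```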
Reload the systemd configuration, enable etcd, and start it:
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
Watch the startup log:
tail -f /var/log/messages # etcd will report that it cannot reach node01 and node02
# See the log below: the other nodes do not yet have the etcd config and SSL files, so these errors keep repeating. systemctl start etcd does in fact succeed on the master, but it cannot reach its peers, so the start command blocks for a long time.
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")

On node01 and node02:
# scp the master's configuration to node01 and node02
# Recursively copy the config files and directories under /opt/etcd/ to /opt on node01 and node02
scp -r /opt/etcd/ root@192.168.1.66:/opt
scp -r /opt/etcd/ root@192.168.1.65:/opt
# Copy etcd.service to /usr/lib/systemd/system/ on node01 and node02
scp /usr/lib/systemd/system/etcd.service root@192.168.1.65:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.1.66:/usr/lib/systemd/system/
# On each node, edit /opt/etcd/cfg/etcd to set its own ETCD_NAME (etcd02 / etcd03) and replace the listen/advertise IPs with the node's own address, then start etcd.
# Then watch the log again with tail -f /var/log/messages
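The copied config still carries the master's node name and IP, so each node must substitute its own. A sketch using a hypothetical `localize_etcd_cfg` helper (not part of the original text); `ETCD_INITIAL_CLUSTER` is deliberately left untouched because it must keep listing every node:

```shell
# localize_etcd_cfg FILE NODE_NAME NODE_IP — rewrite a copied etcd config
# in place: set ETCD_NAME, and swap the master's IP (192.168.1.63) for the
# node's own on every line except the ETCD_INITIAL_CLUSTER list.
localize_etcd_cfg() {
    file=$1 name=$2 ip=$3
    sed -i \
        -e "s/^ETCD_NAME=.*/ETCD_NAME=\"$name\"/" \
        -e "/^ETCD_INITIAL_CLUSTER=/!s/192\.168\.1\.63/$ip/g" \
        "$file"
}
# e.g. on node01: localize_etcd_cfg /opt/etcd/cfg/etcd etcd02 192.168.1.65
# and on node02:  localize_etcd_cfg /opt/etcd/cfg/etcd etcd03 192.168.1.66
```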
PS:
# Because this environment runs on virtual machines, the log entries below are caused by clock skew between the master and the node machines; sync them with: ntpdate time.windows.com
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792944111s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861673928s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858782669s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.793075827s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.795990455s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858938895s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861743791s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.796159244s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792476037s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")

$ crontab -l
* * * * ntpdate time.windows.com >/dev/null >&
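As the warnings above show, etcd flags peers whose clocks differ by more than about one second. A small sketch of that threshold check (assumes passwordless ssh to the peer for the real comparison):

```shell
# clock_drift_too_high T1 T2 — given two epoch timestamps in seconds,
# succeed (exit 0) when |T1 - T2| exceeds the ~1s tolerance etcd warns about.
clock_drift_too_high() {
    d=$(( $1 - $2 ))
    [ "$d" -lt 0 ] && d=$(( -d ))    # absolute value
    [ "$d" -gt 1 ]
}
# e.g.: clock_drift_too_high "$(date +%s)" "$(ssh 192.168.1.65 date +%s)" \
#           && ntpdate time.windows.com
```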
Finally, test the node status of the cluster:
# If you see the output below, the cluster was deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd
root@k8s-master: ~ ::
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379" cluster-health
member 472edcb0986774fe is healthy: got healthy result from https://192.168.1.65:2379
member 89e49aedde68fee4 is healthy: got healthy result from https://192.168.1.66:2379
member ddaf91a76208ea00 is healthy: got healthy result from https://192.168.1.63:2379
cluster is healthy
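For scripting, it can help to reduce that output to an exit code. A sketch (the `cluster_healthy` wrapper is hypothetical, not part of the original text) that just checks for the final summary line:

```shell
# cluster_healthy — read `etcdctl cluster-health` output on stdin and
# succeed only when the final "cluster is healthy" summary line is present.
cluster_healthy() {
    grep -q '^cluster is healthy$'
}
# e.g.:
# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
#     --cert-file=/opt/etcd/ssl/server.pem \
#     --key-file=/opt/etcd/ssl/server-key.pem \
#     --endpoints="https://192.168.1.63:2379" cluster-health | cluster_healthy \
#     && echo "etcd cluster OK"
```

Note that this etcdctl (v3.2) defaults to the v2 API; with `ETCDCTL_API=3` the rough equivalent is `etcdctl endpoint health`, whose output format differs.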
root@k8s-master: ~ ::
$
