etcd Cluster Deployment
Create directories for the etcd binaries, config file, and certificates:

mkdir -p /opt/etcd/{bin,cfg,ssl}

Create a directory for the downloaded package:

mkdir -p /soft

Unpack the etcd package and move the executables to /opt/etcd/bin:

tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
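A quick sanity check that both binaries landed in place and are executable. This is a minimal sketch; `check_etcd_bins` is a hypothetical helper, not part of the etcd distribution:

```shell
# Verify the etcd binaries exist and are executable in the given directory
# (defaults to /opt/etcd/bin). Returns non-zero if anything is missing.
check_etcd_bins() {
  local bin=${1:-/opt/etcd/bin} b rc=0
  for b in etcd etcdctl; do
    if [ -x "$bin/$b" ]; then
      echo "$bin/$b: ok"
    else
      echo "$bin/$b: missing" >&2
      rc=1
    fi
  done
  return $rc
}

# Example:
# check_etcd_bins
```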
The etcd configuration file:

$ cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01" # node name; must differ on each node: etcd02, etcd03
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380" # peer (cluster) port 2380
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379" # client port 2379

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380" # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379" # advertised client address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380" # every node in the cluster; identical on all nodes
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token
ETCD_INITIAL_CLUSTER_STATE="new" # "new" creates a cluster; "existing" joins one
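The config above differs between nodes only in ETCD_NAME and the four local-address URLs. That makes it easy to template. A minimal sketch, assuming the node-name/IP pairs used in this cluster; `gen_etcd_cfg` is a hypothetical helper, not part of etcd:

```shell
# Generate the etcd config file for one node. Arguments: node name, node IP,
# and optionally the output path (defaults to /opt/etcd/cfg/etcd).
gen_etcd_cfg() {
  local name=$1 ip=$2 out=${3:-/opt/etcd/cfg/etcd}
  cat > "$out" <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# Example: on node01
# gen_etcd_cfg etcd02 192.168.1.65
```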
root@k8s-master: /opt/etcd/cfg ::
$

Manage etcd with systemd
# Every parameter in the unit file references a variable from the main config
# file, so if etcd fails to start, first check /opt/etcd/cfg/etcd for mistakes.
root@k8s-master: /opt/etcd/cfg ::
$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
root@k8s-master: /opt/etcd/cfg ::
$

Reload systemd and start etcd:
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Check the startup log:

tail -f /var/log/messages  # etcd cannot yet reach node01 and node02
# See the log below: the other nodes do not have the etcd config and SSL files
# yet, so these errors repeat. "systemctl start etcd" does in fact succeed, but
# the peers are unreachable, so the start takes a long time.
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")

On node01 and node02
# Copy the master's configuration to node01 and node02.
# Recursively copy /opt/etcd/ (files and directories) to /opt on each node:
scp -r /opt/etcd/ root@192.168.1.66:/opt
scp -r /opt/etcd/ root@192.168.1.65:/opt

# Copy etcd.service to /usr/lib/systemd/system/ on node01 and node02:
scp /usr/lib/systemd/system/etcd.service root@192.168.1.65:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.1.66:/usr/lib/systemd/system/

# Then run tail -f /var/log/messages again.
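After the copy, node01 and node02 still hold the master's config: ETCD_NAME and the local-address URLs must be changed on each node before starting etcd. A sed sketch under that assumption; `patch_etcd_cfg` is a hypothetical helper, and it deliberately skips the ETCD_INITIAL_CLUSTER line, which must stay identical on all nodes:

```shell
# Patch a copied etcd config for the local node.
# Arguments: node name, node IP, config file path.
patch_etcd_cfg() {
  local name=$1 ip=$2 cfg=$3
  sed -i \
    -e "s/^ETCD_NAME=.*/ETCD_NAME=\"$name\"/" \
    -e "/^ETCD_INITIAL_CLUSTER=/!s/192\.168\.1\.63/$ip/" \
    "$cfg"
}

# Example: on node01
# patch_etcd_cfg etcd02 192.168.1.65 /opt/etcd/cfg/etcd
```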
PS:
# Because this runs on virtual machines, the log below is caused by clock
# drift between the master and the nodes; sync the clocks with:
# ntpdate time.windows.com

Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792944111s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861673928s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858782669s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.793075827s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.795990455s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858938895s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861743791s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.796159244s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792476037s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")

$ crontab -l
* * * * * ntpdate time.windows.com >/dev/null 2>&1

Finally, check the state of the cluster nodes
(done)
# If you see the output below, the cluster deployment succeeded. If there is a
# problem, check the logs first: /var/log/messages or journalctl -u etcd
root@k8s-master: ~ ::
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379" cluster-health
member 472edcb0986774fe is healthy: got healthy result from https://192.168.1.65:2379
member 89e49aedde68fee4 is healthy: got healthy result from https://192.168.1.66:2379
member ddaf91a76208ea00 is healthy: got healthy result from https://192.168.1.63:2379
cluster is healthy
root@k8s-master: ~ ::
$
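Beyond cluster-health, a quick write/read round trip confirms the client port and TLS files work end to end. This is a sketch assuming the v2 etcdctl API shipped with etcd 3.2; the key name /k8s/smoke and the `etcd_smoke` wrapper (with its overridable ETCDCTL path) are arbitrary choices, not part of any tool:

```shell
# Write a test key and read it back through all three endpoints.
etcd_smoke() {
  local etcdctl=${ETCDCTL:-/opt/etcd/bin/etcdctl}
  local certs="--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem"
  local eps="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379"
  $etcdctl $certs --endpoints="$eps" set /k8s/smoke ok && \
  $etcdctl $certs --endpoints="$eps" get /k8s/smoke
}

# Run on the master:
# etcd_smoke
```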
