k8s cluster: three-node etcd deployment

Deploying the etcd cluster
1. Create directories for the etcd binaries, configuration files, and certificates:
mkdir -p /opt/etcd/{bin,cfg,ssl}

2. Create a directory for the downloaded package:
mkdir -p /soft

3. Unpack the etcd tarball and move the binaries into /opt/etcd/bin:
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

4. The etcd configuration file:
$ cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"                                  # node name; must differ on each node (etcd02, etcd03 on the other two)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"          # data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380"   # peer (cluster) traffic, port 2380
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379" # client traffic, port 2379

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380" # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379"       # advertised client address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380" # every node in the cluster; identical on all nodes
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"           # cluster token
ETCD_INITIAL_CLUSTER_STATE="new"                    # "new" for a fresh cluster; "existing" joins one already running
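Only ETCD_NAME and the node's own URL variables differ between the three machines, so the per-node config files can be stamped out from one loop. A minimal sketch under that assumption (the node names and IPs are the ones used in this walkthrough; it writes into a scratch mktemp directory rather than /opt/etcd/cfg, so it is safe to dry-run anywhere):

```shell
#!/bin/sh
set -eu

# Scratch output directory; on a real deployment these files would go to
# /opt/etcd/cfg/etcd on each respective node.
OUT=$(mktemp -d)

# name:ip pairs for the three nodes in this walkthrough.
for pair in etcd01:192.168.1.63 etcd02:192.168.1.65 etcd03:192.168.1.66; do
  name=${pair%%:*}
  ip=${pair##*:}
  cat > "$OUT/etcd.$name" <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
done

ls "$OUT"
```

This keeps the shared lines (cluster list, token, state) guaranteed identical across nodes, which is the most common thing to get wrong when editing three copies by hand.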
5. Manage etcd with systemd.
# Every ${VAR} in the unit file is read from the EnvironmentFile, so if startup fails, first check /opt/etcd/cfg/etcd for configuration mistakes.
$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} --initial-cluster=${ETCD_INITIAL_CLUSTER} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
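The advice above about checking the EnvironmentFile can be mechanized: every ${VAR} that ExecStart references should have a matching line in /opt/etcd/cfg/etcd. A small sketch of that check, demonstrated on scratch copies so it can be dry-run anywhere (on a real node, point unit and cfg at /usr/lib/systemd/system/etcd.service and /opt/etcd/cfg/etcd):

```shell
# Scratch stand-ins for the unit file and the EnvironmentFile; the missing
# token variable below is deliberate, to show what a failure looks like.
unit=$(mktemp)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
EOF
cat > "$unit" <<'EOF'
ExecStart=/opt/etcd/bin/etcd --name=${ETCD_NAME} --data-dir=${ETCD_DATA_DIR} --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN}
EOF

# For every ${VAR} referenced in the unit, require a VAR= line in the config.
missing=0
for var in $(grep -o '\${[A-Z_]*}' "$unit" | tr -d '${}' | sort -u); do
  grep -q "^$var=" "$cfg" || { echo "undefined in EnvironmentFile: $var"; missing=1; }
done
# → undefined in EnvironmentFile: ETCD_INITIAL_CLUSTER_TOKEN
```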
6. Reload systemd, then enable and start etcd:
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

7. Watch the startup log:
tail -f /var/log/messages
# You will see that the master cannot reach node01 and node02 yet.
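Raw tail -f output is noisy; a grep sketch like the following surfaces just the etcd peer health-check failures and which peer addresses are unreachable (demonstrated here against a scratch sample of the log; on a real node, run the pipeline against /var/log/messages directly):

```shell
# Scratch sample standing in for /var/log/messages.
log=$(mktemp)
cat > "$log" <<'EOF'
Mar .. localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:2380: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar .. localhost kernel: some unrelated message
EOF

# Keep only etcd health-check lines, then list the unreachable peer addresses.
grep 'etcd: health check for peer' "$log" |
  grep -o 'dial tcp [0-9.]*:[0-9]*' | sort -u
# → dial tcp 192.168.1.65:2380
```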
# The log below appears because the other nodes do not yet have the etcd configuration file or SSL certificates, so their peers keep refusing connections. systemctl start etcd does in fact succeed, but etcd spends a long time blocking while it tries to reach its peers.
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:2380: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:2380: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:2380: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:2380: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")

8. Operations on node01 and node02
# Copy the configuration from the master to node01 and node02.
# Recursively copy everything under /opt/etcd/ to /opt on both nodes:
scp -r /opt/etcd/ root@192.168.1.66:/opt
scp -r /opt/etcd/ root@192.168.1.65:/opt

# Copy the systemd unit to /usr/lib/systemd/system/ on node01 and node02:
scp /usr/lib/systemd/system/etcd.service root@192.168.1.65:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.1.66:/usr/lib/systemd/system/

# On each node, edit /opt/etcd/cfg/etcd so that ETCD_NAME and the four URL variables use that node's own name and IP, then start etcd and run tail -f /var/log/messages again.
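The per-node edit can be scripted. A sketch of what would run on node01 (etcd02 / 192.168.1.65 in this walkthrough), shown against a scratch copy of the config so it is safe to dry-run; on the real node, point it at /opt/etcd/cfg/etcd. The sed patterns are anchored to the variable names, so ETCD_INITIAL_CLUSTER — which also mentions etcd01 and 192.168.1.63 and must stay identical on every node — is left untouched:

```shell
# This node's identity; change to etcd03 / 192.168.1.66 on node02.
NAME=etcd02 IP=192.168.1.65

# Scratch copy of the master's config (the real file is /opt/etcd/cfg/etcd).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380"
EOF

# Rewrite only the per-node variables, matched by name from line start.
sed -i \
  -e "s|^ETCD_NAME=.*|ETCD_NAME=\"$NAME\"|" \
  -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://$IP:2380\"|" \
  -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://$IP:2379\"|" \
  -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://$IP:2380\"|" \
  -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://$IP:2379\"|" \
  "$cfg"

cat "$cfg"
```

A blanket `sed s/etcd01/etcd02/` would corrupt the cluster list, which is why each substitution is anchored with `^VAR=`.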
PS:
# This environment runs in virtual machines, so the log below is caused by clock drift between the master and the nodes; sync them with: ntpdate time.windows.com
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792944111s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861673928s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858782669s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.793075827s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.795990455s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858938895s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861743791s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.796159244s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792476037s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")

# Keep the clocks synchronized with a cron job:
$ crontab -l
* * * * * ntpdate time.windows.com >/dev/null 2>&1

9. Finally, test the state of the cluster nodes.
(Done)
# If you see the output below, the cluster deployment succeeded. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd
$ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379" cluster-health
member 472edcb0986774fe is healthy: got healthy result from https://192.168.1.65:2379
member 89e49aedde68fee4 is healthy: got healthy result from https://192.168.1.66:2379
member ddaf91a76208ea00 is healthy: got healthy result from https://192.168.1.63:2379
cluster is healthy
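The cluster-health subcommand above goes through etcdctl's v2 API, which is the default in etcd v3.2. For reference, a sketch of the equivalent check through the v3 API, where the TLS flags are named --cacert/--cert/--key and the subcommand is `endpoint health`. It is built and printed here as a dry run; on the master, run the printed command (or `eval "$CMD"`) to actually query the cluster:

```shell
# Endpoints and certificate paths match the ones used throughout this guide.
ENDPOINTS="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379"

CMD="ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem \
--endpoints=$ENDPOINTS endpoint health"

# Dry run: print the command instead of contacting the cluster.
echo "$CMD"
```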