Kubernetes cluster: three-node etcd deployment
etcd cluster deployment

Create the directories that will hold the etcd binaries, configuration files, and certificates:

mkdir -p /opt/etcd/{bin,cfg,ssl}

Create a directory for the downloaded packages:

mkdir -p /soft

Unpack the etcd tarball and move the binaries into /opt/etcd/bin:

tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
The etcd configuration file:

$ cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"                                  # node name; must differ per node (etcd02, etcd03 on the others)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"          # data directory
ETCD_LISTEN_PEER_URLS="https://192.168.1.63:2380"   # peer (cluster) communication, port 2380
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.63:2379" # client communication, port 2379

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.63:2380" # advertised peer address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.63:2379"       # advertised client address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380" # every node in the cluster; identical on all nodes
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token
ETCD_INITIAL_CLUSTER_STATE="new"          # "new" creates a cluster; "existing" joins one that already exists
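Only ETCD_NAME and the IP inside the four URL variables change from node to node, so the per-node file can be generated instead of edited by hand. A minimal sketch (the `print_etcd_cfg` helper is my own, not part of etcd; the node names and IPs are the ones used in this article):

```shell
#!/bin/sh
# print_etcd_cfg NAME IP -- print the /opt/etcd/cfg/etcd contents for one node.
print_etcd_cfg() {
    name=$1
    ip=$2
    cat <<EOF
#[Member]
ETCD_NAME="$name"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ip:2380"
ETCD_LISTEN_CLIENT_URLS="https://$ip:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ip:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ip:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.63:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.66:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

# e.g. generate the file for node01:
# print_etcd_cfg etcd02 192.168.1.65 > /opt/etcd/cfg/etcd
```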
Manage etcd with systemd. Every ${VAR} reference in the unit file below is resolved from the main configuration file, so if the service fails to start, first check /opt/etcd/cfg/etcd for configuration mistakes.

$ cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --name=${ETCD_NAME} \
  --data-dir=${ETCD_DATA_DIR} \
  --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
  --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
  --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
  --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
  --initial-cluster=${ETCD_INITIAL_CLUSTER} \
  --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
  --initial-cluster-state=new \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=

[Install]
WantedBy=multi-user.target
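Since every ${VAR} that ExecStart expands must be defined in the EnvironmentFile, a quick sanity check before starting is to source the config and confirm nothing is empty. A small sketch (the `check_etcd_env` helper is my own, not part of etcd or systemd):

```shell
#!/bin/sh
# check_etcd_env FILE -- source FILE and report every variable the
# etcd.service ExecStart line expects that is unset or empty;
# returns non-zero if anything is missing.
check_etcd_env() {
    # shellcheck disable=SC1090
    . "$1"
    rc=0
    for v in ETCD_NAME ETCD_DATA_DIR ETCD_LISTEN_PEER_URLS ETCD_LISTEN_CLIENT_URLS \
             ETCD_INITIAL_ADVERTISE_PEER_URLS ETCD_ADVERTISE_CLIENT_URLS \
             ETCD_INITIAL_CLUSTER ETCD_INITIAL_CLUSTER_TOKEN ETCD_INITIAL_CLUSTER_STATE; do
        eval "val=\${$v}"
        if [ -z "$val" ]; then
            echo "missing: $v"
            rc=1
        fi
    done
    return $rc
}

# e.g.: check_etcd_env /opt/etcd/cfg/etcd && systemctl restart etcd
```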
Reload systemd, enable etcd at boot, and start it:

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Check the startup logs:
tail -f /var/log/messages

# The master will report that it cannot reach node01 and node02 (see the log
# lines below). That is expected: those nodes do not yet have the etcd
# configuration and SSL files. "systemctl start etcd" does start the local
# member, but it keeps waiting for its peers, which is why the start command
# takes a very long time.
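Rather than sitting in a blocked foreground systemctl, one option is to launch the restart in the background and poll the unit state. A generic retry sketch (the `wait_for` helper is my own; `systemctl is-active --quiet` is standard systemd):

```shell
#!/bin/sh
# wait_for ATTEMPTS DELAY CMD... -- retry CMD up to ATTEMPTS times,
# DELAY seconds apart; succeed as soon as CMD does.
wait_for() {
    attempts=$1; delay=$2; shift 2
    while [ "$attempts" -gt 0 ]; do
        if "$@"; then
            return 0
        fi
        attempts=$((attempts - 1))
        sleep "$delay"
    done
    return 1
}

# e.g. start etcd in the background and poll until the unit is active:
# systemctl restart etcd &
# wait_for 30 2 systemctl is-active --quiet etcd
```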
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: health check for peer 472edcb0986774fe could not connect: dial tcp 192.168.1.65:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: health check for peer 89e49aedde68fee4 could not connect: dial tcp 192.168.1.66:: connect: connection refused (prober "ROUND_TRIPPER_SNAPSHOT")

On node01 and node02:
# Copy the configuration from the master to node01 and node02.
# Recursively copy the files and directories under /opt/etcd/ to /opt on both nodes:
scp -r /opt/etcd/ root@192.168.1.66:/opt
scp -r /opt/etcd/ root@192.168.1.65:/opt

# Copy the etcd.service unit to /usr/lib/systemd/system/ on node01 and node02:
scp /usr/lib/systemd/system/etcd.service root@192.168.1.65:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.1.66:/usr/lib/systemd/system/

# Now run tail -f /var/log/messages again.
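With more nodes, the four scp commands above are easier to maintain as a loop. A sketch (the `distribute_etcd` wrapper is my own; set COPY=echo to dry-run and just print the commands it would execute):

```shell
#!/bin/sh
# distribute_etcd IP... -- copy the etcd tree and the systemd unit to each node.
# COPY=echo distribute_etcd ...  performs a dry run that prints the commands.
distribute_etcd() {
    copy=${COPY:-scp}
    for ip in "$@"; do
        $copy -r /opt/etcd/ "root@$ip:/opt"
        $copy /usr/lib/systemd/system/etcd.service "root@$ip:/usr/lib/systemd/system/"
    done
}

# e.g.: distribute_etcd 192.168.1.65 192.168.1.66
```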
Note:
# This environment runs on virtual machines, so the log lines below come from
# the master and node clocks being out of sync. Sync them with:
ntpdate time.windows.com

Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792944111s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861673928s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858782669s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.793075827s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.795990455s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.858938895s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")
Mar :: localhost etcd: the clock difference against peer 89e49aedde68fee4 is too high [.861743791s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.796159244s > 1s] (prober "ROUND_TRIPPER_SNAPSHOT")
Mar :: localhost etcd: the clock difference against peer 472edcb0986774fe is too high [.792476037s > 1s] (prober "ROUND_TRIPPER_RAFT_MESSAGE")

Keep the clocks synchronized from cron:

$ crontab -l
* * * * ntpdate time.windows.com >/dev/null >&

Finally, test the state of the cluster members. If the command below prints the output shown, the cluster was deployed successfully; if anything is wrong, check the logs first: /var/log/messages or journalctl -u etcd.
root@k8s-master:~
$ /opt/etcd/bin/etcdctl \
    --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="https://192.168.1.63:2379,https://192.168.1.65:2379,https://192.168.1.66:2379" \
    cluster-health
member 472edcb0986774fe is healthy: got healthy result from https://192.168.1.65:2379
member 89e49aedde68fee4 is healthy: got healthy result from https://192.168.1.66:2379
member ddaf91a76208ea00 is healthy: got healthy result from https://192.168.1.63:2379
cluster is healthy
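For scripting (e.g. a periodic health check from cron), the cluster-health report can be checked mechanically: the v2 etcdctl ends it with the line "cluster is healthy". A tiny sketch (the `etcd_cluster_ok` helper is my own, assuming that output format):

```shell
#!/bin/sh
# etcd_cluster_ok TEXT -- succeed only if the cluster-health report ends
# with the exact line "cluster is healthy".
etcd_cluster_ok() {
    printf '%s\n' "$1" | tail -n 1 | grep -qx 'cluster is healthy'
}

# e.g.:
# report=$(/opt/etcd/bin/etcdctl ... cluster-health)
# etcd_cluster_ok "$report" || echo "etcd cluster unhealthy!"
```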