Creating a k8s Cluster with kubeadm

1. Server Information and Node Overview

Hostname       IP              Notes
k8s-master     192.168.0.104   master, etcd, keepalived
k8s-client1    192.168.0.99    master, etcd, keepalived
k8s-client2    192.168.0.114   node

Virtual IP (VIP): 192.168.0.105

2. Versions

docker 17.03.2-ce

kubelet-1.10.0-0.x86_64

kubernetes-cni-0.6.0-0.x86_64

kubectl-1.10.0-0.x86_64

kubeadm-1.10.0-0.x86_64

3. Environment Prerequisites

3.1 Set the hostnames (run each command on its corresponding node)

hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-client1
hostnamectl set-hostname k8s-client2

3.2 Configure host mappings

Append the following entries to /etc/hosts on all three nodes:

192.168.0.104  k8s-master
192.168.0.99   k8s-client1
192.168.0.114  k8s-client2
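
As a one-shot alternative, a small sketch that appends these mappings on the current node (assuming /etc/hosts has no conflicting entries yet):

cat <<EOF >> /etc/hosts
192.168.0.104  k8s-master
192.168.0.99   k8s-client1
192.168.0.114  k8s-client2
EOF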

3.3 Configure passwordless SSH login (run on k8s-master)

ssh-keygen    # just press Enter at every prompt
ssh-copy-id k8s-client1
ssh-copy-id k8s-client2
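
To verify that key-based login works (BatchMode makes ssh fail instead of prompting for a password):

for h in k8s-client1 k8s-client2; do ssh -o BatchMode=yes "$h" hostname; done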

3.4 Host configuration: stop the firewall, disable swap, disable SELinux, set kernel parameters, add the Kubernetes yum repo, install dependency packages, and configure NTP (a reboot is recommended after these changes)

systemctl stop firewalld
systemctl disable firewalld

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
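
Before installing, you can confirm the repo works and the pinned 1.10.0 packages are visible (a quick check, assuming the Aliyun mirror is reachable):

yum makecache fast
yum list kubelet kubeadm kubectl --showduplicates | grep 1.10.0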

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

4. Install and Configure keepalived (master nodes)

4.1 Install keepalived

yum install -y keepalived 
systemctl enable keepalived

keepalived.conf for k8s-master:

cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.0.105:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 61
    priority 100
    advert_int 1
    mcast_src_ip 192.168.0.104
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.0.99
    }
    virtual_ipaddress {
        192.168.0.105/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

keepalived.conf for k8s-client1:


cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.0.105:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 61
    priority 90
    advert_int 1
    mcast_src_ip 192.168.0.99
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        192.168.0.104
    }
    virtual_ipaddress {
        192.168.0.105/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

4.2 Start keepalived

systemctl restart keepalived

You can see that the VIP is now bound on k8s-master (output of ip addr show enp0s3):

enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:b2:09:6a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.104/24 brd 192.168.0.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet 192.168.0.105/24 scope global secondary enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::e3d1:55df:2f64:8571/64 scope link
       valid_lft forever preferred_lft forever
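
A quick failover check (assuming the interface is enp0s3 on both masters): stop keepalived on k8s-master and the VIP should appear on k8s-client1 within a few seconds; because nopreempt is configured, the VIP may not move back automatically after a restart.

systemctl stop keepalived      # on k8s-master
ip addr show enp0s3            # on k8s-client1: 192.168.0.105 should now be listed
systemctl start keepalived     # on k8s-master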

5. Create etcd Certificates (run on k8s-master only)

Set up the cfssl environment:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

Create the CA configuration files (the IPs configured below are the etcd node IPs):

mkdir /root/ssl
cd /root/ssl

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.0.104",
    "192.168.0.99"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd

Distribute the etcd certificates from k8s-master to k8s-client1:

mkdir -p /etc/etcd/ssl 
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/ 
ssh -n k8s-client1 "mkdir -p /etc/etcd/ssl && exit" 
scp -r /etc/etcd/ssl/*.pem k8s-client1:/etc/etcd/ssl/
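
As a sanity check, cfssl-certinfo (installed above) can print the certificate details; the SANs should include both etcd node IPs:

cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem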

6. Install and Configure etcd (both master nodes)

Install etcd:

yum install etcd -y 
mkdir -p /var/lib/etcd

etcd.service for k8s-master:


cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s-master \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.0.104:2380 \
  --listen-peer-urls https://192.168.0.104:2380 \
  --listen-client-urls https://192.168.0.104:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.0.104:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master=https://192.168.0.104:2380,k8s-client1=https://192.168.0.99:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

etcd.service for k8s-client1:


cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s-client1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.0.99:2380 \
  --listen-peer-urls https://192.168.0.99:2380 \
  --listen-client-urls https://192.168.0.99:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.0.99:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master=https://192.168.0.104:2380,k8s-client1=https://192.168.0.99:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Enable and start etcd (the etcd cluster needs at least two members up before it becomes healthy; check /var/log/messages if startup fails). The unit file was already written directly to /etc/systemd/system/, so no extra move step is needed:

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

Run the following command on both etcd nodes to verify cluster health:

etcdctl --endpoints=https://192.168.0.104:2379,https://192.168.0.99:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health
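
A simple read/write check against the cluster (etcdctl v2 API with the same TLS flags; /test is an arbitrary key used only for illustration):

etcdctl --endpoints=https://192.168.0.104:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem set /test ok
etcdctl --endpoints=https://192.168.0.99:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem get /test   # should print: ok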

7. Install and Configure Docker (all nodes)

Install Docker (17.03.x is the highest Docker version kubeadm currently supports):
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm  -y

Edit the config file /usr/lib/systemd/system/docker.service and modify the ExecStart line as follows (note: exposing the daemon on tcp://0.0.0.0:2375 without TLS is insecure on untrusted networks):

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --registry-mirror=https://ms3cfraz.mirror.aliyuncs.com

Start Docker:

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
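
To confirm the pinned Docker version is actually running:

docker version --format '{{.Server.Version}}'    # expect 17.03.2-ce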

8. Install and Configure kubeadm

Install kubelet, kubeadm, and kubectl on all nodes (pin the versions listed in section 2 so yum does not pull a newer, incompatible release):

yum install -y kubelet-1.10.0 kubeadm-1.10.0 kubectl-1.10.0
systemctl enable kubelet

Modify the kubelet config file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on all nodes:

# change this line
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# add this line
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

After modifying the config file, you must reload it on all nodes:

systemctl daemon-reload
systemctl enable kubelet

Command completion for kubectl:

yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

9. Initialize the Cluster

Create the cluster init configuration file on both k8s-master and k8s-client1 (the config file is identical on both):


cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.0.104:2379
  - https://192.168.0.99:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 172.30.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.0.105"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"
apiServerCertSANs:
- k8s-master
- k8s-client1
- 192.168.0.104
- 192.168.0.99
- 192.168.0.114
- 192.168.0.105
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF
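
Since the file is identical on both masters, you can create it once on k8s-master and copy it over (assuming it was created in /root):

scp /root/config.yaml k8s-client1:/root/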

First, initialize the cluster on k8s-master:

kubeadm init --config config.yaml 

If initialization fails, reset before retrying:

kubeadm reset

A normal initialization ends like the following. Save the kubeadm join command at the end; it is used later when nodes join the cluster.

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.0.105:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315

Run the following commands on k8s-master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
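
kubectl should now be able to talk to the cluster; a quick health check:

kubectl get cs    # scheduler, controller-manager and the etcd members should be Healthy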

Distribute the kubeadm-generated certificates from k8s-master to k8s-client1:

scp -r /etc/kubernetes/pki k8s-client1:/etc/kubernetes/

Deploy the flannel network; this only needs to be executed on k8s-master.

1) Pull the flannel image and re-tag it:

docker pull cnych/flannel:v0.10.0-amd64

docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

2) Download the manifest and create the resources:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# image version: quay.io/coreos/flannel:v0.10.0-amd64
kubectl create -f kube-flannel.yml
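
To confirm flannel is up (one DaemonSet pod per node, in the kube-system namespace):

kubectl get pods -n kube-system -o wide | grep flannel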

Check the cluster nodes:

[root@k8s-master ~]# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
k8s-master    Ready     master    31m       v1.10.0

Run the same initialization on k8s-client1:

kubeadm init --config config.yaml
# the output is identical to that on k8s-master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the node status:

[root@k8s-master ~]# kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
k8s-master    Ready     master    1h        v1.10.0
k8s-client1   Ready     master    1h        v1.10.0

Allow the masters to run pods too (by default, masters do not schedule pods):

kubectl taint nodes --all node-role.kubernetes.io/master-
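
To verify the taint is gone (the Taints line should show <none>):

kubectl describe node k8s-master | grep -i taints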

10. Add Node k8s-client2 to the Cluster

Run the following command on k8s-client2 to add the node to the cluster:

kubeadm join 192.168.0.105:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315
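
If the join command was lost, a new token and the CA cert hash can be regenerated on a master (standard kubeadm and openssl usage):

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'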

 

[root@k8s-master ~]# kubectl get node
NAME          STATUS    ROLES     AGE       VERSION
k8s-master    Ready     master    45m       v1.10.0
k8s-client1   Ready     master    15m       v1.10.0
k8s-client2   Ready     <none>    13m       v1.10.0
