This article follows the Kubernetes docs to create a single-master Kubernetes cluster with kubeadm.

VM: CentOS 7.5

Kubernetes yum repo: the Aliyun mirror (inside China)

Version: v1.14.1 (released 2019-04-09)

Pod network: Calico

1 Configure the package mirror

yum is used as the example here; on Ubuntu you can use the USTC mirror instead.

The official Google repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

The Aliyun mirror (inside China):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
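
After writing the repo file, refresh the yum cache and confirm the repo resolves (a quick sanity check, nothing Kubernetes-specific):

yum clean all && yum makecache
yum repolist | grep -i kubernetes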

2 Initialize the host

# 2.1 Configure SELinux: either disable it outright or set it to permissive
## I usually also turn off firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# 2.2 Turn off swap; otherwise kubelet needs the flag --fail-swap-on=false (default: true)
swapoff -a
## Check swap usage
free -m
## Edit /etc/fstab and comment out the swap entry so it is not mounted again at boot
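## A one-liner for that edit (a sketch: assumes the fstab swap entry has "swap" as a whitespace-separated field):
sed -i '/\sswap\s/ s/^/#/' /etc/fstab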
# 2.3 Configure networking
## 2.3.1 Load the kernel module (you can skip straight to 2.3.3)
modprobe br_netfilter
lsmod | grep br_netfilter
## 2.3.2 Configure network and related sysctl parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
## 2.3.3 Load the modules automatically at boot
## Write the modules to load into a file under /etc/modules-load.d (mind the file permissions)
## 2.3.4 Install ipvs, which replaces iptables
yum -y install ipvsadm
cat > /etc/modules-load.d/ipvs << EOF
br_netfilter
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs
EOF
for m in $(cat /etc/modules-load.d/ipvs); do modprobe "$m"; done  # load the modules now, without rebooting
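## Sanity-check that the modules and sysctls took effect (standard tools only):
lsmod | egrep 'br_netfilter|ip_vs'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward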

3 Install the Kubernetes packages

# List the kubeadm versions yum currently offers and pick the one to install
yum list --showduplicates kubeadm
# Or simply install the latest version
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
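# Alternatively, pin the v1.14.1 release used in this article (a sketch; the repo's
# packages follow yum's usual name-version convention):
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes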
# With Docker, kubelet auto-detects the cgroup driver Docker uses (usually cgroupfs); install Docker as follows
docker version
yum install -y docker-ce --disableexcludes=docker-ce
> Aliyun also mirrors the Docker repo; configure it with:
> curl -o /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# If you use another CRI, edit the kubelet args file and set the matching cgroup driver
> KUBELET_EXTRA_ARGS=--cgroup-driver=whatyouwant
# Enable kubelet at boot
systemctl enable --now kubelet  # --now also starts kubelet immediately
> At this point systemctl status kubelet shows that kubelet fails to start, because /var/lib/kubelet/config.yaml does not exist yet
> You can leave this alone for now; the kubeadm init command below creates that file

4 Master node

kubeadm init --pod-network-cidr=192.169.0.0/16 --image-repository registry.aliyuncs.com/google_containers
> --pod-network-cidr: must not overlap with any network already used by the host's interfaces
> --kubernetes-version v1.14.1: pins the control-plane version; per the version-skew policy the kubelet installed earlier may not be newer than this version
> --apiserver-advertise-address=: sets the IP the API server advertises and listens on

From the command's output you can see the key steps: the certificates are generated, /var/lib/kubelet/config.yaml and the other config files are written, and so on.

It also:

  • prompts you to install a pod network
  • shows how to set up the kubeconfig file for kubectl
  • prints the command and parameters for joining other nodes

The output:

I0501 19:53:16.073098   11685 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0501 19:53:16.073210 11685 version.go:97] falling back to the local client version: v1.14.1
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [c75a.shared localhost] and IPs [10.211.55.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [c75a.shared localhost] and IPs [10.211.55.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c75a.shared kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.7]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.003746 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node c75a.shared as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node c75a.shared as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: uurhat.duj8060jmku42htb
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb \
    --discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2

After this step, kubectl get nodes shows the cluster's nodes, and kubectl describe node shows a node's labels; you will see the master carries the node-role.kubernetes.io/master label.
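
The preflight output above mentions 'kubeadm config images pull'; with the Aliyun repository the control-plane images can be pre-pulled before running init, e.g. (a sketch using the same mirror as above; pinning --kubernetes-version also avoids the dl.k8s.io lookup seen in the log):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.14.1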

5 Pod network

Calico is used here as the example; Calico itself can also be installed standalone on hosts, outside a Kubernetes cluster.

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

If the --pod-network-cidr passed to kubeadm init is not 192.168.0.0/16 (as here, where 192.169.0.0/16 was used), download calico.yaml first and change the pool CIDR to match.
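
A sketch of that edit, assuming the v3.3 manifest still carries Calico's default pool as the literal string 192.168.0.0/16:

curl -O https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
sed -i 's#192.168.0.0/16#192.169.0.0/16#' calico.yaml
kubectl apply -f calico.yaml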

Check the Calico pods: kubectl get pods --all-namespaces

6 Join worker nodes

As root, run the command printed by kubeadm init above, i.e.:

kubeadm join <apiserver_ip>:<apiserver_port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

1. If apiserver_ip is an IPv6 address, write the IP and port as [fd00::101]:6443

2. If the token has expired or you have lost it, list or create one with kubeadm token list or kubeadm token create (see also the one-liner after this list)

3. If you have lost the certificate hash, recompute it with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
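
kubeadm can also print a complete, ready-to-run join command that embeds a fresh token and the CA cert hash:

kubeadm token create --print-join-command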

The command as run here:

kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb \
--discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2

输出如下:

root@c75b ~# kubeadm join 10.211.55.7:6443 --token uurhat.duj8060jmku42htb \
    --discovery-token-ca-cert-hash sha256:2a3487a02927c7c496a7516af076ac3ad16e6b3721ee6c6a025bb87beace89e2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Then run kubectl get nodes on the master; the new node takes about a minute to register, depending on network and host performance.

7 Remove a node

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

To make it easy to add the node back later, also run kubeadm reset on it.

Network state must be cleaned up separately; before cleaning, check whether the host carries any networks other than the k8s ones.

iptables

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

ipvs

ipvsadm -C
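
kubeadm reset does not clean up the CNI configuration either; a hedged extra step (assuming the standard CNI config directory):

rm -rf /etc/cni/net.d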

8 Deploy the Dashboard add-on

8.1 Apply the manifest

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

8.2 Pulling the image k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1 requires a way around the GFW

Using the approach from reference 3, the image has already been mirrored and can be used directly:

docker pull registry.cn-hangzhou.aliyuncs.com/xw9/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.cn-hangzhou.aliyuncs.com/xw9/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

8.3 Configure access from outside the cluster

kubectl get svc -n kube-system now shows the kubernetes-dashboard service.

curl -vk https://10.110.167.233 confirms the service is reachable.

For access from outside the cluster, change the service to the NodePort type; an Ingress or similar would also work.

kubectl edit svc kubernetes-dashboard -n kube-system
# Change type under spec from ClusterIP to NodePort
kubectl get svc -n kube-system  # check the assigned port, then access it over https
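
A non-interactive alternative sketch to kubectl edit, using a JSON merge patch:

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'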

Opening the UI, you are asked to upload a kubeconfig file or enter a token.

8.4 Create an admin user (alternatively, bind the kubernetes-dashboard service account to the cluster-admin role)

cat > admin-user.yaml << EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
kubectl apply -f admin-user.yaml
# Fetch the token
kubectl describe secrets `kubectl get sa admin-user -o 'jsonpath={.secrets[0].name}' -n kube-system` -n kube-system | awk '$1=="token:"{print $2}'

9 Run Nginx

kubectl run nginx --image=nginx:1.16.0
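
In v1.14, kubectl run still creates a Deployment named nginx (with a deprecation warning), so it can be exposed and tested like this (a sketch; <node_ip> and <node_port> are placeholders):

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx   # note the assigned NodePort
curl http://<node_ip>:<node_port>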

10 Notes

IPVS mode is already GA, but after the install kube-proxy was found to still be using iptables; IPVS has to be enabled explicitly.

kubeadm init --feature-gates=SupportIPVSProxyMode=true

To be tested next time.
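
For reference, on v1.14 the supported way to select IPVS appears to be a kubeadm config file carrying a KubeProxyConfiguration rather than a feature gate (a sketch, untested here as noted above; the file name is arbitrary and the settings mirror those used earlier):

cat > kubeadm-ipvs.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.169.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
kubeadm init --config kubeadm-ipvs.yaml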

Ref: references and acknowledgements

1. Kubernetes docs: creating a single-master cluster with kubeadm

2. zzphper's blog: quickly deploying Kubernetes with kubeadm

3. Pushing k8s images to registries inside China

4. Dashboard access

5. Kubernetes Dashboard user

This article is an original work by xiaowei, published under the CC BY-NC-SA 4.0 license, 2019-05-01

Happy May Day!
