Environment
  CentOS 7

Kubernetes can be installed in three ways: yum, binaries, or kubeadm. This article demonstrates kubeadm.

I. Preparation
1. Software versions

Software      Version
Kubernetes    v1.15.3
CentOS 7.6    CentOS Linux release 7.6.1810 (Core)
Docker        docker-ce-19.03.1-3.el7.x86_64
flannel       0.11.0

2. Cluster topology

IP               Role     Hostname
192.168.118.106  master   node106 k8s-master
192.168.118.107  node01   node107 k8s-node01
192.168.118.108  node02   node108 k8s-node02

The nodes and network are planned as shown above.

3. System settings
3.1 Configure hostnames - /etc/hosts

192.168.118.106    node106 k8s-master
192.168.118.107 node107 k8s-node01
192.168.118.108 node108 k8s-node02
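These mappings need to exist on every node. A minimal sketch of setting them up, assuming the hostnames have not been configured yet:

# Set the hostname on each machine (use node107 / node108 on the other two)
hostnamectl set-hostname node106

# Append the mappings above to /etc/hosts on all three nodes
cat >> /etc/hosts <<'EOF'
192.168.118.106 node106 k8s-master
192.168.118.107 node107 k8s-node01
192.168.118.108 node108 k8s-node02
EOF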

3.2 Disable the firewall

[root@node106 ~]# yum install -y net-tools
# Stop the firewall
[root@node106 ~]# systemctl stop firewalld
# Disable the firewall at boot
[root@node106 ~]# systemctl disable firewalld

3.3 File permissions - disable SELinux
This is required so that containers can access the host filesystem.

[root@node106 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
[root@node106 ~]# setenforce 0

3.4 Disable swap
Kubernetes aims to pack nodes as close to 100% utilization as possible, and every deployment should be pinned with CPU/memory limits, so when the scheduler places a pod on a machine the pod should never fall back to swap.
The designers avoid swap because it slows everything down, so disabling it is mainly a performance decision. If you do need to keep swap (for example, to squeeze a larger number of containers onto limited memory), you can instead start the kubelet with --fail-swap-on=false, as shown in the sketch below.

[root@node106 ~]# swapoff -a
[root@node106 ~]# sed -i 's/.*swap.*/#&/' /etc/fstab
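If you really do want to keep swap enabled, a sketch of passing that kubelet flag, assuming the kubelet was installed from the RPM packages (which read /etc/sysconfig/kubelet):

# Tell the kubelet not to fail when swap is on (only if you must keep swap)
cat > /etc/sysconfig/kubelet <<'EOF'
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF
# kubeadm init/join would then also need --ignore-preflight-errors=Swap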

3.5 Configure forwarding parameters
On RHEL/CentOS 7, traffic can be routed incorrectly because iptables is bypassed, so net.bridge.bridge-nf-call-iptables must be set to 1 in the sysctl configuration.
Make sure the br_netfilter module is loaded before this step. You can check with lsmod | grep br_netfilter and load it explicitly with modprobe br_netfilter.
(1) First, check whether the br_netfilter module is loaded

[root@node106 ~]# lsmod | grep br_netfilter
br_netfilter
bridge br_netfilter

(2) If it is not loaded, load it

[root@node106 ~]# modprobe br_netfilter

(3) Configure net.bridge.bridge-nf-call-iptables

[root@node106 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@node106 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
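To spot-check that the settings took effect, a quick verification (not in the original article):

# Both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables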

4. Install Docker
(1) Configure the Docker yum repository.

[root@node106 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node106 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Disable the docker-ce-edge (development) repo; it is not stable

[root@node106 ~]# yum-config-manager --disable docker-ce-edge
[root@node106 ~]# yum makecache fast

(2) List the Docker versions available in the repository

[root@node106 yum.repos.d]# yum list docker-ce.x86_64  --showduplicates |sort -r
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * updates: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * base: mirrors.aliyun.com
Available Packages
docker-ce.x86_64    3:19.03.1-3.el7    docker-ce-stable
...(older 18.09 / 18.06 / 18.03 / 17.x releases omitted)

(3) Install Docker

[root@node106 ~]# yum install docker-ce-19.03.1-3.el7 -y

(4) Configure a registry mirror (accelerator)

[root@node106 ~]# mkdir -p /etc/docker
[root@node106 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"]
}
EOF

(5) Start Docker

[root@node106 ~]# systemctl daemon-reload
[root@node106 ~]# systemctl enable docker
[root@node106 ~]# systemctl start docker

Verify:

[root@node106 ~]# docker -v
Docker version 19.03.1, build 74b1e89

5. Install the Kubernetes components
5.1 Configure the Aliyun Kubernetes yum repository.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum cache

[root@node106 ~]# yum makecache fast -y

# List the kubectl, kubelet and kubeadm packages

[root@node106 ~]# yum list kubectl kubelet kubeadm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Available Packages
kubeadm.x86_64 1.15.3-0 kubernetes
kubectl.x86_64 1.15.3-0 kubernetes
kubelet.x86_64 1.15.3-0 kubernetes

# Install

[root@node106 ~]# yum install -y kubectl kubelet kubeadm

Enable the kubelet service:

[root@node106 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

6. Load the IPVS kernel modules

ipvs (IP Virtual Server) implements transport-layer (layer-4) load balancing as part of the Linux kernel. Running on a host, ipvs acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based requests to the real servers and makes their services appear as a single virtual service on one IP address. In Kubernetes, pod load balancing is implemented by kube-proxy, which supports two modes: the default iptables mode and ipvs mode; ipvs simply performs better than iptables.
(1) Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.

# Check whether the modules are already loaded
[root@node106 ~]# cut -f1 -d " " /proc/modules | grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh
ip_vs_wrr
ip_vs_rr
ip_vs
nf_conntrack_ipv4

# If they are not loaded, load them with:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4

(2) Add the modules to /etc/rc.local so they are loaded at boot

cat <<EOF >> /etc/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
EOF

(3) ipvs also requires ipset (ipvsadm is handy for inspecting the rules)

[root@node106 ~]# yum install ipset ipvsadm -y
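Note that loading the modules alone does not switch kube-proxy to ipvs; the proxy mode lives in the kube-proxy ConfigMap. A sketch of switching modes once the cluster is up (not part of the original steps):

# Edit the kube-proxy ConfigMap and change the mode field (empty/"iptables" by default)
kubectl -n kube-system edit configmap kube-proxy
#   mode: "ipvs"

# Recreate the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# Inspect the resulting ipvs rules
ipvsadm -Ln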

References:

ipvs load balancing in a k8s cluster, explained
How to enable ipvs in Kubernetes

Kubernetes ipvs mode vs. iptables mode

II. Install the master node
1. Initialize the master node
kubeadm init --kubernetes-version=v1.15.3

(1) Problems encountered during initialization
First init attempt:

[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Analysis:
Warning 1: [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". (See the sketch after this list for one way to switch Docker to the systemd driver.)
Warning 2: [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
A version warning only.
Warning 3: [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
Fix: [root@node106 ~]# systemctl enable kubelet.service
Error 1: [ERROR NumCPU]: give the virtual machine more than 1 CPU core.
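For warning 1, Docker's cgroup driver can be aligned with the recommendation; a sketch that merges the systemd driver into the daemon.json written earlier (the mirror URL is the one from step 4):

cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://qr09dqf9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report systemd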

Second init attempt:

[root@node106 ~]# kubeadm init --kubernetes-version=v1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
[root@node106 ~]#

Analysis:
Error 1: [ERROR ImagePull] — pulling the images fails because they are hosted on k8s.gcr.io (Google's servers). You can pull them with docker yourself using the versions shown in the errors, or list the required images with kubeadm config images list:

[root@node106 ~]# kubeadm config images list
W0906 version.go] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0906 version.go] falling back to the local client version: v1.15.3
k8s.gcr.io/kube-apiserver:v1.15.3
k8s.gcr.io/kube-controller-manager:v1.15.3
k8s.gcr.io/kube-scheduler:v1.15.3
k8s.gcr.io/kube-proxy:v1.15.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
[root@node106 ~]#

(2) Prepare the images
mirrorgooglecontainers mirrors all of the latest k8s images on Docker Hub, so pull from there first and then re-tag them.
# Pull the images

[root@node106 ~]# kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' |sh -x && docker pull coredns/coredns:1.3.1

# Re-tag the images with their k8s.gcr.io names

[root@node106 ~]# docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x && docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# Remove the now-unneeded mirror tags

[root@node106 ~]# docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x && docker rmi coredns/coredns:1.3.1

The final result:

[root@node106 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-proxy                v1.15.3   232b5c793146   weeks ago    82.4MB
k8s.gcr.io/kube-apiserver            v1.15.3   5eb2d3fc7a44   weeks ago    207MB
k8s.gcr.io/kube-controller-manager   v1.15.3   e77c31de5547   weeks ago    159MB
k8s.gcr.io/kube-scheduler            v1.15.3   703f9c69a5d5   weeks ago    81.1MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   months ago   40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   months ago   258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   months ago   742kB
[root@node106 ~]#
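An alternative to pulling and re-tagging by hand is to point kubeadm at a domestic mirror registry; a sketch assuming the registry.aliyuncs.com/google_containers mirror is reachable:

# Pre-pull the control-plane images from the Aliyun mirror instead of k8s.gcr.io
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.3

# Or pass the same flag straight to init and let kubeadm pull them itself
# kubeadm init --image-repository registry.aliyuncs.com/google_containers \
#   --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.15.3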

(3) Initialize

Because the flannel network plugin will be installed later, add the parameter --pod-network-cidr=10.244.0.0/16. 10.244.0.0/16 is the pod CIDR used by flannel's default manifest; the value depends on which network plugin you plan to install.

[root@node106 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.15.3
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node106 localhost] and IPs [192.168.118.106 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node106 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.118.106]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.007081 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node106 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node106 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: unqj7v.wr7yvcj8i7wan93g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
    --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337

Follow-up steps:

[root@node106 ~]# mkdir -p $HOME/.kube
[root@node106 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node106 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl reads its configuration from $HOME/.kube/config by default; if you skip this step, kubectl commands will fail.
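Alternatively, when working as root you can point kubectl straight at the admin kubeconfig; a small sketch of that variant:

# Use the admin kubeconfig directly (add this to ~/.bash_profile to persist it)
export KUBECONFIG=/etc/kubernetes/admin.conf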

2. Configure the pod network

Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes; its goal is to give every host running Kubernetes a complete subnet.
Flannel provides a virtual network for containers by assigning a subnet to each host. It is based on Linux TUN/TAP, encapsulates IP packets in UDP to build the overlay network, and uses etcd to keep track of the network allocations.

# Download the flannel manifest

[root@node106 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@node106 ~]# ll
total
-rw-------. 1 root root  anaconda-ks.cfg
-rw-r--r--. 1 root root  kube-flannel.yml

# Apply kube-flannel.yml

[root@node106 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

# Check the status of the master components

[root@node106 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-dwjfs          1/1     Running   0          3h57m
kube-system   coredns-5c98db65d4-xxdr2          1/1     Running   0          3h57m
kube-system   etcd-node106                      1/1     Running   0          3h56m
kube-system   kube-apiserver-node106            1/1     Running   0          3h56m
kube-system   kube-controller-manager-node106   1/1     Running   0          3h56m
kube-system   kube-flannel-ds-amd64-srdxz       1/1     Running   0          2m32s
kube-system   kube-proxy-8mxmm                  1/1     Running   0          3h57m
kube-system   kube-scheduler-node106            1/1     Running   0          3h56m

If a pod is not in the Running state, something has gone wrong. Troubleshoot with the following commands.
Check the pod's description:

[root@node106 ~]# kubectl describe pod kube-scheduler-node106 -n kube-system

Check the logs:

[root@node106 ~]# kubectl logs kube-scheduler-node106 -n kube-system

Reference: Flannel installation and deployment

III. Install the worker nodes

1. Download the required images
node107 and node108 only need the kube-proxy and pause images.

[root@node107 ~]# docker images
REPOSITORY              TAG       IMAGE ID       CREATED      SIZE
k8s.gcr.io/kube-proxy   v1.15.3   232b5c793146   weeks ago    82.4MB
k8s.gcr.io/pause        3.1       da86e6ba6ca1   months ago   742kB
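The article does not show how these two images reach the nodes; one sketch, reusing the same pull-and-retag trick as on the master, run on each node:

# Pull from the Docker Hub mirror, then re-tag to the k8s.gcr.io names the kubelet expects
docker pull mirrorgooglecontainers/kube-proxy:v1.15.3
docker pull mirrorgooglecontainers/pause:3.1
docker tag mirrorgooglecontainers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.15.3 mirrorgooglecontainers/pause:3.1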

2. Join the nodes
When the master initialization succeeded, the output ended with a kubeadm join command; that is the command used to add nodes.
Run it on node107 and node108:

[root@node107 ~]# kubeadm join 192.168.118.106:6443 --token unqj7v.wr7yvcj8i7wan93g \
> --discovery-token-ca-cert-hash sha256:011f55be71445e7031ac7a582afc7a4350cdf6d8ae8bef790d2517634d93f337
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Tip: if the join command reports that the token has expired, run kubeadm token create on the master, as prompted, to generate a new one. If you have lost the token, list the existing ones with kubeadm token list.
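A convenient one-step way to regenerate the whole join command on the master (a small sketch):

# Create a new token and print the matching kubeadm join command
kubeadm token create --print-join-command

# List existing tokens
kubeadm token list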

IV. Verify the cluster
1. Node status

[root@node106 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node106 Ready master 4h53m v1.15.3
node107 Ready <none> 101s v1.15.3
node108 Ready <none> 82s v1.15.3

2. Component status

[root@node106 ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}

3. Service accounts

[root@node106 ~]# kubectl get serviceaccount
NAME SECRETS AGE
default 1 5h1m

4. Cluster info

[root@node106 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.118.106:6443
KubeDNS is running at https://192.168.118.106:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. Verify DNS

[root@node106 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-dn65h:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

V. Verify with an example workload
Create an nginx service to check that the cluster is usable.

(1) Create and run a deployment

[root@node106 ~]# kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx  --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
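As the output notes, creating a deployment through kubectl run is deprecated. A sketch of the equivalent with non-deprecated commands (the labels then default to app=nginx rather than run=load-balancer-example, so the selector shown below would differ):

# Create the deployment and scale it to 2 replicas
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2
# Because no container port is declared this way, the expose step below would
# additionally need --port=80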

(2) Expose the deployment as a NodePort service

[root@node106 ~]# kubectl expose deployment nginx --type=NodePort --name=example-service
service/example-service exposed
# Check the service details
[root@node106 ~]# kubectl describe service example-service
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.108.73.249
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> /TCP
Endpoints: 10.244.1.4:80,10.244.2.2:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

# Check the service status
[root@node106 ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service NodePort 10.108.73.249 <none> :/TCP 91s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 44h
# Check the pods
An application's configuration and current state are stored in etcd; when you run kubectl get pod, the API Server reads this data from etcd.
[root@node106 ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
curl-6bf6db5c4f-dn65h    1/1     Running   0          39h
nginx-5c47ff5dd6-hjxq8   1/1     Running   0          3m10s
nginx-5c47ff5dd6-qj9k2   1/1     Running   0          3m10s

(3) Access the service IP

[root@node106 ~]# curl 10.108.73.249:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Accessing the endpoints returns the same page as accessing the service IP. These IPs are reachable only from containers and nodes inside the Kubernetes cluster. There is a mapping between the endpoints and the service: the service load-balances across its backend endpoints, and the mapping is implemented with iptables (or ipvs) rules.

[root@node106 ~]# curl 10.244.1.4:80
[root@node106 ~]# curl 10.244.2.2:80
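The service-to-endpoint mapping can be inspected directly; a quick sketch of that kind of check:

# Show which pod IP:port pairs back the service
kubectl get endpoints example-service

# Show the NAT rules kube-proxy wrote for the service
iptables-save -t nat | grep example-service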

Accessing a node IP on the NodePort returns the same page as the cluster IP, and it is reachable from outside the cluster.

[root@node106 ~]# curl 192.168.118.107:
[root@node106 ~]# curl 192.168.118.108:

The whole deployment process works like this:
① kubectl sends the deployment request to the API Server.
② The API Server tells the Controller Manager to create a deployment resource.
③ The Scheduler schedules the two replica Pods onto node01 and node02.
④ The kubelet on node01 and node02 creates and runs the Pods on its own node.
flannel assigns an IP address to every Pod.

References:
Installing Kubernetes with yum
Installing Kubernetes from binaries
Installing Kubernetes with kubeadm
A hands-on guide to building a Kubernetes cluster on CentOS
Official documentation: Installing kubeadm

Installing the latest Kubernetes: process and caveats

Installing a Kubernetes 1.13 cluster with kubeadm
